Bare metal meets Talos Linux (the K8s OS) - transcript

**Gerhard Lazu:** I think that a new year is a natural time for new beginnings... And as our listeners know, we used to run all of changelog.com on Kubernetes until April 2022, when we moved to something simpler; something that all of our team can be comfortable with. What I'm most comfortable with is bare metal hosts. I'm also comfortable with Kubernetes, and it just so happens that I have a bunch of droplets lying around, \[unintelligible 00:01:36.19\] with various workloads, mostly PHP and MySQL, that are due a refresh. \[unintelligible 00:01:42.21\] the five years are coming up, and I wanted to try something else. So given a bunch of bare metal hosts with fast, local SSD disks, how would I convert them to a production setup running Kubernetes? That was my starting point. And I've been preparing for this since episode 25, October 2021. Andrew, welcome back to Ship It.

**Andrew Rynhard:** Thank you. I'm glad to be here again.

**Gerhard Lazu:** How are you?

**Andrew Rynhard:** I'm doing well. It's been -- wow, that was a year ago? It feels like five years ago.

**Gerhard Lazu:** Lots of things happened, yeah...

**Andrew Rynhard:** A lot has happened since then. But yeah, I can't say all of them were bad. Most of them were good, so... Yeah.

**Gerhard Lazu:** Okay. Well, we will dig into that. I won't press too hard right now, we're just getting started, but we'll dig into that. Steve, thank you for making the time to join us.

**Steve Francis:** Yeah, my pleasure. Good to be here.

**Gerhard Lazu:** I really appreciate all your help in the Talos community Slack. I had so many questions, and you answered some of them so well; it was super-helpful. Thank you.

**Steve Francis:** Yeah. My pleasure. It's actually fairly unusual that I get to answer the technical questions, because --

**Gerhard Lazu:** I know, right?

**Steve Francis:** ...I'm not really the technical person in the company... \[laughs\]

**Gerhard Lazu:** So what is your role within Talos, by the way? Because our listeners don't know.

**Steve Francis:** Yeah, so I'm the CEO. I've been with the company about two years. Before this, I founded logicmonitor.com, a SaaS-based data center monitoring service, which is where I've worked with Andrew before.

**Gerhard Lazu:** Okay. And now you're full-time with Talos.

**Steve Francis:** Full-time with Talos. I was one of the initial seed investors in Andrew's company, and it's just because I -- before LogicMonitor I used to run data centers myself; I ran data centers for Citrix Online, and ValueClick, and some other big companies... So what Andrew is doing with Talos Linux, I think, is the most innovative thing I've seen in operating systems in the 25 years I've been doing it.

**Gerhard Lazu:** Wow. Okay. I'm sure it was more than just the tech that attracted you to Talos?

**Steve Francis:** Yes. Yes. I mean, Andrew, when I started, basically the company was Andrew and Spencer, who was basically the co-founder... And Andrew had just started. He was the lead engineer. So it was a very small company. But yeah, it was working with Andrew, and the innovation that he's bringing, and the approach.

**Gerhard Lazu:** So what was the hook, Andrew, from your perspective? What is your side of the story?

**Andrew Rynhard:** I don't know... I mean, I think Steve is just a nice person. \[laughter\] I thought the idea was crazy. It's a brand new Linux distribution, no Bash, no SSH... So really, Steve introduced me to another one of our angel investors, Saïd Ziouani. He was the CEO of Ansible...?

**Steve Francis:** He was founder and CEO of Ansible. Now he's the founder and CEO of Anchore.

**Andrew Rynhard:** That's right.

**Gerhard Lazu:** Wow. Okay.

**Andrew Rynhard:** So Steve put me in touch with him, and Saïd - he knows his stuff when it comes to open source; super-smart guy. And everyone just said "Okay, Saïd thinks this is a good idea", and Steve is a friend of mine, so he gave me that opportunity to talk to these people. He could have said, "No, you're just a gym rat jujitsu guy, and I don't want to put you in front of my friends." \[laughs\] But he did that, and so... Yeah. Now, I'm fortunate enough to say I'm the CTO of a company.

**Gerhard Lazu:** Okay. So just to make it clear, this was not the jujitsu winner joins, or winner -- it wasn't one of those things, like whoever wins...? \[laughter\]

**Steve Francis:** It was a challenge match, but I lost, so I got to be CEO. \[laughter\]

**Andrew Rynhard:** Yeah, loser is CEO. \[laughs\]

**Gerhard Lazu:** Okay. So I don't remember exactly where I've seen this, but apparently you, Andrew, have something in common with MMA. Is that true, or are those just rumors?

**Andrew Rynhard:** Oh, yeah, that's true.

**Gerhard Lazu:** Okay...

**Andrew Rynhard:** \[05:53\] So I was competing in mixed martial arts. I was training in San Jose, California. That's what I thought I was going to do. And long story short, I ended up deciding that I'm going to go back to school, and I got into UCSB for physics. And so that's what actually brought me out here to Santa Barbara. And I didn't do jujitsu for some time; I was kind of out of mixed martial arts, but it was still very much a big part of me. I started dealing with a bit of depression, because your identity as a fighter - it's a big thing. You see a lot of fighters, when they actually decide to not do it anymore, they don't know who they are. I felt that way a little bit. And so I found a gym called Paragon out here in Goleta. It's a really world-class, renowned gym, and there was this tall, lanky, really strong Australian guy...

**Gerhard Lazu:** Called Steve?

**Andrew Rynhard:** Steve. Yes. Called Steve. \[laughter\]

**Gerhard Lazu:** Wow... Okay, I just guessed it. I've only seen the head, by the way. Nothing else. Wow, okay...

**Andrew Rynhard:** And so -- yeah, from there, I kind of got away from mixed martial arts, just because I thought that if I'm going to do things like with technology and whatnot, getting hit in the head probably isn't a good thing. In fact, in college I did a study on it. That was one of my reports, or whatever... And believe it or not, getting punched in the head over and over again can have long-term serious health effects.

**Gerhard Lazu:** Right. Who would have thought that?

**Andrew Rynhard:** Yeah, I don't know. I don't know. So I decided I wasn't going to necessarily compete ever again in mixed martial arts. I have three kids... It doesn't make sense. It's probably not responsible of me. I still do very much love the sport, and I still very much train for it, but... Yeah, competition days are well past.

**Gerhard Lazu:** I see. Okay. Okay. So tech it is. That sounds like a very sensible choice to me. Okay. So I'm just wondering, when there's an argument, do you ever settle it by jujitsu, I mean between the two of you, ever? Has it ever happened? \[laughter\]

**Andrew Rynhard:** It would be fun to say yes, but actually, we've never really had an argument, to be honest. I'm not even lying here. We get along very well. Again, going back - jujitsu, it teaches you a lot. I think martial arts in general teaches you a lot. It teaches you how to be confident, how to avoid confrontation... And Steve's a brown belt, I'm a brown belt... Getting a brown belt in Brazilian jujitsu is no easy task. It's very, very difficult, and along that path, you learn a lot of human social skills, or at least you're supposed to, in my opinion. And so yeah, that has helped us sort of navigate how to be friends, and also run a business together. And I think we do a pretty good job of keeping the two separate, and not letting one interfere with the other.

**Steve Francis:** Yeah. I mean, I would summarize the thing that jujitsu teaches you in short: you respect everyone, but you're intimidated by no one. Whether they're above you in a hierarchy, in a business sense, you can respect them, but you don't get intimidated. You still speak your mind, or whatever. And that's what we want to embody in the company.

**Andrew Rynhard:** Yeah. That's great. I've never been able to -- Steve is like my translator oftentimes. He's much better with words. That's perfect. Exactly.

**Gerhard Lazu:** Okay. Well, I find it fascinating how from jujitsu sparring partners, or sharing the same gym, you know...

**Andrew Rynhard:** Yeah, you could call it sparring partners. Yeah.

**Gerhard Lazu:** Very nice. Okay. Now, speaking about partners, I want to give a shout-out to two people that helped me navigate Talos OS. When it was ready, they were there. And this has been, as I mentioned, years in the making. Now, Georgi... How do you pronounce his surname? I'm not sure.

**Andrew Rynhard:** That is a great question. I'm not sure I ever said it out loud... \[laughter\]

**Gerhard Lazu:** Alright... Frezbo! Frezbo!

**Andrew Rynhard:** Yes, there you go, Frezbo. \[laughs\]

**Gerhard Lazu:** \[09:55\] Alright, so his surname is Frezbo. So Noel, thank you very much for all your help in the Slack. I mean, some of those answers were spot on. And of course, Andrey Smirnov, because he's everywhere, right? So Andrey is everywhere. So thanks, guys. I really appreciate it.

**Steve Francis:** I mean, all our staff are amazing. We have an amazing team. They're all really good. But yeah, Andrey is -- I don't know how he does all the engineering work he does, because he is also extremely helpful in the community Slack.

**Gerhard Lazu:** Yeah. I think it really helps to feed that... Being close to your users, it helps you figure out what is missing. I mean, that's such a great approach. Okay.

**Andrew Rynhard:** Yeah. And it's just about being genuine too, I think. If you're gonna do open source, you should be genuinely concerned about the people that are using your product. Otherwise, open source becomes theatrics, in my opinion.

**Gerhard Lazu:** That's right.

**Andrew Rynhard:** We do open source because we want to be helpful. Of course, as a business we want to also make money...

**Steve Francis:** \[laughs\] Which is not really Andrey's view from a few years ago. A few years ago Andrey was like "No, we're pure open source. We're never gonna charge for anything."

**Gerhard Lazu:** I have noticed that, by the way. I have noticed that. That was very interesting.

**Andrew Rynhard:** Yeah. There has been a bit of a transition there, of course... But yeah, it's just about being genuine, and I think those two -- to Steve's point, our whole team is really good about that... But those two in particular seem to be more public about it. We're all genuinely wanting to make a really great product for all of our users, and to your point, it's about being there for them. So, yeah...

**Gerhard Lazu:** So back to my conundrum - again, first-hand experience; this was not set up... "One day I decided I'll go for this, and I'll see what happens next", which is one of my favorite approaches. So given a few bare metal hosts, with fast local SSD storage, the quickest way for me to get Kubernetes was Talos. I tried a few other things, but there is nothing simpler than booting the right image and running three commands. I'll start with the first one. Let's see how tech-savvy we are among the three of us. The first one, talosctl gen config. What happens next?

**Andrew Rynhard:** What does that do, is that the question?

**Gerhard Lazu:** No, what is the next one? There's three commands to run to get a node after it has booted to have Kubernetes on it. The first one is to gen the config. The second one is...

**Andrew Rynhard:** Apply config.

**Gerhard Lazu:** Correct.

**Steve Francis:** --insecure.

**Gerhard Lazu:** Yes, always. \[laughter\] Why? Why is it insecure? That's a very interesting point.

**Andrew Rynhard:** Well, at that point, we have no PKI.

**Gerhard Lazu:** Hmm, okay.

**Andrew Rynhard:** We have no certs on them. The configuration files that you just generated - they are not present on Talos, so we don't know how to secure the API yet. And so it's just sitting there, saying, "Hey, give me a configuration file. And once you give it to me, I'll secure myself on these ports."

**Gerhard Lazu:** Okay. That's a great one. The last command. It starts with a B.

**Steve Francis:** Bootstrap.

**Gerhard Lazu:** That's the one. That's it. Three commands.

**Steve Francis:** Bootstrap etcd. That's it, yeah.

**Gerhard Lazu:** Apply config, bootstrap. And that's it. That's all it takes to get Kubernetes on a bare metal node. And by the way, this is open source. There's nothing to pay. Anyone can do this. I was so impressed, like --
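
For reference, the three-command flow being described looks roughly like this; a minimal sketch, where the node IP, cluster name and endpoint are placeholders:

```sh
# 1. Generate cluster secrets and machine configs
#    (produces controlplane.yaml, worker.yaml and talosconfig)
talosctl gen config my-cluster https://10.0.0.2:6443

# 2. Push the config to the freshly booted node; --insecure is needed
#    because the node has no PKI yet - the certs live in the config
#    you are about to give it
talosctl apply-config --insecure --nodes 10.0.0.2 --file controlplane.yaml

# 3. Bootstrap etcd on exactly one control plane node
talosctl --talosconfig talosconfig --nodes 10.0.0.2 --endpoints 10.0.0.2 bootstrap

# ...and pull a kubeconfig to start using the cluster
talosctl --talosconfig talosconfig --nodes 10.0.0.2 --endpoints 10.0.0.2 kubeconfig
```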

**Steve Francis:** A lot of people do. \[laughs\]

**Gerhard Lazu:** And a lot of people do, exactly. So as simple as this sounds, I'm sure there's like a big story behind it to get to this simplicity. Who would like to start?

**Steve Francis:** Well, this is Andrew's story.

**Andrew Rynhard:** Yeah. Well, I guess the question is "Where do I start?" Where we're at today has really been the vision for me personally, where I wanted Kubernetes to be. It was a lot of fun learning Kubernetes using kubeadm, doing Kubernetes the hard way; it was fun. But that fun very quickly dies, and never returns when you're doing this in production. And so the goal has always been to make it that simple. But along the way, we've had to make a lot of big decisions.

\[13:57\] We started off with kubeadm, but kubeadm just fundamentally wasn't designed with the idea of a Linux distribution that is purely API and configuration-driven. So there was some weirdness in trying to shoehorn that into our paradigm.

So then we decided to go down to a project called bootkube, which was formerly a CoreOS project. It was self-hosted Kubernetes. So it would spin up a temporary control plane using static pods, and then using that temporary control plane you'd actually apply your control plane, which would be backed and stored within etcd. Then you'd tear down the old pods, and then you have Kubernetes sort of managing itself. And that's as scary as it sounds. I thought it was really cool, but in practice, it was very much a pain, and so we decided to go away from that entirely.

We just really embraced the fact that Talos - yes, it is a Linux distribution, but really, at its core, it is a Kubernetes bootstrapper, or whatever kubeadm and bootkube qualify themselves as. That's what we are. And so we just said, "Okay, we tried to be good citizens within the open source world, but the paradigm shift that we've made - it is pretty drastic, and we think for the better... But it also means that existing tooling doesn't work very well with us." And so we rewrote everything from the ground up. PID 1 is rewritten completely in Go, it is specifically built for this purpose. It's got the whole controller pattern within it, very much like Kubernetes; read-only file system...

So it's been a long road to kind of get to where we're at today. Like you said, there's a lot going on under the hood to make it that simple. In fact, in our demos that we do for potential customers, we almost feel bad about how good and fast the demos go, because it's like - yeah, you've got a Kubernetes cluster on bare metal right now, running BGP with a VIP for an HA control plane... Okay, cool. You just did that in like three minutes. But that's the beauty of it.

**Gerhard Lazu:** Where's the rest? \[laughter\]

**Steve Francis:** Yeah, exactly. Unless they know how Kubernetes works and how complicated that is to achieve, if they're new to Kubernetes, they're like "Alright, well, this looks pretty simple..."

**Andrew Rynhard:** Yeah, exactly. They don't really know what they're looking at. So we're kind of -- yeah, it just puts us in a weird place. My demos are literally like "I know that's short, but that's the magic of it."

**Gerhard Lazu:** You've made it too good, Andrew. That's the problem. You've made it too good. Steve comes along, "What the hell? This is just too simple." \[laughter\] No, no, seriously, seriously... I mean, that is exactly where you need to start, because there's so much more that needs to lay on top of it. And you need some very solid fundamentals on which to build. And having the operating system redesigned from scratch to be Kubernetes... I mean, there is a separation, obviously, between Kubernetes and the operating system... But that is so nice and clean that you almost don't even see it.

I mean, being able to talk to your operating system through CLI only... Okay, it has an API - sure, you can talk to an API. But the CLI is there to talk to that API... That's it. I think that's what everyone -- like, why package managers? Seriously.

**Andrew Rynhard:** Exactly.

**Gerhard Lazu:** I mean, that's what we used to do 20-30 years ago; surely we have moved on since then, right? \[laughs\]

**Steve Francis:** We don't... We just want to build out VMs for everything. \[laughter\]

**Gerhard Lazu:** Exactly, yeah. And you just throw away so much complexity; even when it comes to networking, there's so much stuff happening just in that stack, never mind everything else - storage, securing whatever boots... It's just like, it's never-ending. And good luck configuring all of that. It doesn't matter what configuration management system you use. There is a lot of complexity there. Okay...

**Andrew Rynhard:** \[17:52\] Yeah. It's ten different files, and depending on which distro, it's network manager, or it's just good old \[unintelligible 00:17:59.20\] files... To your point, I think at that layer of operating -- like, the operating system layer, like you said, this is a 20, 30-year old way of managing this. If we're going to get beyond things like climate change, and these types of things, fix real problems, we can't be sitting here worrying about building RPMs and doing package managers. The Linux distributions and the fundamental way we run technology needs to be just forgotten about; it needs to be "That's the way it is." And the best way to do that, in my opinion, is to make it so simple that it doesn't even matter. It doesn't really -- it's not really a thing.

But when you allow a human to get onto a machine, we have a tendency to love everything that we can interact with. We want to make these servers special, and name them after Lord of the Rings characters... But by just simply getting humans off of the box, we've already kind of cut that emotional tie, and it allows us to start thinking about the next layer of things that we really need to solve, to really do things at the scale we need to land on Mars, or something like that. I don't know.

**Gerhard Lazu:** What is your perspective, Steve? ...because you've seen data centers from every which angle. How does this fit in that world?

**Steve Francis:** Oh, this is the way it should be. I mean, I started with configuration management tools way back in the day of CFEngine, when it was a small open source project, and I've been through all the Chef and Puppet and everything. One of our large enterprise customers who has many fleets of servers - I'm always asking them, "Why run on Talos?", and his answer was basically this: even when you run a configuration management tool, and you have set everything you want in a Linux server, it's controlling resolv.conf, and ntp.conf, and everything else, there's always going to be something that you haven't thought of to control, that some sysadmin is going to come through and change, that is going to superficially work and then break on the next upgrade. So in his case, they were talking about the fact that they have one particular cluster, and someone came through and set the RAID controller to caching mode. And it wasn't supposed to be, because it was supposed to be a highly available system, with redundancy, and they wanted the data to persist on the disk. But that wasn't managed by their configuration system. And so that caused an outage, and they lost data, and bad things happened. So that was one of the systems that they've moved on to Talos/Kubernetes. And now it's like "Oh, you want to change the caching controller..." - you can't SSH into the box and use the RAID admin tool. So that whole avenue of people making things special snowflakes is just cut off... But it improves their system reliability a lot by basically keeping humans out of the equation. Humans are the ones that mostly break things.

**Gerhard Lazu:** Yeah. And at least you should have a trail, like "Why did we do this change?" Can we track it via version control in a way that everyone sees, everyone understands? Can we maybe add some pictures to the thing? That's like the human side of things, rather than an admin changing something somewhere, not telling anyone, and his job is secure, because only he knows how the thing works...

**Andrew Rynhard:** \[laughs\] Yeah. Right.

**Gerhard Lazu:** Okay... So what is part of the operating system? Because the operating system is really, really small.

**Steve Francis:** Well, our operating system is really, really small. Talos Linux is.

**Gerhard Lazu:** Talos is really small, yes.

**Steve Francis:** A generic Linux... Systemd is basically an operating system. \[laughs\]

**Gerhard Lazu:** Oh, my goodness me. Oh, that was like one of the things. I think it's the second thing. The first thing was SSH. Not having SSH is such a good thing. You don't even have to worry about securing something that doesn't exist. Like, that's just the best. Okay.

**Andrew Rynhard:** But yeah, the operating system really is just the PID 1, and a Linux kernel. There's some magic you've got to do with the initial init that the kernel loads, and then you switch root after setting up base pseudo-file systems like dev, and proc, and whatnot. But to your point earlier about networking being a big part of how you manage Linux - honestly, that's where most of the complexity within Talos exists. Otherwise, it's pretty simple.

\[22:18\] It's funny, because a big reason why Talos Linux exists today is because I was learning Linux from a project called Linux From Scratch. Basically, it's exactly what it sounds like - you build a Linux distribution from scratch. And in that process, I learned that Linux is actually very, very simple, at the end of the day; it's very, very simple. But Linux distributions have made it complex, and they've made it sort of tribal, by having a love for a certain package manager. And that's really the only difference between any two Linux distributions. So really, Talos throws all that out. And so it's just as close to the Linux kernel as we can possibly get. And the init system just gives you mechanisms, or knobs and buttons that you can turn and push in order to configure the kernel, ultimately - because that's what Linux is - but in a well-structured way, instead of free-handed, where between any two files you have tabs, or spaces, or comma-delimited values... This is a nice, structured way of doing it.

So that is Talos Linux, that is our operating system - it's just putting some structure in front of Linux with an API, and a more complex networking stack, that's interpreted, or at least directed, by the configuration file. And that's the bulk of Talos. And of course, there's some operational knowledge too baked within Talos, like protecting you against doing stupid things with etcd. So imagine you're trying to upgrade two of your control planes at once, and that means -- well, let's assume you have three; that means etcd is going to be down, because it doesn't have quorum... It will stop you from doing silly things like that. So there's some operational knowledge baked into it as well, that makes it a little bit unique, but it's simple at the end of the day, really. It just took a lot of work to get here.

**Steve Francis:** Yeah, very minimal. One thing I like to throw out there is that Talos Linux, I think, has something like 32 binaries installed in the whole operating system, most of them to deal with file management, loading file systems...

**Andrew Rynhard:** A lot of them hard links, too.

**Steve Francis:** That's true. So they're duplicates. A typical Ubuntu install has like over 3,500 binaries, executables installed. So that's a lot more things that can be attacked, and be misconfigured, and need to be secured. Just things to go wrong. The less code there is, the less there is to go wrong.

**Break:** \[24:39\]

**Gerhard Lazu:** We talked about simplicity, we talked about networking, and this surprised me in the best possible way... Not initially. Initially it was like a WTF moment for me... But I was thinking "How the hell do I cluster these things?" And, okay, Omni has something to do with it, and we will leave this for slightly later... But Talos has KubeSpan. And KubeSpan just blew my -- I didn't realize it was that simple. I was like "What am I missing? This can't be it..." And it's a piece of technology... Spoiler alert, it's WireGuard behind the scenes, which I love... Like, having dealt with OpenVPN, and IPsec, and a bunch of other things, I was like "Oh yes, please, let it be WireGuard." So it was like a Christmas wish for me. Like, if I have to deal with it, just let it be WireGuard. And the way nodes cluster is incredibly simple. I wasn't expecting it to be that simple... So I was like "I must be missing something."

So when you first mentioned it to me, Andrew - this was, again, October 2021. You had only just released it; it was like a new thing.

**Andrew Rynhard:** Yeah, that was really new.

**Gerhard Lazu:** Yeah. And I was like "Wow, this is amazing." And now, having experienced it via Omni, that felt like magic. It just didn't feel real. So do you want to tell us a little bit about KubeSpan now? Can you expand on what you told us in October? Because I'm sure you remember what you told us a year ago... \[laughs\]

**Andrew Rynhard:** I'm sure, I'm absolutely certain that I don't. But let's see if I do. Let's see if I can be accurate. So yeah, KubeSpan, as you've already said, is really built on top of WireGuard. But the harder part of WireGuard is just doing key distribution, making other nodes aware of other nodes, and really orchestration. But WireGuard is incredibly fast, secure... It's really, really great.

So KubeSpan really is just orchestration for how do nodes discover other nodes, and how do we automatically configure WireGuard, and how do we do key exchanges? And so there's this lightweight discovery service that we run, where Talos will actually encrypt a blob of data, which ultimately just teaches other machines about itself, and they all sort of send their information there, congregate there, get information about each other from there, decrypt it using their keys, and now they know all the IP addresses of those machines, and they can go and communicate with them using WireGuard. It's actually really, really simple. It's just really taking WireGuard and trying to make that simplified for people. But WireGuard really is the magic, and it's literally just a boolean flag within Talos. It's not "Generate this key, and put this in etcd, and then run this daemon." It's just KubeSpan enabled = true. And then you're done. It's really great. It does feel like magic.
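
As a concrete illustration of "just a boolean flag", enabling KubeSpan in the Talos machine config looks roughly like this (a sketch; the discovery service handles the key exchange described above):

```yaml
machine:
  network:
    kubespan:
      enabled: true   # WireGuard mesh; keys and peers are orchestrated for you
cluster:
  discovery:
    enabled: true     # nodes publish encrypted blobs about themselves to the discovery service
```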

**Steve Francis:** Works across networks, behind firewalls... It's pretty slick.

**Gerhard Lazu:** That's in itself a piece of magic. WireGuard in itself is not as complicated as OpenVPN or IPsec to configure, but it still has its complexity. But KubeSpan - I mean, as Andrew says it, it's just a boolean. I mean, I was thinking "Surely, there must be more to this." There wasn't. I was like "Where's the rest, damn it? The docs don't have anything... I don't know. Does it work?" And everything just worked. So that was a very nice surprise.

**Steve Francis:** Were you running in multiple locations that you needed KubeSpan?

**Gerhard Lazu:** Yeah. So this is it. So I told you before we started recording that we have four guests here... This is the moment.

**Andrew Rynhard:** Alright.

**Gerhard Lazu:** Alright, everybody...

**Andrew Rynhard:** Wow. Wow, that looks like some hardware right there... What is that? I mean, it's a computer, I think... \[laughs\]

**Gerhard Lazu:** It is. Okay... So one of my Christmas presents - or shall I say two of my Christmas presents - were an open bench table...

**Andrew Rynhard:** To yourself, right?

**Gerhard Lazu:** To myself, of course... No. My wife got it for me. She knew exactly the color to get. Black. \[laughter\] Okay, so open bench table, fanless Seasonic PSU. Is that the right way up? It is.

**Andrew Rynhard:** It is.

**Gerhard Lazu:** Fanless, so... A very old EVO 870... No, this way. SSD. That's for storage. There's an SD card... And this motherboard - it's one of my first Supermicros. I'll have it forever. It's an X9SCA-F. It's 11 years old. Well, it will be 12 years old by the time this episode comes out. It was also a Christmas present, by the way. It was one of the first Xeons, the E3s, a 1230... Max it out, 32 gigs of RAM... And this baby got Talos.

**Andrew Rynhard:** Nice!

**Gerhard Lazu:** So this is one of the Talos hosts. Three network cards. An IPMI, and two 1-gigabit ones. It's a beauty. So yeah, I'm a proper hardware nerd, as you can see... And I have it for a long, long time. So that's one of the hosts. Let me just put it down.

**Andrew Rynhard:** Is that for your home's use cases?

**Gerhard Lazu:** That is, yeah.

**Andrew Rynhard:** Okay.

**Gerhard Lazu:** \[31:40\] Ask me about my NixOS afterwards. I have another completely fanless system, AMD build, crazy NVMe drives, whatnot... Anyways, that's another story. And that is one of my Talos nodes. The other one is, again, a bare metal host, running in a data center... And I'm still missing a third one, to create my quorum... Which is where a lot of my issues started, because I was starting with a single node, which was a control plane that had to schedule workloads. And that's where a lot of the help came... And "Yeah, you can do this, you can do that", and there's like a few gotchas... For example, you have to boot... So you have to apply the config to the control plane first, if you want to configure it to run workloads. Because once you apply it, and then you apply it again, it won't fully do it. Again, it's like me doing things that were not anticipated. But it's possible; like, all of those things I worked out, and whatnot... So that was really, really good.
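
For reference, the single-node setup described here - a control plane that also schedules regular workloads - comes down to one field in the machine config, applied before bootstrapping, as noted above. A minimal sketch (in older Talos releases the field was named allowSchedulingOnMasters):

```yaml
cluster:
  allowSchedulingOnControlPlanes: true   # let the control plane node run regular workloads
```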

Now, some people use Raspberry Pi's for this. I don't think many people use bare metal hosts. But what do you see as the typical workload where Talos shines?

**Andrew Rynhard:** It's certainly bare metal, and fast becoming edge. You've already spoken about Omni, and I'm sure we'll get into that at some point... But in particular, Talos in combination with Omni - edge is starting to become really, really powerful. And in fact, I think Talos, in those two use cases - bare metal or on-prem (and not necessarily bare metal; that could be VMs), and edge, in combination with KubeSpan - there are some architecture designs that people simply would never have thought of doing before, because of the complexity and potential things that go wrong. But with Talos Linux, it's just there. It's possible. And so it starts exposing limitations in other projects out there in the world, but we can get into that later. But certainly, I would say bare metal, on-prem VMs, and edge. Which is a shame, because Talos works just as well in the cloud. I mean, it's literally the same image; you can get the same exact benefits of Talos within the cloud, but it's just -- it's not quite as popular as the aforementioned places where Talos is really popular.

**Steve Francis:** Yeah. I mean, there's not so much of a compelling use case. If you're running in the cloud, you're probably running on EKS, or something like that. We certainly do have customers that say, "Alright, I'm in all the clouds, and I want to unify my management across them, and so I'm going to use Talos." But most people that are in the cloud, they just use their native cloud provider, which is usually the right thing to do.

**Andrew Rynhard:** Yeah, it makes sense.

**Steve Francis:** But the other thing - you alluded to this before - Raspberry Pi's; we have lots of people in the Kubernetes at home community that run Talos on Raspberry Pi's, and other small SBCs.

**Andrew Rynhard:** And they're a great community, too. They're a really great community; they give us great feedback all the time... I'm in their Discord; they're probably like "Oh, great. Why is this guy watching over everything? We can't talk about it..." I hope they do talk about it; that's where I get a lot of inspiration. But yeah, the Kubernetes at home users are a really great group of people.

**Gerhard Lazu:** Why do you think they use Raspberry Pi? Why is that so popular?

**Steve Francis:** Well, it used to be cheap... \[laughs\]

**Andrew Rynhard:** Yeah, exactly what I was gonna say. I would say affordability, the small footprint... They're not noisy... I have a Supermicro in my closet, and I can't sleep with that thing on. It's loud.

**Gerhard Lazu:** I know what you mean.

**Andrew Rynhard:** I love having the power, and it's fun, but I also like my sleep.

**Gerhard Lazu:** Yeah. Fanless.

**Andrew Rynhard:** So something like a Raspberry -- yeah, fanless... And Raspberry Pi's are really, really great for that. And it's just kind of fun knowing that this small little board is running... Like, for me in particular too, just knowing it's running Talos Linux, a single Go binary and a kernel, and it's spinning up Kubernetes, and it's just on this little thing that's in the palm of my hand... It's really, really fun. People build these cool little stacks, you stack them on top of each other and stuff like that, and put fancy LEDs and whatnot... So yeah, I think there's an element of fun, and it used to be more affordable as well, for sure.

**Gerhard Lazu:** \[36:11\] Yeah. I mean now, if you want to get like a decent one, they're crazy expensive, by the time you add all the things.

**Andrew Rynhard:** Yeah.

**Gerhard Lazu:** I mean, it was cheaper for me to get like a fanless PSU, and an open bench, than to get the Raspberry Pi equivalent... And this thing aged really well. Again, it's going on 12 years, and it can still run pretty much anything.

**Andrew Rynhard:** That's awesome.

**Gerhard Lazu:** You know, eight cores... Okay, they're hyper-threaded. 32 gigs of RAM... Okay, it's DDR3, slightly slower, but put a fast SSD in it and you have two one-gigabit cards... I mean, there's no Raspberry Pi that has two -- so you can use two networks at the same time, which of course I would have, because... You know, you want two fully redundant networks in your house, of course; and two fiber lines, and all that. All that to run Talos... \[laughs\] No, no, I have big plans for it. But anyways, let's see how it goes.

I mean, the beginning was important for me. The beginning was taking something that was meaningful to me, and taking it into production - like, my production, which right now, as I said, is like nine DigitalOcean droplets. All of those can be collapsed into a single bare metal host. But you can't have just one, right? You need to have another one, which acts as a backup. And this brings me to the next point. What is the CSI that you typically see being used with Talos? What is the storage interface, and how is storage exposed to Talos?

**Andrew Rynhard:** Yeah, so the typical one that we recommend is Rook Ceph, largely because it's battle-tested, and we have some familiarity with it as well. We also recommend a couple of projects from OpenEBS - their Mayastor project, and their Jiva...

**Steve Francis:** I pronounce it Jiva, but...

**Andrew Rynhard:** Okay.

**Gerhard Lazu:** Like JIRA. JIRA, but there's like a V instead of an R. Yeah, JIRA.

**Andrew Rynhard:** It's definitely Jiva for me now.

**Gerhard Lazu:** Okay, great. \[laughs\] I'm glad we settled that one.

**Andrew Rynhard:** Anyways, those three in particular, but probably in that order, I would say. Actually, Jiva is probably becoming more popular than their Mayastor. So yeah, Rook Ceph... And there are a lot of people that think that because of the way Talos is designed, and its restrictions and whatnot, that storage is just not going to work. But at the end of the day, our goal is to get out of the way, to allow you to do the things that you need to do, and storage is obviously one of the most important things that you need to do. And so if you can run a CSI within standard -- I don't want to say standard Kubernetes, but non-Talos Kubernetes, by and large you can run that within Talos. But there is one place where it's a bit of a caveat, and there's a couple of \[unintelligible 00:39:00.16\] CSIs that just simply won't work with Talos, because they make some really big assumptions about what they can do. And in my opinion, really bad practices. They assume that they can actually escape out of their container, literally nsenter-ing into PID 1's namespace, and going so far as figuring out "what operating system am I running on", so they can figure out what package manager to use... And doing a yum, or dnf, or apt install to install whatever they need in order to use their CNI. That's just a big gaping hole in security that I'm not comfortable with...

And so there are some of them that do make these assumptions, that there's Bash, they can break out of their containers, but we're working to stop those, and we will never support those, in my opinion. So yeah, again, our goal is to get out of the way, and by and large, the CSIs will work.

**Steve Francis:** Out of the way, but not to sacrifice security.

**Andrew Rynhard:** Yeah, exactly. Yeah.

**Gerhard Lazu:** \[40:04\] I know that security is a big deal in Talos... Can you tell us a bit more about that, Steve?

**Steve Francis:** No. But Andrew can. \[laughter\]

**Gerhard Lazu:** Okay, alright. That's great. Okay, so no, hang on, we have to find --

**Steve Francis:** I mean, Andrew can tell you all the drivers and parameters and everything, and when it comes to me, I can tell you it's very secure. \[laughter\]

**Gerhard Lazu:** Correct. Okay. So it's a very high-level -- it's very compressed; that's great, right? Because when you talk to other CEOs, that's what they want to know. "Is it secure?" "Yes." "Great! Next point..." \[laughter\] Alright. Okay, so Andrew, you had a slip there, and I'm glad that you did... You mentioned CNI in the context of CSI. Let's talk about CNI.

**Andrew Rynhard:** Oh. I did. Sorry.

**Gerhard Lazu:** So Flannel, as far as I know, is the default CNI in Talos. Why have you chosen Flannel?

**Andrew Rynhard:** Really, it's simple. I mean, it's gonna have the most coverage out of the box. It just kind of works, and it doesn't have a lot of bells and whistles, and that's a good thing for the standard or default experience with Talos. Cilium is also a very, very popular combination with Talos. If you're the type of person who finds Talos interesting, you're kind of naturally going to be the person who finds Cilium interesting, because it's --

**Gerhard Lazu:** Absolutely. "How do I replace Flannel with Cilium?" is my next question... \[laughs\] Let's skip to that part.

**Andrew Rynhard:** We can. Yeah, that's simple. I mean, really, you just teach Talos "Hey, don't install Flannel; instead, use this URL to install the CNI, at the point at which CNI is required to be installed." And so you just host your manifests somewhere, Talos will pull them, install them, and basically replace Flannel that way.
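
A sketch of what that looks like in the machine config; the URL is a placeholder for wherever you host the Cilium manifests:

```yaml
cluster:
  network:
    cni:
      name: custom   # skip the default Flannel deployment
      urls:
        - https://example.com/manifests/cilium.yaml   # placeholder: your hosted CNI manifests
```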

We are thinking about having more native integrations in the future, but it's not on the near-term roadmap... Cilium being a really interesting CNI that we could hopefully even partner with, to offer a native, out-of-the-box experience - just have it say "CNI = cilium", and we'd have baked-in manifests to do that.

**Gerhard Lazu:** Okay, that's a great question. What about -- again, still specifics, but we are very close to going high-level again... What about MetalLB? What about the load balancer? Because that's typically used in the context of bare metal. So what are your thoughts there?

**Andrew Rynhard:** We recommend that all the time. I think MetalLB is a really great project. I love it. It's simple, it's well built, well designed, and we recommend it all the time with Talos.

**Gerhard Lazu:** Okay.

**Andrew Rynhard:** Yeah, we talk about it all the time; I have nothing bad to say about it. It's just "Use it. It's great."

**Gerhard Lazu:** Just go for it. That's very nice. Okay.

**Andrew Rynhard:** Yeah.

**Steve Francis:** Talos delivers vanilla Kubernetes at the end of the day, so you can run whatever your choice is of any of these capabilities. We will probably have easy defaults, so it's like the default install; unless you say otherwise, we'll install MetalLB on bare metal, and maybe let you configure a different CNI that includes storage... But right now, it's just vanilla Kubernetes, done really simple, really securely.
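
As an example of layering one of those choices on top of vanilla Kubernetes, a minimal MetalLB layer-2 setup on bare metal looks something like this (CRD names per MetalLB v0.13+; the address range is a placeholder for free IPs on your LAN):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder range handed out to LoadBalancer Services
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool                  # announce the pool via ARP on the local network
```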

**Gerhard Lazu:** That is a great starting point. And again, everything that I've tried so far worked really well. And I wasn't expecting it to be that simple and straightforward. Now, there's a lot of blanks to be filled... And that's on purpose, right? Because you can't know what CNIs people will choose. And if anything, you maybe mention "Hey, this person or this use case - we are aware of this being used." Again, I think it's usage, people talking about it... That's something which I'm hoping to do a bit more of as I'm continuing on my Talos journey - sharing how I changed my CNI, and why I picked one versus the other... Because it's a real production, with real needs, that the majority will have. What about MetalLB? How did that work, and what does it look like in practice? So are there things that you typically see your customers install on Talos? ...like some common, common things.

**Andrew Rynhard:** \[44:14\] Yeah, definitely. We've kind of touched on them already. MetalLB is definitely common. Rook Ceph is definitely common. What are some other ones? Definitely ingress controllers, obviously. The NGINX one is great. That's the one I've used. It works very, very well.

**Steve Francis:** The usual monitoring and logging...

**Andrew Rynhard:** Yeah. Prometheus, Loki, Grafana... Sort of the cool crowd of all those day-two operations tools. Those work just fine on Talos, and they're definitely popular with Talos.

**Gerhard Lazu:** Okay. So again, continuing on this high-level trend... What use cases is Talos known to work very well for? And if you have a couple of specifics, go for it. Steve, maybe this is something that you can share with us, some use cases that you know, that are okay to be public, maybe...

**Steve Francis:** Yeah. Bare metal... So people like bare metal often for either latency or performance, or geographic latency. So Talos is extremely low \[unintelligible 00:45:14.17\] because it's such a small operating system; it itself uses very minimal resources, and leaves the rest for the workload... So we have some video game companies running on top of Talos, because they want the most performance they can get, and they want to run it on bare metal; we have some defense work going on for similar reasons... The military has lots of money, but a lot of their compute resources are running on things older than the board you introduced us to... \[laughter\] So they need to run modern software on old hardware, because their procurement times and deployment times are so long... So they want the most effective use of the resources they do have. And edge deployment has become quite big in approximately the last six months; that's really taken off. A lot of that's due to Omni, but also a lot of it is just people running Talos on very small form factors, very simple... It's got to be kind of appliance-like, running on an edge where they don't have skilled IT resources to go and do whatever... And it's got to be, I'd say, in a hostile location, kind of, so they want to keep it secure, and just "Alright, if something goes wrong, turn it off and turn it on. We guarantee it will be back to the state it was before." So those are kind of the big ones.

**Gerhard Lazu:** Okay. Okay. We've mentioned Omni a couple of times... I think now it's the moment to talk about it. Steve, do you want to continue?

**Steve Francis:** Yeah. Well, so Talos does make setting up a Kubernetes cluster really simple. Omni makes it next-level simple, where it's really -- so Omni is our SaaS service for the installation of Kubernetes. So the way it works is you log into your Omni account, your SaaS portal, you download the installation media - for a Raspberry Pi, or an ISO for bare metal that you can put onto a USB or whatever, or an AMI for Amazon, or Google, or Oracle, or wherever you want to run your compute resources... So basically, you boot your machine wherever it is off that image, and that's basically it. So that image that you've downloaded has built-in WireGuard endpoints and join tokens, so as soon as the machine boots, it registers with your Omni account, and it shows up as an unallocated machine in your web portal.

And then, all you do is you go into your web portal; if you want to make a new cluster, you go to your unallocated machines and say "This one's a control plane. This one's a control plane. This one's a control plane. Worker, worker, worker, worker, worker. Go. Create cluster", and that's it.

And WireGuard gets deployed, KubeSpan is configured, Talos is installed, Kubernetes is installed, the cluster is bootstrapped... But that all happens automatically; you get a nice management GUI where you can see performance, and nodes, you can run upgrades... It's really simple, really slick.

**Gerhard Lazu:** \[48:01\] Yeah. That is the one thing which, again, I was expecting... Because I started with the open source one, and I was expecting there to be more things to do... But once I realized that "Hang on, the image which I downloaded - that is mine; it was generated for my own account", and as soon as that image boots, it's ready to go. It will show up in the UI. It's all configured, it's ready to literally just set it up. And I was expecting the talosctl port, 50000, to be open... Nope, no such thing.

**Steve Francis:** No, because all the authentication is done through the SaaS account. So it ties into your authentication provider, Google, or GitHub, or whatever. So you can have multiple users going in. And if someone leaves your company, you don't have to lock down all your Kubernetes clusters and take away their tokens. It's just like they can no longer authenticate through the account, and they can't connect directly to the machines... So we're good to go. Security is kind of paramount in this design.

**Gerhard Lazu:** Yeah. That was the first thing which got me. I was thinking "Why does my kubectl not work? Why does my talosctl not work?" And for kubectl, it was like really simple. Just a matter of \[unintelligible 00:49:09.18\] install the plugin, and off you go. Because it needs to authenticate with Omni. And that was like a small gotcha for me there. But once I figured that out, it was very easy. Okay, so are you supposed to manage those nodes with talosctl? ...once Omni manages them. How does that work?

**Steve Francis:** In general, no. Because Omni is going to be the source of truth, so it's going to reconcile the state of machines to the state that it knows about. But if you do something kind of out of band using talosctl and change the configuration of the machine, Omni is going to reconcile it differently, and override it, which is why we don't allow that level of direct access to machines through talosctl.

**Gerhard Lazu:** So you're the second person telling me that. It is definitely true, because Andrey told me exactly the same thing. I was like "Hey, Andrey, what's going on here?" And he said, "Yeah, I mean... Omni manages that." So yeah... Even when you specify a node, like -n, you need to figure out which one to specify, and then certain commands don't work... So that was by design.

**Steve Francis:** Yeah.

**Gerhard Lazu:** Okay.

**Steve Francis:** But you can get all the information from your nodes, for sure.

**Andrew Rynhard:** Yeah, we wanted to make it -- you can still debug, of course, with talosctl; that's still really, really important. But when you're managing Talos nodes with just talosctl, you're forced to still think about them as individual nodes, where with Omni, it gives us a centralized place to think about these things in broader strokes. So you could just say "upgrade my cluster to this version of Talos", and we have the logic to roll that out in a sane way, instead of you rolling that yourself and writing Ansible playbooks, or whatever your poison of choice is.

**Break:** \[50:55\]

**Gerhard Lazu:** So there was KubeSpan in 2021, there was Omni in 2022... By the time this comes out, it'll be 2023; the first episode for 2023. What can you tell us about the things that you're thinking about for 2023?

**Andrew Rynhard:** Wow. We literally just had a meeting about this this morning; like, that was literally 20 minutes before this.

**Gerhard Lazu:** So timely! So timely. Great timing. It's meant to happen.

**Andrew Rynhard:** I don't know if I've had enough time to really digest what I should say publicly... But I definitely think things like making Talos even more secure is high on our priority list - secure boot being one of them; looking at things like integrity measurement architecture, where we can actually remotely attest to every single file that is Talos... And you can cryptographically attest that this is this version of Talos, and there's no way it can be otherwise. And having a way to attest to that in Omni, and having a green little checkbox, and when you click Download Installation Media, it's all encrypted with a key that's unique to your account... And just securing it all the way up through the workload. Workloads are obviously another -- they're another beast entirely, because we don't have control over those types of things... But we want to give people the ability to extend it to their workloads as well.

So I would say just security in general is always a thing that's on our list. Steve, I don't know if you have anything that stuck out to you in that list?

**Steve Francis:** No. I mean, security is the main thing, including \[unintelligible 00:52:38.06\] now, but it's not as smooth as we want it to be, or some of our customers want it to be... But in general, Omni will certainly be our focus for the next six months. It's still in beta right now. It'll be GA hopefully by the end of this month. Yeah, there's some features to roll out in \[unintelligible 00:52:59.17\] With a new product, there's always going to be a whole bunch of new things we'll be adding on in the short term, as we find new use cases and customer requests... But it's going really well. I mean, it's still in beta, and we've already got a couple of contracts for it... So we're very pleased; it's been very well received.

**Andrew Rynhard:** Yeah, Omni is definitely on the 2023 list. I'll just add one more thing that I think is exciting for our users, for our old-school users... And that is that we're going to be looking at breaking up the Talos config into multi-doc YAML. So if you need to configure an interface, it will be of kind interface, or something to that effect, and you can push that to Talos and manage those independently of each other; have it completely be reactive, being able to do some of those things while still in maintenance mode, like configuring the network, so that you can even start to think about how to join a cluster... So making the configuration management story easier, stronger, better, faster, is something that I'm personally really excited about, and that's something I think we'll definitely do in 2023.
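
Purely as a hypothetical sketch of the multi-doc idea being described - none of this syntax existed at the time of recording, and the kinds and fields here are invented for illustration:

```yaml
# hypothetical: one document per concern, applied and reconciled independently
kind: NetworkInterfaceConfig   # invented kind, per "of kind interface"
name: eth0
dhcp: true
---
kind: DiskConfig               # invented kind: reconfigure a disk without touching the rest
device: /dev/sda
```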

**Gerhard Lazu:** Wow, that is super exciting. That's the one thing which I was wishing for, but there's like a couple of other things... Because you're right, they're like individual components; it's starting with disks. So you don't always have to apply the whole machine config just to modify the disk. That makes so much sense. And even though it's great to just have YAML to work with, and you can see the diffs and whatnot, being able to target only a subset of your system is super-powerful. And then if you do make a mistake, it's only within that specific thing; it's not like everything.

**Andrew Rynhard:** Exactly.

**Gerhard Lazu:** If there's a failure in here, and was it applied... So yeah, it's a lot more atomic, and that makes a lot of sense.

**Steve Francis:** Yeah, you said it's great to work in YAML with a straight face. So that was -- \[laughter\]

**Gerhard Lazu:** Yes. Now, now, now... I have a love/hate relationship going back about 10 years... So I've been through all the cycles. So I've been down there, I've been like in a ditch, in the hole, and I'm back up again, and like through the plateau of disillusionment, and all of that... So I'm where I need to be; it's just YAML at this point; it's like Bash.

**Andrew Rynhard:** Yeah. Right, exactly. I was just gonna add that it does open up some interesting opportunities as well for our users, where you could build controllers that Talos could load up very early on in the boot process, and they contain business-specific logic, and almost like CRDs, you can have a configuration file that that controller knows how to take care of, and you just submit it to Talos... Talos doesn't necessarily -- core Talos doesn't need to know how to handle it; that controller that you've embedded into your custom version of Talos could. And that could be whatever you imagine you want that to be. A controller for your BGP configuration... Who knows?
420
+
421
+ **Gerhard Lazu:** \[55:52\] Yeah. Okay. Now, the one thing which I should say is that I did manage to upgrade from 1.2.7 to 1.3.0. And again, there's like a theme here, because I didn't realize just how simple the whole process was going to be. The only gotcha was that I had a single node. And there's a safety feature to prevent a single-node etcd from going down. That was it. Like, once I had that part figured out, it just nicely rolled through.
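+
+ For context, that upgrade boils down to a single command. A minimal sketch, with the node IP and installer tag as placeholders for your own cluster; --preserve is what tells Talos to keep the node's data (including the lone etcd member) instead of wiping the machine:
+
+ ```bash
+ # In-place upgrade of a single-node Talos cluster.
+ # --preserve retains on-disk data, which is what lets a
+ # single-member etcd survive the upgrade.
+ talosctl upgrade \
+   --nodes 192.168.1.10 \
+   --image ghcr.io/siderolabs/installer:v1.3.0 \
+   --preserve
+ ```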
422
+
423
+ So upgrades, which tend to be a very complicated thing, were fairly simple here. I didn't have many workloads, so maybe when there's more workloads... But you have like a nice, graceful shutdown, I could see all the steps it was going through... It's really well thought through; it's as if you've been doing this for more than two or three years. \[laughter\] Okay...
424
+
425
+ **Steve Francis:** Andrew's been working with Kubernetes for a long time. At LogicMonitor, he was the one that spearheaded our move onto Kubernetes...
426
+
427
+ **Gerhard Lazu:** Really?
428
+
429
+ **Steve Francis:** ...so that was probably, what, Kubernetes 1.10, or something like that...?
430
+
431
+ **Andrew Rynhard:** That was 1.7... And I think that was like my first official job in software, even though I was like studying software on my own for 10 years before that...
432
+
433
+ **Gerhard Lazu:** Wow...
434
+
435
+ **Andrew Rynhard:** I just loved Linux, and I think I was like six months into my journey there... And so for better or worse, I was put in charge of Kubernetes there. But it ended up actually working out really well.
436
+
437
+ **Steve Francis:** And you got hooked.
438
+
439
+ **Andrew Rynhard:** Yeah.
440
+
441
+ **Gerhard Lazu:** Wow. Can you imagine not using it? Can you imagine not using Kubernetes, using something else?
442
+
443
+ **Andrew Rynhard:** I can, yeah. Absolutely. I think it's dangerous when you start to put anything in life as the ultimate answer to everything. I think Kubernetes certainly has its pitfalls and downsides to it... I do think it's the best thing we have today. I also don't think it's for every use case out there. That being said, I don't live by that, and I definitely just use Kubernetes anyways, because it's what I'm familiar with... Again, going back to humans doing what they love, and avoiding all reason and logic because they love it... But yeah, I could definitely see a place without Kubernetes, and dare I say a Talos version without Kubernetes... I don't know. We'll see.
444
+
445
+ **Gerhard Lazu:** Yes, that's exactly where I was going with this. Okay. Okay. Okay. So let's talk about this after we stop recording. And listeners go "Noooo...!! Keep that in!"
446
+
447
+ **Andrew Rynhard:** That's how you're gonna keep them hooked for the next episode.
448
+
449
+ **Gerhard Lazu:** Yeah, exactly. There will be a follow-up, okay? That's a promise. That's a promise. Okay. Okay. What would you like from your community? What would you like to see from your users? Is there anything that you want to share with them, for those that are listening?
450
+
451
+ **Andrew Rynhard:** \[58:36\] I mean, first of all, I just want to say thank you. I vividly recall - and this is a big thing to say, because I don't remember yesterday... I think from getting punched in the head for all those years, my memory is not great. But I vividly recall the day that I decided I was gonna put Talos out into the world. I was sitting at my old house, on the couch, it was a Thursday night - I think it might even have been like Valentine's Day, and it's probably not what I should have been doing on Valentine's Day, coding on a Linux distribution... I have a wife...
452
+
453
+ **Gerhard Lazu:** You had to get it out there. You had to get it out of your system so you could focus on other things.
454
+
455
+ **Andrew Rynhard:** Exactly. Exactly.
456
+
457
+ **Gerhard Lazu:** That's a legit reason.
458
+
459
+ **Andrew Rynhard:** There you go. Thank you. And so I made a Reddit post, and I went to bed, and the next day I wake up with all kinds of notifications. It's on the front page of Hacker News, and it's just like "Wow!" I genuinely thought people were gonna say, "This guy is out of his mind. Why would he create this? This is the stupidest thing I've heard of." And then literally, two months later I'm founding a company. So first of all, thank you to the people who -- we actually dove into what makes us kind of, I guess, special, and we paid someone to help us with this, and they found out that it is, by and large, the philosophy and just the way of thinking behind Talos that our users identify with. And it really, really resonates with them.
460
+
461
+ So first of all, just thank you to everybody for making that happen. And also, our community has been really great on Slack. I think I've only had to kick one person out, ever. And that was relatively recently, and we've had the community going well for four years now. Everyone's helpful, they want to help each other... It's just a really -- it's a fun little community to be in, and so I'm just really appreciative of that. And I think that's really what it's about. I don't know, Steve, if you have anything you'd want to add?
462
+
463
+ **Steve Francis:** No, I'm super-appreciative of the community. We seem to have reached the point where there's enough of a community now - there's 1,400 people on Slack, or whatever, and they help each other out. They give really good, detailed answers, and they take the time, and there's a lot of people that have done a lot of use cases that we haven't tried. Someone was asking about \[unintelligible 01:00:53.23\] running on Talos. And someone else answered, and said what they did, and what issues they ran into, and how they got around them... And it's just like, we've never even looked at that. So the community is coming along really good. They should just tell their friends to spread the word.
464
+
465
+ **Andrew Rynhard:** Yes.
466
+
467
+ **Gerhard Lazu:** Yeah. Well, that was part of the reason why we're doing this, because I thought you were onto something... And it took me a while to make time to dig into it, and then take my time to properly look into Omni, go through my \[unintelligible 01:01:27.23\] a few nights. That was really hard. On macOS Monterey, and... Oh, my goodness me. And NixOS... Anyways, I go through, I now have my cluster, and it's bare metal, and it's glorious, and I can hardly wait to add more workloads to it... And then share it with everyone else. Like, what I did, why I did it, and see if it's helpful to anyone. I love that game. I really do.
468
+
469
+ **Andrew Rynhard:** Yeah. We appreciate that.
470
+
471
+ **Gerhard Lazu:** Thank you, Andrew, thank you, Steve, for taking the time, for sharing the philosophy of Talos, some of the stories, what is coming next... And I cannot wait, I cannot wait to see the next 6 months, the next 12 months, and just to see how far my own bare metal cluster running Talos gets. Thank you.
472
+
473
+ **Andrew Rynhard:** I'm looking forward to it. Thank you.
474
+
475
+ **Steve Francis:** Hopefully, it'll last as long as your board does. \[laughter\]
476
+
477
+ **Gerhard Lazu:** Yeah. Well, we will tell the story. Let's see what happens. Thank you both. Have a great start to the new year, because it is the new year when this comes out... I mean, this is -- we're actually recording this before Christmas... A little bit like backstage info. Merry Christmas to everyone, but again, by the time -- I mean to Andrew and Steve, definitely. But to everyone else, have a great new year. Alright, thank you all.
Human scale deployments_transcript.txt ADDED
@@ -0,0 +1,321 @@
1
+ **Gerhard Lazu:** Hey, Lars. What is new?
2
+
3
+ **Lars Wikman:** Oh hey, Gerhard. I don't know that there's a lot that's new for me.
4
+
5
+ **Gerhard Lazu:** Okay.
6
+
7
+ **Lars Wikman:** I'm mostly doing the same things I was doing the last time I was on the show.
8
+
9
+ **Gerhard Lazu:** Right. So what is new in the world of development, in your world of development, since June 2021, which was the last time that you were on the show? ...on Ship It. Not kaizen. Kaizen is a special.
10
+
11
+ **Lars Wikman:** Yeah. When I was on Changelog and talked about ID3 tags, that's a little bit different than Ship It... But yeah. All in all, I haven't changed much of sort of my operational stuff. So you did a lot of work to try to get me into k3s, and ArgoCD, and things... And it was very interesting. It didn't change what I was doing.
12
+
13
+ **Gerhard Lazu:** Very interesting. I think it's interesting that it didn't change. Okay. Okay. So in June 2021 we had episode seven. The title was "Why Kubernetes", and we did a follow-up, where I joined you on your stream, on your YouTube stream, and we went through k3s and ArgoCD, deploying with k3s and ArgoCD. And you even wrote a blog post - thank you very much for that - with the video embedded.
14
+
15
+ **Lars Wikman:** With the amount of effort you put into your presentation...
16
+
17
+ **Gerhard Lazu:** I know, right? It took a while.
18
+
19
+ **Lars Wikman:** ...I was obliged.
20
+
21
+ **Gerhard Lazu:** And that's something which caught my attention. So you wrote, "After the show, I primarily have more nuanced feelings about the whole thing. I see advantages to this approach, but I would say I still see too much mystery and magic in it for my taste. Things are doing stuff and I have no idea what's what." So now the ideas had time to crystallize, you slept on them for many, many months... Actually, even more than that, right? That was like a year, or two. Anyways, where do you stand with the whole Kubernetes landscape, cloud-native...? What is your take?
22
+
23
+ **Lars Wikman:** I would definitely dig into it if I felt I had workloads that required it. So if I was managing hundreds of nodes, I don't know that there are a ton of other tools worth looking at. Because most of the effort in the world of ops is going into wrangling things with that particular ecosystem and toolset. And the last time I looked at sort of, "Oh, I want a decent, standalone CI/CD kind of deal..." I'm like, "Oh, what are popular options right now?" Tekton, ArgoCD - they assume Kubernetes; at least I believe Tekton does, and I assume Argo does as well.
24
+
25
+ **Gerhard Lazu:** Yeah, ArgoCD as well. Both. Yeah.
26
+
27
+ **Lars Wikman:** That's not generally what I'm looking for when I'm looking for a specific tool, because that's not what I'm running. But there are cases where I would certainly reach for it. And I think the k3s kind of option is the closest one that I might reach for for something I want to run. If I have more complex needs, or if I need more elasticity in my workloads, I guess... And I really don't. Generally, I believe more in setting up a dedicated host or two, and just cranking performance and cost per watt out of that, essentially... Because you can get more performance in that regard. You're not paying the overhead of managed services and stuff.
28
+
29
+ **Gerhard Lazu:** I find that really interesting, because it is a world I used to be in a long time ago, and I always thought that this is what improvement looks like. But what I didn't realize is that a lot of people saw things differently. Even people that I know. And that's why I thought this conversation would be a good idea, because you see things differently, and they work for you. And that is really important. It works for you, you're comfortable with it... And I imagine that the people that you work with are also comfortable with it. So what does your production look like? If you had to pick production right now, what are you most comfortable picking for production, in terms of operating system, in terms of packages, in terms of CI/CD... What does that look like?
30
+
31
+ **Lars Wikman:** It's a tricky one. I think I'm still a bit in exploration there... Because what my current day-to-day production looks like is pretty dominated by what my current client is. So I do consulting, and run a team for a product that's being developed at one of my clients. And there, we run things on Fly. I picked Fly because we -- we were doing an Elixir project, and we wanted to reduce the amount of ops we have to do, and just focus on mostly development.
32
+
33
+ \[06:19\] And I will say, I've been pretty happy with Fly. It has been a mixed bag, because this -- it's still an early company, it's still an early platform. So definitely sort of a mixed experience. But they essentially do what a Kubernetes type solution would do for me. They do platform engineering, so I don't have to; that's kind of the idea of platform as a service. But I still have to fiddle around with a bunch of YAML, and CI/CD pipelines, and all of that... And currently, that runs in GitLab, because the client had GitLab when I came there.
34
+
35
+ So I rolled with whatever was there, and made some choices based off of that, based on what I see... Sort of "Oh, the team's experience level is about yay-high. Okay, we should not spend a ton of our time on the server. Someone else should deal with most of the ops."
36
+
37
+ If I needed to get something off the ground on a budget, or if I built my own SaaS, I think I would probably set up a dedicated server for it, potentially two, to have the failover. It depends a little bit on the service. Not everything needs to be highly available, really... And in that case -- right now I'd probably pick up Debian, or Ubuntu, and I'd be slightly - not thrilled with that choice, because it's not ideal, but it's what I know well enough. Nix seems like it would be cooler; I'm not sure how convenient it would be, because I haven't explored Nix yet. There's, of course, nice things about immutability. But for me, I like to try to package as much of the deployment aspects of the app into the app itself. I run Elixir applications that can provision their own SSL certificates, for example. And whether I would sort of include NGINX, or a specific load balancer, would depend on sort of "Do I need high availability? And in what way do I think I could conveniently provide that?" Sometimes you can load-balance with DNS, sometimes that's not really appropriate. Sometimes you need something in front of your application, sometimes you don't. So there's always those trade-offs, but I like to boil away as much of the layering as possible, as many of the layers as possible... At least when I don't feel I need the layers.
38
+
39
+ And there's a big difference between doing Elixir and when I was doing Python. Because if you were doing Python, and you set up an app server, you absolutely should put NGINX in front of it, because that app server was never intended to meet the world. But when you're dealing with the Erlang VM, and well-established servers, it's "Yeah, no, they're fine." I've seen a lot of people set up Cowboy, which is the common Erlang and Elixir web server, and be "Oh yeah, we had Cowboy, and then we had NGINX... And we had an outage on the first big day, because we had a misconfigured NGINX." It's "Okay..." Both NGINX and Cowboy can, of course, handle a ton of load, and the more layers you have, the more you sort of have to make sure that they're all playing nicely. That's sort of what I want to avoid.
40
+
41
+ **Gerhard Lazu:** \[09:45\] I really like this -- it's not minimalistic, it's almost like it's a very simple approach to running something in production, that you know can handle the load really well. And I think that's really important. I think Go can be a little bit like that, for people that run Go. It can just terminate the SSL perfectly well, it can serve a lot of connections, it scales really well on the CPUs, on memory, all that stuff. Really, really good.
42
+
43
+ Other programming languages are a bit more complicated. You mentioned Python. I know Ruby, from personal experience, is one of those, too.
44
+
45
+ **Lars Wikman:** Node...
46
+
47
+ **Gerhard Lazu:** Node.
48
+
49
+ **Lars Wikman:** Node is super-fast, but it is fast in a very particular way.
50
+
51
+ **Gerhard Lazu:** Oh, yes. Oh, yes.
52
+
53
+ **Lars Wikman:** And scaling Node can be challenging, and there are footguns with regards to CPU-bound loads in Node, that Go and Elixir have designs to prevent. Or at least as far as I know, Go has that.
54
+
55
+ **Gerhard Lazu:** Oh, yes.
56
+
57
+ **Lars Wikman:** I know Elixir does. Or Erlang.
58
+
59
+ **Gerhard Lazu:** And there's like another up and coming one, Rust, which is even more efficient from a memory perspective, from a CPU perspective. So there are certain languages that are encouraging a simpler operational model. And I think that is something important that many people miss, because they are wondering, "Why do we have to do all these things?" Well, maybe it is your language. Nothing wrong with the language, it's just the trade-off of it. And you find out the easy way or the hard way, but you will, eventually. And you can try and resist it and say "No, no. It can do this." No, actually, not all languages can do all the things. And again, it sounds a bit simple, I suppose, but... There isn't much to it.
60
+
61
+ So trying to close this loop, I'm wondering - how much of your choice of Elixir in production do you think is down to the simplicity that it lets you pick?
62
+
63
+ **Lars Wikman:** I certainly think my choice of Elixir influences how I want to set things up a lot. Whenever I explore sort of doing multi-language ecosystems, or it's like "Oh, yeah, I need this tool from this language, and this tool from this language, because it would be infeasible to reimplement", then the ops start shifting. The shape of the plan starts changing. It doesn't have to change much sometimes, but it's like "Oh yeah, you need this Node server thingy standing up here." "Alright." "You probably want to put NGINX in front of that."
64
+
65
+ There's also -- depending on the work your machine is doing, Elixir has a machine learning project now. And if you're doing machine learning, of course, that really affects your ops, and it could be a good reason to use this sort of Elastic Cloud Service, because if you want to train models on really fast hardware, you probably want to rent that, rather than pay $10,000 for one tensor card. It's just sort of cost-prohibitive to set that up, and there's a lot of sort of shipping data back and forth... And I can see why people do that in the cloud a lot of the time. And Elixir and Erlang, as you know, are not ideal for number crunching. And this is something I'm probably going to do a small write-up about soon, but Python is not ideal for number crunching.
66
+
67
+ **Gerhard Lazu:** Yup.
68
+
69
+ **Lars Wikman:** But it is the de facto language for machine learning and AI, and that has nothing to do with Python as a sort of implementation, because everything, every bit of it ends up going to C or C++ to perform.
70
+
71
+ **Gerhard Lazu:** Exactly. Yeah.
72
+
73
+ **Lars Wikman:** Or Rust, I guess... I don't know if people write bindings in Rust for Python these days.
74
+
75
+ **Gerhard Lazu:** Have you looked into Rustler?
76
+
77
+ **Lars Wikman:** I'm familiar with it. Since I don't write Rust, I have no reason to poke it further. But yeah, Rustler and Zigler allow you to do Rust and Zig in Elixir. Good stuff.
78
+
79
+ **Gerhard Lazu:** So what does your deployment artifact look like? When you push code into production, what are you actually getting out there?
80
+
81
+ **Lars Wikman:** \[14:04\] So my ideal is the Erlang release... So as close as you can get to a Go binary, because I think Go does a better job of that than Elixir and Erlang. But there are reasons why Elixir and Erlang do it the way they do. And that has to do with a lot of interesting sort of operational capabilities that Elixir has, that essentially nothing else has. Hot code updates, which no one does. Almost no one does. But they can, and that changes the shape of things.
82
+
83
+ But yeah, something pretty static that you can just ship over to the machine - I'm not a super-fan of containers. They're super-important if you want to treat everything the same, and if you want to sort of package for a larger ecosystem. It makes good sense to use them then. But shipping a Go binary inside of a container seems odd to me somehow. It's so much overhead, even if it might, in reality, cost nothing... Because the overhead is quite low. The real overhead is quite low performance-wise. Complexity-wise, I think it adds up.
84
+
85
+ **Gerhard Lazu:** Okay. So if you're not using containers to get those Erlang releases out there, what do you use?
86
+
87
+ **Lars Wikman:** Oh, you know, FileZilla, copy-paste... No.
88
+
89
+ **Gerhard Lazu:** \[laughs\] Rsync? Come on... You have to use Rsync.
90
+
91
+ **Lars Wikman:** No... WinSCP, you know...?
92
+
93
+ **Gerhard Lazu:** Oh, yes. I remember.
94
+
95
+ **Lars Wikman:** No, it would vary... And right now, I don't ship a ton of things to my own servers, but generally, it's just SCP when I do. The things I run for myself, for low-scale production - that's just SCP over SSH. And as I mentioned, for clients, I'm currently doing Fly, some Fly deploy command, which does ship containers...
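+
+ For the curious, the whole "build a release, SCP it over" flow fits in a handful of lines. A sketch, assuming an app called myapp, an invented host and path, and the standard control script that mix release generates:
+
+ ```bash
+ # Build the Erlang release locally (or in CI)
+ MIX_ENV=prod mix release --overwrite
+
+ # Pack it up and copy it to the server
+ tar -C _build/prod/rel -czf myapp.tar.gz myapp
+ scp myapp.tar.gz deploy@myhost.example.com:/opt/myapp/
+
+ # Unpack and restart via the release's own control script
+ ssh deploy@myhost.example.com \
+   'cd /opt/myapp && tar xzf myapp.tar.gz && ./myapp/bin/myapp restart'
+ ```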
96
+
97
+ **Gerhard Lazu:** Yep. I was thinking about that. So do you build a container with Fly, or do you let the Fly CLI just figure it all out? Use buildpacks, and...
98
+
99
+ **Lars Wikman:** Our CI/CD builds the container.
100
+
101
+ **Gerhard Lazu:** Okay.
102
+
103
+ **Lars Wikman:** So we have -- that's something that I grabbed from our conversation about k3s and your demo of ArgoCD. I really like trying to pin down sort of "This is what we're running on this environment, and this is what we're running on this environment." I haven't quite gotten it to the point where it's all defined in code, and there are no sort of manual steps to a release. There are a few reasons for that that are practical and annoying. But all in all, it's like, I want to know which hash we're pushing. So in the end, I get a container image that has the hash of the backend system and the hash of the frontend system that are baked in, and that's what we push.
104
+
105
+ **Gerhard Lazu:** Yeah, that's right. Yeah. To be honest, containers are, in my mind, great as a distribution mechanism. It is a standard distribution mechanism. If you ever had to deal with Deb packages, or RPMs, or tar.gz's - whatever, it doesn't really matter - it is a standard distribution mechanism for code, and you can put more things in than just code. And it's easy to push, it's easy to pull those artifacts... And the way the layers are structured, and the way you can reuse layers, it really helps in terms of like - the operating system hasn't changed. There's like some extra layers which we added on top, and it knows how to do that really well. The tooling knows how to do that really well. So that is something which I find very, very convenient. Okay, what about things going wrong? So if things go wrong, for example, in your production, what do you do?
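+
+ As a side note, the layer-reuse point above can be made concrete with a minimal sketch (image and registry names invented). Because the base image layers are identical across releases, a push or pull only transfers the thin application layer on top:
+
+ ```bash
+ cat > Dockerfile <<'EOF'
+ FROM debian:bookworm-slim      # base layers: unchanged across releases
+ COPY myapp /usr/local/bin/     # app layer: the only layer that changes
+ CMD ["myapp"]
+ EOF
+
+ docker build -t registry.example.com/myapp:v2 .
+ docker push registry.example.com/myapp:v2   # uploads only the new app layer
+ ```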
106
+
107
+ **Lars Wikman:** Swear a lot...
108
+
109
+ **Gerhard Lazu:** Okay, we have to start with that, right? "Dammit!" Table flips...
110
+
111
+ **Lars Wikman:** \[18:01\] Yeah. So generally, the most important parts are to make sure to have a backup strategy in place, and some kind of disaster recovery. And one thing I've tended to do with my backups that I set up on my ad hoc servers, it's like "Oh, I need to set this up, and set this thing up" etc. It's like, my sisters need a website, and I set up Ghost for them, and I don't want to lose their data at some point, so I set something up to run a regular backup, and shove that off to an S3, because S3 buckets are the way of just putting files somewhere and not having to care about them... Especially when the files are small. But then I tend to also script reading back the backup, shoving it into a table, and verifying that it's roughly what I expect.
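+
+ That "read the backup back and check it" step is the part most setups skip, so here's a rough sketch of the full loop - the database name, bucket and sanity query are all assumptions, and it presumes Postgres plus the AWS CLI:
+
+ ```bash
+ # Nightly: dump, compress, ship to object storage
+ pg_dump mydb | gzip > /tmp/mydb-$(date +%F).sql.gz
+ aws s3 cp /tmp/mydb-$(date +%F).sql.gz s3://my-backups/mydb/
+
+ # Verification: pull the dump back, restore it into a scratch database,
+ # and sanity-check that the contents are roughly what we expect
+ dropdb --if-exists mydb_verify && createdb mydb_verify
+ aws s3 cp s3://my-backups/mydb/mydb-$(date +%F).sql.gz - | gunzip | psql mydb_verify
+ psql mydb_verify -tc 'SELECT count(*) FROM posts;'   # expect a sane, non-zero count
+ ```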
112
+
113
+ **Gerhard Lazu:** Very important, right?
114
+
115
+ **Lars Wikman:** So that's sort of the simplest disaster recovery approach. And it's similar to what I would do for a production project where I'm running dedicated infrastructure, and sort of having serious customers, and all that. But there would be more things for more serious projects. For example, I need to find out when drives are full, when the OOM killer strikes, that kind of deal. And in those cases, right now I'd lean towards Grafana, and things, because you can get those and set them up on your own, and there's good tooling for it in Elixir. Thank you, Alex.
116
+
117
+ But I haven't been thrilled with Grafana. I think my best sort of APM-ish experience was when I worked on a product that used New Relic, and this was a number of years ago... But just because it really did give good insights, and then they charged out the nose for it. I think Datadog is probably on the list of what I'd look at today for a more serious install. Honeycomb has come up enough times that I would definitely take a look... But some kind of tooling like that. And then that kind of tooling, I'd rather not run myself, I think. Or it would be a separate server, just to make sure that when that one goes down -- like, I want an either/or. I don't want an and.
118
+
119
+ **Gerhard Lazu:** Yeah, for sure.
120
+
121
+ **Lars Wikman:** "Monitoring went down, and then... Production went down."
122
+
123
+ **Gerhard Lazu:** No, no, "Production went down. Let's check monitoring. Oh, dammit. Monitoring is down as well!"
124
+
125
+ **Lars Wikman:** Yeah.
126
+
127
+ **Gerhard Lazu:** Okay.
128
+
129
+ **Lars Wikman:** So now you need monitoring for your monitoring. And that sort of loops forever. And that's just infinity servers. That's no good.
130
+
131
+ **Gerhard Lazu:** Yeah. I'm still hung up on the SCP thing that you mentioned - like, how do you get those Erlang releases out there? Just SCP them. So it sounds very manual to me.
132
+
133
+ **Lars Wikman:** Yes. And that's not a part I love about it. So I've been looking at different tools that might sort of fit the trade-offs I like... Because it's not -- like, I need to find a tool that can do this. There are infinitely many tools that can do this. It's just like getting a file to a thing; you could do it with Git and WebHooks, you can do it with sort of GitHub's WebHooks, or GitLab's WebHooks, or you could do it as part of your CI/CD, or you could do this, or you could do that... And I would probably initially just set it up so that the CI/CD makes the call and shoves the release.
134
+
135
+ \[21:43\] Now, if you're on a dedicated server, how do you do a nice blue/green deploy, a rolling deploy? It gets a little bit more tricky then. And if you have two dedicated servers, like - okay, yeah, then you can do blue/green in sort of a traditional way. Something I want to explore is how to do a nice blue/green deploy on a single machine, minimally; and ideally, the application itself knows how to wrangle it. And I think I have two approaches that I'd like to explore, that I have not yet... One is straight up that the application tells iptables "No, no. Route that port to me now." And if it fails sufficiently, it will hand it back, or the other app will sort of see error rates and hand it back, or steal it back, I guess... Or just manually, I can switch it back by telling the app "No, you're the boss. That one failed." It depends on how sort of automated you want to be about it, but the point being, you have multiple versions of the application on the server, so you don't overwrite your previous one; that seems unnecessary. Being able to stand up an entirely new one, let it settle in, and then let it start taking on traffic, and maybe even taking on a subset of traffic.
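+
+ As a rough sketch of that "steal the port" idea (ports arbitrary, hardening ignored): the public port is a NAT redirect that a deploy script - or the app itself - flips between two instances running side by side on the same box:
+
+ ```bash
+ # The world talks to :443; the live instance listens on a high port.
+ # Route traffic to the green instance (port 4001):
+ iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 4001
+
+ # Start the blue instance on 4002, let it settle, then cut over by
+ # replacing rule 1 in place:
+ iptables -t nat -R PREROUTING 1 -p tcp --dport 443 -j REDIRECT --to-port 4002
+
+ # Rolling back is the same -R command pointing at 4001 again.
+ ```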
136
+
137
+ Another one, which is even sort of moving it one step further, is that I believe there are socket options you can use to share a socket...
138
+
139
+ **Gerhard Lazu:** Yep. REUSEPORT -- SO_REUSEPORT, that's the one, yeah.
140
+
141
+ **Lars Wikman:** So the new one will simply start getting the traffic, and the old one can be faded back into the background.
142
+
143
+ **Gerhard Lazu:** I think at that point you are writing your own orchestration layer in the app, right? Because that's what ends up happening. Like, how do you orchestrate a new release? And even before that, you still have to run your tests, you still have to get the dependencies, you still have to do a bunch of things, right? Maybe there's assets, static assets that you have to digest, and put them in the release.
144
+
145
+ **Lars Wikman:** Yeah, but that's on the build side, right?
146
+
147
+ **Gerhard Lazu:** Right.
148
+
149
+ **Gerhard Lazu:** So let's say you're using a CI, whatever CI you use, to do all these things; you end up with an artifact that's okay to get out to production. And then that's where the SCP comes in. SCP, as you say - okay, you could do that in the CI, to get it out there. You have a single host, so then you don't have to worry about having multiple hosts to get this file out. And if it doesn't get out to a bunch of hosts, how do you -- do you keep retrying? What happens if you consider it failed? What happens then? And then, when it's out there - well, what happens with that artifact? Maybe all you have to do is put it in a certain place, on disk, and there's something else which is watching, if there's like a new file, or a new directory, whatever the case may be - but most likely a new file - and then something else needs to happen. And if you are using hot code reloads, then it gets even more interesting, right? Because you have to have code that upgrades from whatever version is running to the new version, and that is not an easy -- you have to be very disciplined, is what I'm saying. It's not an easy thing to do.
150
+
151
+ So - okay, let's say the new version is running, and - what else needs to be aware of this new version being out? You may need to notify something. Again, before you know it, you have like a whole orchestration layer, which is split between your CI and whatever this thing is; some code in your app, for example.
152
+
153
+ **Lars Wikman:** The whole "notify something else" is probably what I'd consider -- when you hit that point, like "Oh, but there are other services, and they need to be notified when this goes out", and yada-yada-yada - then you are probably not a monolith anymore. And my approach definitely is aggressively monolithic.
154
+
155
+ **Gerhard Lazu:** I think that's a good one. But again, a monolith - I think it's a good idea, and I can see a lot of premature optimizations; people going to microservices, people going even to serverless... That in itself has like a whole load of things, operational concerns that people need to be aware of. And there is no free and easy lunch. You have to earn it, one way or the other. And a monolith has certain trade-offs, but it has a lot of things to like about it.
156
+
157
+ \[26:19\] We have been successfully running a monolith for a long, long time. However, there are external systems that the monolith needs to interact with. Your monitoring, your logging, your exception system. The monitoring is both part of the system, and an external system which monitors it. So do you notify it that there's a new deployment? There's a CI, there's like so many components. So even though you're a monolith, there's systems around it which enable the monolith. A CDN perhaps, maybe? I mean, that has its own concerns, and then how do you encode that knowledge? And I think that's where you had a very good blog post, "Fundamentals and deployment", that made you think about those things.
158
+
159
+ **Lars Wikman:** Yeah, I think that was one I was influenced by your conversation with Kelsey Hightower about, right?
160
+
161
+ **Gerhard Lazu:** That's the one, yeah.
162
+
163
+ **Lars Wikman:** You just want to bring up that you've spoken to Kelsey, that's what you want to do... \[laughs\]
164
+
165
+ **Gerhard Lazu:** No, no, no, because I think... I'm trying to get to the readme, and there's something that you wrote, which I really liked; you wrote "Human-scale deployments." And I think that's a very good way of putting it. Because even though the system is complex, it's not crazy complex. A human, a normal human can understand it, and a normal human can operate it. You don't need a team of humans to run this service.
166
+
167
+ **Lars Wikman:** Yeah. That's generally what I aim for. Some people really, really get excited about trying to solve problems at scale. I really, really don't like what I see of systems at scale. All in all, it tends to be sort of a big challenge of making layers upon layers upon layers of people, and tech, and bureaucracy interact in a somewhat useful way. And there are certain things you cannot do at a small scale.
168
+
169
+ For example, the post office system, or the power grid - there has to be large-scale coordination in place, and then there also needs to be a lot of smaller systems that play nicely within that larger one. But I'm not interested in solving like a 100-engineer or a 200-engineer problem, in general. I like small teams, I like small organizations, and I trust smaller organizations more. There's a lot of idealism in what I do, and I also optimize for my own enjoyment, which is why I'm not at like a FAANG, or whatever. I don't think I could be bothered to pass those interviews anyway, but... I'm looking for things that I think can work at a particular scale.
170
+
171
+ And sometimes small teams can run large things. For example, WhatsApp is a pretty good example of that. Now, I bet they had a lot of orchestration going on, because they had to, because they were at an immense scale. But they also did a lot of things that are not commonly done. For example, \[unintelligible 00:29:29.23\] code updates all the time. So I think there are atypical ways of doing almost anything, and you can make it work, and you can probably make it efficient. And I think you can find sort of a competitive advantage compared to other, more general-purpose organizations. Choosing Kubernetes today is probably not a competitive advantage, because it's so common. Doing well with Kubernetes, and sort of having a good org, and a good team, and all of that - that's a competitive advantage compared to companies that are doing Kubernetes poorly. But almost everyone is doing Kubernetes, so I guess there's no advantage to be found there.
172
+
173
+ \[30:12\] It's a little bit like -- I consider Elixir a competitive advantage for many companies, or a potential, at least, competitive advantage, compared to, for example, all the companies that run Java. You cannot win a competitive advantage by choosing Java, because that is not an outlier; it has no opinion, it is the most general choice. So there's not a lot of advantage you can glean there. But if you go sort of off the beaten path a bit, either because you go sort of "Oh, we're going to own all these details ourselves", or "We're just not going to bother doing half the work that everyone else considers critical." There's advantages to be found there.
174
+
175
+ For example, Apple likes to ship half-finished features and services... "No, no. We just removed a lot of buttons, and it's so simple, and so straightforward." Yes, but you could also add some options, so it's more flexible. But they don't. And I think that's part of their plan for sort of shipping more things, even though they're an incredibly large organization.
176
+
177
+ If you make decisions, and you sort of take chances and go in particular directions, I think that's where you can find interesting things. I don't have what I consider a complete plan for my operations. I don't have all the tools figured out that I would like to. I've got some recommendations for some nice tools for sort of picking up WebHooks and just running commands off of it... It was like "Oh, this is written in Go. It's going to be one binary. I could set it up with systemd, and it would run there." That could probably be what picks up my final artifact from CI/CD and puts it on there, and then I have some scripts to manage the deployment. But I don't have a final idea that I'm like "This. This is how it has to be done." And right now, often it's done manually, for my personal needs, because that's good enough. I script the most annoying parts.
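+
+ That webhook-plus-systemd idea can be made concrete. A sketch, assuming the open source webhook tool (a single Go binary) and invented paths and user names - a starting point rather than a finished setup:
+
+ ```bash
+ cat > /etc/systemd/system/deploy-hook.service <<'EOF'
+ [Unit]
+ Description=Webhook listener that deploys CI artifacts
+ After=network-online.target
+
+ [Service]
+ ExecStart=/usr/local/bin/webhook -hooks /etc/webhook/hooks.json -port 9000
+ User=deploy
+ Restart=on-failure
+
+ [Install]
+ WantedBy=multi-user.target
+ EOF
+
+ systemctl daemon-reload
+ systemctl enable --now deploy-hook.service
+ ```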
178
+
179
+ **Gerhard Lazu:** Would you use something like fly.io for your own stuff?
180
+
181
+ **Lars Wikman:** Sure. I wrote a newsletter just recently about opposing ideas. I can find cloud deployments and sort of the whole cloud-native space interesting, but also be more attracted to bare metal, dedicated server, keep it as simple and lean as possible. And I can't do both. I can never do both. I can try both, but I can't do both in the same thing and make any kind of reasonable progress. I can only go in one direction at once.
182
+
183
+ And similarly, if I'm launching a business venture, like if I'm building a product, there are different schools. It's sort of, "Oh, do you do the whole Basecamp thing, and like build a really thoughtful product, and you host it very carefully, and you run it in this particular way, and you design it very deeply, and you think about it a lot?" A lot of Mac software is sort of like that as well, where it seems like it has been thoroughly worked through. And then there's the other side, where it's like "No, launch first. Build the product later."
184
+
185
+ **Gerhard Lazu:** \[laughs\] Okay...
186
+
187
+ **Lars Wikman:** And if I was going for "I want to launch fast", I would probably pick Fly right now, for that kind of launch, because they feel like pretty much the new Heroku in that regard. I don't think they have quite as polished a system as Heroku, but they also are a lot more featureful than Heroku was. So they are making slightly different trade-offs. But that's sort of where I would pick them probably, where my concern is to get the product out more than exploring something technical about deployment, because I know enough about Fly deployments to just do one.
188
+
189
+ **Gerhard Lazu:** \[34:11\] And if you were to explore, what would you pick?
190
+
191
+ **Lars Wikman:** I have what I consider to be an art project I would like to try... And I think this one would be bare metal, but it could also be done in sort of an elastic-cloudy way. But that would be exploring Erlang hot code updates. I would like to build a system that has no persistent data store; as I mentioned, an art project, not a production type of relied-upon project...
192
+
193
+ **Gerhard Lazu:** Yeah... Reality has data. Data is a pain to manage.
194
+
195
+ **Lars Wikman:** But the thing is, I want data in the system. I want the system to be incredibly stateful, and I want people to join, and contribute, and everyone just has to deal with the fact that there is data in the system, and that mistakes cost. Content, data... It would be an interesting way of building something Twitter-like, or sort of fediverse style, where the system is up as long as it's up, and if we really screw it up, everything's gone. That's something I would like to explore, partially just because I want to figure out how hard it really is to do the hot code updates thing... Because everyone says, "Oh, don't go there. Don't go there. It's terrible."
196
+
197
+ **Gerhard Lazu:** Yeah.
198
+
199
+ **Lars Wikman:** But of course, it's a very interesting thing.
200
+
201
+ **Gerhard Lazu:** You just need a lot of discipline. You just need a lot of discipline. You have to write those transformations. "How do I go from this state to the new state?" And it's not just putting the new thing out there, it's like the transformations of whatever is running, it needs to migrate. The function calls, the message passing; all that stuff needs to be accounted for.
202
+
203
+ **Lars Wikman:** Yeah. You've done a decent amount of that...
204
+
205
+ **Gerhard Lazu:** Actually, no.
206
+
207
+ **Lars Wikman:** Okay...
208
+
209
+ **Gerhard Lazu:** But I worked with someone that did. I think this was episode nine, with Jean-Sébastien Pedron - we talked about release engineering, and I think in that context he shared - maybe; I don't remember whether we recorded this, but he was saying how he used to work on a team where they did use Erlang, and they did do hot code reloads, and it just required discipline. Because without that, it's like not writing the database migrations. Or writing poor database migrations that, for example, you can't roll back. If you screw up, that's it. You're done for. Now, obviously, the data in transit is very different to data which is persisted in a database. But still, at scale, you have a lot of data flowing through the system. And that's why it's easier to drain the nodes, and then start provisioning at the new capacity, especially if everything works as expected. And then you have the canary model, and then you scale out, and then you have like a transition period where you have two systems running effectively; it's a longer blue/green. But even then, draining things can take a really long time. So how long do you want your updates to take? If you have a lot of data flowing through the system, it can be a while. Then you have different strategies, and you can get creative anyway. So it's not an easy problem, and if the system is small enough for it not to need it, then don't have it. Don't have hot code reloads.
210
+
211
+ **Lars Wikman:** This sort of pins down some of the reasons I don't want to go into the whole Kubernetes land, and also why I'm probably never going to be doing hot code updates for a real project. Usually, you can just keep it simple, a lot simpler than sort of the recommended practices, perhaps; or recommended at scale. That's the tricky thing. It's like at a particular scale - yes, you should automate all the things, you should have tooling for everything... People should not be able to just poke about, and set custom things up as one-offs. That's not how things should run at large scale. But in many cases -- maybe your application can be down while you're doing an update. Maybe that doesn't matter at all.
212
+
213
+ \[38:07\] Most applications I've built have had times in the day when no one is using them, because people go to sleep... And they've been national, so they've been limited to one country. And it's like "Okay, yeah, maybe we cause some downtime for some person that's currently in Thailand and wanted to check something." Limited scope, limited impact.
214
+
215
+ And I think one of the reasons why we generally do sort of a high-availability approach, and like blue/green deploys, and all that, is that it's a comfortable trade-off. It's not that hard to keep the system up while performing an update, if sort of all of your state has its source of truth in a single database anyway, and all of that.
216
+
217
+ **Gerhard Lazu:** Yeah. Yeah. I know for me - and you've seen the first episode of this year, where we've been talking about Talos OS and the experiment which I'm running with it - I'm choosing a different starting point. And I have been doing packages enough, and updating packages, and things getting messed up, and all sorts of issues, operational issues of that nature, where I'm choosing to have a different starting point. I'm choosing to have an API. I'm choosing to have external ctl, or control, tools - CLIs that you run - and they interact with the system, rather than you being on the system and performing actions which are almost like one-offs. Now, that does not remove the need to have good, clear documentation about how things fit together. And that's something that we changed recently, where even like for Changelog we have a new infrastructure.md document in the repo, which explains how all the pieces fit together, what the pieces are... And again that has no automation. It's literally text, some diagrams, and some links, so that anyone can understand that.
218
+
219
+ And then there's also the contributing.md, to which we've added things around how to set up everything locally, so that you can do development. No automation - like, no Docker, nothing like that. Just the plain description of what the components are... "This is how you would install them manually, on a Mac. And by the way, as a Linux user, please contribute your way of doing it. But here is the manual way." And then how we choose to automate that is a tangent to the actual thing.
220
+
221
+ So going back to that, do you have something similar that describes how your systems are set up manually? What are the components? How do they interact? What to do, where...?
222
+
223
+ **Lars Wikman:** My last efforts towards something like that was when I was trying to sort of "This is how I want to set up all of them." And then the idea was mostly like "Oh, Bash scripts are pretty close to just the documentation itself."
224
+
225
+ **Gerhard Lazu:** That's what I thought of makefiles, by the way. I changed my mind... \[laughs\]
226
+
227
+ **Lars Wikman:** Someone came on the show and changed your mind.
228
+
229
+ **Gerhard Lazu:** Exactly. Yeah, he connected a couple of dots. Also, I've been running them long enough to understand the trade-offs which I'm making, so it's a combination of things.
230
+
231
+ **Lars Wikman:** That's the tricky bit, I think... Because to me, if something is implemented in an Elixir app, so the app sort of manages itself - that's easy to read for me, that's easy to reason about. I know how Elixir apps work. Bash scripts, I'm okay with; makefiles, I have a hard time reading. They don't flow like typical scripts do. And then there's like - oh, trying to get started with k3s and sort of reading the YAML required... Ooph. It's just something that, if you're doing it day in day out, you have no real problems with; that's absolutely learnable. But building with the tools you know is probably what I'd recommend most people do. It's like "Oh, you're in the JavaScript ecosystem. Learn how to do WebPack." I don't know how people deploy things in JavaScript land... It's like, throw it on Vercel, or something. I don't know.
232
+
233
+ \[42:25\] When I was in the Python ecosystem, it was like "Oh, I want to talk to a server." Well, Ansible is actually written in Python, so there's some synergies there. I don't get super-confused when some Ansible package breaks. It's like, "Yeah, yeah, Pip... Let's Pip. Just Pip things." And similarly, when I need to talk to a server, Fabric is a decent way of doing that in the Python space.
234
+
235
+ And if you're doing Go and you're not into cloud-native, I don't know what you do, because it seems like there's tons of tooling for everything written in Go, but it's all for the cloud-native space. But I think staying pretty close to whatever culture you're in, or whatever your comfort zone is, is a decent way of keeping things understandable. But when you need to transmit the knowledge, it's like, can you write it down? I think it's better if there is an implementation in the language of the system, than if there's like "Oh, and here we have this entirely separate language that is only used for the deployment bits", because most people won't be poking the deployment bits all the time. That's actually a project that Saša Jurić of Erlang and Elixir fame explored a bit, trying to build the CI/CD tooling in Elixir.
236
+
237
+ **Gerhard Lazu:** Interesting. Do you remember the name?
238
+
239
+ **Lars Wikman:** I think the project is called CI, under his GitHub, but I don't think it has continued. I think he just ran out of bandwidth, and it potentially became apparent as well...
240
+
241
+ **Gerhard Lazu:** Yeah. That changes things a lot, for sure. Okay.
242
+
243
+ **Lars Wikman:** That's actually one of the reasons I got curious about what you were saying about Dagger, and having sort of this base support for building SDKs for different languages... Because I would like to write Elixir for my CI/CD.
244
+
245
+ **Gerhard Lazu:** Interesting. Okay.
246
+
247
+ **Lars Wikman:** Because I'm very proficient in Elixir, so...
248
+
249
+ **Gerhard Lazu:** Yeah. I think there's something really interesting there, because you're right, you need to have an interface that you're comfortable with, and I don't think YAML is it. I don't think YAML -- well, I mean, we just make do with it, and it really requires a declarative system. Because if you try to program in YAML - which, by the way, you can; you should never do that, but you can... It'll be a very different experience.
250
+
251
+ So YAML - great for declaring a state of the world, but then there's all sorts of transformations that need to happen, all sorts of functions need to be called at different points in time. That YAML basically has to be reconciled into something useful, which is what Kubernetes is, in a nutshell. Okay, I'm oversimplifying it, but you tell it what you want it to do, and as if by magic, it does it. And I understand the reluctance to trust that magic; because if it breaks - and sometimes it does - what the hell do you do? Like, I told it what to do, and it didn't do it, so... What happens now?
252
+
253
+ **Lars Wikman:** And if my job was managing a complex system day in, day out, and not mostly developing the system, if I could spend most of my time on the operations part, then Kubernetes might also make more sense, because then that's a tool that gives me a lot of capabilities, and I can spend my time learning to be very proficient in that. And eventually, I might run into a project where it makes sense for me to just learn Kubernetes. And after that, I might be one of those people that just, like, "Oh, I need to set up a static page blog. I'll do that with Kubernetes. Home Lab! Here we go!" But for now, I really like --
254
+
255
+ **Gerhard Lazu:** You SCP.
256
+
257
+ **Lars Wikman:** \[46:19\] It all boils down to what I'm comfortable with. Like, I've done Linux since I was a teenager, so I know how to do Linux.
258
+
259
+ **Gerhard Lazu:** What about systemd? Are you okay with systemd?
260
+
261
+ **Lars Wikman:** I'm getting okay with systemd.
262
+
263
+ **Gerhard Lazu:** \[laughs\] I know... That's such a hard thing... runit, please. Can I get runit back? That was my favorite supervisor. It was so simple. That was like the pinnacle of supervisors for Linux systems. And then systemd came along.
264
+
265
+ **Lars Wikman:** It seems very capable, I'll say that. And perhaps in ways that would be hard to replicate with like init.d files, and scripting all that on your own. It seems very capable.
266
+
267
+ **Gerhard Lazu:** My systemd is your Kubernetes. Not gonna happen though... \[laughs\]
268
+
269
+ **Lars Wikman:** Yeah... But, I mean, systemd under Kubernetes makes very little sense, I think.
270
+
271
+ **Gerhard Lazu:** Yep.
272
+
273
+ **Lars Wikman:** It's like, no, your containers do not need systemd.
274
+
275
+ **Gerhard Lazu:** And there you have it. I wanted to avoid systemd so badly, that I switched to Kubernetes... \[laughs\] Because I was shocked by all the horrors that would happen in systemd. And good luck figuring out those units...
276
+
277
+ **Lars Wikman:** You wanted systemd at a multi-machine scale. That's what you wanted.
278
+
279
+ **Gerhard Lazu:** Exactly, yeah. Exactly.
280
+
281
+ **Lars Wikman:** Have you poked around with other non-Linux operating systems, like the BSDs, and things?
282
+
283
+ **Gerhard Lazu:** Yeah... I used to run FreeBSD for the best part of the last decade. It's interesting. Jails were interesting. Solaris zones - I only worked on a project for maybe three or six months that was using it. They seemed very complex, Solaris zones, like from the outside. There was like a lot of stuff that was like "Why do we need to do this?" And then containers came along, and that just basically solved a lot of those issues \[unintelligible 00:48:03.17\] in Linux. Cgroups, and containers, and then obviously Kubernetes, so scheduling...
284
+
285
+ To be honest, I understand the appeal of using something that you're comfortable with. Something where you're like on a trajectory, and you've been on that trajectory for a really long time. You mentioned Linux - it does most of what you need. Of course, some parts are not perfect, and you're not happy with them, but is there any system that you're completely happy with? Not really. There's always like little things which are annoying. But with time, you get to live with them, and then everything is okay. So why would you change something that's working well for you?
286
+
287
+ **Lars Wikman:** Yeah. And for me, that's sort of not really a question, because I always explore new things. I don't really ever fix, like "This is how I do things." I do that for a project, for a time, like "Okay, this is how WE do things." That doesn't mean that's how I do things. I do those things in that context. But whenever I'm starting a new project, I probably have a new idea about how I want to deploy it. I try to stay close to the previous one, just so I can keep reusing some of the tools, I guess. But overall, it's like, I want to figure out new things, I want to learn new things, I want to try things, and regret them. Otherwise I don't learn.
288
+
289
+ One reason I asked about the BSDs is I've gotten some really good, fun input about operational stuff from one person that reads my newsletter, and he works on FreeBSD, so he contributes to FreeBSD, and I think he works on CouchDB as well. It might be someone you should have on the show, DCH Dave Cottlehuber. I hope I said that right. Because I believe he runs a ton of operational stuff for people. I think last time I spoke to him he also mentioned that one of his recent projects was saving a company from Kubernetes.
290
+
291
+ **Gerhard Lazu:** \[50:06\] Interesting.
292
+
293
+ **Lars Wikman:** So that could be an interesting conversation...
294
+
295
+ **Gerhard Lazu:** Interesting, yeah.
296
+
297
+ **Lars Wikman:** And I know he's super-comfortable with all of the networking that I rarely touch; he's done sort of operations at a scale I've never had to. And he chooses FreeBSD. I think one of the reasons is that it's simpler in many ways than Linux. I get that impression from people that choose the BSDs, that it's generally more understandable. I poked a few BSDs in my teens, and they were more well structured, I think, overall, and the tooling was generally more annoying. It's in many ways similar to whenever you go off the beaten path, it's like "Yep, here you've got to learn things."
298
+
299
+ **Gerhard Lazu:** Yep. CSH - Oh, my goodness me. I mean, that thing is just -- like the shell, the default shell on the FreeBSD systems that I was using... And it's just behaving in an unexpected way for someone that's familiar with Bash, or ZSH, or even Fish.
300
+
301
+ **Lars Wikman:** Yeah. I think going off the beaten path is good and useful, but I think it also adds up... So if you spend all of your time and effort way off the beaten path, you're gonna get yourself in trouble. Or you're gonna build absolutely brilliant systems that no one else can work with.
302
+
303
+ **Gerhard Lazu:** That's the problem, isn't it? Works of art, as you mentioned. It's a work of art. It's amazing, but no one knows about it.
304
+
305
+ **Lars Wikman:** NixOS, and PureScript, and Haskell all the way...
306
+
307
+ **Gerhard Lazu:** Yup. OCaml, don't forget about that. And a couple others. Zig...
308
+
309
+ **Lars Wikman:** It's like Fortran, where it needs to go fast...
310
+
311
+ **Gerhard Lazu:** Yeah. With a bit of COBOL to keep everything together. As we prepare to wrap this up, Lars, any one takeaway that you would like our listeners to have from our conversation?
312
+
313
+ **Lars Wikman:** I think don't worry too much about sort of the popular tooling right now. Use something that you know that you can make work, and mind how many new things you introduce... Because whenever you bring in new things, you are challenging yourself, and you probably should challenge yourself on a regular basis, but don't do all the challenges at the same time.
314
+
315
+ **Gerhard Lazu:** Excellent. Well, thank you very much for today. This was good. There's one more year that goes by, that we have similar conversations... And I'm very curious to see what happens next time.
316
+
317
+ **Lars Wikman:** Let's see if I change...
318
+
319
+ **Gerhard Lazu:** Yeah, exactly. Or if I change. It has happened... Until next time, Lars. Thank you.
320
+
321
+ **Lars Wikman:** Thanks for having me.
Kaizen! Embracing change 🌟_transcript.txt ADDED
@@ -0,0 +1,839 @@
**Gerhard Lazu:** Change is constant, and the one thing, the one lesson which really helped me was to not fight it, but embrace it. Some may think, "Oh, this sounds very agile-ish, and I thought we were post-agile", but this is the one constant, right? Change will always happen. And if anyone has been paying attention to the world, things have changed so many times in the last couple of years. So that's the one thing that will always be constant - change. So with that in mind, me embracing change and change being constant, I'll be taking a break from Ship It after this episode.

**Adam Stacoviak:** That's a gut punch...

**Gerhard Lazu:** It is a little bit... \[laughter\] But that's why I want to make it sound as positive as it can be, because it is. So if you remember when we started, I was experimenting so much, and trying so many things, crazy ideas, like "Let's use Kubernetes for Changelog." Remember that one?

**Jerod Santo:** I do recall. I do.

**Adam Stacoviak:** For sure.

**Gerhard Lazu:** And then Jerod came and said "No, let's use Fly", and we tried that as well. So we were experimenting quite a lot before Ship It, or I was experimenting quite a lot before Ship It. And then, Ship It was taking more and more of my time, to the point that I was rushing from one thing to another thing, to the next episode, the next episode... And I had less time to experiment. So I would like to do more of that.

**Jerod Santo:** More experimenting, less shipping of Ship It.

**Gerhard Lazu:** Less shipping of Ship It episodes, yes. That's right. But definitely shipping. So things will still continue changing on the Changelog side; the improvements will not stop. And if anything, a couple of other areas are already picking up, like Dagger, for example, for me, which means I need more of my headspace, and more of my A game for that thing.

**Jerod Santo:** Embracing the change. So the big Why, if we say why in general, is that you were stretched too thin to do the experimentation that you love, and you need some headspace. Dagger taking off, taking over, and Ship It being very much your passion project, a side project for you... It had some financial stability, but was never going to be - or at least in its current form, not going to be a full-time thing... And something had to give, because you were burning at both ends, and we don't want you to burn out. And so there you have it.

**Gerhard Lazu:** That's right. I was checking myself, basically... And it's really important to know when to stop and what to stop. And to know how to rearrange things. And everything is temporary. I think that's something that is worth emphasizing. Nothing will last forever, not even us.

**Jerod Santo:** Right.

**Gerhard Lazu:** But hopefully, we've had some great time together. More amazing things will come, because this is not the end of it. It's just a pause, and we don't know how it will continue, in what shape or form... Nothing wrong with the approach, but we can improve on it some more. Some video would be nice... There are so many videos that we shot in the last two years since we started Ship It, but we published very few of those. Like working with various people, experimenting... But we never had time.

I remember episode 33, Merry Shipmas; recorded with the Upbound folks, recorded with the Dagger folks at the time, because I wasn't part of Dagger back then... And the third thing was Parca. We were profiling our app, and everything was running in Kubernetes at the time, to understand where the CPU time is spent. And Parca improved so much since, but we haven't installed it in the new world, which for us is fly.io. So that's maybe one thing worth bringing back. I don't know. We'll see. But I know that we have many more ideas of things to improve. So small bets; more small bets. More trying things out, seeing what sticks, and embracing change.

**Jerod Santo:** So this is episode 90. So you made it to 90 episodes before this hiatus, this pause, so congrats on 90 episodes. Most podcasts do not make it that far even. Unfortunately not 100, which would have been a coup de grâce; it would have been perfect.

**Gerhard Lazu:** However, if it had been 100, it would have felt more like the end. And this is not the end, right? So 90. Like, who stops at 90? Obviously, something else is going to come after 90. It's not a natural place to stop. 100 would be like "That's it. The book is done."
**Jerod Santo:** Right. We would call it a grand finale, and you would sail off into the sunset. Well, for me - of course, embrace the change. I'm a little bit sad. I know we have a lot of listeners who truly love this show. It's a unique show in our catalog, in Changelog's catalog. You talk about things that we don't talk about elsewhere, in ways that we can't talk about... And so, of course, we will miss it. For me, selfishly perhaps, my favorite episodes are divisible by 10. I like the Kaizens, maybe because I get to listen to myself... No, that's just a joke. I just enjoy catching up with you, and...

**Gerhard Lazu:** \[06:28\] Not a joke. \[laughs\]

**Jerod Santo:** No, I do like it. I'm starting to like it.

**Gerhard Lazu:** You have a nice voice, Jerod. That's what it is. Let's be honest.

**Jerod Santo:** It's not what I say, it's how I say it. No, I'm really joking.

**Gerhard Lazu:** It's how you hear it.

**Jerod Santo:** Yeah. \[laughter\] It's not my voice that's great, it's the things I'm saying. That's the best. Just kidding. But I love our Kaizens. If the interviews never came back, I could get over it. If the Kaizens never continued, I don't think I could get over it. So we don't know exactly what's coming next, but I think Kaizen needs to continue to be a thing that exists in our world. And we don't know what form that's going to take; maybe it'll be on the Changelog, maybe it'll be on some show that doesn't exist yet... Maybe it'll just be a show called Kaizen. I don't know. But we don't want to lose you entirely, Gerhard. We want you to continue to experiment, and push forward our operations here, our platform, pushing us into new things so we can learn along the way, and sharing that - at least the navel-gazing part of Ship It. What do you think?

**Gerhard Lazu:** I love it.

**Adam Stacoviak:** Yeah.

**Gerhard Lazu:** If you remember, one of the ideas for the show title before Ship It was Kaizen.

**Jerod Santo:** Right.

**Gerhard Lazu:** That's how -- it's so embedded within me... I mean, I never see myself stop doing that. And the fact that we can talk about it - I think it's great. The cadence makes sense. It fits with everything.

**Jerod Santo:** Right. And in fact, your idea to us, your pitch for this show was basically just the Kaizen stuff. And I said, "Nobody wants to listen to us talk about our platform every week. We need to mix in some interviews." And so that became Ship It. It was the interview shows, and then I thought you picked a pretty good cadence, of every ten episodes, every two and a half months... Almost quarterly, but using the episode numbers brilliantly to map out a Kaizen episode that made sense. I think if we would have come out and done a weekly Kaizen with us three, I don't think it'd be the show that it has been. And so I think that was a good collaboration by us, to realize that. But also, you were definitely on to something in terms of an enjoyable format that people do like to follow and say "These crazy guys just air their dirty infrastructure laundry, right here on the air, for us to learn from." And I think that's cool.

**Gerhard Lazu:** Yeah, I think so, too. And I really liked the new GitHub discussions... I mean, we had the one for Kaizen eight; now we have discussion 40 for Kaizen nine, which is this episode... And it captures all the things. I think that works really, really well. You have the written format, you have it in GitHub, you have pull requests, issues, all things connected... I think it's something worth celebrating. And while we don't ship only once every two and a half months, because that would be crazy, we do talk about the highlights. And I think that is a nice forcing function to always keep moving forward. Always keep improving. It keeps reminding us of what we've accomplished.
**Jerod Santo:** Adam, do you wanna chime in here? You've been nodding along, but you haven't said anything.

**Gerhard Lazu:** I think he's too sad.

**Adam Stacoviak:** I am a little too sad, honestly. I was having trouble coming up with words, because you know, ending is always challenging. I guess pausing is a little easier. But it's bittersweet for me, because there's a lot to like about it, obviously, and there's a lot that came from our deeper relationship, and everything... But I'm also about quitting when it makes sense. The Dip from Seth Godin was, by far, one of my favorite books in terms of self-development. And that book isn't really about quitting necessarily (I guess it might be), it's about knowing the right time to quit, I suppose; or to pause something. And that's a challenge, because too often we'll push ourselves beyond our limits, and things break. Sometimes those things that break are really important to us, and that's called regret. And so none of us want to live with regret. I don't want you to live with regret. I want to do great things together, but not at the expense of the things that are important to you and to us. And from a listenership standpoint, I would love the listeners to come to this and say, "That's really awesome, to know when to pause."

\[10:38\] I mean, for a while there I had to pause Founders Talk, and other things that were way back in the day, to make sure that we could focus on the Changelog podcast. A couple years back Mireille and I paused Brain Science because it was just too fast of a clip for us; we were both really busy... We're still in the midst of bringing that show back, and we have great ambition and great plans... But you have to look at what you're capable of, and what you want to achieve, and pair the two up, and say, "Is this sustainable?" And if it's not, be wise and put your no down. Because too often do we say yes when we should just say no.

**Gerhard Lazu:** 100%.

**Adam Stacoviak:** On the note of more video stuff though, and this experimentation, and this Kaizen... It sounds like what we really wanted from this was the experimentation and the freedom, and then the cadence of the actual podcast... Which, I agree, a weekly podcast is incredibly hard to do. If you're listening to this right now - anybody who's shipping a show weekly, for years, they're not quite superheroes, but they're darn close, because it takes a lot to show up every single week and do something that is worthwhile. And if you have a growing audience, like we've had... And this show has been part of that. That's a big, big challenge.

However, even on today's topic - DHH, and cloud, that conversation out there, this backlash against the cloud... That show was great, by the way. I loved that episode. But in terms of experimentation and videos on YouTube, I would love to see - because you don't have to have a rhythm; you can just do it when you want - a deep-dive or a peek behind the veil of their non-cloud cloud; their own infra. Like, what does that mean, to stand up your own infrastructure? ...and just have a 20-minute DHH screen-share with you, and you guys hammer it out for 20 minutes. That'd be cool for me, every couple months. Nothing that's weekly; just something that's like "Show me behind the screen. Give me a peek at your infra. What are your choices, why'd you make them? How does it work?" etc. That'd be cool to me. And with no necessary cadence; just whenever it makes sense. And that kind of fits into your desire to explore. Because you're an explorer, Gerhard, you know? You like to push the boundaries, to be out on the edge... But I think this show may have limited you from doing that, potentially.

**Jerod Santo:** Adam, you just said behind the screen. Was that a slip of the tongue, or are you workshopping a new title scheme? \[laughter\]

**Adam Stacoviak:** You know, always, Jerod. Always.

**Gerhard Lazu:** I like where this is going... \[laughter\] Behind the keyboard.

**Jerod Santo:** Have you done that on purpose, or...?

**Gerhard Lazu:** Not away from the keyboard; behind the keyboard, behind the screen, behind the camera.

**Jerod Santo:** There you go. So that's the big news. That's probably a surprise to most, if not all, Ship It subscribers. A lot of these people listen to Ship It every week, and they just heard this, and they're like "Well, that sucks for me." Touchpoints - we're talking about potential experimentation; how can they stay plugged in with you, what you're doing, and maybe with the future of the show... Obviously, don't unsubscribe from your feed reader, unless you're a super clean freak, because there might be new things getting published into the feed. Just go ahead and let it go inactive, and if we ever publish here again, you'll just automatically get them. So I'll say that much myself: subscribe to the Changelog; it probably would be a good idea. I'll just throw that in there as shameless self-promotion. But for you, Gerhard - how can people who want to stay connected with you personally, beyond Ship It, where should they go?

**Gerhard Lazu:** \[14:14\] Yeah. So I'm still on Twitter. It's still a thing. I'm on Changelog.social, even though I haven't tweeted anything yet, if that's the thing to do there...
**Adam Stacoviak:** Tooted.

**Gerhard Lazu:** I haven't tooted, there we go. Sorry.

**Adam Stacoviak:** You toot there.

**Gerhard Lazu:** See? I'm not up to date on all these things, so I think that's an area worth improving.

**Adam Stacoviak:** No one wants to be up to date with that word.

**Gerhard Lazu:** Yeah. I'm still very much on the Changelog Slack, on the Changelog GitHub... That's where I intend to spend more time, since this whole Kaizen thing behind the scenes for Changelog is not going to stop. We'll still be improving things; there are pull requests, there are issues, there are all sorts of things happening there... Maybe even discussions. I mean, we had this second GitHub discussion, where everyone is welcome to participate, where we're talking specifically about what we are going to improve about Changelog. I'm not sure how Chris Eggert knew to jump in and help out, and do that improvement, or Jarvis Yang, and there's a couple of others. Or Noah... How Noah Betson knew how to do this, and a couple of others. But this is still going on. We are still on GitHub; we're still doing things. We're still on Slack, on the Changelog Slack. So we're still there; it's just the show, the weekly cadence - we are pausing that until we figure out, or I figure out, what comes next... Which would still be with listeners, with people... I really like Adam's idea. It's closer to what I had in mind a couple of years back. And I'm craving to experiment more, and to only put an episode out there - maybe in a different format - when it's ready. That doesn't mean once a year, but it means less than once a week. So between once a week and once a year - the sweet spot is somewhere in there, which I have yet to discover.

**Jerod Santo:** There you go. So not continuous delivery, but some sort of delivery...

**Gerhard Lazu:** Not of episodes, because there are so many other things, right? I mean, it has to be meaningful. I remember, for example, the Merry Shipmas, episode 33. That took a lot of early mornings, late nights and weekends. I have no idea how I could make time at that point for it. It was crazy. I no longer have that time now, which means that I can no longer do those things, which means that it's all in the episodes and the few hours here and there, which is just not making me happy. Anyways... We are improving that.

**Jerod Santo:** Right.

**Adam Stacoviak:** It might make sense to say how we got here, which - if you've listened to this show since the beginning, you know kind of how we got here... But how we got here originally was that you, Gerhard, were our SRE for hire, essentially. You helped us stand up our infrastructure way back in 2016, when --

**Gerhard Lazu:** That's correct.

**Adam Stacoviak:** ...when Jerod was exploring delivering and deploying an Elixir application to production. I'm paraphrasing the story, of course, but how we got here was by shipping, and we would talk about that once a year on the Changelog podcast. We liked doing that so much... We're essentially just regressing back to the original blueprint, right?

**Jerod Santo:** Not once a year, though. More than once a year.

**Adam Stacoviak:** Well, maybe less than once a year, but back to the blueprint of: you're still working with us on our infrastructure; that's not changing. We're still gonna keep improving that; that's not changing. We'll keep developing partnerships. One of the ones we've formed recently was Typesense. Behind the scenes Jerod and Jason Bosco are hammering out some cool stuff with Typesense for our search, and that's so cool. But these things are gonna keep continuing; we're gonna pause the podcast, essentially. The extra is changing, and we're regressing back to normality. The opportunity to put your explorer hat back on, put a smile back on your face, and leverage your time wisely.

**Gerhard Lazu:** \[17:49\] Exactly. That's exactly right. And in a way, we are kind of going back to the beginning from the shipping side of things, because we have a huge improvement that went out in the last two and a half months... And there's even more amazing stuff coming out in the next two and a half months, so in the next Kaizen time period. And it means that I will have more time to do a better job of that; focus more, do more... And obviously, for me that means CI/CD as code. So we are going back to the initial idea of "Hey, how do we get Changelog out there?" For example, back in the day it was Docker, for deploying on Docker Swarm, running on Linode, set up with Terraform. Or was it Ansible? I think it was Ansible.

**Jerod Santo:** It was Ansible and Concourse CI.

**Gerhard Lazu:** There we go. Concourse CI. Exactly. So in a way, we are back there, right? It's the continuation of Concourse CI, it's the continuation of that... There is a PaaS now, which is Fly... But again, it's going to be a lot more. Integration with services... And I know that Jerod is missing certain things... And stuff is coming, but for that, we need more time.

**Break:** \[18:59\]
**Jerod Santo:** So describe to us this big update, this big improvement that you did over the last two and a half months. I think we touched on it in Kaizen 8, but it wasn't finished... Now, this was Dagger version 0.3, I believe... First of all, explain what the improvement is, and then you can get into what you had to do to pull this off, and where it's going from there.

**Gerhard Lazu:** So Merry Shipmas - I keep coming back to that, episode 33 - we introduced Dagger in the context of Changelog. What that meant is that we were migrating from Circle CI to GitHub Actions. Rather than trading one YAML for another YAML, I thought "Wouldn't it be nice if we had CI running locally first, and remotely next?" And remotely would be via a very thin interface. That interface was Dagger. You can run it locally, you run it in whatever CI you have, invoking the same command, and the same things will happen, because your CI - and I mean the actual CI operations - now runs in containers. That was November 2021.

Beginning of 2022 I joined Dagger. We did a lot of improvements, and at the end of last year, which was just a few months ago, we released SDKs, which means that you can write your CI/CD system, your pipelines, in code. Whether it's Python, whether it's Go, whether it's Node.js - it's no more YAML, no more weird things, no more configuration languages that some perceive as weird... It's the code that you know and love. So what that means is that now you can write proper code that declares your pipeline, all the things...

\[21:56\] And I say "declares" because it's lots of function calls. Sort of like lazy chaining, which eventually gets translated into a DAG - a directed acyclic graph - hence Dagger, the name. And then, everything gets materialized behind the scenes. Some things are cached, naturally, other things aren't.

So that means that right now we are in the phase where, from Dagger 0.1, which was using CUE, we now have Go in our codebase. And I want to know how you feel about that, Jerod. How do you feel about having your Elixir spoiled (hopefully not) by some Go code?

**Jerod Santo:** No, I feel good about it. I feel like a renaissance man. We have all these different things; we taste of the best Elixirs, and we also can just pull in some Go when we want to... I mean, that's diversity, that's inclusion... I'm happy about it.

**Gerhard Lazu:** That's amazing. So no more YAML...

**Jerod Santo:** Also happy about that...

**Gerhard Lazu:** No more CUE... No more makefiles.

**Jerod Santo:** I was going to learn CUE. I don't have to learn CUE now.

**Gerhard Lazu:** Exactly. You have to learn Go...

**Jerod Santo:** No more makefiles. Zero makefiles.

**Gerhard Lazu:** Yup.

**Jerod Santo:** Now you got me.

**Gerhard Lazu:** Yeah. The top one went, and the others will disappear from the subdirectories as well when we finish the migration. So there's no more top-level makefile.

**Jerod Santo:** Okay, so where do I go? I look for a .go file - it's in there somewhere - to look at what's going on.

**Gerhard Lazu:** So everything Dagger-related is in the magefiles.

**Jerod Santo:** Okay. And Mage is Go's version of make, or rake, or like a task runner thing?

**Gerhard Lazu:** It's just to invoke things, just to have different entry points... So for example, right now we have three entry points. The first entry point is the Dagger version 0.1 legacy one, where we can run the old pipeline. 0.1 and 0.3 - that was one PR. So we had PR 446, where we run the Dagger 0.1 pipeline, the CUE one, and 0.3 using the Go SDK. So the entry point is Dagger version 0.1, :shipit. And that wraps the old pipeline.

There's also a new one - again, this is Mage, so it exposes... I mean, you can think of those as subcommands. It all bundles up into a binary, and it has different subcommands. And if you don't provide any command, it'll show you "Hey, you can run these things." That's in essence what it is.

So we have image as a namespace, and runtime. So we can now build the runtime image using Dagger version 0.3. Not only build it, but also publish it to GHCR. And that is pull request 450. So now we are building and publishing the Changelog runtime image to GitHub Actions - sorry, using GitHub Actions, or within GitHub Actions - via a very thin Dagger layer. And all it does is basically just go run: go run, the main Go file, and the command is image runtime, and off it goes to GHCR. So if you go to ghcr.io/thechangelog/changelog-runtime, you will see our image in all its beauty. What does that mean? It has a very nice description; we're making use of certain labels that the Open Container spec has. There's a specific label that shows the description in GHCR.
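
To make those entry points concrete, here is a minimal Mage sketch - a sketch only, assuming Mage (magefile.org) as the task runner; the `Image` type and the stubbed `Runtime` body are illustrative, not the repo's actual code:

```go
//go:build mage

package main

import (
	"context"
	"fmt"

	"github.com/magefile/mage/mg"
)

// Image groups image-related targets; Mage exposes them as `image:<target>`.
type Image mg.Namespace

// Runtime is where the Dagger Go SDK code to build and publish the runtime
// image would live; stubbed here so the sketch stays self-contained.
func (Image) Runtime(ctx context.Context) error {
	fmt.Println("building + publishing the runtime image via Dagger...")
	return nil
}
```

Running `mage` with no arguments lists the available targets, and `mage image:runtime` invokes the namespaced one - which is what keeps the GitHub Actions side little more than a `go run`.
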
**Jerod Santo:** So GHCR - that's GitHub's deal, right? That's their registry.

**Gerhard Lazu:** GitHub's Container Registry. That's it.

**Jerod Santo:** Okay. I haven't used this before, so I'm a newb here. I'm used to Docker Hub. So this is like GitHub's version.

**Gerhard Lazu:** Exactly.

**Jerod Santo:** Oh, I'm looking at this Changelog runtime, and it has an emoji next to it...

**Gerhard Lazu:** How beautiful is that? \[laughter\]

**Jerod Santo:** Gerhard got some emoji in there... So you're already talking my language...

**Gerhard Lazu:** Elixir version 1.14.2, so you see the description... I mean, you can see the version that we use in the actual tag... And that's what we're using in production right now. That went out this weekend.

**Jerod Santo:** Okay.

**Gerhard Lazu:** So we're using that runtime image.

**Jerod Santo:** Okay. And this was built via Dagger, inside GitHub Actions?

**Gerhard Lazu:** That's right. Yup.

**Jerod Santo:** Okay.

**Gerhard Lazu:** And you can also run it locally, if you want.

**Jerod Santo:** When you run it locally, are you running it inside Dagger? What's the terminology here?

**Gerhard Lazu:** \[26:04\] Okay, so you're running it -- it runs Go on the outside, and it provisions a Dagger engine inside Docker... Because if you have Docker, it needs to provision the brains, if you wish, of where things will run... So by default, if you have Docker, it knows how to provision itself. When the Dagger engine spins up, all the operations run inside the Dagger engine. The really cool thing is, if anything has been cached, it won't run again. So imagine our image - when we build this runtime image, obviously we have to pull down the base one, which is based on the hexpm image, and that's from Docker Hub; then it needs to install a bunch of dependencies... And by the way, all that stuff - I mean, if you look at... I have to show you the code. This is too cool, Jerod. Check this out. So if you go to pull request 450, and you look at the magefiles - image, image.go - look at lines 50 to 61.

**Jerod Santo:** `build.Elixir().WithAptPackages().WithGit().WithImagemagick()`. So this is like a chain of function calls that you've named nicely...

**Gerhard Lazu:** That's it. And you can mix and match them in whichever way you want. So when, for example, we convert the rest of our pipeline to Dagger 0.3, we'll do build, we'll take Elixir, with packages, and whatever else we want. And when we want to publish the image, we can chain the function calls, again, however we want. For example, we do not want "with Node.js" when we publish our image, but we do want "with Node.js" when we build or compile our assets. So this way, we can chain all the functions, get all the bits from the various containers, various layers, assemble it, and make sure that all dependencies will be the same. Because "with Node.js" knows exactly which Node.js version we use; and it doesn't matter where you call it from. And because all the operations are cached, they won't rerun. Some of these can take a really long time, by the way... Anyway, I'm super-excited about this. And by the way, Noah, if you're listening to this, I'm very curious to know how much easier it is to bump our dependencies with the new approach.
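
As a rough sketch of that chained-builder pattern with the Dagger Go SDK - the helper names mirror the ones Jerod read out, but the base image tag, package choices, and label value are illustrative assumptions, not the repo's exact code:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

// build wraps a Dagger container so steps can be chained fluently.
type build struct {
	c *dagger.Container
}

// Elixir starts from a pinned hexpm base image (tag illustrative).
func Elixir(client *dagger.Client) *build {
	return &build{c: client.Container().From("hexpm/elixir:1.14.2-erlang-25.2-debian-bullseye-20230109-slim")}
}

// WithAptPackages installs OS packages; each chained step is content-addressed,
// so unchanged steps are served from cache instead of re-running.
func (b *build) WithAptPackages(pkgs ...string) *build {
	b.c = b.c.
		WithExec([]string{"apt-get", "update"}).
		WithExec(append([]string{"apt-get", "install", "-y"}, pkgs...))
	return b
}

func (b *build) WithGit() *build         { return b.WithAptPackages("git") }
func (b *build) WithImagemagick() *build { return b.WithAptPackages("imagemagick") }

func main() {
	ctx := context.Background()
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Mix and match only the steps this image needs; no Node.js here.
	runtime := Elixir(client).WithGit().WithImagemagick()

	// An OCI label like this is what GHCR renders as the image description.
	ref, err := runtime.c.
		WithLabel("org.opencontainers.image.description", "Changelog runtime image (illustrative)").
		Publish(ctx, "ghcr.io/thechangelog/changelog-runtime:latest")
	if err != nil {
		panic(err)
	}
	fmt.Println("published:", ref)
}
```
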
**Jerod Santo:** I was just going to ask that, because I'm looking at line 16; it says elixir version equals, and then it's a string, 1.14.2.

**Gerhard Lazu:** That's it.

**Jerod Santo:** Can I just change that string?

**Gerhard Lazu:** That's it.

**Jerod Santo:** And that's it?!

**Gerhard Lazu:** That's it. Change the string, commit and push, and the CI will take care of the rest.
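
In other words, a single pinned string like this (the constant name and package are illustrative) is the whole interface for a dependency bump:

```go
// Sketch: the one pinned version the rest of the pipeline derives from.
// Bump this string, commit, push - CI rebuilds and republishes the image.
package image

const elixirVersion = "1.14.2"
```
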
**Jerod Santo:** Whooo-weee!! Now we're talking.

**Gerhard Lazu:** Oh yeah, baby.

**Jerod Santo:** I've asked you for this for years. Like, can I go to one place in the code and just change the version, and it'll be done?

**Gerhard Lazu:** That's it. And there's more and more stuff that we can add on top of that. For example, we can change the local files. You know, we still have, in contribute.md - by the way, that was updated as well to tell you how you change things. So that was updated to reference the new files. Those steps - we can start removing them, because we can automate more and more of that stuff. So we can, for example, go and update the Elixir version in the readme, in contribute.md, wherever we have it. It's all code, at the end of the day. And it's not scripting.

**Jerod Santo:** Meaning it's only in the readme? Like, you could have it in the readme only?

**Gerhard Lazu:** Meaning that it will only be in image.go. That's it. When you bump it in image.go, and the pipeline runs, it will update all the other places.

**Jerod Santo:** Oh, it'll update the readme for you.

**Gerhard Lazu:** Exactly.

**Jerod Santo:** I was gonna say, it'd be crazy if you actually just had that version in the readme, and image.go read it in... Which you probably could do, because it's Go code.

**Gerhard Lazu:** It could do that. Yeah, it could do that.

**Jerod Santo:** That doesn't sound smart, but it just would be interesting.

**Gerhard Lazu:** \[29:45\] Yeah, no. You want it to be in code. You want it in code. And not to mention that when it's in code, by the way, we can have -- again, we still need to figure this part out, I suppose... But we could have things that automatically bump it. When a new version comes out, it bumps it in code, the pipeline bumps it everywhere... And because the pipeline runs, it checks if the new version works.

**Jerod Santo:** And then it opens up a PR, and then we can just merge?

**Gerhard Lazu:** That's it, Jerod. That's it. That's it.

**Jerod Santo:** Okay.

**Gerhard Lazu:** See, it's stuff like this that gets me really excited. \[laughs\]

**Jerod Santo:** You're getting me.

**Gerhard Lazu:** Yeah.

**Jerod Santo:** Okay. So that's cool. How does that play into the other thing which happened recently, thanks to Chris? And by the way, by the time this episode goes out, we will have shipped an episode of the Changelog with Brigit Murtaugh from the Dev Containers spec, from the VS Code team, talking all about this, in which Chris gets multiple shout-outs. So he's probably getting sick of hearing us talking about him at this point. He opened up a pull request allowing us to run our codebase on Codespaces by adding a devcontainer.json. So thanks to him for that. He's using a Docker Compose file and a little bit of JSON, and you can just say "Open in Codespaces", and it's super cool. How do these changes affect his work, if at all? What's the integration there? Because now we have a dev environment, and we have this image that you're changing the way it works...

**Gerhard Lazu:** Yeah. It all builds on top of it. This is brilliant.

**Adam Stacoviak:** This is brilliant... \[laughs\]

**Gerhard Lazu:** It is. And it's not me, it's the combination of people that came together, right? I wasn't expecting Chris to come along.

**Jerod Santo:** Nobody was.

**Gerhard Lazu:** That was great, it was amazing. So based on that - that was pull request 437 in our codebase - I did a follow-up, 449, which basically changes the reference in the Dev Container to our runtime image, which is now pulled from GHCR. And because we're running GitHub Codespaces, that will be very fast. Much faster than if you pulled it from any other registry. So that was another reason to go to GHCR.

**Jerod Santo:** So that works currently?

**Gerhard Lazu:** That's how it works currently. If you go and open the file - come on, let's check it out.

**Jerod Santo:** Because I just did it last week, in preparation for that conversation with Brigit, and one thing I noticed is pulling from Docker Hub - just the entire first-run Codespaces experience... I mean, it's probably five to seven minutes, you know...

**Gerhard Lazu:** That has improved. The pull request that I mentioned, 449 - it no longer builds it; it references the already-built runtime image. If you check out the Dev Containers directory, and you look at the Docker Compose file, line five, it now has the image reference. So the runtime image is no longer built; the runtime image reference is pulled. So it shouldn't take six, seven minutes anymore. It should be instant.

**Jerod Santo:** I'll try that again.

**Gerhard Lazu:** There you go. Let me know how it works. But if not, we'll work on it some more. And all this stuff, all these things, we can start templating. Once we get it in the pipeline, there will be a single place where we declare those versions. As soon as the image builds successfully, and because we go through the process in the pipeline, we can start modifying all these other places, then build the production image, try and deploy it, and if it works, we're done. Merge the PR... We're good.

**Adam Stacoviak:** Who else is doing it like this? How state of the art is this?

**Gerhard Lazu:** I don't know. I would say it's pretty cutting edge... Because we are redefining CI/CD with Dagger. We really are. I mean, CI/CD as code - forget any weird languages... And some of the stuff that we have coming - I can't talk about all the things... But I'm like six months ahead, and I'm so excited to be there.

For example, last Friday - it was just a few days ago - we shipped services support. It's an experimental feature. If you're listening to this, you're not supposed to use it, so please don't, because it may be broken in a number of ways we don't know about... But Changelog will be the first one to use the services support in Dagger. What that means is that we will be spinning up the PostgreSQL container that we need for our tests inside Dagger, inside the Dagger engine, because it now has a runtime.

**Jerod Santo:** And what are the ramifications of that?

**Gerhard Lazu:** \[33:57\] Well, you spin up containers in code. Just as you write your code, you can say, "Spin me up a PostgreSQL container", and when it's spun up, connect it to this other container where the tests will run. You can have the waiting -- I mean, we used to do nc, netcat, for heaven's sake, to wait for the PostgreSQL container to be available. There's services support, there's ugly YAML... All sorts of weird things.

**Jerod Santo:** Let's not knock netcat, Gerhard. Come on. Sweet tool.

**Gerhard Lazu:** No, it's amazing. I love it. It is old school. It's amazing. But what's not amazing is that you're forced to combine scripting and YAML.

**Jerod Santo:** To wait. Yeah, you're waiting for a service to be ready for you.

**Gerhard Lazu:** In a weird way. Exactly. Rather than doing it in code. Why wouldn't you do all these things in code? Because now we can start orchestrating containers. But orchestrating for the purpose of CI/CD. Let's be clear about that.
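
A sketch of that idea, written against the shape Dagger's services API later settled on (`AsService` plus `WithServiceBinding`) - the feature was still experimental at the time of this conversation, so treat the exact calls, image tags, and env vars as assumptions:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// PostgreSQL runs as a service inside the Dagger engine; readiness is
	// handled for us - no netcat wait loops glued together with YAML.
	db := client.Container().
		From("postgres:15").
		WithEnvVariable("POSTGRES_PASSWORD", "postgres").
		WithExposedPort(5432).
		AsService()

	// The test container reaches PostgreSQL by its binding alias, `db`.
	out, err := client.Container().
		From("hexpm/elixir:1.14.2-erlang-25.2-debian-bullseye-20230109-slim").
		WithServiceBinding("db", db).
		WithEnvVariable("DB_HOST", "db").
		WithDirectory("/app", client.Host().Directory(".")).
		WithWorkdir("/app").
		WithExec([]string{"mix", "test"}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```
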
**Jerod Santo:** So we're going to be like a poster child for Dagger, aren't we? I mean, these people have to love us. We're using all the bleeding -- I mean, by these people, I mean you people.

**Gerhard Lazu:** I love you. I'm Dagger.

**Jerod Santo:** I know you are. \[laughter\] That's cool, man. I love that we're a testbed for cool new things. And we're definitely right there on the edge... I wonder how much bleeding we're gonna do.

**Gerhard Lazu:** Well, we are defining it. Well, we'll find out... And by the way, you have the right person to fix it, who does the work. \[laughs\] Isn't that the whole point?

**Jerod Santo:** Yes. Alright, cool. Exciting times. I've always wanted to have one string in my codebase in which I could update the version of Elixir.

**Gerhard Lazu:** It's there.

**Adam Stacoviak:** And then docs, too. That's so cool. Updating docs is a cool thing. Stale docs suck; especially a readme. Like, when you go to the readme - I've gone there recently with other things I'm working on - it's referencing the old release, for example. It says so in the installation instructions, which you go to immediately, but it's referencing an old release. If you go to releases, there are, like, two new ones. The documentation is out of date.

**Jerod Santo:** It could always be outdated.

**Gerhard Lazu:** Not anymore.

**Jerod Santo:** So because we do basically master-branch-based deploying, is every push to master a release, effectively?

**Gerhard Lazu:** Yeah. That hasn't changed in years. Since I've been around, that hasn't changed.

**Jerod Santo:** Right. What about on PRs and branches? How does that work?

**Gerhard Lazu:** We don't deploy. So we now run tests, by the way... We didn't use to run tests on pull requests. Oh, dang it, I don't know how I overlooked that thing...

**Jerod Santo:** We just close them all, yeah. \[laughs\]

**Gerhard Lazu:** Yeah, yeah, yeah. So that was actually one of the first things, pull request 436. So since pull request 436 - which, by the way, happened in the same Kaizen, since Kaizen 8 - we are now running tests for every pull request. And we do that by basically leveraging the built-in Docker engine in GitHub Actions... Which is a bit slow, and it doesn't have any caching... But it means that we are running all the pipelines, including building a runtime image - but not publishing it, because there aren't credentials to do that - with every pull request. So while we don't deploy on every pull request, we could...

**Jerod Santo:** Which would give us deployment previews, effectively.

**Gerhard Lazu:** We absolutely could. That's it. That's it, yup. And the nice thing would be - I'm very keen to try and do that in Dagger. The reason why I'm keen to do that is because of the services support. I'm pretty sure when it was designed no one thought about this, but we can have longer-running environments. So basically, we have a CI that is like one action which won't stop until you're okay with it. So how do we figure out routing? I don't know. I'm really keen to explore that.

We could run a very lightweight version of the Changelog in the context of the CI/CD, in the context of the pull request. Because it doesn't have to serve a lot of traffic, it doesn't need to be anything big... The CI/CD is already there. You have a VM where you're running the actual code for your tests. So why wouldn't you run a longer-running process that exposes Changelog?

**Jerod Santo:** You're blowing my mind, Gerhard. I'm not even --

**Gerhard Lazu:** \[38:00\] That's a crazy idea, right? No one has thought about that before. \[laughs\]

**Jerod Santo:** Alright...

**Gerhard Lazu:** See, I told you - six months from now. It's the future.

**Jerod Santo:** Okay. Well, that's exciting.

**Gerhard Lazu:** So when a pull request opens, basically, one of the GitHub runners that run all the various checks - we keep it running for longer; or we don't even use GitHub runners at that point. So one of the things which we run - we spin up a Changelog, a preview one - we still need to figure out the data part - that will be accessible publicly. We get a random URL that you can hit, and then you can connect to that instance. And that instance runs within one of the CI workers. When the pull request is merged - I mean, one of the checks... Again, I still need to figure out how to do this, but one of the checks, basically, will not finish until the pull request is merged. And that check in GitHub Actions - that's the one where you can access the Changelog, the preview version.

**Jerod Santo:** Nice.

**Gerhard Lazu:** So literally, you're running a preview in CI/CD.

**Jerod Santo:** I'm going to need a new diagram...

**Gerhard Lazu:** Infrastructure.md, in our repo, is the place to go to see how everything wires together, and that's the one that I intend to update as we ship this new stuff. So infrastructure.md is fairly accurate right now. I think the only thing missing is GHCR, and the reason why it's missing is because I'm migrating the rest of the stuff to GHCR. And once that completes, it would be weird to see both Docker Hub and GHCR. So we're in a transition period. Once the dust settles, the diagram will be up to date. But again, that's the only thing which is missing. Everything else is accurate. Fly, Honeycomb, Sentry... Everything.
**Jerod Santo:** Very cool. Very cool.

**Gerhard Lazu:** So what about you, Jerod? I know that you've had some improvements in mind. Some of them I think you've already done since Kaizen 8...

**Jerod Santo:** Yes...

**Gerhard Lazu:** Which ones do you want to talk about? There are many, I can tell you that.

**Jerod Santo:** So a lot of my time, Gerhard, as you know, has been spent on rotating all of our secrets, first of all.

**Gerhard Lazu:** Oh, my goodness me. There were so many. \[laughter\]

**Jerod Santo:** So LastPass, thanks for nothing... Well, thanks for a few good years; and then we lost confidence. So we are 1Password users as a team now, which we talked about for a few Kaizens, and finally made that migration. And then we decided, because of the LastPass leak, and the fact that we're all on 1Password now, it's a great time to just go through and do a key rotation, right? Just rotate all of the things... Which was just a lot of things. Like, man, we've got a lot of secrets in there, lots of integrations... And mostly harmless. There were a few fallouts, as there tend to be with that many changes; things that went wrong because of that. The biggest one was our stats system went down for a few days, because AWS credentials existed in one place correctly, but in the other place incorrectly, I think... And then secondly, Changelog Nightly actually stopped sending, because I didn't update the Campaign Monitor API key on Nightly, which is an old Digital Ocean box from way back; it still just runs dutifully, every night, on a Digital Ocean box...

So I updated our Campaign Monitor API key inside of our app, and in Campaign Monitor, but I didn't rotate it over on the other server. And so it failed to send. It was still generating the emails, just not sending them - which is key; it's a key part of it. So there were a few nights where Nightly didn't go out until I realized it, and I was like "Oh, that one makes total sense." You and I also teamed up on a few things...

**Gerhard Lazu:** Oh, yeah.

**Jerod Santo:** ...which is always fun.

**Gerhard Lazu:** Issue 442, for anyone that wants to see all the things we had to go through. We had 79 tasks to complete. And some of the work went quick, but just untangling all that... We cleaned up a lot of stuff, and it was almost like a spring clean; even though it was January, it was definitely a spring clean for secrets.

**Jerod Santo:** \[42:13\] Yeah. You don't realize just how many service integrations you have until you go to rotate all your secrets. And then it's like "Holy cow. Slack. Campaign Monitor. GitHub. Fastly. AWS. GitHub."

**Gerhard Lazu:** Notion.

**Jerod Santo:** Mastodon.

**Gerhard Lazu:** Yeah. GitHub twice, by the way. You said GitHub twice, because GitHub is used twice - you have an API token \[unintelligible 00:42:30.06\]

**Jerod Santo:** Same thing with Slack. There are two different Slack APIs that we use. One's for the invites, which is this old legacy thing that was never an official API - how you actually generate an invite. And then everything else is for logbot, which is our Slack bot that does a few things. Yeah, there's just so many of them. And it's just an arduous process. So this is why my personal private key is years old at this point, embarrassingly.

**Gerhard Lazu:** We have to rotate it again. You won't be able to SSH into things. The good thing is you don't need to SSH anymore. Isn't that a relief?

**Jerod Santo:** That is nice. We're getting better on that front.

**Gerhard Lazu:** Flyctl ssh console...

**Jerod Santo:** I do enjoy that, yes. So that was one big piece of work... The other thing - Adam, you mentioned it; it's in flight right now - we're swapping out Algolia for Typesense, which is a very cool C++-based search engine, open source, that we had on the Changelog... Jason Bosco - we had him on the Changelog last year. I really liked the guy, and got really interested in the product. We were, and still are, on the Algolia open source plan, which sets us a limit... And we've hit that limit; we've been putting new things into the Algolia index ever since, but it won't search them until we upgrade our plan... So we're happy to be replacing Algolia with Typesense. Of course, that's an open source thing, but we're working on a partnership with Jason and his team, so that we'll be using Typesense Cloud. All of that is very close to at least being swap-out-ready, and then we're going to build from there and start to use some of the things that make Typesense interesting. So I've been coding that...

And then the third thing is trying to rejigger the way that our feeds are generated and cached and stored, in order to get to this clustered world of multiple nodes running the apps, without having to change the way we use Erlang's built-in caching system, because I've just had some issues with that... And I started thinking, "Why are we caching stuff if we have a very fast application that can just run close to the user? Let's just figure out a way to not cache stuff as much." But we have these very expensive pages, specifically the feeds: the master feed, the Changelog feed... I mean, the XML that gets generated is like 2.3 megabytes. It's not going to be fast on any system, unless it's literally pre-computed.

So I started thinking about different ways of pre-computing and storing files on S3, and fronting that... And there are just lots of concerns with publishing immediately; we like to publish fast. And we even had a problem - thanks to a listener who pointed it out - with our Overcast ping, because Overcast as a specific app allows you to ping it immediately on publish, and they'll just push-notify, and people will get their things immediately... Some people really like that. I'm always surprised - there are some listeners who listen right when it drops, and there are others who listen like six months later. And that's all well and good, but for the ones who want it now - it's cool, we added the Overcast ping. Well, there's an issue there, because Overcast pings, but we're caching our feeds for a few minutes, maybe just a minute. And so Overcast says there's a new episode, and so you click on it, and you go there, and there isn't a new episode. And then you refresh, it's not there, then you refresh, it's not there, then you refresh and it is there, and it was like 60 seconds... Because we're caching.

\[46:14\] So I just turned that thing off and thought, "Well, people can just wait for Overcast to crawl us again, for now, but I would love to solve that problem..." And so then I started thinking, you know, we already have a place where we store data, that's a single instance, but is a service, so to speak, and it's called Postgres. And instead of adding a memcached, or Redis, or figuring out these caching issues inside of the Erlang system, which was not trivial in my research, I was like "What if we just precompute and throw stuff into Postgres?" And I did a test run of that, the feeds; just the feeds. And just turn off all other caching, because I don't think we actually need any other caching. It's just that I already had caching set up, so I cached a few popular pages... But what if I just did it on the feeds? And every time you publish, you just blow it away, rerun it, and put it in Postgres. And you just serve it as static content out of Postgres.

I did some initial testing on that locally, and it's consistently 50-millisecond responses with Apache Bench; it was not a problem. It's never super-fast, like what you get with Erlang, where it's microseconds... I always like to see those stats. But that's not what we need, right? Consistently 50 milliseconds is great.

**Gerhard Lazu:** Yeah.

**Jerod Santo:** Without any caching layer. I mean, you're basically just pulling it out of Postgres and serving it. Very few code changes... It just felt like "Okay, this is kind of a silly idea, using Postgres as a cache, effectively, but what if it just works, and it's simple, and we don't have to add any infrastructure?"

So I want to test that sort of in production; I kind of want to roll it out and run it, and then easily roll it back if it's not going to actually work in production... But I don't really have the metrics, I don't have the observability. I have Fastly observability through Honeycomb, but I'm lacking the app response \[unintelligible 00:48:10.20\] observability, which is really what we want. We don't want Fastly to be waiting on the app all of a sudden, and the app to be bogged down on other requests. And so that's where I came back to you and said, "This is what I would like to see... Can we get Phoenix talking to Honeycomb in some sort of native fashion?" And then I found this OpenTelemetry thing, and I stopped right there. So I will let you respond after that long monologue.
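
The actual app is Elixir/Phoenix, but the shape of the idea is language-agnostic; here is a hypothetical Go sketch of it - render the expensive XML once at publish time, upsert it into Postgres, and serve it straight from the database (table and column names are made up):

```go
package main

import (
	"database/sql"
	"log"
	"net/http"
	"os"

	_ "github.com/lib/pq"
)

// On publish: blow away the old feed and store the freshly rendered XML.
func storeFeed(db *sql.DB, slug, xml string) error {
	_, err := db.Exec(`
		INSERT INTO feeds (slug, content) VALUES ($1, $2)
		ON CONFLICT (slug) DO UPDATE SET content = EXCLUDED.content`,
		slug, xml)
	return err
}

// On request: serve the precomputed XML straight out of Postgres -
// no in-memory cache layer, no extra infrastructure.
func feedHandler(db *sql.DB) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var xml string
		err := db.QueryRow(`SELECT content FROM feeds WHERE slug = $1`, "master").Scan(&xml)
		if err != nil {
			http.Error(w, "feed not found", http.StatusNotFound)
			return
		}
		w.Header().Set("Content-Type", "application/xml")
		w.Write([]byte(xml))
	}
}

func main() {
	db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	http.Handle("/feed", feedHandler(db))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
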
398
+
399
+ **Gerhard Lazu:** No, no, I mean, that's exactly it. I mean, we knew we wanted to do that. It's like another experiment which I wanted to continue with... And I'm so keen to get back to it, to see how that integration could work. That was on my list for as long as I can remember, and I'm so excited to be finally doing it. We're finally in a good place to do that integration, and I'm fairly confident that we'll be able to talk about it at the next Kaizen.
400
+
401
+ **Adam Stacoviak:** Ha-ha! He said it.
402
+
403
+ **Jerod Santo:** \[laughs\] On the next Kaizen...
404
+
405
+ **Gerhard Lazu:** There you go. In the next Kaizen.
406
+
407
+ **Jerod Santo:** Okay, so we have it on record; there will be another Kaizen.
408
+
409
+ **Gerhard Lazu:** Oh, yes.
410
+
411
+ **Jerod Santo:** Not just a hope and a dream.
412
+
413
+ **Gerhard Lazu:** We just need to figure out where.
414
+
415
+ **Gerhard Lazu:** Right.
416
+
417
+ **Adam Stacoviak:** So if I understand this correctly, Jerod, you've done this work, but you haven't done it in production. So you need a way to test it in production, essentially, to see how it responds.
418
+
419
+ **Jerod Santo:** I spiked it out on a branch, and then it was just like "Okay, this is certainly feasible" And then I did some rudimentary benchmarking of that branch, just to make sure it's not crazy dumb... And then I'm like "Okay, this is feasible, and I know how to bring this into official code." I can definitely transition what I coded, or even just rewrite it in a way that's maintainable if we decide to do it. But I'd really like to know if it's gonna be really dumb, or just kind of dumb. I feel like it's just dumb enough that it just might work... And be so simple, and solve a problem in a way that's just awesomely dumb. But I don't want it to be so dumb that it's not gonna work... \[laughs\]
420
+
421
+ **Gerhard Lazu:** \[50:10\] That's the real spirit of Ship It. We literally have to get it out to see if it works. Like, what happens.
422
+
423
+ **Jerod Santo:** And then I was like "Well, what I lack is metrics." So I can observe it for a few hours, get some confidence, leave it in, or be like "Holy cow. It worked great in dev, but it's not going to work with a real load."
424
+
425
+ **Gerhard Lazu:** I have a question for Adam... So Adam, I think this may be the moment to tell us again about the benefits of feature flags.
426
+
427
+ **Adam Stacoviak:** I almost mentioned it there. I was like "I don't want to have egg on my face by mentioning feature flags..." Because I know Jerod has sort of been resistant to some degree against it... But there may be a simpler way to do this, but I think that that's essentially what you want to do. You want to test this in production, on a limited set of users. So it could be scoped to admins only, for example.
428
+
429
+ **Jerod Santo:** No, because I want to load-test it. I want the full load, is my issue.
430
+
431
+ **Gerhard Lazu:** But it could be like maybe 50% of the requests, and you can compare them. So 50% of the requests, 50/50...
432
+
433
+ **Adam Stacoviak:** A threshold.
434
+
435
+ **Gerhard Lazu:** ...going to the old one, 50 to the new implementation, and see how they compare over the course of maybe a few days...
436
+
437
+ **Jerod Santo:** Yeah, we can do that.
438
+
439
+ **Gerhard Lazu:** So Adam, how do we get feature flags? What do you think?
440
+
441
+ **Adam Stacoviak:** Hm...
442
+
443
+ **Gerhard Lazu:** Where do you stand on that?
444
+
445
+ **Jerod Santo:** Well, if we're doing 50/50, can't we just do like an if statement, with like random divided by two? \[laughter\]
446
+
447
+ **Gerhard Lazu:** Sure. "If it's an even second, do this. And if it's an odd second, do the other thing." \[laughs\]
448
+
449
+ **Jerod Santo:** If it's an imperial unit, or if it's the metric system... Is this the metric system, or which system are we going to use here?
450
+
451
+ **Gerhard Lazu:** Luckily, seconds only exist in one... \[laughs\]
452
+
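+ For what it's worth, the "if statement with random divided by two" really is about this small in Elixir. A minimal sketch - the module and both implementation names are hypothetical, not actual changelog.com code:
+
+ ```elixir
+ defmodule Changelog.Rollout do
+   # Hypothetical sketch of the joked-about if statement:
+   # send roughly half of all requests down the new code path.
+   def feeds_impl do
+     if :rand.uniform(2) == 1 do
+       Feeds.CachedInPostgres  # hypothetical new implementation
+     else
+       Feeds.Legacy            # hypothetical old implementation
+     end
+   end
+ end
+ ```
+
+ The even/odd-second variant Gerhard jokes about would just swap the condition for something like `rem(System.os_time(:second), 2) == 0` - either way it's one branch point, no feature-flag service required.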
453
+ **Jerod Santo:** I know Adam's been keen on feature flags, and I feel like this is his big moment to introduce some sort of subsystem.
454
+
455
+ **Gerhard Lazu:** I think so too.
456
+
457
+ **Adam Stacoviak:** I mean, I don't feel like I have a system to pitch here... \[laughter\]
458
+
459
+ **Gerhard Lazu:** No, I remember the conversation, Jerod. That's why I keep going back to it. Because we didn't have a good answer for Adam, and we were both against it. So maybe now it's coming back, and maybe now it's a yes, because it was a definite no back then.
460
+
461
+ **Adam Stacoviak:** We were premature. When I tried to pitch --
462
+
463
+ **Jerod Santo:** Feature flags?
464
+
465
+ **Adam Stacoviak:** The insider story here, listeners, is there was -- my initial pitch for us using feature flags fell on deaf ears, essentially, because we were premature. We just didn't have the need for it. We were trying to find a use for it, and if you follow Kaizen, and Ship It, and what we've done, then you know our application is pretty simple. We don't have a lot of developers developing on it, so there's not a real need for an immense feature flag system and/or service. LaunchDarkly were our friends for a while there... I'd still say they're friendly, but they're not friends. We're not working with them directly anymore.
466
+
467
+ We do have a new sponsor coming on board, DevCycle, which is in the feature flag business, which - you know, if you wanted to use it for this one instance, I'm sure we could do something. So I mean, there is an opportunity there, but... That would be my pitch. I feel like if it's just this one-off though, then the if statement probably works.
468
+
469
+ **Jerod Santo:** Well, I'll let you know when I get this far. What we need first, I think, is the observability. Because either way, if we do it 50/50, we want to see both results.
470
+
471
+ **Gerhard Lazu:** Of course.
472
+
473
+ **Jerod Santo:** And so right now I can't see any results, besides sit there and stare at the log files, and look at the request responses... Which was a side effect, actually, of one of our recent changes - our log files just stopped logging. I got it fixed, but that was funny. So I'm like "Wait a second, there aren't any logs."
474
+
475
+ **Adam Stacoviak:** How can the Changelog not log?
476
+
477
+ **Jerod Santo:** Right?
478
+
479
+ **Adam Stacoviak:** That's just like against the laws of nature, essentially.
480
+
481
+ **Jerod Santo:** Well, I'm not gonna git blame that one on the air, because I don't want to embarrass Gerhard, but... I fixed it.
482
+
483
+ **Gerhard Lazu:** That's okay, I can't get embarrassed. \[laughter\] I can't, because I'm going to learn something new out of this.
484
+
485
+ **Jerod Santo:** There you go.
486
+
487
+ **Gerhard Lazu:** So tell me the commit where this was introduced, so that I can understand my mistake. Seriously.
488
+
489
+ **Jerod Santo:** \[54:00\] So the code that fixes it is in commit f19c9cf, where I basically changed the application file to turn the logger back on. So I think you were overly aggressive when you were -- you were removing a few things... We removed PromEx, because we're not really using Grafana anymore... And you just deleted too much code. And the code that you deleted would, if we're not in IEx, turn on the default logger. But you deleted it, so there wasn't a default logger, and so it wouldn't log anything in prod at all...
490
+
491
+ **Gerhard Lazu:** I see.
492
+
493
+ **Jerod Santo:** ...and you didn't notice.
494
+
495
+ **Gerhard Lazu:** Yeah, that's right.
496
+
497
+ **Jerod Santo:** And I didn't notice, and so I just thought, "Well, I'll just go see what's going on in production", and there was no logs there. So I actually just put that code back in, that you had deleted, is all.
498
+
499
+ **Gerhard Lazu:** Right. So hang on, let me try and understand this code... That's what's happening right now. I'm trying to understand some Elixir code live, as we are recording this... I'm looking at application.ex, line 32, 'unless Code.ensure_loaded?(IEx) && IEx.started?() do' Which of those two lines disables logging? The 33 or the 35 one? The Oban.Telemetry.attach_default_logger one?
500
+
501
+ **Jerod Santo:** No, that's not the line. Look at endpoint.ex line 60. Plug.Telemetry. That's the line where you basically removed the telemetry plug.
502
+
503
+ **Gerhard Lazu:** Okay, okay, okay. I see. So the telemetry plug logs.
504
+
505
+ **Jerod Santo:** Yes.
506
+
507
+ **Gerhard Lazu:** I see. Okay.
508
+
509
+ **Jerod Santo:** The logger uses the telemetry plug to do its thing.
510
+
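+ For readers following along, here's the shape of the line in question. A minimal sketch of a Phoenix endpoint - the module name just follows Phoenix conventions and the surrounding plugs are omitted, so this is not the actual changelog.com endpoint.ex:
+
+ ```elixir
+ defmodule ChangelogWeb.Endpoint do
+   use Phoenix.Endpoint, otp_app: :changelog
+
+   # Plug.Telemetry emits the [:phoenix, :endpoint, :start] / [:stop]
+   # telemetry events that Phoenix's default request logger listens for,
+   # so deleting this one line silently turns off request logging.
+   plug Plug.Telemetry, event_prefix: [:phoenix, :endpoint]
+
+   plug ChangelogWeb.Router
+ end
+ ```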
511
+ **Gerhard Lazu:** Right, right. If it would have been plug log. I don't think I would have made that mistake.
512
+
513
+ **Jerod Santo:** Right. Yeah.
514
+
515
+ **Gerhard Lazu:** But yeah, cool. Okay. That's good to know.
516
+
517
+ **Jerod Santo:** So yeah, it was an easy mistake to make. And I know how it is when you're removing stuff. You're like "Oh, this we don't need. This we don't need." And I think it was just that one line...
518
+
519
+ **Gerhard Lazu:** That's it.
520
+
521
+ **Jerod Santo:** ...just turned that off, and we didn't notice because we weren't really looking at production. Now, had we been sending it over to Honeycomb and observing it, we probably would have seen the drop-off immediately, because Telemetry would have been turned off there.
522
+
523
+ **Gerhard Lazu:** Yeah, that's right.
524
+
525
+ **Jerod Santo:** So I think the Honeycomb integration will use this OpenTelemetry plug as well, when we do it. So that was the line that did it; it wasn't the other one. There were a few other things that you also removed - I put them back in, but that was like Oban stuff. Not a big deal. It was just over-aggressive deletion, which is totally normal when we're like "Let's --"
526
+
527
+ **Gerhard Lazu:** Probably. I deleted too much.
528
+
529
+ **Jerod Santo:** Yeah. When you're in like "Let's delete stuff" mode... I know how it is, because it feels so good.
530
+
531
+ **Gerhard Lazu:** Okay, okay. Okay, okay.
532
+
533
+ **Jerod Santo:** So there you go.
534
+
535
+ **Gerhard Lazu:** Cool. That's good to know. So who reviewed my PR?
536
+
537
+ **Jerod Santo:** \[laughs\] Uh-oh...
538
+
539
+ **Gerhard Lazu:** Do you see where this is going? \[laughter\] Cool, great.
540
+
541
+ **Jerod Santo:** Well, it wasn't me... Clearly...
542
+
543
+ **Gerhard Lazu:** I know...
544
+
545
+ **Jerod Santo:** I merged it, but I didn't review it.
546
+
547
+ **Gerhard Lazu:** I think I waited for a while and said, "You know what - I'm just gonna push this through", because that's how we roll.
548
+
549
+ **Jerod Santo:** There you go.
550
+
551
+ **Gerhard Lazu:** No, that's fine. That's fine.
552
+
553
+ **Jerod Santo:** No, even if I reviewed it, I must have not reviewed it very well, so... You know...
554
+
555
+ **Gerhard Lazu:** That's okay. Yeah, it was an honest mistake.
556
+
557
+ **Jerod Santo:** Totally.
558
+
559
+ **Gerhard Lazu:** On both our parts.
560
+
561
+ **Jerod Santo:** On both our parts.
562
+
563
+ **Adam Stacoviak:** I want to chase that rabbit down... I've got a question for you. So once we put this experiment into production, Jerod, what's going to happen? Can you come back to the beginning, where if we get this potentially smart Postgres feature out there... Let's say it's successful. What happens? What happens as a result of that being successful?
564
+
565
+ **Jerod Santo:** So what happens is every single request that goes to one of our feeds will be served live from Postgres, from what I call like a feeds cache inside our Postgres instance. So it's effectively -- it's as if it was reading off disk, but we don't have a disk, because we're in Fly land... But it's just on disk inside of Postgres. And so it goes out of Postgres, goes out live, so every request is immediate... And then every time that we change something that's going to change the feeds, we blow that one away, and we rewrite it, and so we recompute the feed. It's basically a cache inside of Postgres, because that's already our single source of data. Whereas if we did it anywhere else, we'd have to have a shared data source etc.
566
+
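+ A minimal sketch of the idea as described - one row per feed, the precomputed XML stored as text, rewritten on publish/edit and read straight back out on request. The table, module, and function names are hypothetical, not the actual changelog.com code:
+
+ ```elixir
+ defmodule Changelog.FeedCache do
+   import Ecto.Query
+   alias Changelog.Repo
+
+   # On publish/edit: recompute the feed and upsert the finished XML.
+   def rewrite(slug, xml) do
+     Repo.insert_all(
+       "feed_caches",
+       [%{slug: slug, xml: xml, updated_at: DateTime.utc_now()}],
+       on_conflict: {:replace, [:xml, :updated_at]},
+       conflict_target: :slug
+     )
+   end
+
+   # On request: a single indexed read, served straight out of Postgres.
+   def read(slug) do
+     Repo.one(from f in "feed_caches", where: f.slug == ^slug, select: f.xml)
+   end
+ end
+ ```
+
+ Blowing the cache away and rewriting it is just the upsert; there is no expiry to reason about, which is what makes changes immediate.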
567
+ **Gerhard Lazu:** \[58:03\] I think what's more important is that this enables us to run more than one instance of Changelog.
568
+
569
+ **Jerod Santo:** Exactly.
570
+
571
+ **Gerhard Lazu:** Right now, because of how caching is done, we can only have one instance of Changelog. And we have been on this journey for quite some time now. Right? If you remember, we had a persistent disk. So we did have a local disk. But when we had that, it meant that we could only have a single instance, because all our media assets were stored on that one disk. So we pushed the media assets to S3, and now we could have more than one. But then the next thing was like "Oh, dang it. The caching." So once we solve the caching, we can run more than one instance, we can spread them across the world, we can serve dynamic requests from where users are, rather than everything going through the CDN - and the CDN really only caches the static stuff. And even then, it has to time out. That's why we also have that delay, because the CDN only caches for about 60 seconds.
572
+
573
+ **Jerod Santo:** Right. Yeah, the other thing this lets us do is serve different feeds to different requesters. And so here's why this might be interesting... So Spotify specifically supports, allegedly - I haven't seen it working very much... They support chapters, if you put them as text in your show notes, using the YouTube-style timestamps thing. So I just put it in for everybody at this point. But it's silly to put it into the show notes for listeners who have regular podcast apps that support chapters the way they should, not because they're Spotify.
574
+
575
+ Well, we could just serve that using this system. We could have two different versions of the feed, both put into Postgres, use the request header to identify Spotify, because it sends a standard request header, and serve a slightly different feed to Spotify than we serve to everybody else, and give them those timestamps. So you get the chapters over there, but you don't clutter up your feeds for everybody else. And you can't do that very well with caching, because it's like "Well, we've got a cached version", right? And the requests never hit our server; they just hit Fastly. And maybe you can put that logic inside of Fastly, but now you have to point it to different places, and manage that whole deal...
576
+
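+ A hedged sketch of the header check being described - the "user-agent" header is standard; the Spotify prefix match and the variant names are illustrative assumptions:
+
+ ```elixir
+ defmodule Changelog.FeedVariant do
+   import Plug.Conn, only: [get_req_header: 2]
+
+   # Hypothetical: pick which precomputed feed variant to serve,
+   # keyed off the requester's user agent.
+   def pick(conn) do
+     case get_req_header(conn, "user-agent") do
+       ["Spotify" <> _ | _] -> :spotify  # variant with text-timestamp chapters
+       _ -> :default                     # everyone else gets the clean feed
+     end
+   end
+ end
+ ```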
577
+ And so this also enables that, where you can basically have N caches per request, and serve the right one dynamically, but still have it precomputed. So it's kind of the best of both worlds. By the way, to our listener, I realized this is kind of a dumb way of doing it. If it's super-dumb, and you have reasons why, please, tell me, because I'm about to roll it out... \[laughs\]
578
+
579
+ **Adam Stacoviak:** "I'm about to roll it out...!"
580
+
581
+ **Gerhard Lazu:** I don't think it is.
582
+
583
+ **Adam Stacoviak:** Why is it dumb? Why do you keep saying this? Why do you think it's dumb? What's the logic behind it being dumb?
584
+
585
+ **Jerod Santo:** Storing precomputed text inside of Postgres - it's somewhat large. I read some -- like, how big is too big, and it's like 2.3 megabytes in a Postgres record. It seems like it's fine, actually, but once you start getting up to like 100 megabytes, now you're in trouble. We're not going to make it there with any of our documents. But maybe even at 2.3 megabytes, at scale it's just going to read too slow. I don't know, it seems like a very low-tech, kind of silly way of doing it... And so maybe it's just lack of confidence, is why I think it sounds dumb.
586
+
587
+ **Gerhard Lazu:** I think this is a step in the right direction, because Fly brings the app closer to the users.
588
+
589
+ **Jerod Santo:** Right.
590
+
591
+ **Gerhard Lazu:** And Fly really makes it less necessary to run a CDN, or maybe completely unnecessary, depending on the case. If we want to depend less on the CDN, which I think is a good idea, and if we distribute our apps around the world, that means that we can rely less on the CDN - which, by the way, had all sorts of issues which we are yet to solve - and serve directly from our app... So basically, we are reverting the earlier move of putting changelog.com behind the CDN. And we had to do that back then, because we had a single instance, and we had all sorts of issues related to that... But now, if we have multiple instances, one per continent - again, depending on where our users are - we no longer need to depend on the CDN as much as we did before.
592
+
593
+ \[01:02:12.28\] And by the way, Fly itself has a proxy - a global proxy - which means that depending on where you are, those edge instances will connect to the app instance which is closest to the edge. So then we are pulling more of that stuff into our app, which means we can code more things - as Jerod mentioned, pull more of that smarts into code, rather than into CDN configuration or other things... Which are very difficult to understand, very difficult to troubleshoot... I mean, we've had so many hair-pulling moments. That's why we have so little hair \[unintelligible 01:02:46.00\] sections, going like "Why the hell? How does this Varnish even work, because it doesn't make any sense?"
594
+
595
+ **Jerod Santo:** Right. And we built our own little version control inside of Fastly, between Gerhard and me, by adding a comment noting whose name is at "Last edited" - which we would love to replace with our actual programming tooling.
596
+
597
+ **Adam Stacoviak:** It seems smart...
598
+
599
+ **Jerod Santo:** If it takes us to where we wanna go, I agree with you 100% that having our app be its own CDN, so to speak - closer to all the users, which is what Fastly is giving us, but at the app level - means it can be dynamic in ways that are possible with Fastly, but just cumbersome to this day.
600
+
601
+ **Adam Stacoviak:** Yeah. And I guess one more layer here is we haven't truly embodied the vision of Fly, which is our app close to our users, because of this cache issue. This is full circle; the whole reason for this cache experiment was to be able to bring to fruition that actual dream with no ops, or very, very little ops... But we haven't been able to do that because of this cache layer.
602
+
603
+ **Jerod Santo:** Well, our app does run close to our users in the greater Houston area... \[laughter\]
604
+
605
+ **Gerhard Lazu:** Yeah... It's actually in Virginia.
606
+
607
+ **Jerod Santo:** Oh, is it?
608
+
609
+ **Gerhard Lazu:** Yeah, yeah.
610
+
611
+ **Jerod Santo:** Well. It shows what I know.
612
+
613
+ **Gerhard Lazu:** It's the IAD data center. Yeah.
614
+
615
+ **Adam Stacoviak:** Yeah. Well, all that to say, getting to this direction is challenging. I think the logic in this Postgres approach sounds fine. I mean, it would only be an issue if we were, like you had said, above a larger threshold... A couple megs - not that big of a deal. And if the app is close to the user, and there's one -- I'm assuming there's probably like one or two primary Postgres instances for writes, and then the rest are reads, right? That's how it would set up, naturally, with Postgres on Fly...
616
+
617
+ **Jerod Santo:** Yeah, the writes would actually happen on publish. The writes happen on edit, not on first request, which is what happens now with typical caching. First request, we calculate it once. Now we're not going to calculate it again for 60 seconds. Then we'll calculate it once. This is actually on write - that's when we're doing the compute - which is what we wanted to move to.
618
+
619
+ The other option is to put this on a static file server like S3, and then manage and blow away different files. But then I started thinking, like, we actually like our URLs how they are, and so then our app would be reading from S3 and responding as a proxy... And it's like "Well, it was already a proxy to Postgres." I don't know. But yeah, we would cache on write versus on read, which gives us immediate changes. There's no 60-second delay, or five minutes, or whatever you set it to.
620
+
621
+ **Adam Stacoviak:** And I'm in that camp. I mean, I listen to our show immediately, as soon as we ship The Changelog at least... I mean, as just a crazy person, whenever you ship something, you want to make sure it's in production. And the only way to do it is like to test it. And the app I use is Overcast primarily. I don't think I have notifications on, because I just hate notifications just generally. If I don't have to have notifications on for an application, they're off, for sure. But when I do go there, I usually test it on the master feed directly, because... I listen to Master, like you should be. Hey, listener, if you're not listening on Master, you're wrong. Or Plus Plus; then you'd be even better...
622
+
623
+ **Jerod Santo:** Right.
624
+
625
+ **Adam Stacoviak:** \[01:06:06.04\] ...because it's better... But I'm a Master feed subscriber in that regard, and pull to refresh - and it does take a bit for the new episodes to get there, for me at least. So it's not like I ship it and 30 seconds or a minute later it's in Overcast. It takes longer than I've counted, let's just say. I haven't actually sat there and counted. It's like "Oh, it's not there. I'll come back later", and I come back and it's there.
626
+
627
+ **Gerhard Lazu:** The one thing about this which gets me really excited is that we will double down on PostgreSQL. So we talked about this for a while... Crunchy Data is what I'm thinking. But it's not the only way.
628
+
629
+ **Adam Stacoviak:** In what regard are you thinking Crunchy Data?
630
+
631
+ **Gerhard Lazu:** I'm thinking a PostgreSQL as a service, that scales really, really well, so then the app is all Fly. PostgreSQL is managed via Crunchy Data. We have a global presence, nicely replicated, all that nice stuff. And then we consume PostgreSQL as a service at a global scale. Our app runs at a global scale, on Fly, and the database the same, but with someone else. Because the PostgreSQL in Fly - it's not a managed one. It's easy, convenient, we have a lot of advantages, and it's been holding up really well since we set it up. No issues. But we can -- I mean, if the app is distributed, and if the app gets this level of attention, I think so should our database, because now these are the two important pieces. We scale the app, we should scale the database. I mean, if for example we have all these app instances that connect to the same PostgreSQL instance back in the US, that's not going to be any good. Right? Reading all those megabytes across continents... That's going to be slow.
632
+
633
+ **Adam Stacoviak:** Isn't that the point though for like the read servers that are distributed?
634
+
635
+ **Gerhard Lazu:** So we could add multiple PostgreSQL read replicas in Fly; we could do that. Maybe tune them... Maybe. I don't know. Maybe try and understand better what they do... But maybe, rather than doing that, we can level up our approach to databases, and go with someone that does this as a service. I know PlanetScale comes up as well... There are a couple we could use for PostgreSQL as a service.
636
+
637
+ **Adam Stacoviak:** But that's MySQL, PlanetScale.
638
+
639
+ **Gerhard Lazu:** There's one which I know is PostgreSQL. Maybe it's not PlanetScale... What was it...?
640
+
641
+ **Jerod Santo:** Supabase?
642
+
643
+ **Gerhard Lazu:** I think it's Supabase. I think it's Supabase. I think that's what I'm thinking. Yeah. See? Not enough time to experiment. \[laughs\]
644
+
645
+ **Adam Stacoviak:** There is a conversation, let's just say there's a conversation. So we may be meeting in the middle, let's just say. Don't wanna give too much away.
646
+
647
+ **Gerhard Lazu:** Exactly.
648
+
649
+ **Adam Stacoviak:** But dreams... We are dreaming together.
650
+
651
+ **Gerhard Lazu:** Exactly. And we need to experiment a lot. So that's the whole point, right? We need to try a couple of things out, see what makes sense... I know Jerod loves his PostgreSQL, the vanilla one, the open source one...
652
+
653
+ **Jerod Santo:** I do...
654
+
655
+ **Gerhard Lazu:** You know, as unaltered as they come.
656
+
657
+ **Jerod Santo:** So good...
658
+
659
+ **Adam Stacoviak:** We're actually coming out with a T-shirt, Gerhard. It says "Postgres-compatible is not Postgres." \[laughter\]
660
+
661
+ **Gerhard Lazu:** Really?! Okay, I wasn't aware of that... Okay.
662
+
663
+ **Jerod Santo:** No, not really.
664
+
665
+ **Gerhard Lazu:** Okay...
666
+
667
+ **Adam Stacoviak:** We want to.
668
+
669
+ **Gerhard Lazu:** Is that the Jerod tagline?
670
+
671
+ **Jerod Santo:** No, that's actually a Craig Kerstiens tagline.
672
+
673
+ **Gerhard Lazu:** Right.
674
+
675
+ **Jerod Santo:** I do like "Just Postgres" as a T-shirt.
676
+
677
+ **Gerhard Lazu:** "Just Postgres." Yeah.
678
+
679
+ **Jerod Santo:** Just Postgres.
680
+
681
+ **Gerhard Lazu:** We will be doubling down on that. That's what matters. And we'll be improving that part as well. All this is leading us into that direction, and that's really exciting.
682
+
683
+ **Adam Stacoviak:** That's why I wrote this right here... I was writing it right there.
684
+
685
+ **Gerhard Lazu:** There you go. On a napkin? It's a thing!
686
+
687
+ **Jerod Santo:** Okay! Now we have a plan.
688
+
689
+ **Gerhard Lazu:** That's how all dreams start, on a napkin.
690
+
691
+ **Adam Stacoviak:** Mm-hm. I've been doodling while we're having this call.
692
+
693
+ **Gerhard Lazu:** Put some B's and some dollars as well, while you're at it.
694
+
695
+ **Jerod Santo:** Yeah, put some dollars on there.
696
+
697
+ **Adam Stacoviak:** Sure.
698
+
699
+ **Jerod Santo:** Step one, Postgres. Step two, question mark. Step three, profit.
700
+
701
+ **Gerhard Lazu:** \[01:09:52.19\] Or Postgres - change the "s" into "$". That'd be good.
702
+
703
+ **Adam Stacoviak:** That's right, I'll do that.
704
+
705
+ **Jerod Santo:** That's our business plan. We're gonna turn Postgres into dollars.
706
+
707
+ **Adam Stacoviak:** Well, let's say somebody's listened this far, and they're thinking, "Man, this really sucks, okay?"
708
+
709
+ **Gerhard Lazu:** What sucks?
710
+
711
+ **Adam Stacoviak:** "I'm here at the end of this amazing episode--" Well, I'm gonna tell you what sucks. I'm gonna tell you. They're gonna be like "I liked this show. Come on, guys... What's going on here?" Can we dream a little bit to where this might go, the next version of Kaizen? Can we give them some prescription? Versus just wait and see? Jerod, you mentioned subscribing to the Changelog, which I think is a great next step after this...
712
+
713
+ **Jerod Santo:** Well, I think it makes sense to do our next Kaizen on the Changelog if we don't have anywhere else to do it...
714
+
715
+ **Adam Stacoviak:** That's right. Yeah.
716
+
717
+ **Jerod Santo:** Which is probably likely, right? I mean, we could cross-post it to the Ship It feed, I guess...
718
+
719
+ **Gerhard Lazu:** Or episode 91 will be Kaizen in two and a half months. \[laughter\]
720
+
721
+ **Jerod Santo:** Yeah. And so will 92.
722
+
723
+ **Gerhard Lazu:** That's also possible. And so will 92, yeah. Or we go straight to 100, and then people are like "What the hell? Where's all the rest?"
724
+
725
+ **Jerod Santo:** Right.
726
+
727
+ **Gerhard Lazu:** So it'll be 90, 100... It will just be going up in tens. We were just talking about Fahrenheit and Celsius... \[laughter\]
728
+
729
+ **Jerod Santo:** That's more of a Celsius thing... 100 is hot. I would say we would publish our next Kaizen on the Changelog feed. Ain't that safe? That's probably the safest bet today.
730
+
731
+ **Gerhard Lazu:** I think so. It's what makes most sense to me, too.
732
+
733
+ **Jerod Santo:** And stay tuned for more. We'll have more to say on that episode.
734
+
735
+ **Gerhard Lazu:** Well, I have one thing which I really have to say, and I have to mention this, because I've been trying to get through to someone from 1Password since January 15th, when I sent my email, and I haven't heard back... So if someone knows someone within 1Password that can help with their service accounts... This is so that we can use secrets from 1Password without needing to run the Connect server. I mean, we will set up a Connect server if we need to, but hopefully, we'll be able to access the secrets using this new beta feature, which as far as I'm aware is called Service Accounts, that allows us to use the secrets programmatically in CI systems. Right now, we can't do that without the Connect server. And ideally, I would like to use the Go SDK - and you see where I'm going with this... To use it directly in code, so that our CI will never see the secrets. It's just code that connects to the 1Password instance, and it pulls the secrets just in time as the code runs. So if anyone knows someone, I would very much like to talk to them to get access, try this beta feature, and see how it works. Alternatively, how do you feel about a migration from 1Password? \[laughs\]
736
+
737
+ **Jerod Santo:** Oh...
738
+
739
+ **Adam Stacoviak:** Negative.
740
+
741
+ **Jerod Santo:** Rotating secrets is my favorite thing to do... Yes, I mean - we want something that works, and works well, so...
742
+
743
+ **Gerhard Lazu:** We can set up a Connect server. I mean, it's so easy to set anything up on Fly these days, so maybe we'll just do that... Which will act as a gateway to 1Password.
744
+
745
+ **Adam Stacoviak:** \[01:13:04.23\] Well, we can make something happen with 1Password, there is some opportunity there. So...
746
+
747
+ **Gerhard Lazu:** Great. That's the one thing which was on my list.
748
+
749
+ **Adam Stacoviak:** Let me go to work, you know?
750
+
751
+ **Gerhard Lazu:** Excellent.
752
+
753
+ **Adam Stacoviak:** I'm a big fan of 1Password.
754
+
755
+ **Gerhard Lazu:** I like it too, very much.
756
+
757
+ **Adam Stacoviak:** And I root for them, in all ways. I've been using them for more than a decade. I mean, like just basically forever. They're embedded in my operations. And now with SSH integrations, and stuff like that - I just love biometrically... And thank you for removing all of our SSH needs, Changelog.com infrastructure-wise, but I still have LAN infrastructure that I have to log into, and biometrically logging in via SSH is just -- it's the way to go.
758
+
759
+ **Gerhard Lazu:** Yeah, for sure. Yeah. And I was reading this blog post on the 1Password blog about passwordless systems. I'm just going to double-check the title... So the blog post is "Passkeys in 1Password - the future of passwordless." And it was published on November 17th, 2022. So not that long ago. And it was mentioned a couple more times.
760
+
761
+ So I think that's a really cool idea... So I really like where 1Password is, and where they're going... If we can only figure this thing out, it will be even more amazing for us. So no more secrets in GitHub. Yes, baby! That's what I want.
762
+
763
+ **Jerod Santo:** Cool.
764
+
765
+ **Gerhard Lazu:** Alright. Well...
766
+
767
+ **Jerod Santo:** Should we call it a pod?
768
+
769
+ **Gerhard Lazu:** I think we should call it a pod. Someone needs to sing something, I feel like... It's my birthday tomorrow, so...
770
+
771
+ **Adam Stacoviak:** Jerod sings...!
772
+
773
+ **Jerod Santo:** Happy Trails to you...
774
+
775
+ **Adam Stacoviak:** See? Told ya.
776
+
777
+ **Jerod Santo:** That's all you're getting... Until we meet again.
778
+
779
+ **Adam Stacoviak:** He tried to sing Semisonic on the --
780
+
781
+ **Jerod Santo:** Closing Time?
782
+
783
+ **Adam Stacoviak:** ...on the & friends episode we did. Yeah, you started singing Closing Time. I edited you right out of that, man. I didn't want you embarrassed... You did not do a good job. \[laughs\]
784
+
785
+ **Jerod Santo:** All I said was "You don't go home, but you can't stay here."
786
+
787
+ **Adam Stacoviak:** Well, that's what happened in the one that shipped.
788
+
789
+ **Jerod Santo:** Ah...!
790
+
791
+ **Adam Stacoviak:** Behind the scenes, it was worse. I'm just messing with you, Jerod. I'm just being silly.
792
+
793
+ **Jerod Santo:** I don't even believe you.
794
+
795
+ **Gerhard Lazu:** With all this time that I'm going to have from not shipping a Ship It episode every week - do you know what I'm going to do instead? I'm going to go Dan-Tan! \[laughter\] That's what's happening...
796
+
797
+ **Adam Stacoviak:** Oh, my gosh. Dan-Tan... Comes again!!
798
+
799
+ **Gerhard Lazu:** Every week, I'll go Dan-Tan. \[laughs\]
800
+
801
+ **Adam Stacoviak:** Dan-Tan...!
802
+
803
+ **Gerhard Lazu:** So that's what's up.
804
+
805
+ **Adam Stacoviak:** Oh, my gosh...
806
+
807
+ **Jerod Santo:** I love it.
808
+
809
+ **Adam Stacoviak:** I've got my kids saying Dan-Tan now.
810
+
811
+ **Gerhard Lazu:** There we go.
812
+
813
+ **Adam Stacoviak:** Never telling that story again.
814
+
815
+ **Gerhard Lazu:** Everyone is on it.
816
+
817
+ **Jerod Santo:** Everyone's saying it.
818
+
819
+ **Gerhard Lazu:** So that's my plan.
820
+
821
+ **Adam Stacoviak:** Alright...
822
+
823
+ **Jerod Santo:** Sounds good, Gerhard.
824
+
825
+ **Gerhard Lazu:** Alright.
826
+
827
+ **Jerod Santo:** Thank you.
828
+
829
+ **Adam Stacoviak:** It has been good. Thank you.
830
+
831
+ **Gerhard Lazu:** Always a pleasure. There will be a next one, two and a half months away. Right? Roughly. So I don't know exactly when, but two and a half months away. It will be warm and nice where you are, I'm sure.
832
+
833
+ **Adam Stacoviak:** Yeah.
834
+
835
+ **Gerhard Lazu:** I'm looking forward to that... Kaizen!
836
+
837
+ **Jerod Santo:** Same. Kaizen!
838
+
839
+ **Adam Stacoviak:** Kaizen!
Kaizen! Embracing change 🌟_transcript.txt ADDED
The diff for this file is too large to render.
 
Rust efficiencies at AWS scale_transcript.txt ADDED
@@ -0,0 +1,322 @@
1
+ **Gerhard Lazu:** Tim. Fourth time lucky.
2
+
3
+ **Tim McNamara:** I'm so sorry, Gerhard. Honestly, I wanted to be there the first time, I wanted to be there the second time, I got injured, I couldn't come on the third time... But now I'm here, and I want you to know that -- like, my time is yours. Yeah, feel free to use it however you think would be most suitable.
4
+
5
+ **Gerhard Lazu:** Well, I have to say thank you very much, Tim. To our listeners - that's how badly I wanted to have this conversation. I tried until it happened. Okay? And we have been delaying it, I think by a few weeks, and we spoke about it briefly in September, and then holidays happened, and then all sorts of other things happened... And I said, "Okay, this has to happen." And again, there's like another big event coming up for me personally; we will talk about it a bit later, I think in episode 90. But for now, what's important is that I really wanted us to have this conversation with Tim. So thank you for joining us. Welcome on Ship It.
6
+
7
+ **Tim McNamara:** No, my pleasure. It's my genuine pleasure. I really love the way that the show has progressed, and it's quite a privilege to be speaking to you here.
8
+
9
+ **Gerhard Lazu:** Thank you, Tim. I appreciate it. Now, how are your ribs, your physical ribs? \[laughter\] That was attempt number three, right?
10
+
11
+ **Tim McNamara:** Right, right. So what happened... As context, I was riding a mountain bike along the river, came across a ditch, and I thought, "Oh, look, that's looking a little bit deep. I should slow down." I didn't quite slow down. I thought, it'd be fun to try and get through the ditch. My front wheel went down and kind of got stuck on the other side, but my body kept going. So I spent about 10 hours in A&E the night before one of our interview slots. And I think I texted you at about 1:30 in the morning, saying "Look, I'm--"
12
+
13
+ **Gerhard Lazu:** You did, actually... So I knew that you really cared about it, right? I mean, if you're like in a hospital, and you remember to text me "Hey, Gerhard, sorry, I will not be able to make our slot." I mean, you have no idea how much I appreciate that, Tim. Thank you.
14
+
15
+ **Tim McNamara:** Yeah. So ribs are okay, actually... Bruising is fine, that will go away. I've injured some ligaments that connect my ribs and my spine, which makes it quite hard to breathe in, and also to do things like lie down or put on my shoes... But I've been told that I will heal; bodies are crazy, they're good things. And yeah, this is all part of me understanding my own limits. I'm no longer sort of 19, and --
16
+
17
+ **Gerhard Lazu:** And Superman, so we clear that, right? You're not Superman... \[laughs\]
18
+
19
+ **Tim McNamara:** Yeah...
20
+
21
+ **Gerhard Lazu:** You can't fly. Well, you can, but...
22
+
23
+ **Tim McNamara:** Multiple years since I'd been on a mountain bike, and one needs to appreciate one's own limits sometimes.
24
+
25
+ **Gerhard Lazu:** Right, right. So how are you with laughter?
26
+
27
+ **Tim McNamara:** Laughter is okay, actually. I've been surprised...
28
+
29
+ **Gerhard Lazu:** Dang it!
30
+
31
+ **Tim McNamara:** Yeah, you really -- but sneezing... Like, let's say there's some dust in the room - that would basically send a dagger into my ribs, which...
32
+
33
+ **Gerhard Lazu:** I'm not sure what I can do about sneezing. I could have helped you with laughter... But let's see. I can only try. So you are a public figure. I enjoy reading your tweets very much. I'll have a question related to your Twitter handle, which I think is great. However, you're tweeting when -- I think it was like the first time that we were supposed to record, that you have a very early morning meeting. So I think you set your alarm clock for 5:15 AM. Now, you're ahead of everyone, because you're in New Zealand. So no one can catch up to you. You're always ahead of the whole world, okay? We try, but you're many hours ahead of everyone. Was that early morning meeting worth it?
34
+
35
+ **Tim McNamara:** Yes. So, I am in a very privileged position where most of my team -- so I can work remotely from New Zealand. This is very rare within AWS. Most of my team are based on the West Coast of the United States, and the remainder are in Europe. And so my day, or my week, is typically staggered so that I will have one or two early morning meetings. 6 AM here is 9 AM on the West Coast of the United States. And over where you are, Gerhard, depending on daylight savings, it's either 10 or 12 hours away. And so someone has to be up late, or awake very early.
36
+
37
+ \[05:56\] Typically, people are very understanding of the fact that I'm in a very strange timezone... The only trouble is if I wake up early, and then also don't get to sleep properly, because I'm sort of mentally preparing for the upcoming discussion, my brain doesn't really have a very good off switch. That's the thing that bugs me.
38
+
39
+ **Gerhard Lazu:** I know what you mean. Yeah, it just wakes up; the brain wakes up separately from the body, it starts churning, and then it wakes up everything else. And usually, it's like work-related thoughts, experiments, things... "Oh, have I said that? Have I done that?" I know exactly what you mean. It gets me quite a lot, too. Okay, it's not just me; that's great to know. And I'm sure others listening to this will be able to relate. So you mentioned that you work at AWS... What do you do?
40
+
41
+ **Tim McNamara:** So my job title is senior software development engineer. My job role though is a little bit broader than a typical software person. So I work within a central team, supporting the development of the Rust programming language within Amazon. That includes AWS, as well as retail, and into like Amazon Go stores; if you're in the States, you might have gone into it, in a physical store... And there are bits and pieces of Rust working in embedded -- in the shops themselves... All the way through to some of our kind of flagship services, including Amazon S3, have components written in Rust. And this is now a technology that is strategically important enough for the company to have two teams actually working on the language.
42
+
43
+ So the team that I work on is mostly centered on internal support. So we do some technical things around like supporting the internal build system, we have a mirror of the open source sort of package ecosystem... Crates.io is the Rust equivalent of let's say npm, for sort of the Node ecosystem, or PyPI in Python... And the other team is primarily focused on, let's say, the compiler, or some other core components of the Rust ecosystem, like the Tokio runtime. The division isn't clean, but that's the broad division of work.
44
+
45
+ The thing that is slightly unique to my job is I've increasingly become seen as the person who is driving education, or adoption of Rust at Amazon. So this year, I'm actually going to be leading an education project for like "How do we teach tens of thousands of programmers Rust?" I mean, it won't actually be that big, but this is kind of the overall goal...
46
+
47
+ **Gerhard Lazu:** The scale, yeah...
48
+
49
+ **Tim McNamara:** ...that we want to create a pathway for teams that are looking to adopt Rust, to be able to give them a path forward... Because most services at Amazon are written in Java, and teams choose their own tools. So there's no way to sort of centrally dictate that such and such a team or such and such service must be implemented in such and such language. Instead, we kind of need to work kind of slowly, organically, with teams that are interested, but trying to work through their pain points with the language and with the ecosystem.
50
+
51
+ And one of the things that we've found is that teams that are looking to adopt Rust will typically have one or two people who have been tinkering with the language, and they've found that things run really quickly, or that they save a lot of memory, and that they reduce the operational burden... So their systems are more stable, even in a prototype stage. But then there'll be a point at which they hit this kind of organizational inertia, where at like the engineering manager level, or let's call it the middle management tier, will suddenly -- the immune system of the organization will respond. And it will say, "Whoa, whoa, whoa, whoa, whoa... This is making me really nervous. You're implying change, and change is very dangerous." But at a more senior management level, some of the -- Rust is being talked about so pervasively now that it's impossible for senior leaders to ignore it, and they're really sold by performance benefits, by safety benefits, by security.
52
+
53
+ \[10:17\] And so these are kind of the internal dynamics, that we have teams that are very optimistic about their futures, and they're keen to experiment; let's say one step above or beyond them might be a little bit of resistance, or a skepticism of a lot of the claims. The Rust community seems to make these ridiculous, audacious claims, like it will save you, let's say -- actually, I'll talk about my own ridiculous, audacious claim, which is that "Why don't we as a software industry reduce our energy use by 50%?"
54
+
55
+ **Gerhard Lazu:** Now that's something worth doing. Right? Forget about your build times, right? Forget about latency; how about we save the planet first?
56
+
57
+ **Tim McNamara:** We can. And actually, my personal view is that software engineering is now in a significantly important part of social change, because all new products, I'd say the vast majority of them are software-first, or at least software-enabled. This actually means that the people who implement software are an integral part of social change. And I am one of these crazy people who, for example -- like, I have the electric car, I cycle to work whenever I can...
58
+
59
+ **Gerhard Lazu:** You work from home, what are you on about? \[laughs\]
60
+
61
+ **Tim McNamara:** I work from home, but there's an office though, like further down \[unintelligible 00:11:44.26\] I do primarily work from home, so this is a bit of a cheat. Dang it...!
62
+
63
+ **Gerhard Lazu:** Quick, Tim; think of something else.
64
+
65
+ **Tim McNamara:** I was on a roll. I was gonna say, I've been a vegetarian for like two decades now, and it was primarily for ecological reasons... And so I am not normal from that regard, but I still think that there was an argument that I feel quite persuasive; that we as a software industry can do better. We can respect our users by giving them secure software. We can respect our businesses by reducing waste, and also -- and primarily from the Rust perspective, this comes in memory usage.
66
+
67
+ So a lot of these cloud platforms or cloud services are very -- if you notice the way that their pricing structure works, typically compute scales quite well, so it's quite cheap to oversubscribe CPU. You know this if you've ever managed your own Kubernetes clusters; you can kind of oversubscribe CPU quite well. But one thing that is much harder to oversubscribe is memory. Because if suddenly you have two applications that want to use all the memory, neither does very well. Whereas if you have two applications that suddenly require 100% CPU, things sort of slow down, but they don't halt.
68
+
69
+ **Gerhard Lazu:** Crash.
70
+
71
+ **Tim McNamara:** They don't crash.
72
+
73
+ **Gerhard Lazu:** OOM. There is no OOM for CPU.
74
+
75
+ **Tim McNamara:** That's right, that's right. The scheduler just kind of figures it out. And performance degrades, but it doesn't catastrophically blow up. Now, my hunch - and this is from the perspective of someone completely outside of pricing, and all the rest of it, is that Rust's ability to provide very low memory use is actually as significant as some of the other benefits that we've talked about... Like being able to use significantly less compute, because it means that the density of services that you can put into some sort of compute container, let's call that a Kubernetes cluster, or it could even be a virtual machine, or wherever else you are hosting your applications... If you're using serverless as well, we can throw things in tiny, tiny containers, which run much, much cheaper... And by the way, run much faster, especially when we're thinking about latency-sensitive applications as well.
76
+
77
+ \[14:14\] So our experience at AWS is that the real benefit from a Rust rewrite is at the p99.99 - that is, tail latencies. So in garbage-collected languages -- so just for anyone who's not aware, a garbage collector is a kind of appliance that sits next to your application, that manages memory. So if you're using Go, or Java, or let's say even Python, or JavaScript, and now TypeScript I suppose as well, your application will be running alongside - what's also known as a runtime - some software that runs alongside your application, that's doing some bookkeeping to keep your system alive. Now, in Rust, we don't require that runtime. So there's actually less work for the computer to do. And the reason why this becomes important is that the garbage collector actually competes with your own application for time. And in the worst case, it will actually halt - it's called "stop the world" - execution of your application to do its own bookkeeping. Now, that's a problem, because you want to serve your users.
78
+
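+ To illustrate the contrast Tim is drawing, a tiny Rust sketch (illustrative only): memory is freed deterministically when its owner goes out of scope, at a point visible in the code, so there is no collector thread to pause the world later:
+
+ ```rust
+ // Illustrative: Rust deallocates at a known point, with no GC runtime.
+ fn handle_request(payload: &[u8]) -> usize {
+     let buffer: Vec<u8> = payload.to_vec(); // heap allocation happens here
+     buffer.len()
+ } // `buffer` is freed right here, deterministically - no pause later
+
+ fn main() {
+     let n = handle_request(b"hello");
+     println!("{n} bytes handled");
+ }
+ ```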
79
+ **Gerhard Lazu:** Yep.
80
+
81
+ **Tim McNamara:** And so under periods of high contention, these garbage collection pauses are very, very problematic, especially at large scale, when your servers are running very hot, but you might have hundreds -- let's actually take one public case, which is actually not from AWS; it's from a company called Discord. \[unintelligible 00:15:51.20\] it's messaging; it's a little bit bigger than that, but it's primarily a messaging system. And Discord had a service running written in Go, that was running very hot, and they encountered that every two seconds or so their tail latencies would spike by hundreds of milliseconds. And this was just because of the garbage collector. And Go's garbage collector is actually well known for being very good, and for actually respecting the application, and doing its best to kind of stay out of the way.
82
+
83
+ So the Rust \[unintelligible 00:16:32.04\] actually had two effects. One, we had less memory usage overall, which goes to the earlier point... But what we're talking about now is the latency - even at p99-point-whatever, they never had these latency spikes. And so actually, the user experience suddenly becomes much better.
84
+
85
+ You think about a relatively popular messaging system - if one in every 100 messages is going slowly, it's actually going to create a lot of lag for that conversation as a whole. Like, it doesn't take that many messages, if there are ten recipients, for the p99 to start really impacting the usability of the entire experience.
86
+
87
+ **Gerhard Lazu:** We will dig into a few more stories, because I was looking to see -- so first of all, a few months ago you gave a talk at AWS re:Invent that fascinated me. I thought it was an amazing talk. And not just because of you, because of the topic as well. Right? So I think they both worked really, really well. The title was "Rust is interesting, but does it really make sense for me?" We will link it in the show notes. At the beginning of the talk you mentioned that software development today is unreliable, insecure and wasteful. I think we have dug into the wasteful part quite a bit. But obviously, there's like other implications for the programming language that you choose. The runtime is important, but also what happens before you run it in production.
88
+
89
+ \[18:03\] Now, you gave the Discord example; that was a very good one. In your talk, you presented Alan Ning, an SRE at Tenable.io. And he wrote this in 2021. He said that with Rust, they saw a reduction of 75% in CPU usage, and a 95% reduction in memory usage in production. I was disappointed, I was hoping 100%... \[laughs\]
90
+
91
+ **Tim McNamara:** Yeah. Clearly, Rust has work to do.
92
+
93
+ **Gerhard Lazu:** But that's good enough, right?
94
+
95
+ **Tim McNamara:** 95% is a good baseline, let's say...
96
+
97
+ **Gerhard Lazu:** Not enough. It should be 99. It should start with 99, right? That's when we start paying attention. So seriously, they went from using over 1000 CPUs, to 300. And again, we will link to the blog post for others to see. So that's great; less CPU, less memory, but latency. I'm a big fan of low latency. Why? Because that makes everything fast. And everything fast - there were plenty of studies that showed how much money saved time is worth, and the impact that the perceived responsiveness of pages has on users buying things, or performing tasks... So that cost is immense. Not just for the people running the software, but for the people using the software; the users. There's many more people using the software than writing the software. So that's very important.
98
+
99
+ Now, let me check... So now I'm going to watch my dual fiber WAN setup for latency, okay? That was attempt number four; we recorded, but your fiber was -- well, it wasn't your fiber, but anyways...
100
+
101
+ **Tim McNamara:** Yeah, my internet connection was just not really...
102
+
103
+ **Gerhard Lazu:** It wasn't that. But anyways, like latency in internet is a big deal.
104
+
105
+ **Tim McNamara:** This is something I appreciate a lot, by the way, being in New Zealand and having services quite frequently hosted in the United States or Europe... That apparently, the speed of light requires that I wait for hundreds of milliseconds quite frequently.
106
+
107
+ **Gerhard Lazu:** We need to bring them closer to you. So when I was setting something up, I always thought about "Okay, let me put this thing in Tokyo (that was like last week), so Tim can access it faster." And we'll talk about the thing that I deployed to Tokyo a bit later. Maybe not in the recording, but anyways; we'll see.
108
+ So Fastly, and DNS, is three milliseconds flat for me. And that's really important. That's DNS, plus a big CDN - lots of stuff runs through it. GitHub is 14 milliseconds. Not bad. You've seen that they've improved a couple of things in their infrastructure recently. My ISP has inconsistent routing to Fly.io. So sometimes, some routes -- and I can see them going from 35 milliseconds all the way down to three milliseconds. So that's a problem in itself. So latency is a big deal.
109
+
110
+ Now, Alan from Tenable, when they replaced Node.js with Rust, the latency per packet dropped by 50%. And that is a huge, huge thing, because that means a lot of money saved, both in terms of cost of running it, but also for the users. Again, we will link to the show notes.
111
+
112
+ Now, this low latency - I think we're on a roll here. You shared a link to maxday's Lambda Cold Starts - 10 Lambda cold starts. This is updated daily, and it compares different languages - lambdas written in different languages - and how they compare. Do you want to tell us a little bit about that, since it was your post? Well, you posted it on Twitter...
113
+
114
+ **Tim McNamara:** Okay, so I'm curious as to whether or not I should tell that... So it's an interactive website that invites you to reload, and then see in real time how fast it would take for ten cold start applications written -- so it's the same application written in multiple languages. And the punchline is that Rust is the fastest. In fact, it's almost instantaneous.
115
+
116
+ **Gerhard Lazu:** Numbers? We need numbers.
117
+
118
+ **Tim McNamara:** Actually, I don't have the numbers off the top of my head, but--
119
+
120
+ **Gerhard Lazu:** 15 milliseconds.
121
+
122
+ **Tim McNamara:** \[22:08\] Whereas let's say in the worst case it's Java. Now we're talking - it moves from, in the Rust case, let's say instantaneous, or unable to be perceived (less than 15 milliseconds is kind of the limit of human perception), to seconds in the Java case, which means that there is a very significant difference in terms of the user experience for this service. And if you could imagine having functions chained together, so that one depended on the other, now you would actually amplify -- this problem would cascade; if you have slow cold start latencies, the user experience would degrade very, very quickly, and it would be a very poor experience.
123
+
124
+ So there are a couple of reasons why this is the case. One is that - so let's compare Rust versus Python. So I think in the Rust case, all of the ten invocations took around about 15 milliseconds. So that's 1.5 milliseconds per function invocation, versus let's say a Python, where it's somewhere in the region of over 100. In fact, let's go for -- like approximately 150, just to make the mathematics a little bit simpler.
125
+
126
+ There are multiple reasons why this occurs. One is Rust -- so the implementation of AWS Lambda is actually an open source package, an open source thing called Firecracker. Now, Firecracker runs containers. And containers need to be downloaded from somewhere. And smaller containers are easier to deploy than larger containers, because they are faster to download. So there is a significant benefit from actually having smaller containers. So Rust kind of wins by default there.
127
+
128
+ The other thing to note is that because the Rust Lambda function will actually finish executing before let's say the Python interpreter has actually interpreted the script, like has begun to read the Python script. So the Python interpreter let's say might take dozens of milliseconds to actually load up; in the worst cases it can be like over 100 milliseconds itself. And by that stage, Rust would have already finished. And if we think about this, about the change at Amazon... So our personal -- our advice for our internal builders... We use the term "builder" internally, which means anyone that is writing software. Where other people would use software developer, we say software builder. That's just a quirk of the culture. And so our advice is to deploy to Lambda first. So if you're writing a new service, write using Lambda. If you can't get what you need out of Lambda, then go to Fargate. And if you can't get what you need out of Fargate, then go to EC2.
129
+
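+ For a sense of what the Rust side of such a benchmark looks like, here is the minimal handler shape from the open source aws-lambda-rust-runtime examples - a hedged sketch of the general pattern, not the benchmark's actual code:
+
+ ```rust
+ use lambda_runtime::{run, service_fn, Error, LambdaEvent};
+ use serde_json::{json, Value};
+
+ // Echo-style handler: the whole binary is this plus the runtime loop,
+ // which is part of why the deployed artifact stays so small.
+ async fn handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
+     Ok(json!({ "received": event.payload }))
+ }
+
+ #[tokio::main]
+ async fn main() -> Result<(), Error> {
+     run(service_fn(handler)).await
+ }
+ ```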
130
+ So the reason why we use Lambda first is because it reduces the operational burden of service teams. We talked a little bit before that service teams choose their own stacks. They also are responsible for their own operations. It's on the team to actually make sure that they are available to support the application at the scale across the whole fleet of AWS.
131
+
132
+ Now, Lambda is -- and now we're talking about "Okay, well, we need to actually reduce..." It's actually very expensive to run AWS. I mean, people might joke it's expensive to buy from AWS, but it's also relatively expensive to actually run the thing. It's quite big.
133
+
134
+ **Gerhard Lazu:** \[25:58\] Oh, yeah.
135
+
136
+ **Tim McNamara:** So then the question is, if you can reduce, let's say, the cost of running the internal systems -- and in Java, if we look at that Lambda Cold Start analysis by maxday... I'm just opening up the page now. Java 11 takes 435 milliseconds; Java 8 is at 530... Whereas Rust is 15 milliseconds. And by the way, the memory is reduced by at least 75%. So it's a quarter.
137
+
138
+ **Gerhard Lazu:** So just to put this in multiples... Java is 29 times slower than Rust. Not 10x - 30x slower. Go is 4.3 times slower. In other words, Rust is 4.3 times faster. So in a way, you can say that Rust is to Go what Go is to Java. And that's what the numbers are saying. So if you ever wrote Java and you thought "Wow, Go is fast", try Rust. Seriously. \[laughs\] The numbers are there. And we haven't even touched the memory, but you were going to say something about memory. So tell us about memory, Tim.
139
+
140
+ **Tim McNamara:** The memory case is significant, because then we can actually bundle the applications into smaller containers, which means we can pack more containers in the same host, and actually get overall savings across the entire system, or the whole fleet... Which at AWS' scale translates to, conservatively, hundreds of millions of dollars, and potentially, another order of magnitude. And so my job, or at least kind of the goal that I've set for myself as someone who's trying to advocate for this language internally is to save the company $100 million a year.
+
+ **Gerhard Lazu:** That's a nice one. By the way, you're saving the company money, but what is not said is that you're also saving users money. Because you, running, will be paying less, regardless of where you run, by the way. Even if it's not AWS, by using a language which is very nicely optimized, latency is better, memory is better, you're paying less, you can do more with less hardware, fewer resources... Who wouldn't want that, without compromising on latency? And there's something even more important; apparently, you will love it. \[laughs\] That's what the Stack Overflow survey says. Right?
+
+ **Tim McNamara:** Yeah. So it turns out that developers really like programming in Rust. And one of the worst things about becoming an advocate for the Rust ecosystem is that you kind of don't stop talking about Rust. \[laughs\]
+
+ **Gerhard Lazu:** I'll change the subject soon, but... A few more minutes, and that's it. \[laughs\] No, no, no. Please keep going, Tim. Please keep going.
+
+ **Tim McNamara:** And there are a few reasons why this is the case. Rust as a language provides a couple of primitives for being really expressive. And it's a very consistent language. And it came late, and therefore could learn a lot of lessons from, I guess, its peer languages.
+
+ So at this time I'd like to chat about a couple of these... One of them is some of these language features. So Rust is a programming language that kind of bolts together C++ on one side, and let's say maybe Haskell or ML on the other side. And if you know the history -- so functional languages have typically had pieces in them like pattern matching, and -- the technical term is... Actually, let's try and avoid the jargon. But in Rust language, it would be an enum; so an enum you think of as like a set of named constants. But actually, in Rust and in some other functional languages which it's derived from, you can actually have data inside each of these values. And so this provides a very elegant way to model state. And one of the distinctive characteristics between, say, a Rust and a Go is that Rust will require that you always handle the error. In Go, the underscore is available to you, if you want. \[laughs\]
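+
+ A minimal sketch of the kind of data-carrying enum being described here -- the names are illustrative, not from the episode:
+
+ ```rust
+ // Each variant can carry its own data, which makes state modeling very direct.
+ enum ConnectionState {
+     Disconnected,
+     Connecting { attempt: u32 },
+     Connected(std::net::Ipv4Addr),
+ }
+
+ fn describe(state: &ConnectionState) -> String {
+     // Pattern matching forces every variant to be handled explicitly.
+     match state {
+         ConnectionState::Disconnected => "offline".to_string(),
+         ConnectionState::Connecting { attempt } => format!("retrying (attempt {attempt})"),
+         ConnectionState::Connected(addr) => format!("connected to {addr}"),
+     }
+ }
+
+ fn main() {
+     let state = ConnectionState::Connecting { attempt: 3 };
+     println!("{}", describe(&state));
+ }
+ ```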
+
+ **Gerhard Lazu:** \[30:31\] Yeah. I use it often. Guilty as charged.
+
+ **Tim McNamara:** Right. It's because you know that error will never occur.
+
+ **Gerhard Lazu:** Right. Famous last words... \[laughs\]
+
+ **Tim McNamara:** Well, yeah, exactly. I mean, I was a Go programmer for a year and a half or so at Canonical, and we had a large application, and we had hundreds of these things. And to me, now, they just look like grenades, or just mines; at some point in the future there will be an edge case that will cause everything to crash.
+
+ **Gerhard Lazu:** Minesweeper, that's what I'm thinking. Minesweeper. You never know what's gonna be behind that click...
+
+ **Tim McNamara:** That's right.
+
+ **Gerhard Lazu:** Okay, okay. How does Rust handle errors?
+
+ **Tim McNamara:** Every function that can, let's say, result in an error state is modeled as an enum with a good state, or let's say an okay state, and an error state. And packed inside the okay component is what you expect to be the happy path. And inside the error state, or error side of the enum, is whatever you want. In fact, the only requirement is that it knows how to print itself to the screen. Essentially, once it can do that, it can be used as an error. And the funny thing about this result type - so Result is the name of the enum... The result type is not defined within the language itself. It's not a special case baked into the language; it's actually provided by the standard library. It was an idiom inserted into the standard library, and because of the ubiquity of the standard library, and IO - so the standard library provides facilities for being able to interact with the file system, and so forth - this pattern of returning a result type became pervasive inside the Rust ecosystem. And it's just kind of a downstream effect of having a very well-designed language.
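+
+ A rough sketch of the Result idiom as described -- the ParseError type and parse_port function are illustrative, not from the episode:
+
+ ```rust
+ use std::fmt;
+
+ // An error type only needs to know how to print itself (Display)
+ // to fit Rust's error-handling conventions.
+ #[derive(Debug)]
+ struct ParseError(String);
+
+ impl fmt::Display for ParseError {
+     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+         write!(f, "could not parse {:?} as a port", self.0)
+     }
+ }
+
+ // Result<T, E> is a plain enum from the standard library: Ok(T) | Err(E).
+ fn parse_port(input: &str) -> Result<u16, ParseError> {
+     input.trim().parse::<u16>().map_err(|_| ParseError(input.to_string()))
+ }
+
+ fn main() {
+     // The compiler will not let the Err case be silently dropped;
+     // you have to match it, unwrap it, or propagate it with `?`.
+     match parse_port("8080") {
+         Ok(port) => println!("listening on port {port}"),
+         Err(e) => eprintln!("{e}"),
+     }
+ }
+ ```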
+
+ I've sort of thought about Rust in terms of "Well--" So Rust was created because of this existential threat that Mozilla faced from Google Chrome. Firefox needed to be faster, and it could do that with parallelism. But C++ was actually too difficult for them to fix -- actually, all they wanted to do was parallelize CSS decision-making. So like which style is applied to which elements? And this should be something that's inherently parallelizable, but they weren't able to do it. And essentially, a team was given, let's say, five years to create a new programming language and a new browser that would compete with Chrome. And Rust feels to me like the kind of language you would get if you put a team on a project to develop the world's best programming language, and gave them five years to do it.
+
+ There will be something that replaces Rust. It's not perfect. There's things that kind of irritate me. And I am particularly irritated by the fact that it's quite difficult to learn; some of its semantics are different. But irrespective of that, it's proven to be very, very practical. So if I can talk from our own experience within AWS, we have been able to reimplement the storage node that sits underneath S3, this thing called shard store... So a shard store stores shards of data. So the way that S3 works is it will take any input objects and split them up and store them in different places physically. And these are called shards. And the shard store reimplementation in Rust has actually been formally verified as resistant, or I think immune to most classes of errors.
+
+ **Gerhard Lazu:** \[34:23\] When was this? Is this something recent?
+
+ **Tim McNamara:** Amazon S3's shard store -- in terms of the timing, I think it was publicly announced in 2021... But I'm actually not sure exactly when the project started. So actually, Rust has enabled S3 to perform better almost by definition at hyperscale, as well as -- how am I going to say it? We've actually been able to increase not just the performance, but also the reliability and robustness of the application itself.
+
+ **Gerhard Lazu:** Yeah. And the correctness, most importantly. It's more correct than it was before.
+
+ **Tim McNamara:** I actually don't know whether or not it exceeds the other implementations of the storage node internal API, or whether or not at least it meets the very high standard, if you know what I mean. So it's very difficult to, in some sense, dethrone a service that is running very well, so Rust really needed to kind of prove its worth. And Rust has had a really big impact at these very large services, but one of the places that I would like to kind of point out, that it's also -- this isn't public, because it's internal developer tooling, but one of the other places that it's doing really well is in kind of these developer CLIs, just for things like doing plumbing, or just kind of developer productivity tools.
+
+ Amazon staff probably work on either Linux or macOS laptops, and there might be ARM or Intel chips, and then they deploy to Linux-based servers that, again, might be x86 or ARM architecture. And so it's quite difficult over a long period to kind of create a developer utility and make it installable. For example, with let's say Python, you need exactly the right version of the interpreter, as well as all of its dependencies, and they need to be the right version. So kind of creating that, a lot of those problems go away with Rust, because it's actually able, like Go, to be "statically compiled", which means it kind of bundles in all of its dependencies inside the compiled artifact, and compiles them for the target architecture, and therefore makes an application much more standalone.
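+
+ As a sketch of what "statically compiled" buys you in practice -- the musl target and commands below are one common way to produce a fully static Linux binary, not something prescribed in the episode:
+
+ ```rust
+ // main.rs -- a trivial CLI; the interesting part is how it is built.
+ //
+ // Cross-compiling to a fully static Linux binary (assuming rustup is installed):
+ //   rustup target add x86_64-unknown-linux-musl
+ //   cargo build --release --target x86_64-unknown-linux-musl
+ //
+ // The resulting binary bundles its dependencies, so it can be copied to a
+ // server without first matching interpreter or library versions there.
+ fn main() {
+     println!("hello from a self-contained binary");
+ }
+ ```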
+
+ **Break:** \[36:58\]
+
+ **Gerhard Lazu:** I know that, again, back to your AWS re:Invent talk, AWS are big believers in Rust... It's present in many places within AWS, including services that we use. I know I use one of them, Amazon Prime. I was surprised that Rust is there. But also other places, like Firecracker. I'm a big fan of the technology. I think it's amazing. Where else would we see Rust, or would we experience Rust without necessarily knowing? S3... That's a big one, right? So it's like the storage node, as you mentioned, but also, you were mentioning in the talk some of the get, put... Not delete; because we don't want people to delete their data from AWS. And that's a joke, right? But obviously, get, put and delete. Rust is on the hot path, which are like one of the most used API endpoints. Do you want to talk more about that?
+
+ **Tim McNamara:** Yeah, so there was a really good blog post... I say this because it's my company that wrote it. No, there was actually a genuinely good blog post called "Sustainability with Rust" that was put out by my colleagues about a year ago, at the start of February 2022. And that provides a couple of glimpses at the public services that have parts implemented in Rust. Amazon S3, we've talked about components of Amazon EC2, CloudFront, which is our CDN product, as well as the AWS Nitro enclaves. So a Nitro enclave is heavily secure -- you can almost think of it as a sidecar for your EC2 instance, which can be -- well, it's very good at holding secrets. There's probably a more technically precise way of explaining that, but it provides heavy isolation.
+
+ **Gerhard Lazu:** Right.
+
+ **Tim McNamara:** And the -- I mean, it's quite a long post, because we are trying to really flesh out a lot of the strengths of Rust there. But one of the things that is -- the big message is that we as a software industry, and we as AWS, as a very large consumer of electricity, can reduce our environmental impact in a very substantial way. And these goals that we have set ourselves for saving, let's say - like this one that I have, of $100 million a year - are very genuine. And what I'm really hopeful for, what I'm really super, super-excited for personally, is that startups and other companies and other businesses can really think about like "How is it that we can reduce our operational cost?" And if you think about -- like, no one wants to have to deal with a broken application at like, let's say, four o'clock in the afternoon, or after the kids are asleep, in our case. They really don't want to have to deal with things breaking. And we have an opportunity to develop software systems that are robust, that can scale well, that can use very few resources, and make use of the hardware; hardware continues to improve, but essentially, software is becoming weirdly more and more bloated over time.
+
+ **Gerhard Lazu:** Yup.
+
+ **Tim McNamara:** And I think that Rust is not the complete answer to this. But it only needs to be a partial answer. I mean, it's a programming language... But it is a good partial answer for fighting back against this problem of software bloat.
+
+ **Gerhard Lazu:** I have an important question... What kind of dollars are we talking about here? Is it New Zealand dollars, is it Australian dollars? Right? Because there's a huge difference... \[laughs\]
+
+ **Tim McNamara:** Right, right, right. Is it Zimbabwean dollars...?
+
+ **Gerhard Lazu:** Exactly. \[laughs\]
+
+ **Tim McNamara:** I'm talking US dollars.
+
+ **Gerhard Lazu:** \[41:58\] So 100 million US dollars. Okay. Okay.
+
+ **Tim McNamara:** Yeah, that's the target that I'm setting for myself.
+
+ **Gerhard Lazu:** Those are the good ones, just to be clear... \[laughs\]
+
+ **Tim McNamara:** That's the world's reserve currency...
+
+ **Gerhard Lazu:** Yeah, exactly. They're like the strongest dollars, okay? \[laughter\]
+
+ **Tim McNamara:** Look, we can pick our currency of choice. The thing that I think is significant is that the reason why this saves money is that essentially at the scale that AWS is, energy usage equals cost. And the reason it saves money is because it uses less energy to deliver the same -- in fact, a better user experience. So Amazon is a customer-centric company, and I'm more than happy to actually flesh that out, because there's a lot of cynicism about Amazon and AWS, which I think AWS and Amazon should be prepared to kind of confront... But my personal experience as someone who's been at the company for about eight months now is that this idea of being customer-centric - it's incredibly strong throughout the company. And in fact, yes, the company would like to save money. Yes, it would like to increase its profit. But the thing that really kind of pushes the services forward is this really strong desire for improving our customers' experience. And since AWS is an enabler of other businesses, because it's a utility compute platform, or a utility compute -- it's essentially always going to be in the background. But our role is to enable or to kind of facilitate others to grow.
+
+ And yeah, that's one of the things that really struck me... Because I came on board with a very high degree of skepticism myself, but I've been really impressed actually by a couple of things. One is this customer centricity. Another one is a very strong dedication to data safety, and data privacy, and security being like utterly paramount to almost everything that we do. And in fact, the internet connectivity issue that we faced before one of these internet slots was because my computer had shut itself down and isolated itself, because it had detected that the software updates weren't current, and so it said "No, no, no. The only thing that you can access--" It restricted its own firewall, and so "The only thing you can access is the update mechanism. So do that."
+
+ **Gerhard Lazu:** It detected the Gerhard threat. That's what happened. \[laughs\]
+
+ **Tim McNamara:** And I think the cost savings are kind of just a byproduct of being a really -- how do I put this? Just a byproduct, I think, of developing software in a way that actually meets the expectations that are being put on software developers. The systems that we write - I feel like this is a slightly philosophical point, that software has been given a very privileged position as a way to develop public policy; all of our healthcare systems run with software, our airplanes, our entire transportation network... Every business requires software. And therefore, we shouldn't actually expect that things will crash.
+
+ **Gerhard Lazu:** For sure.
+
+ **Tim McNamara:** We shouldn't expect that updates will be hard. Like, we shouldn't expect that the applications that we use will be flaky. Now, again, Rust is not a complete solution. And in fact, there are some things that I think that Rust makes really challenging. It makes life really difficult for learners. Rust is more restrictive in the programs that it accepts; it's more particular. It's kind of more fussy, and slightly more bureaucratic. And that becomes really irritating. So it's less flexible, and therefore isn't as well suited for like quick and dirty scripting, and a bunch of other tasks that other languages or other ecosystems cater to much more smoothly.
+
+ \[46:06\] And the other area that it doesn't do very well on is I think in some of the data science, the scientific method, or the research methodology of being more exploratory, and kind of interacting with a dataset in an interactive fashion. I think Python is much better suited for that particular use case.
+
+ **Gerhard Lazu:** Yeah. How many years have you been doing Python for? Because you were a Python developer before.
+
+ **Tim McNamara:** Right. In fact, I was quite into Python. I organized the New Zealand Python Conference in 2012. I feel a little bit this tinge of guilt and regret now that I'm seen as New Zealand's Rust guy... Probably 10 to 15 years; depends how you count, because I am a self-taught developer, and I spent a lot of time in open source before getting my first genuine job. And so you can play around with the numbers a little bit, but I would say a good decade of experience in Python.
+
+ Rust actually -- in fact, one of the reasons I learned Rust was because I wanted to make my Python go faster. But everyone sort of said to me, "Don't write C. That's for experts."
+
+ **Gerhard Lazu:** Interesting.
+
+ **Tim McNamara:** Like "Don't do native extensions. They're really difficult. You can blow up your application, you could cause a security vulnerability. You could create a segfault." I had very limited understanding about what a segfault might be. That sounded dangerous. And Rust actually taught me what a pointer was, which is one of the -- you know, it taught me how memory works. So that's why I wrote my book.
+
+ **Gerhard Lazu:** That ChatGPT knows about as well, apparently... How did ChatGPT learn about your book, Tim? I mean, seriously... Did you teach it? Is that how --
+
+ **Tim McNamara:** I didn't go and like hack OpenAI. ChatGPT knows about my book because it was announced in about 2017-2018. It wasn't released until 2021. It was the most horrific experience I think I've had trying to get that thing out. So to make things easy for myself, I started with this idea that technical literature has kind of been degrading over time. I think of the O'Reilly books that I used to read in the '90s, and they were really well written, very well edited, and were quite thorough, and just had this kind of genuineness to them. Whereas I've been noticing this trend where the depth of knowledge had become much lighter. It's kind of like there wasn't much of an advantage of buying a book, versus reading the sort of introductory blog posts... And also, that had seemed to be kind of moving towards the content marketing side, where a company would essentially commission a book to promote their product, and give it a vendor-neutral sounding title... But essentially, it was just a guide for using their tool for the particular domain. And so I had this idea, "Why don't I write the book that I would like to read? Why don't I write the best book there is?"
+
+ **Gerhard Lazu:** Okay. Tim, I think I have to buy it now. \[laughter\] Alright, so for the listeners - we have this thing with Tim. It may not be obvious, like back and forth; it's like almost like jokes, but like we are pulling each other's leg... Figuratively, obviously, just to be clear. This was one of those... But I don't buy first editions, unless they're signed; because I know the second edition is coming up...
+
+ **Tim McNamara:** Ah, right. Okay. Well, I'm going to be in London next week. I don't know when the show is gonna \[unintelligible 00:49:38.21\]
+
+ **Gerhard Lazu:** There you go. Please bring the first edition, Tim. I will buy it, I promise.
+
+ **Tim McNamara:** Okay.
+
+ **Gerhard Lazu:** Signed. Please. \[laughs\] And then I'll buy the second one, too.
+
+ **Tim McNamara:** So that was the outset. Now, it turns out that it's really hard to write a technical book, especially between eight and eleven PM. Because I had just become a new father, and at that stage I didn't have a job that was in Rust. And I didn't want to read any of the other books, like the official book, because I didn't want to ever -- like, I didn't want to learn Rust from them, and I didn't want to write a book that infringed anyone's copyright... And so I'm essentially one of the last people left that taught themselves Rust without the use of like the official free book.
+
+ \[50:26\] So my book distinguishes itself because it allows the reader to kind of work through biggish examples. So we write a database, we write a CPU emulator, we write an NTP client, we write a little kernel of an operating system. It turns out that when you are implementing these things from scratch, to write a chapter about them, this takes a while. And it turns out there are bugs and things that need to be updated... And often, because you want to be able to have the learning path such that you are always adding to the language, I needed to kind of chisel down the examples to only include one extra language feature at a time.
+
+ I remember there was one part in there where I was like -- we brought up the term segfault before... And I was like, "Oh, you know what I should do? I should write a thing that does what game cheats do." So a game cheat system will actually go and inspect the memory of another process that's running. Like, let's do that!
+
+ **Gerhard Lazu:** Right. \[laughs\] Okay...
+
+ **Tim McNamara:** And so I wrote this little utility, and it turns out that it could actually go and basically print out for you what the memory address space looks like. And you run into all these problems about like, "Well, this area of memory isn't actually being used, therefore you need to kind of decide what to do with that." The CPU doesn't like accessing memory that doesn't actually exist.
+
+ And then I needed to teach everyone about like virtual memory versus physical memory, and like how does a number get translated to like a physical place on the chip, and a bunch of other things like that. And that example actually needed to get like whittled down or chiseled away into just printing the memory layout for an example, for some code; just a tiny little thing. But it was originally much longer. And there was one of my chapters that kind of grew to like well over 100 pages, and my editor stopped me and said, "No."
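+
+ A tiny sketch of the kind of whittled-down, memory-layout-printing example being described (illustrative, not the book's actual code):
+
+ ```rust
+ fn main() {
+     let on_stack = 42;             // lives on the stack
+     let on_heap = Box::new(42);    // lives on the heap
+
+     // {:p} prints the (virtual) address that a reference points to.
+     println!("stack value at: {:p}", &on_stack);
+     println!("heap value at:  {:p}", &*on_heap);
+ }
+ ```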
+
+ **Gerhard Lazu:** "That's another book, Tim. Tim, books shouldn't have books. Books have chapters. That's how it works, Tim." \[laughs\]
+
+ **Tim McNamara:** That's right. So essentially, what I've done -- most tech books only have like one, what they call a capstone example. But mine has like at least a dozen. And that's a problem for the development of the book, because it slows everything down.
+
+ So in terms of the readership, it's received very good reviews, but not universally positive feedback. So the people that complain -- so for about, I'd say, 90% it's either four or five stars. It's like, "This is a wonderful book." A good fraction of people have said like "This is the best book I've ever read." So I kind of got what I wanted out of that. But there was a small fraction of people that are like, "I don't like it. This isn't like the other books."
+
+ **Gerhard Lazu:** Wasn't that intentional? That was intentional, right? It was by design.
+
+ **Tim McNamara:** That was completely intentional that this was going to be a different book and teach you with a different learning style. I'm not going to go walk you through the entire language, and I'm not going to create a clone of the official free book. It's essentially redundant; there's no way that it's going to become -- so that's the thing about Rust in Action, is that it is different, and that essentially, it provides things that are, I'd say, 70% or 80% complete, essentially there to kind of nudge you, and to kind of say, "I dare you to go and expand this." It's like "There are so many areas that you could take this. Let's see where you could get."
+
+ \[54:15\] So for example, the CPU emulator - I only implement addition and subtraction. But we do know what an opcode is; we do know how to implement an opcode. But there's the rest of the spec for this little baby CPU, this thing called a \[unintelligible 00:54:27.23\] Anyway... So I'm rattling on...
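+
+ In the spirit of that baby CPU emulator, here is a minimal sketch of opcode decoding with addition and subtraction -- illustrative only, not the book's actual code:
+
+ ```rust
+ struct Cpu {
+     registers: [u8; 16],
+ }
+
+ impl Cpu {
+     fn execute(&mut self, opcode: u16) {
+         // Decode the two register indices packed into the opcode.
+         let x = ((opcode & 0x0F00) >> 8) as usize;
+         let y = ((opcode & 0x00F0) >> 4) as usize;
+         match (opcode & 0xF000, opcode & 0x000F) {
+             // 0x8xy4: add register y into register x
+             (0x8000, 0x4) => {
+                 self.registers[x] = self.registers[x].wrapping_add(self.registers[y]);
+             }
+             // 0x8xy5: subtract register y from register x
+             (0x8000, 0x5) => {
+                 self.registers[x] = self.registers[x].wrapping_sub(self.registers[y]);
+             }
+             _ => unimplemented!("opcode {opcode:#06x}"),
+         }
+     }
+ }
+
+ fn main() {
+     let mut cpu = Cpu { registers: [0; 16] };
+     cpu.registers[0] = 5;
+     cpu.registers[1] = 10;
+     cpu.execute(0x8014); // add r1 into r0
+     println!("r0 = {}", cpu.registers[0]); // prints 15
+ }
+ ```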
+
+ **Gerhard Lazu:** How is the second edition coming along?
+
+ **Tim McNamara:** I've just signed the contract, actually. I really wanted a break. I didn't enjoy the process of writing the first book at all. So the second edition, I'm going to clean up -- it's basically to do what I think you're suggesting; the reason you don't buy first editions is that there are things that I could phrase better... In fact, there are a couple of places where now that I know more about Rust, actually, I feel like I can explain concepts a little bit more thoroughly.
+
+ I'm going to expand my treatment of what a trait is, because I think this is a term that is odd. Programmers are used to terms like interface, or inheritance, or like an abstract base class potentially... But trait - that sounds a bit strange. And there are some other -isms, or some other quirks about Rust that I think I'd be much more fluent at explaining.
+
+ What I'm a little bit worried about is I will have less of a beginner's eye. So one of the things that I think worked really well was that I was primarily a Python programmer who was writing a book about learning Rust... Whereas now, I am probably considered an expert Rust programmer, and I'm a little bit concerned about kind of the expert bias or expert blindness coming through. But I'm hopeful that I will be able to counter that. I'm also hopeful to kind of inject another thing.
+
+ **Gerhard Lazu:** I can help. Beginner to Rust... You sold it well enough; I'm interested. You made me curious. I could help, and maybe others, too; others that you know want to get into Rust, to give you some feedback on the initial... Especially those that may have already read the first version; or they read the first version and then very quickly they can look at the second version and see how they compare, while they're still new to Rust.
+
+ **Tim McNamara:** No, this is really important. For anyone that's taking on a real writing project, it's really important to always have the kind of reader at the front of mind. I fall into this trap myself, but it's very easy to write so quickly, or that you miss a step, because you're presuming knowledge that your reader doesn't have, by mistake. And you've therefore made a gap, and effectively, you create a dead end for a reader. And that's a really disorientating experience.
+
+ **Gerhard Lazu:** Underscores... We keep coming back to those. You've created an underscore, right?
+
+ **Tim McNamara:** That's right.
+
+ **Gerhard Lazu:** "It doesn't matter what it is. Doesn't matter. It's okay, it's an underscore..."
+
+ **Tim McNamara:** We have those too, in Rust, by the way.
+
+ **Gerhard Lazu:** So you are @timClicks on Twitter...
+
+ **Tim McNamara:** Uh-huh.
+
+ **Gerhard Lazu:** What does Tim click?
+
+ **Tim McNamara:** \[laughs\] So timClicks was originally a thing... So I was thinking of like a quirky, pithy handle, and I was clicking around on my keyboard. I was trying everything; it was relatively early in Twitter's life, so I thought I might be able to get a short handle. timClicks actually was just there because I was clicking on the keyboard, and then eventually, I was like, "Oh, this kind of fits."
+
+ **Tim McNamara:** \[57:38\] The things that really click for me are an inherent -- I've kind of got this inherent idea that people are kind of growth-driven, or they are... I'm a real optimist for humanity, and we collectively are facing very significant problems that are tractable. And this doesn't mean that I want to be overly prescriptive. I don't want to be paternalistic with my outlook. I don't necessarily require that everyone adopts my lifestyle. What I do think is really important is to have a genuine conception of how people are feeling -- or really listen, I think is really the thing that I want more people to do. And when someone is disag-- so it's very, very natural in a heated discussion to think "I have thought this through. I have my position very clear. If you are opposing me, that's because you are wrong."
+
+ And actually, we've known for centuries that two people can look at the same evidence and come to different conclusions. I'm veering off-track here, but what I want to sort of say is that I've been really disappointed at how discourse is breaking down, especially I think in the Anglosphere. In the English-speaking world, we've had politicians really corrupt public discourse, and I don't know how to create something that probably never existed, like a genuine public sphere whereby you could create a space for genuine debate. I think that it's much more likely that the way forward is through incremental changes, rather than large paradigm shifts to get where we might need to go.
+
+ Essentially, given that our society is facing, or like humanity is facing very significant existential threats, the idea is to actually reduce the stakes of a lot of the decisions. Like, it's not like "Oh, we need to go to Mars!" We don't need these -- like, we can survive here at least for another, let's say, 100,000 years. You think about the technology that we've been able to produce in the last 80 years, to be able to get to spaceflight... Well, you think about projecting that along into, like, let's say, give ourselves another 100,000 years on Earth, and then we might think about terraforming other planets, or we could colonize other worlds. Because at the moment -- or like some other really radical changes that might be required. So "Oh, we need to shift every factory. We need to completely remove carbon." These all sound like very, very difficult, massive things. But saying, "Oh, actually, tomorrow you can take the bus." Or "The next meal, or the next time that we cater for an event, we can think about the food that we're buying, and its carbon footprint." Or we can think about the ecological sustainability of our purchases; things that we have genuine control over.
+
+ I think there's no single Lambda function invocation that's going to justify Rust. But tens of billions or trillions of invocations - at that sort of scale, things change. And, again, I kind of want to stress that we have problems which are solvable, but they aren't going to be solved by expecting some government or some massive corporation to kind of make a huge shift. Instead, we can make lots of small changes, and I think that that's what clicks.
+
+ **Gerhard Lazu:** This sounds like a great takeaway to me, Tim. I was going to ask you what would be your key takeaway, but this sounds like it. Well, you made me definitely think -- I'm not sure about the clicking part. We'll see what clicks. But you made me think. I'm looking forward to you getting on your bike, and cycling all the way to the UK, so that you can be here in a few weeks time when I'm very much looking forward to meeting you. I know that you'll be joining Rust Nation UK. That is 23rd-25th of February, I think. When is the conference happening? I'm looking at it now...
+
+ **Tim McNamara:** I think it's actually the 17th...
+
+ **Gerhard Lazu:** Rust Nation UK... 16th and 17th of February. So - I mean, it's at the Brewery. Oh yeah, that's a good place in London. Now, I think this episode will come out around the 15th, so there won't be a lot of time for you to listen to it and join the conference... But just in case you are, we had the conference in mind. Tim, it's been an absolute pleasure. Thank you for joining me. I look forward to what you do next.
+
+ **Tim McNamara:** Yeah, me too. Absolutely. It's gonna be fun, whatever it is.
+
+ **Gerhard Lazu:** Until next time.
+
+ **Tim McNamara:** Take care. Bye-bye.
The hard parts of platform engineering_transcript.txt ADDED
@@ -0,0 +1,239 @@
 
+ **Gerhard Lazu:** Hey, Marcos. How's it going?
+
+ **Marcos Nils:** Hey, Gerhard. Doing great. It's a sunny day here in Punta del Este, Uruguay, and I'm really happy to be here with you to chat about technology, life, and whatever comes up.
+
+ **Gerhard Lazu:** Yeah, welcome to Ship It. It's been a long time coming. I'm so glad that we're finally doing this.
+
+ **Marcos Nils:** It's great. I think this is the first time that I've been on the show, right?
+
+ **Gerhard Lazu:** The first time, yes. Not the last time. I'm sure it's not the last time... Well, I say that; it depends how it goes. \[laughs\]
+
+ **Marcos Nils:** It really depends. Yeah, so let's see if we can get some interest from the audience and make this episode like something for people to take with them.
+
+ **Gerhard Lazu:** So the first thing which I want to say is thank you for Play With Go.
+
+ **Marcos Nils:** Oh, my pleasure.
+
+ **Gerhard Lazu:** What made you build it?
+
+ **Marcos Nils:** First of all, it's a joint effort. These things are difficult to build by just one person, so I would like to congratulate and basically celebrate it with the other authors of the Playground. There's one person with whom I started the whole Play With series thing, who is called Jonathan. We are colleagues. And the other person who helped me make Play With Go what it is today is someone in the Go community, who you also know; someone also very close to you, which is Paul Jolly. He used to work in Go tooling, and I think he's actually working in Go today... But he's very involved in the CUE project right now, with Marcell as well.
+
+ So yeah, it's a fun story, because we met in London... I actually went to the Go meet-up there in 2018 or '19, I think. I can't recall exactly... And he was presenting something around learning Go. I think the brand new go.dev domain was also published there, with Carmen showcasing it. I had a history of making Play With Docker, which - we can come back to that later. But in any case, I pitched the idea to Paul, and he was telling me that it was very difficult for them, for the Go tooling community, to be able to show people how to do specific things, especially with all the module madness back in the days, between different tools around how to handle dependencies, and all that. And I basically showed him Play With Docker, which is an open source project, and then we started brainstorming about, "Hey, how could we leverage this to do something a bit more structured and robust to showcase Go use cases?" And long story short, a few weeks after that we collaborated together and then we shipped playwithgo.com.
+
+ **Gerhard Lazu:** And what happened afterwards? What happened after you got it out there?
+
+ **Marcos Nils:** Basically, the reception was pretty nice from people using it. I guess what I take with me of that experience is that I learned a lot during that process. First of all, I met people around the project; I think that's what I like the most about doing open source, is the people around it. And I had the experience to do a little bit of pair programming with Paul, I learned a lot of things from him; hopefully, he learned from me. And basically, the community was super-open to it, they really liked it, and that allowed us to involve more people to actually produce more content for Play With Go. And if you go now, you're gonna see that there's a lot of things around more advanced use cases: module retractions, or how to handle different versions, or how to bump a major version on a module, how to handle go mod replaces...
+
+ So it's been great. I mean, I have to agree that it's been quite stale for the past couple of months, I would say, this year... But we are looking for contributors, or like people that want to showcase different Go use cases... There's the new Workspaces thing that we would like to include as well. But yeah, we're looking forward to keep collaborating on it, and make it bigger, to actually help people to grasp the more specific use cases of Go, which are not so much related to the programming language itself, but more the tooling around it.
+
+ **Gerhard Lazu:** So, I have used Play With Go multiple times, and I've found it super-useful - again, thank you very much for that. And I really mean it. It's been so easy, so easy, especially when it comes to sharing with others. This is it. Super-simple. I know that you started with Play With Docker; that was your first Play With thing. What was the context which led to Play With Docker?
+
+ **Marcos Nils:** \[06:10\] That's a really funny story... I don't know if I would state it as an example to follow. Maybe it is, but... We were in Berlin actually, with Jonathan, the person that I mentioned before, who was someone that I was working with at the time... And we were attending an event that was called Docker Contributors Summit, or something along those lines. And one of the personal challenges that Jonathan and myself had whenever attending any of these types of events, either DockerCon, or HashiConf, or whatever, was to use the event to hack something really, really simple, to showcase to people, and then to basically help the community in some sort of way; the community of technologists that were attending that event.
+
+ And at that summit, we attended Jérôme Petazzoni's Docker training, where he basically taught people advanced use cases of Docker, and then he showcased the latest features, and so on and so forth. And I recall that at the time Docker Swarm was becoming a thing... And then he had a lab where around 30 to 50 people were in a single room, and he was handing out actual pieces of paper with IP addresses of different Docker Swarm nodes that you needed to use in order to follow the course, where you had to actually SSH into multiple terminals, then create a cluster out of Swarm nodes, and all that.
+
+ So then we were sitting there with Jonathan and then we said, "Hey, this is very confusing, it's very difficult to follow." And it wasn't only us. People were saying, "Okay, how do I use this? What happens if I lose my paper? What happens if my connection drops?" It was also quite challenging for him to spin up all this infrastructure, because he needed to -- he was actually using three to five nodes per attendee, and there were like 50 people there. So if you do the math, sometimes he was running out of like cloud resources to provision all that in a single availability zone, that was in Amazon back in the days.
+
+ So yeah, anyways, we realized that there was a process that could be optimized there, and then we said to ourselves, "Hey, it would be amazing if you could do all this in a browser." You have your cluster there, your terminals there, you can share it with someone else, you can even invite people to collaborate with you in that environment -- a remote environment thing... So yeah, we basically -- I still recall that one night we got some beers, and then we said, "Okay, let's ship something tonight. Let's do a very minimal POC of how this would work", and then we basically did it.
+
+ The next day, we -- there's a picture actually somewhere in one of the DockerCon keynotes where we presented the official project; there's a picture with... There's Jonathan, myself, Solomon Hykes, and then Julius Volz from Prometheus... And the four of us were drinking at a bar, and we were actually showing Solomon Play With Docker, right? And then I recall him saying, "Oh, it would be great if you could do docker run, and then expose a port, and then start an NGINX, and get like a public URL where I can connect to that service, the public service, with some routing magic happening." And then the next day we actually shipped that...
+
+ **Gerhard Lazu:** Wow...
+
+ **Marcos Nils:** ...and it's there, out in the wild. Yeah. And then after that, we added a bunch of things. It became like a big thing. But yeah, that was the spark that basically started everything.
+
+ **Gerhard Lazu:** \[10:02\] It's interesting how many great ideas start like that. "Let's try and see what happens", literally. "Let's take a few hours a day, get it out there, and see if this thing floats or sinks. And if it sinks, that's okay. And if it floats, how well does it float? And how much weight can we put in?" and things like that. So yeah, I mean, that's what most of these stories have in common. Try it out, because everything is so random. No one can predict what's going to work and what isn't. Get it out there, and see if it floats.
+
+ **Marcos Nils:** Yeah, exactly. It's about solving a user's problem, right? Like, if you follow Paul Graham's school, basically, it's all about that. It's all about the users. And in this particular case, we were presenting an alternative to a very annoying problem, and that actually seemed to work for people.
+
+ **Gerhard Lazu:** Yeah. Now, I know that you haven't finished with the Play With series, and I don't think you'll ever finish; that thing is like one of your things. What is the latest creation in the Play With series, that I know most people will not have heard of this yet?
+
+ **Marcos Nils:** So the latest one, which is completely different from the others, because it's not reusing any of the Play With backend open source stuff - which you, of course, already know - is Play With Dagger, I would say, or the Dagger Playground. In case people don't know, I'm currently working at Dagger, with you, Gerhard. It's a portable CI/CD system which is programmable; you write your basic pipelines with code. And of course, one of the challenges there is to actually show people how this programmable thing works, and what you can do with it.
+
+ So around two months ago, we released play.dagger.io, which is the playground that you can get into, and then you're going to see -- currently, you're going to see a GraphQL interface, but we are improving that... Where you can describe your pipelines in GraphQL queries, and then you can run them out there, and you can basically share them with people, and then also bring them to the community to get feedback, or maybe showcase what you're doing... It's been great, because it's a different type of playground than the ones that I'm used to, which presents its own challenges. But yeah, that's basically the last thing we shipped.
+
+ **Gerhard Lazu:** One thing that has changed since is the URL. So if you're trying to go to play.dagger.io and you get a DNS error, that's okay. It's normal. It's meant to be play.dagger.cloud.
+
+ **Marcos Nils:** Oh, you're right.
+
+ **Gerhard Lazu:** So that has changed. There were many things that happened in the background. What matters is that there is a Play With, that you can try Dagger, it's putting up the GraphQL API... It has built-in documentation, that's really neat, and that all comes from the API. And a bunch of other things, but I'll let you discover them if you want to.
+
+ Now, the idea of this episode started with the following hook: hard truths about platform engineering. So over the last eight years I know that you have helped build three separate engineering platforms for three different companies. Before we dig into what they were, and what worked well, and what could have been better, what does platform engineering mean to you, Marcos?
+
+ **Marcos Nils:** Hm... That's a really good question. So if someone comes to me today and tells me "What is platform engineering?", first of all, I would feel a bit confused about the term... Because a platform to me is not necessarily like something concrete, that you need to ship to accomplish a goal. I guess platforms - the objective is basically, as everyone knows, to make developers' lives easier, to make them more autonomous... So you can do that in different ways, right?
+
+ \[14:08\] And the ultimate goal doesn't need to be to build or ship a platform. You basically need to -- you could solve that, the developer experience objectives, or developer experience tasks and goals by delivering a set of opinionated workflows, and basically present blueprints, or present golden paths to your engineers... But that necessarily doesn't mean that you need to ship a platform for that.
+
+ So I guess that at some point in time people started converging all these ideas, of like how to build this experience, into one single term, and then they started building products around it, and that's what I believe the whole ecosystem calls platform engineering. But to me, I don't see any specific, relatable deliverable with the term, and basically the goals that you need to achieve. So to me, platform engineering is basically making developers' lives easier, which is what sysadmins, DevOps, SREs, and a lot of - call it whatever you want - people have been doing for the past few years.
+
+ **Gerhard Lazu:** Yeah. So in the same context, when platform engineering gets mentioned, sometimes as clickbait, the following thing tends to appear, which is "DevOps is dead." Now, obviously, DevOps is not dead, just to make it clear... But it tends to attract clicks, it tends to attract eyeballs. What is the relationship that you see between platform engineering and DevOps?
+
+ **Marcos Nils:** So I guess the natural relationship that people make is they usually try to encompass the platform engineering term in shipping a product -- a whole, fully-fledged platform that your company is going to use to do everything in it... And that kind of confuses who does what, because on one side you have the DevOps teams, which have been some sort of like siloed team in the company, that is usually working behind the scenes, providing tooling and workflows for devs to ship code. And then you also have the SRE team, which are generally thought to be closer to the infrastructure -- the cloud services, and the availability of the services. So when you bring a new term like platform engineering, and then you try to see who fits where, it becomes a bit blurry to me who actually owns that product. And that's why one of the hard parts, which is this episode's name, of platform engineering, is understanding that, in my opinion, the platform is built - or should be built, or could be built - by everyone in the company. It doesn't need to be a specific team that owns it, and a specific team that dictates what is built. Of course, there needs to be someone that drives the future, and then basically provides a frame for everyone to contribute to it... But what I've seen working the best is if you make everyone part of the project, and then you provide a framework where people can basically bring their opinions into it, and then understand how those opinions could help others in the organization to basically build faster, more secure and reliable software.
+
+ **Gerhard Lazu:** \[17:53\] Okay. So if DevOps is mostly concerned with getting the code wherever it needs to be, whether it's an artifact, whether it's production, whether it's staging, all the tooling that takes the code from your laptop and gets it out into production; there is a lot of it. It's usually CI/CD, but not only. You have security scanners, sometimes you shift left, and then some of that stuff happens on your laptop... All sorts of things around getting the code out into production. How does platform engineering change this? Because platform engineering is also concerned with having a platform, having some primitives, having some tooling that people can use to also get their code out there. There must be a difference.
+
+ **Marcos Nils:** Yeah, that's a good question. That's why I'm saying that one of the hard truths or the hard parts about platform engineering is not that a single product is going to help with everything, that it's going to make everything magic. You are still going to need DevOps, you're still going to need SREs. The only thing that I see -- I guess we're speaking about the current state, or how people are currently presenting platforms, right? So even if you adopt a platform, and that's a very controversial topic, as well - like, "I'm going to be adopting something which is an off-the-shelf solution" - you're going to still need people that curate the golden path to basically do whatever thing you need to do for your software. Like either train a machine learning model, or like deploy a simple API to production... And for those opinions, you basically need to talk to the users, understand their pain, and then iterate on a solution, gather metrics around that solution... "Okay, what am I optimizing for? Am I optimizing for bringing down the change failure rate, or do I want to optimize shipping -- putting code into production faster? Do I want to optimize bringing down the downtime?"
+
+ So that team is going to have a specific metric that they're going to be aiming for, and then usually, the team that does that is the DevOps teams, right? Because developers have a different set of metrics; product developers have a different set of metrics, that are usually more related to the company business. Like "Okay, we need to bring more users, we need to --" I don't know, whatever that business metric is. So you need someone that is thriving for curating the internal, basically, shipping metrics. And usually, those people that are in charge of that should be DevOps teams, or could be DevOps teams. But one of the things that I've seen happening a lot in organizations is that DevOps teams don't have a clear set of metrics, and that's why the line becomes blurry when platforms arrive, because you have like SREs, and DevOps trying to overlap in different tasks. But in my head, and in my experience, the goals are very different and very clear, but they complement each other. Like, if you ship software safer and faster, that's going to make the system more robust, hopefully, and more available, and that's going to allow developers to ship more things, which is going to ultimately move the business metrics. So everything is connected, right? And I guess the platform gives, as I said before, a frame to all this, but it's not going to solve it, it's not going to like magically merge teams into some magical product that is going to basically fix all your things.
+
+ **Gerhard Lazu:** Yeah. So we started high-level on purpose, just to paint a picture of how complicated this is. And everyone has a slightly different opinion, and also a slightly different experience. And you yourself had three separate experiences building platforms, or contributing to teams that build platforms, and each of them had very different outcomes. So I'd like us to start digging into that. We can start with the first one; that was, I think, about eight years ago... What was the context in which that platform was built?
80
+
81
+ **Marcos Nils:** \[22:06\] Yeah, that's a very nice story. So I guess what I wanted to bring to this episode, as you mentioned, Gerhard, is that first of all, platforms have been here for a very long time, even before eight years ago. And the context that I had when that happened was that I was working for a very, very large eCommerce company in Latin America called Mercado Libre. Back at the time it was like a $100 billion company. Now it's way less because of the market... But in any case, there were more than 1,000 engineers; we had like more than 2,000 applications back at the time. Probably now it's like way more; probably bigger than 5k, or something around those numbers. And cloud was still on its very early days; very few services were available, only like one or two players were there... And the company had pretty much all its infrastructure on-premise. So we were hosting our own data centers, networking stack, and pretty much everything. And we recently adopted OpenStack, the project, which - you could argue that OpenStack was a platform as well, but it had like way higher objectives, which was trying to basically help in all the on-premise challenges.
82
+
83
+ But in any case, Docker wasn't a thing. it was around 2013, 2014 and Docker was still on its very early days; it wasn't production-ready. It was like a toy project. And then we basically needed to give developers - what I said before more autonomy, provide them a golden path to the things... Containers weren't a thing, so we had to basically provision VMs to run dedicated workloads on each VM.
84
+
85
+ And then we came up with something that is called Meli Cloud. Meli is the name of the company, the public ticker, which was basically a set of services built by what we call the architecture team. We had like the whole infrastructure department, we had different teams. One was managing the OpenStack deployment, the other one was managing the networking... And our team, which was called architecture, we were -- I mean, back at the time we were more than 15 people working on that. We were basically taking care of what you would call a platform team today, right? It's funny, because we weren't even called DevOps; we were software engineers working on, I would say, cloud services internal cloud services. So we were basically providing that, right? So what we shipped back at the time was a very simple CLI, which was called the Meli CLI, where you could basically create what we call a pool. You had like a pool of VMs for your team, and then you can spin up multiple VMs, and then push your application, generate a bundle out of it, and then you could tag the bundle, and then deploy that bundle to production, everything through the CLI... Which was basically a very basic and opinionated golden path, but it basically solved a lot of headaches for people to actually do that simple task.
86
+
87
+ **Gerhard Lazu:** So what worked well, with that platform, the things that you are most proud of in practice, the things that were good in practice?
88
+
89
+ **Marcos Nils:** \[25:50\] So the things that I liked the most, and the things that I saw people actually enjoying was the fact that they give them a lot of autonomy when they had to manage their resources. So the typical flow back in the days was that you got into the company, you downloaded a CLI, and then you configured a set of credentials, which basically gave you some permissions to do specific tasks. But then after that, you could basically create an application that would give you a template with everything you needed to do things.
90
+
91
+ It's funny, because it's very similar to -- if you see more advanced "platforms" today, it's pretty much the same flow. You get a template, and then, of course, you get a Git repo out of it, and then when you push to that repo, there's a series of hooks that get triggered... As I said before, we didn't have containers, so there were some convention Bash scripts that you needed to write in order in order for your application to build, install dependencies build, and then being monitored; it was pretty much Bash back at the time. I wouldn't be surprised if they're still using that, to be honest... And then once you pushed that, there were some services, which were the ones that we basically built, that basically checked that all of that was in place, and then basically make sure that we deployed that thing to a specific pool of VMs. And then you could select between like a rolling deploy, or an A/B deploy... You know, pretty similar to what we have today. But we were using basically the OpenStack foundations for that, so we had to write a lot of code to make that happen, to orchestrate between different components of the OpenStack platform, and do all that work ourselves.
92
+
93
+ **Gerhard Lazu:** It's interesting that it is these principles that stay the same. Even when buildpacks came along, you had like the different things that would run, and even when you were to implement your own buildpack, you would basically fill the template with whatever you needed for your build pack. And I remember doing that a couple of times and thinking, "Wow, this is really simple. It's still scripts everywhere..." Yes, there was scripts everywhere. And I'm seeing something similar now with GitHub Actions, where you have like those little fragments, those little actions from the marketplace that you run, you config in your pipeline... And they can be anything; a lot of them are TypeScript, some are, again, Bash... I mean, that thing hasn't gone away. I'm sure there's a couple other examples. I've even seen - and those are my least favorite ones, the ones that have to build a container to run the action. That takes a long time. It can take many, many minutes. But the principle is the same. You have like some script - for a lack of a better word - that runs, you combine that script with a bunch of other scripts, and then you get a workflow. And the idea is the same - health checks, the same. I'm curious to ask you about the metrics, but I don't think that's relevant anymore. Many things have changed. But the basics have mostly stayed the same.
+
+ So in that world, I'm going to ask you, what could be better? And I'm going to also answer it. VMs, right? We all know. So let's skip over that answer, because VMs have their own downside. What could have been better in that world, apart from VMs?
+
+ **Marcos Nils:** So we made a lot of mistakes by building the platform. I guess a lot of the mistakes that we made were also contextual to the infrastructure that we had, the decisions that we made... But one of the things that I really recall - I don't know if regretting, or like learning a lot of things the hard way, is that because we had an on-premise deployment or an on-premise thing, we wanted to build a lot of managed services for teams. For instance, if you had to deploy a memcache cluster, if you wanted a MySQL cluster, or those basic services, Elasticsearch as well - so there were two ways that you could do it. And remember, I'm talking about sub-1000 engineer organizations here. So there's a bunch of people that usually don't communicate with each other, and they reinvent the wheel multiple times; that usually happens.
+
+ \[30:20\] So what we did is that we tried to -- from the architecture team, we tried to come up with services similar to whatever you can find in any cloud today - RDS, ElastiCache, whatever... But we tried to come up with those services ourselves. And we tried to do it in a fast-paced kind of way, where we could show developers that we were shipping fast.
+
+ So I recall that the first thing that we did was called BQ, which was an analogous thing to SQS, I would say; it was like a queue service where you could push a message and then consume it from somewhere else, and it was our initial approach to deliver an event-driven storage system between applications, so you could get a notification when an entity changed, and then that got replicated all over the place, and you didn't lose a message, whatever...
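+
+ To show the shape of such an API - and only the shape - here is a toy, single-process sketch. The names are assumptions, and everything that made the real BQ hard (durability, replication across machines, redelivery when a consumer dies) is exactly what a toy like this leaves out, which is the point of the next paragraph:
+
+ ```typescript
+ // A toy push/consume queue. Useful only to illustrate the API surface;
+ // a production BQ/SQS-like service must persist and replicate messages.
+ type Message = { id: number; body: string };
+
+ class ToyQueue {
+   private buffer: Message[] = [];
+   private nextId = 1;
+
+   push(body: string): number {
+     const id = this.nextId++;
+     this.buffer.push({ id, body });
+     return id;
+   }
+
+   // Destructive read: once consumed, the message is gone. A real service
+   // needs acks and redelivery so a crashed consumer loses nothing.
+   consume(): Message | undefined {
+     return this.buffer.shift();
+   }
+ }
+
+ const q = new ToyQueue();
+ q.push("entity 42 changed");
+ console.log(q.consume()); // { id: 1, body: "entity 42 changed" }
+ ```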
+
+ And the thing that I remember the most was, first of all, it is very difficult to build highly scalable, high-throughput distributed systems yourself, right? There's PhDs out there that are actually doing this - S3, SQS, whatever - and we were just a bunch of senior software engineers trying to tackle very challenging problems. So we found a lot of issues along the way. And even though we solved the problem for some time, eventually you had to basically re-adopt or basically redo all that work, because it wasn't scaling. So we were basically trying to chew a little bit of a bigger bite than we can actually \[unintelligible 00:32:11.16\]
+
+ So yeah, we made some of those mistakes multiple times. So we tried to come up with this service, with \[unintelligible 00:32:19.04\], which - it worked pretty well for some time, but then it wasn't scaling anymore. It was built also on top of Node.js, which was around version 0.4 at that time, very early days; we were adopting very edgy technologies as well... So I guess we fell into the trap of, "Hey, we are at the very top of the wave, we're riding the wave on the very, very edge, so let's try to do crazy things. Let's try to copy Amazon in what we're trying to do", and then those weren't really good decisions.
+
+ **Gerhard Lazu:** How hard can it be, right?
+
+ **Marcos Nils:** It's extremely hard, yeah.
+
+ **Gerhard Lazu:** Yeah.
+
+ **Marcos Nils:** So if you're building a platform today -- I will try to bring that example to today. If you're trying to build a platform today, whatever that platform is, and then you need to build those services yourself, or you're either -- sometimes you're not building the service, but you're adding a lot of like logic to an existing service, for whatever reason... Like, you try to make it highly available, or you try to do like automated replication, or backups, or something to magically happen - be very mindful about that, because it is not an easy task. And it's usually not enough even if you have like one or two very experienced engineers working on that. Try to be as simple as possible when designing those systems, because it's not an easy thing to do.
+
+ **Gerhard Lazu:** \[33:48\] Yeah. I think it was 2014-2015, around that time, when I was involved with Pivotal Data Services. And I was on that team, so we were building a bunch of stateful services, we were managing a bunch of stateful services in the context of a platform. This was the Pivotal Cloud Foundry platform as a service; you could run it on-prem... Anyways. And that problem was really hard; really, really hard. Especially when you had production data in those systems. How do you do upgrades? And the distributed systems - I'm thinking specifically RabbitMQ, because I spent a long time in that world. Queueing, you mentioned, is very hard. And once you can put a dollar amount on every minute that this system is down - wow; you're starting to see some serious issues, and you're starting to see some serious consequences of something being down. And because the stack is very deep, you're building things on top of things, you're affecting things that you don't even know exist. And then you start seeing like weird failures in organizations because payroll is down. Why is payroll down? Because the upgrade is going through; there's a lot of data to migrate, and it will be down for another couple of hours. Now, that is not the worst thing that can be down, but I think fast food orders are maybe top of my list. When people can't order things from their phone, or I don't know, cars can't get unlocked, because there's a service bus that makes use of this queuing system...
+
+ **Marcos Nils:** I still recall like one last fun story around that - we also built like a distributed caching system. It was similar to ElastiCache from Amazon, but the difference was that we needed it to be Redis, I believe, and ElastiCache was Memcache, I believe, initially... So we basically built an API that accepted writes and reads; we were both Redis and Memcache protocol compatible, but under the hood it was only Redis... And I still recall that the day -- of course, we did a bunch of tests on the service, and all that... The idea was that you said "Okay, I want to cache", and then we automatically provisioned like a multi-zone, fault-tolerant, replicated caching solution for you. So you didn't need to deal with that yourself. And I still recall that the day we put that in production, or the week, it was the company's end-of-year party; and then we built the service using Node.js, because we had a lot of experience in Node.js, and we were using an LRU cache inside the API, so we keep the hottest keys in memory, and it was like a very famous LRU caching library. And when we were at the company party, we got a page saying that the service was basically being restarted for some reason, and then we basically investigated, and then we saw a memory leak. Long story short is that the caching library that we were using never evicted keys. So it was growing to infinity. And yeah, basically, then we had to patch the thing upstream, and then it was like a very difficult thing to do... So yeah, anyways; basically, delivering and working on these very sensitive, and supposedly infinitely scalable systems - it's super, super-difficult.
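+
+ The bug is easy to reproduce in miniature. Here is an illustrative bounded LRU in TypeScript - not the library they actually used - with the eviction step that was missing:
+
+ ```typescript
+ // A cache that never evicts is just a memory leak with a friendly name.
+ // This bounded LRU drops the least recently used key once it's full.
+ class BoundedLRU<K, V> {
+   // Map preserves insertion order, so the first key is the coldest one.
+   private entries = new Map<K, V>();
+
+   constructor(private capacity: number) {}
+
+   get(key: K): V | undefined {
+     const value = this.entries.get(key);
+     if (value !== undefined) {
+       // Re-insert to mark the key as most recently used.
+       this.entries.delete(key);
+       this.entries.set(key, value);
+     }
+     return value;
+   }
+
+   set(key: K, value: V): void {
+     this.entries.delete(key);
+     this.entries.set(key, value);
+     if (this.entries.size > this.capacity) {
+       // The eviction the leaky library skipped: drop the coldest entry.
+       const coldest = this.entries.keys().next().value as K;
+       this.entries.delete(coldest);
+     }
+   }
+ }
+
+ const cache = new BoundedLRU<string, string>(2);
+ cache.set("a", "1");
+ cache.set("b", "2");
+ cache.set("c", "3"); // evicts "a", so memory stays bounded
+ console.log(cache.get("a")); // undefined
+ ```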
+
+ **Gerhard Lazu:** So that was the first platform. We still have two more to cover. Let's move on to the second one. What happened with the second platform that you were involved with?
+
+ **Marcos Nils:** So it's also very interesting, because the second platform - and you're gonna see that in these three different stories the platform itself is a completely different outcome and product. And that's what I think is the takeaway of this episode - "What is the platform, and how do I do it, or what do I do?"
+ \[37:54\] On the second case, after I left this \[unintelligible 00:37:55.17\] company, I went to like bootstrap a startup with five friends, right? So we were only six people working there, and it was a machine learning startup. The people that were working there had little knowledge of cloud and distributed systems; they were mostly physicists working on AI. So they mostly knew VMs, and a lot of Python, and GPUs, and that's pretty much it, right? And we basically had to build -- we were trying to build some sort of an as-a-service AI thing, and we had to build the whole pipelines to basically train the models, ship the models... Because before that, these physicists were just -- they were giving names to the VMs... And you know where that comes from, right? When you name your VM like a ninja turtle, or like a Pokémon, whatever... DaVinci, Michelangelo, and all that.
+
+ So yeah, we basically were only two engineers working on "the platform", and we needed to basically come up with a workflow that allows people to ship reliable code. It's all about that, right? So during that time, we learned a lot about AI, and GPUs, and all that. And Docker was already an important thing in the industry. Docker Swarm wasn't there yet, so the whole orchestration wars - I guess the only thing out there was Mesos, and as you probably remember, Mesos was initially aimed at very large organizations, so we basically used bare Docker. And then we built like a very simple "platform", which in this case wasn't even like a CLI, or like anything that you could run locally. It was mostly like a very opinionated workflow on how to ship the code. And it basically worked in a way that you provided us a Dockerfile, and then you basically pushed code to your repo, and we took basically the responsibility of kicking off the CI/CD pipelines; everything was using Amazon back in the days... And then we were triggering all the build cycle in the Amazon build services; I think it is still called \[unintelligible 00:40:35.11\] We were just packaging the AMI, and then we had an agent running on a VM or a set of VMs in the cloud, which was basically picking up the artifact, and then deploying the thing into an autoscaling group, and that's pretty much it. And then you also had the ability to fine-tune how you wanted the deployment to happen; like, A/B because you wanted to try something, or if you wanted to do a rolling as well... But it was like a very minimal and simple thing; there were no steps involved that the developer had to do, but to create a Dockerfile. And I believe that that was the magic of it. We actually managed to go very far with that simple approach. Of course, it was like a completely different context, but our main contribution to basically the whole stack was making sure that the flow was simple enough to follow, and that developers had to do -- especially these people that came from the AI world, where they knew very little about services, they actually had to write the minimal amount of descriptors, which were a Dockerfile, in this case, to basically be able to package and ship their thing.
+
+ \[41:55\] And that was pretty much it. We were very happy about the outcome, because even though we were a very small company - only six people; again, a very, very tiny startup - we managed to bring some hard opinions on how to accomplish very specific tasks, which these AI engineers were ultimately very happy about, because they didn't know anything else. So before what we shipped, they usually created the VMs manually, they'd SSH into them, and then they uploaded the whole -- they basically cloned the repository, and then they started there. And that's it, right?
+
+ Remember that this was, again, six years ago, probably... So still, what we knew about platforms was very, very early-stage, and they were very difficult to operate. This whole platform engineering, or PaaS term is not something new; you could basically argue that it was mostly coined by Heroku maybe, or something around those dates... But people that have been in this space for quite some time - you and me, and probably other people in the audience - will remember projects like \[unintelligible 00:43:12.08\] as well... And it's funny, because nobody -- I mean, I haven't seen people mentioning those projects right now, today. If you go, for example -- the holy grail of platform engineering today is, I would argue, things in the CNCF, right? So if you probably google "platform engineering" or something, you're going to land probably on a project that is somehow related to the CNCF. And if you go to the CNCF, all the platform things basically go around Kubernetes in some sort of way. But if you actually dig into the very deep of the early days of platform engineering, you're still going to find projects that are active on GitHub, which are the ones that I mentioned before \[unintelligible 00:44:02.25\] which are still a thing, and people still use. And you could also argue that those are platforms, right?
+
+ **Gerhard Lazu:** So I have one question regarding the startup, and the second platform that you were involved with. How did you solve the stateful data problem? Because that's the really hard part. Whatever platform you have, there will be state, and usually lots of it. The more state you have, the faster it's changing, the harder the problem. You need to distribute it, you need -- oh, there's so many things. How did you solve it in your startup?
+
+ **Marcos Nils:** So the good thing is that since we were more connected to the cloud, we were just basically using Amazon, we relied on the services that Amazon provides to manage state; basically, all the things that needed to be transactional were basically in the RDS database... And you could create a pretty reasonable multi-zone database back in the day, so that was very nice.
+
+ The other challenge that we had was, of course, the state of the machine learning models; when you train a model, that basically generates an output, and then you need to ship that model to the VM where the model is running. There were a lot of funny things that we did with containers and AI, but that's probably for a different episode and audience. But as part of this platform, we had to come up with a service that basically helped these AI engineers to basically move the state from one place to the other in order for the applications to work. So when you needed, for example, to deploy a new version of a model, and you wanted to do some sort of A/B testing, which - that's something very common in the AI world, where you need to keep the old model and the new model, so you can compare performance between the two... Back at the time -- I know that today there are more evolved platforms, like this MLOps platform that is very popular, I can't recall the name... But in any case, we had to build a service that took a snapshot from the EBS, created a new EBS out of that, then spawned the autoscaling group, connected all the pieces, and all that, right? And I guess that's what we usually call DevOps kind of work, right? We weren't calling that platforms back in the days; we were just calling that our day-to-day job, which was basically building solutions for AI developers to basically be able to ship and manage the state, and be able to basically test different models.
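+
+ A rough sketch of that snapshot-and-clone step, written against the AWS SDK for JavaScript v3 - the volume ID, zone and polling cadence are assumptions, and the real service also wired the new volume into an autoscaling group:
+
+ ```typescript
+ import {
+   CreateSnapshotCommand,
+   CreateVolumeCommand,
+   DescribeSnapshotsCommand,
+   EC2Client,
+ } from "@aws-sdk/client-ec2";
+
+ const ec2 = new EC2Client({ region: "us-east-1" });
+
+ // Snapshot the EBS volume holding the trained model, wait for it to
+ // complete, then cut a fresh volume from it for the new fleet.
+ async function cloneModelVolume(volumeId: string, zone: string): Promise<string> {
+   const snap = await ec2.send(
+     new CreateSnapshotCommand({ VolumeId: volumeId, Description: "model release" }),
+   );
+
+   // Simple polling for the sketch; a real service would add timeouts.
+   let state = snap.State;
+   while (state !== "completed") {
+     await new Promise((resolve) => setTimeout(resolve, 15_000));
+     const described = await ec2.send(
+       new DescribeSnapshotsCommand({ SnapshotIds: [snap.SnapshotId!] }),
+     );
+     state = described.Snapshots?.[0]?.State;
+   }
+
+   const volume = await ec2.send(
+     new CreateVolumeCommand({ SnapshotId: snap.SnapshotId, AvailabilityZone: zone }),
+   );
+   return volume.VolumeId!;
+ }
+ ```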
+
+ **Break:** \[46:45\]
+
+ **Gerhard Lazu:** What about the last platform? I think this is the one that you have many things to say about...
+
+ **Marcos Nils:** Oh, yeah. The last one - I would think this last platform, because it was one year ago, is probably going to feel familiar to the newcomers, and to the youngest audience that is basically navigating the whole current platform engineering trends, and CNCF, and all that. So hopefully, that's going to be a good takeaway for you. And afterwards, we could wrap up with some conclusions out of this whole story.
+
+ But anyways, the last platform that I worked on was at Wildlife Studios; it's a Brazilian gaming company. If you play mobile games, and you've seen like Tennis Clash, or Zooba, and Sniper 3D, this is the company behind them. And it's an interesting --
+
+ **Gerhard Lazu:** Sorry, I have to interrupt you. We need a cleanser. You are the South America champion of something related to gaming.
+
+ **Marcos Nils:** Oh, yeah.
+
+ **Gerhard Lazu:** Tell us about that... Because I want to have it in the recording for us to know what that is, and I could reference it back.
+
+ **Marcos Nils:** Cool. Yeah, so back in my young days I was very fond of FPS games, and I happened to land on the Quake series, for whatever reason. And I basically started in the very early days of the internet multiplayer gaming thing. So I started with Quake 2, using dial-up... And then I moved of course to Quake 3 using cable, and basically, I started to like the game a lot, so I dedicated myself to playing one-on-one, or two v. two in Quake 3. And one thing led to another, and then I became -- I mean, for one year I basically won the Quake Pro League something in South America... So yeah, you could say that I was among the very best players in Quake 3 in Latin America for -- I still can't recall the year; I would say 2004 and 2005, around those dates. So yeah, those were really good, good days.
+
+ **Gerhard Lazu:** I'm going to make an assumption now... It's unlikely to be true, but hopefully it will be funny. Is all your involvement with platform engineering, and all the problems and all the frustrations that were building up during the day, that you would take to FPS, and use all of that frustration, and you channeled it in the game? Is that how it happened?
+
+ **Marcos Nils:** Well, maybe it was the other way around, right? I took all the angriness and all those sentiments from the game and then I put it into platform engineering. Because platforms happened after Quake. But yeah, you could say that. It's funny, because there's -- I don't know if you saw it, but there's a game which allows you to terminate Kubernetes pods by playing... I can't recall if it is Doom, or another FPS game... But basically, you configure a Kubernetes cluster, and then you are in inside a Doom map, and then all the pods are enemies, in that map. So once an enemy is basically killed, the pod dies. So...
+
+ **Gerhard Lazu:** Wow, I haven't seen it. We have to link it in the show notes. Okay, so this was hopefully a pleasant segue for listeners, and now we're going back to the third platform, the one that you started talking about, which was for a gaming company.
+
+ **Marcos Nils:** Yeah. So I guess the most important thing and also takeaway from this is context, right? Context matters a lot when you're building these types of solutions. And when I arrived at the company, they had a quite large - you could call it DevOps - team... To be honest, I don't like the term DevOps, because to me, everyone that basically produces software is a software engineer, right? You're working on a different problem, which is infrastructure, or developer tooling, but you're still a software engineer that does a job; same job. I guess the term DevOps is easier to use to basically hire people, because you need certain sets of skills... But to me, everyone that writes software, or interacts with software in some sort of way, is a developer or software engineer.
+
+ But anyways, when I arrived there, the SRE/DevOps teams were 5 to 10 people, but the most critical thing, which is similar, was that developers were already exposed to some underlying concepts of "the platform", because they already had a platform they were running, whether you like it or not. They were using Kubernetes, and were very early adopters of Kubernetes. They were following a GitOpsy approach, but not fully GitOps, because they didn't have an operator in the clusters managing state; they were versioning their deployment descriptors, in this case Kubernetes manifests, in Git. But the way that those got applied was that they basically kicked off the CI pipeline, and when the CI pipeline finished, it did a kubectl apply against the cluster, right? So it was imperative-ish, more than the declarative approach that GitOps basically tries to evangelize for.
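+
+ That push-style flow is simple enough to sketch. Something like this could be the pipeline's final step (paths and context names are assumptions); a fully GitOps setup would instead run an in-cluster operator that pulls the repo and reconciles continuously:
+
+ ```typescript
+ // Imperative "CI pushes to the cluster" deploy: the pipeline's last step
+ // shells out to kubectl, so the cluster only changes when CI runs.
+ import { execFileSync } from "node:child_process";
+
+ function deployFromCi(manifestDir: string, kubeContext: string): void {
+   // Equivalent to: kubectl --context <ctx> apply -f <dir>
+   const output = execFileSync(
+     "kubectl",
+     ["--context", kubeContext, "apply", "-f", manifestDir],
+     { encoding: "utf8" },
+   );
+   console.log(output);
+ }
+
+ deployFromCi("./k8s/manifests", "prod-cluster");
+ ```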
+
+ \[52:18\] So it was important to take note that developers already had a workflow. So they were already exposed to Kubernetes manifests, they were already exposed to kubectl... And to be honest, a lot of them were happy about it, because it was another tool in their tool belt; they could also find a lot of Kubernetes content out there to learn about, they could find courses, there are even books on Kubernetes... So developers are curious about things, and they like learning new stuff, so they actually liked a lot of the things that they used on a day-to-day basis. But of course, some people were frustrated, because they didn't know Kubernetes, and they didn't want to learn it. So that's fine. And they also had to deal with a handful of Terraform stuff, because in order to get a database, what the team built before I joined the company was like a very automated pipeline where you had to write your own Terraform resources, and then that basically got built, and then provisioned for you, but you still needed to make a lot of mistakes along the way to get it running.
+ So what happened - and please be careful if you see this happening in your organization - was that some new VP was hired... That usually happens in large organizations. We're talking about - this wasn't as big as the first company that I mentioned, but this company is big enough. We were around like 200 to 400 engineers. So what happened is a new VP arrived, and said "Back in the company that I was working before, that I'm not gonna mention, we had a team build the platform, which is basically a centralized UI and control plane, where developers could jump in, and through a UI request a database, deploy applications and all that." So he basically came with his very constrained and opinionated way of building a platform, and basically told these SRE and DevOps teams, "You need to build this."
+
+ And of course, I was actually there when that thing happened, and then we started researching, "Okay, what should we do?" Because embarking on a project of that magnitude, where you need to basically try to see how you're going to leverage the current workflows that you have and try to make them more API-driven - it's a very complex task. We also didn't have like any experienced person in the team to build the UI, which is a totally different set of skills... So we tried to look at what was out there that we could be using to basically deliver something like this. And then we came up with what a lot of people probably saw recently, which is the very early days of Backstage, which is the developer portal that Spotify basically open sourced... Which is claimed to be like a tool that you can use to basically build a platform for developers.
+
+ It's a very interesting project, I would say. It's very complex as well. It has a complex architecture. But the most important thing is that Backstage brings its own set of opinions, right? It is designed in a way that it's meant to be very extendable, and very pluggable, and I would argue that it was designed for a very specific type of organization and model of organization that we clearly didn't have.
+
+ \[56:14\] So what happened, to make the story short, is that we adopted Backstage, we started doing some changes to the core of Backstage, because it wasn't designed for our organizational model and what we needed to build, and then that led to a multi-month, massive project where it wasn't clear what everyone was doing, because this platform was supposed to serve the data team, the traditional application team, the gaming teams... And basically, several months passed by, and because this Backstage thing also came with its own opinions, which didn't involve developers having that much tooling locally to work with, we were basically taking away some of the tools that developers were used to, like kubectl and all that, because we wanted to simplify that process. But then we didn't realize that people actually liked to be able to do some of those things, because they felt they had more control.
+
+ So in any case, we started building the thing, and then at some point we realized -- we said, "Hey, it would have been way easier to iterate on the workflows, and on the pipelines, and the golden paths that we currently had by slightly changing the experience with little improvements, than to basically throw this big, massive thing at it to try to fix it." And to summarize, basically... My feeling - and that's why this is a hard truth or hard part of platforms - is that in my opinion, and again, this is a personal opinion, given experience and all that... Trying to implement either an off-the-shelf platform, or an "Oh, let's build this platform thing in the company" effort, most of the time is going to be very resource-intensive, and it's going to generate a lot of noise within your developer teams, and within basically your SRE and DevOps teams.
+
+ So my suggestion for companies, either small or big, trying to tackle these challenges, is start small, and think mostly around the golden paths and the opinions that your company currently has, that your developers are actually asking about. Because if you bring an already-existing platform, you might be lucky and it could work for you, but if you bring something that already has opinions, you are probably adopting someone else's opinions on a different context, that is going to very likely not work for your case. Even though everyone is trying to do the same, which is basically ship applications, the context and the past matters a lot, because people need to feel confident about it, and they need to reason about it and understand it in a way that makes sense for them. And I guess ultimately, it's all about people and interactions, right? So the platform should be like a project or a tool that makes developers feel more confident, and if you don't take into account the past experience and the current workflows that people have, and are currently using, it's going to be very difficult to generate adoption for it.
+
+ **Gerhard Lazu:** \[59:52\] I think most of us suspect that big bang rewrites are a bad idea. And what typically happens is that you're trading problems that you know, and are familiar with, some intimately, with problems that you don't even know exist. And you may not like what you're getting. And it's impossible to know what you're getting, because everyone will tell you about all the positives. And to be honest, most people don't even know what the negatives are, until they try it out. The bigger the change, the higher the risk it's not going to work out as you imagine it. So how do you minimize the change? How do you improve what you have? How do you make those small, incremental daily/weekly changes and see, "Is this better?", rather than the whole big bang, "Forget what we have, we know we can do this better, start to rewrite..." Or "We bought it, and it will solve the problem for us." No, it won't. It will bring other problems. And it will make some of the problems that you have maybe redundant, but you don't know what you're buying into. You don't know what you're getting yourself into. And I think this astonishment and disruption that you're going to inflict on everyone should not be underestimated.
+
+ **Marcos Nils:** Exactly. I totally agree. And it usually helps a lot if you drive those conversations backed by data. One thing that I also believe we didn't do in our last platform implementation was to basically present developers - and not only developers, but also upper management - with the raw data of our current processes. Okay, how much time does it take to onboard a developer, how much time does it take to deploy something, how many -- you can take the DORA metrics if you want, but it's important, once you get those metrics, to pick one and understand how you can optimize that one, and then really take a conscious decision on whether you need to build something new to tackle that particular problem, or you can tweak and iterate on a current workflow that you have to basically get that metric either up or down to whatever level you need, in order to keep moving forward.
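+
+ As a concrete example, two DORA metrics - deployment frequency and lead time for changes - can be computed from records most CI systems already keep. A hedged sketch, with an assumed record shape:
+
+ ```typescript
+ // Each deploy records when the change was committed and when it shipped.
+ type Deploy = { commitAt: Date; deployedAt: Date };
+
+ // Deployment frequency: how often you ship.
+ function deploysPerWeek(deploys: Deploy[], weeks: number): number {
+   return deploys.length / weeks;
+ }
+
+ // Lead time for changes: median hours from commit to production.
+ function medianLeadTimeHours(deploys: Deploy[]): number {
+   const hours = deploys
+     .map((d) => (d.deployedAt.getTime() - d.commitAt.getTime()) / 36e5)
+     .sort((a, b) => a - b);
+   const mid = Math.floor(hours.length / 2);
+   return hours.length % 2 ? hours[mid] : (hours[mid - 1] + hours[mid]) / 2;
+ }
+
+ const deploys: Deploy[] = [
+   { commitAt: new Date("2023-01-02T09:00Z"), deployedAt: new Date("2023-01-02T15:00Z") },
+   { commitAt: new Date("2023-01-04T10:00Z"), deployedAt: new Date("2023-01-05T10:00Z") },
+ ];
+ console.log(deploysPerWeek(deploys, 1)); // 2 deploys per week
+ console.log(medianLeadTimeHours(deploys)); // 15 hours
+ ```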
+
+ And I would argue that in most of the cases, you can do little increments on your current processes or software cycles, that could help accomplish that without trying to adopt or find like a magical platform solution that fixes that.
+
+ **Gerhard Lazu:** Yeah. And the other thing, which would be worth mentioning is that when people identify, they go "We're going to try this new thing, but we will only take a subset of the applications. We'll start with one, or a few", what typically happens is that a year or two later you end up having two platforms; half your workload's in one, half your workload's in the other one. That's what usually happens. So be wary of that. Some will succeed to solve this problem, but most of you will end up in that world. And it's also not good, because you now have twice the maintenance, twice the upgrades, twice the way things go wrong in different ways... And - well, that's a good challenge. And if you enjoy a challenge, go for it. But maybe there's a better way. So if you were to build another platform today, where would you start?
+
+ **Marcos Nils:** I think I have a very concrete answer for that. So if I would build something that can be called the platform, the most important thing to me to build is a centralized visualization place that people know to go to basically do something. Of course, you need to build the workflows that I mentioned before, and I'm going to probably build those based on my experience and what I know, because I've already been in this space for a long time, so I have a lot of opinions on things that I would like to adopt, and things that I would like to do. Of course, I would optimize for the simplest solution, so it's understandable by anyone.
+
+ \[01:04:04.19\] But in my opinion, the most valuable short-term outcome you can give your developers, no matter how big the company is, is to have a place that people don't need to think about, that they can naturally go to and get an answer to the question that they have. For example, how do I create a new application? Go here, and then I'm going to give you a set of steps, I'm gonna give you an API, whatever that is; you can decide on the implementation based on your experience, whatever works better for you... But it's nice to have a single place to go to do that specific task.
+
+ "How do I now see the metrics of my application?" "Go to the centralized place, and then I'm going to take you to whatever monitoring system that we have implemented, and then you can go from there."
+
+ So having like a central cockpit to see the state of your app, understand how to do things, see the metrics, and then get feedback, maybe ask a question to your teams through the messaging systems that you have, and potentially start building on top of that - in my opinion, it's the best thing that you can do for any stage of the company, especially now that we are remote. Because that simplifies and augments communication by a huge, huge amount. So yeah, I would focus primarily on that.
+
+ **Gerhard Lazu:** Right. Is there something that exists today that you'd be tempted to try out as a first step? Something that you wouldn't be building from scratch, something as a starting point that you would use. Or maybe a couple of things that you would use.
+
+ **Marcos Nils:** Hm... That's an interesting question. To be honest, I don't think there's something I would use. What I would do until I can build that is - since it's all about communication, as I said before, I would build a platform probably based on whatever communication system we have in the company; either Discord, or Slack... They are very well integrated things that you can leverage and build a lot on top of. So I think what I would do is I would start building some sort of like ChatOps flows, where you can go to these communication things and say, "Okay, what operations do we have possible? You can do this, this and this." Initially, I would do them through input/output; I think we will call it prompt engineering now, after ChatGPT... So I would do some prompt engineering there, where you can get feedback rapidly through those channels. And then eventually, when I have time, I would go towards like a UI-based thing, because there are some things that can't be explained through the terminal, or through text. But I think that there's a lot you can accomplish through text initially, right? So you can -- if you want to say, "Hey, I want to see, for example, the metrics of this application", you can redirect people to whatever monitoring system you have, or whatever logging system you have, as well. So personally, I'm not currently looking at any particular product or service that could fit into that space. The only thing that I'm looking at, but not directly relatable to like build a platform, is of course the new WASM trends that have been happening out there... Especially -- I think Fermyon is the company, and there's another one called Cosmic something... But I'm not looking at those as ideas on how to build a platform, I'm basically looking at those for what is the opinionated workflow that they present for developers to build and ship an app. Because I'm going to probably -- eventually, in the future, if this new paradigm brings new, different ideas on how to simplify that approach, I'm going to probably take those ideas and build something within the context of my teams and my organization, with the hope to simplify the process for my company. It's going to be very difficult to adopt them as they are, and try to implement them for our teams.
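+
+ A ChatOps starting point like that can be tiny. Here is an illustrative command dispatcher that could sit behind a Slack or Discord webhook - the commands and URLs are placeholders:
+
+ ```typescript
+ // Text-first "platform": answer "what can I do?" and redirect people to
+ // the right system. A webhook handler would call handleChatMessage().
+ type Handler = (args: string[]) => string;
+
+ const commands: Record<string, Handler> = {
+   help: () => "Available: help | metrics <app> | logs <app>",
+   metrics: ([app]) => `Dashboards for ${app}: https://grafana.example.com/d/${app}`,
+   logs: ([app]) => `Logs for ${app}: https://logs.example.com/app/${app}`,
+ };
+
+ export function handleChatMessage(text: string): string {
+   const [cmd, ...args] = text.trim().split(/\s+/);
+   const handler = commands[cmd];
+   return handler ? handler(args) : `Unknown command "${cmd}". Try "help".`;
+ }
+
+ console.log(handleChatMessage("metrics checkout")); // link to the dashboards
+ ```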
+
+ **Gerhard Lazu:** \[01:08:24.20\] Yeah. Can I assume that you would pick containers over VMs?
+
+ **Marcos Nils:** Yeah, you can assume that. That's the right assumption. And I think the tough question is "Are we going to pick serverless over containers?" Or what's gonna happen there, right?
+
+ **Gerhard Lazu:** Hm... Interesting. Which way would you go, serverless or containers?
+
+ **Marcos Nils:** To me, the serverless story is very appealing. Today, I don't see a strong use case to move everything to serverless, because I currently see that there are some things which are not yet solved. The compute thing is still a problem. The statefulness of serverless still has some quirks to it. For example, if you need to have like a persistent database connection, that's something that you cannot do, because they could eventually scale to zero. Since I'm not working on very sensitive systems, I don't care that much about cold starts, so to me that's not a problem, but I know that some people complain about that. So even though I agree that I would do a lot of things in serverless, I don't think we are right on the spot to basically fully translate to that. But I do believe it's going to be disruptive in a very short time, in how people think about shipping and building and packaging apps. And it makes a lot of sense to start thinking of apps as a combination of Lambda expressions, of like basically different sets of logic that get connected to each other, instead of like a shippable binary that you build into a thing, and then have to deploy.
+
+ **Gerhard Lazu:** Interesting. Would Kubernetes make your cut?
+
+ **Marcos Nils:** The way it is designed today? No. I think that's one of the -- I mean, that's one of the biggest challenges in platforms, is that when the foundational basis changes, it's very difficult to adapt. OpenStack, even though it was designed to be on-premise, was designed to be VM-based. Kubernetes has been designed from the ground up to be container-based, and all the components around it and all the concepts around it are around pods, and nodes, and containers, and sidecars, and all that. Even though I agree that Kubernetes is a very flexible platform, I wouldn't call it an orchestrator. I think people are more accurate in calling it an OS, because it really provides a foundation. You could eventually come up with serverless resources. I mean, you currently have projects around it, like Knative... It helps on that. It creates the abstractions to handle serverless workloads.
+
+ I don't know, to me the whole system is designed to be so container-aware that it's going to require quite some effort to basically try to abstract it and make it serverless-native; I don't think it's going to cut it, but I might be wrong. I guess we'll see what happens.
+
+ **Gerhard Lazu:** \[01:11:56.03\] So it's very hard for me to not start recording the second episode with you right now... Because this is something really interesting, and I'm sure we could dig into it. Unfortunately, we do have to stop on this occasion. And the last thing which I'd like us to cover is what are you most looking forward to in 2023? ...since this is one of the first few episodes.
+
+ **Marcos Nils:** As personal, general, technology platforms...? Is there any context that you would like me to scope the question to?
+
+ **Gerhard Lazu:** I think it would be technology. I think that's what the listeners will most resonate with. But if you want to share a personal "I'm most looking forward to", go for it.
+
+ **Marcos Nils:** So I envision a 2023 where you can safely run CI/CD pipelines locally and in the cloud. That's what I'm currently working on, to help developers with. That's one thing that I would like to see happening. I would like to see more WASM and serverless adoption, and use cases of companies using it for production workloads, where you can see that it actually -- I wouldn't say scales, but you could actually see it as a viable paradigm to basically build an app on top of that. I haven't seen too many serverless-based products so far.
+
+ Of course, I guess we can't deny the AI effect, right? So there's gonna probably be a lot of tooling around that. For the platforming space I'm not sure where the biggest advantages are going to be. I don't know if it's going to be cloud costs, or some sort of like Copilot-based thing... You can probably say to ChatGPT today, "Hey, I want to provision a database in AWS using Pulumi and TypeScript", and it's going to probably do it today. So there's gonna be a lot of that as well, like prompt engineering.
+
+ And I don't know, the last thing that I would like to see is -- one thing that I miss a lot, that I would actually like to come back, is some sort of meetup thing, right? I believe that, given the past two years, a lot of human contact has been lost in communities, in people sharing knowledge. We all became a little bit salty on things, especially on social media and all that... So I would love to see something that brings people together. I don't know, I really miss the early days of the excitement about a new technology, about something that is going to make people's lives easier. So it would be nice to see something coming up to help with that problem as well.
+
+ **Gerhard Lazu:** Okay. Those are all great things to look forward to.
+
+ **Marcos Nils:** Yeah, we'll see what happens.
+
+ **Gerhard Lazu:** Speaking about people, and speaking about coming together, today is exactly one year since I joined Dagger. So I'm very glad that the 4th of January 2023, when we are recording this, I get to share it with you. Thank you, Marcos. I really enjoyed it. Thank you very much.
+
+ **Marcos Nils:** Happy new year, Gerhard, and it's a pleasure to work with you every day. We are building some really cool stuff together, and the most important part is that we are learning and we are sharing. It's all about that, right?
+
+ **Gerhard Lazu:** Yeah. Likewise. Same here. I can hardly wait for the next time that we get together in-person. It has only happened once, and that was cut short. And yes, it was COVID, unfortunately... But this year, it's gonna happen again. Not the COVID part, just the getting together part. I would much rather not have that again... But we never know. So once again, Marcos, thank you very much...
+
+ **Marcos Nils:** I promise that you're going to be able to cycle the Golden Gate Bridge. It is going to happen.
+
+ **Gerhard Lazu:** Yes. That is on my list. Still on my list.
+
+ **Marcos Nils:** Take it for granted.
+
+ **Gerhard Lazu:** Alright, Marcos, see you in the next episode.
+
+ **Marcos Nils:** Always a pleasure. Take care.
Treat ideas like cattle, not pets_transcript.txt ADDED
@@ -0,0 +1,237 @@
+ **Gerhard Lazu:** Today you shared something on the Small Bets Discord general channel that caught my attention. "The only small better who shared the podium with Fernando Alonso." So first of all, what is a small better?
+
+ **Daniel Vassallo:** Well, I think it's somebody who sees work as a series of small bets, I like to call them, which is basically just a very time-boxed effort towards something, rather than an indefinite project. So I think it's one way to tame the uncertainty of doing something speculative, to some degree, and I think a mistake many people make, which I've done myself many times as well, and I think I've learned from it, is starting a project without a fixed date of when this project will end, and without well-defined expectations... So what this leads us to do many times is to keep working on it, keep believing that eventually destiny will reward us for all the hard work we've put into it... And unfortunately, I think it turns out to be an almost delusional way of operating and treating the uncertainty of these types of projects.
+
+ I think there are some people who figured out a better way of doing it. I mentioned in that Discord channel the people behind Basecamp, the company, have been advocating this type of work for probably 20 years or more. Timeboxed budgets \[unintelligible 00:02:45.02\] is small for you or for your company, depending on circumstances. If you're a young 20-year-old, with no dependents, it might be a bit bigger; if you're a company of 50 people, it might be some other time. If you're a person like me, with two young kids, quite busy, it might be a slightly shorter time... So on and so forth. But that's the key thing - there's some fixed time, and you build something within that budget, right?
+
+ And then, once you put it on the market, you have no obligation, but you have the option to make another small bet on the same project. The people behind Basecamp have been building their flagship product this way, again, for 20 years or more. But they've abandoned many things along the way. In fact, at one point five or six years ago they pretty much shut down all the other projects and focused on one. Recently, they started adding products again - Hey - and I think they have some other things in the works as well right now.
+
+ So I've been very much inspired by people like them, and many others, even outside of software and tech, and my industry, and I think there's lots to learn from it, again. And I think it's a learning of -- there's no recipe for success in business, but I think it's a recipe for taming the uncertainty, making things less daunting, staying motivated, not getting fooled by the randomness of all of this, and diversifying.
+
+ **Gerhard Lazu:** Wow, you mentioned a lot of great things. I promised our listeners that we will unpack them soon. Back to the post that you shared. That was David's "Glorious days like these." We will put the link in the show notes. When you go to the post, and even if you don't, the one thing which will strike you is the calendar screenshot, which is completely empty. That's the takeaway. If you're trying to cram things, if you're trying to do too much, maybe it's not the best approach. Slack in the system... What do you understand by "Slack in the system", Daniel?
+
+ **Daniel Vassallo:** Yeah, it's something that is very highly underrated in productivity advice nowadays in the modern world. It's mostly focused on trying to squeeze every second out of the day, and pack as much as possible, creating habits and systems and techniques, calendars, tools to try to be even more optimized. Optimization, however, comes at a cost. Like a Formula 1 car. A Formula 1 car is very optimized to go as quick as possible on the circuit, but it's very fragile. As we all know, they break apart easily, and they're highly unreliable compared to regular cars. That's the cost of optimization, and I think it's a very similar analogy for us. If we want to be agile, and we want to have the capacity to pounce on opportunities as we bump into them, we need to have some intentional slack in the system, right? Some safety margin, which could be three days in the calendar, or a mental sort of capacity, or a combination of those things... So that when we bump into a good opportunity, when inspiration strikes, when we're lucky - we don't just put it on the shelf, because opportunities and inspiration are perishable, as is commonly said. But we'll have the opportunity to pounce, work on them, and take action.
+
+ \[06:11\] I think sometimes we're lucky, we get into these serendipitous situations, we bump into great opportunities, but we're not able to benefit from them, because we don't recognize them, or we don't have the capacity to recognize them. I think when we have a very busy calendar, I think subconsciously, we'll put the blinders on, so that we don't get distracted, because our focus is on getting to do the planned things. So we become blind, essentially, to opportunities that might fall right in front of our eyes, and we literally just don't see them.
+
+ So I'm a big believer in trying to arrange our life/work arrangements as much as possible to intentionally create slack in the system, which will look like wasted time when you look at those days individually. It might look like "I didn't do anything meaningful, day after day after day", but then sort of the big things tend to happen, and eventually it might be five days out of 365. We might look at 360 days of the year and say, "Nothing came out of this slack", but then there's those five days when the important things tend to happen.
+
+ **Gerhard Lazu:** This resonates with me deeply, because having been part of your course, and having had that time to reevaluate a few things, I realized that I have no slack in the system. My system was at 110%, 120%. I was trying to cram in more things than there were work hours. And that usually takes from family time, it usually takes from personal time, and it's not great. So I remember what we talked about before we started recording - that is a consequence of this, of the importance of slack. For our listeners, everything will make sense in a few episodes, so let's not spoil the surprise just yet.
+
+ So when I reached out to you about recording this episode, I mentioned that our conversation will be a good follow-up to episode 77. And I'm really enjoying this coincidence, because the last episode, 77, was with DHH.
+
+ **Daniel Vassallo:** Oh, interesting. Fascinating.
+
+ **Gerhard Lazu:** Exactly. So you are the follow-up to that conversation. It just so happened. So I'm really enjoying this coincidence; serendipity, as you mentioned it. That's the one word which comes to my mind.
+
+ **Daniel Vassallo:** Yeah.
+
+ **Gerhard Lazu:** Okay. I really enjoyed being part of the course, of the Small Bets course, and there are a few ideas that resonated with me deeply. First of all, slack in the system, but also the role of randomness, bias of survival, and lifestyle design. And these happen to be the titles of the various sessions that you hold. And you already mentioned a few of these at the very beginning. And it seems to me that these ideas resonated with thousands and thousands of people. How many cohorts have you had so far?
+
+ **Daniel Vassallo:** 21 cohorts so far. Yeah, we're starting the 22nd in February. Yeah, and we have 2,000 members in our community, so it's been quite successful.
+
+ **Gerhard Lazu:** Why do you think these ideas resonated with people? Do you have a theory as to that?
+
+ **Daniel Vassallo:** I believe that subconsciously -- I think there's something in our DNA. This is my speculation, that this is a more prudent way of dealing with the uncertainty of life and business in general. I think, again, in the modern world we've been almost brainwashed by venture capitalists and the media in general and other things - maybe not intentionally, but with the consequence of believing that we have some potential to fulfill, and we should go all-in, and we should try to maximize everything, and sacrifice all our lives, and so on and so forth... Which seems glorious, and glamorous, and so on and so forth, but the reality is that I think subconsciously, we realize that the odds of those things happening are very, very low; one in a thousand, one in a million. And we know that we only have one lifetime. Those payoffs might materialize if we had the option to live a thousand lifetimes, or a million lifetimes, but we know that that's not the case.
+
+ \[10:14\] So I think, again, it's just our subconscious - it's there to try to protect us, and when we see a different way of operating, I think we realize it's a better bargain to almost shoot for the middle, where things are much more attainable, much more sustainable, there's less risk of hidden downsides than going with the extremes... And they click. And again, people like DHH and Jason Fried have been talking about these things for decades; many others as well. None of these are probably my own original ideas; it's things that I stumbled on as I was living the entrepreneurial journey myself. I made lots of mistakes, and I'm sharing them with people, with examples of my own and from others that I've encountered... And they're resonating with people. And I think they're resonating with people as well because people are trying them and finding some small wins, and then they're realizing that this is a much more motivating way of finding success, that is more prudent, a lot more attainable, and so on and so forth.
+
+ This is basically like if you want to make money from writing - you could try to aim to become the next J.K. Rowling and sell a billion dollars in books, which is of course possible, because it has been done, and it probably will be done in the future... But we all know, it's unlikely. But there's other ways if you want to monetize your writing; there are probably dozens of different ways that are much more modest, which might make you a few hundred dollars a month, maybe a few thousand dollars a month, or something like that, which are much more attainable. And we are focusing on those, and ignoring the more speculative J.K. Rowling-type bets.
+
+ **Gerhard Lazu:** Yeah. These ideas have been around for a long time, for sure. Do you remember the moment when you realized, "I believe in this so much that I'm going to do something about it"?
+
+ **Daniel Vassallo:** I think it was almost an existential crisis. I quit my job; I had a good, reasonably cushy, well-paying job at Amazon until the beginning of 2019. I quit because I realized a career as a full-time employee was probably not the ideal one for me. We can talk more about that. But I quit thinking that I'm going to do the typical software guy bootstrapping thing - that I'm going to think of some good idea on my own, just by brainstorming and thinking hard, then I'm going to work on it for a while, release it, try to find customers and figure things out along the way. And as I was doing this - and even though I was getting good signals with my project, that is what I started with - six months flew by quickly... You know, time flies, and I didn't have infinite -- I had quite some savings, I was lucky, but I didn't have infinite savings. Sooner or later, they were going to run out. And I remember literally staying up at night, with pretty much high anxiety, thinking what will happen if I release this project and nobody pays? Or what if a few people pay, but it takes a long time for this to grow? When will I know? When should I pivot? What signals should I be watching for? What if I end up spending all my savings, and I'm back to square one with nothing to show? I'd have to go back to a full-time job, with no savings... And I didn't have an answer to all these questions. So I realized that I was on a journey that was highly uncertain. And again, I feel like I was duped a little bit. I might have duped myself into believing that -- I treated this like any project when I had a full-time job.
+
+ \[14:05\] When I was working at Amazon, my boss gave me a project, I worked hard on it, and if I showed that I worked hard for it, I would be okay; I will get promoted maybe, or at the very least keep my job. That's how I worked before. But in the outside world, that wasn't enough - putting in hard work, putting in lots of efforts, doing the right things is not enough. There's lots of uncertainty.
+
+ So I think that was an epiphany for me, the realization that this is a highly uncertain part and that I need to do something different. And this something different to me - I didn't really understand it formally. But I said, you know, "Probably what I need to do right now is try to cover my expenses." Cover my mortgage, cover my bills, by any means possible. Of course, legally, ethically and whatever...
+
+ **Gerhard Lazu:** Of course, of course.
+
+ **Daniel Vassallo:** But any way that I could, without necessarily being very picky, very choosy. I don't care about how sustainable it is, or how high the return on investment is; whatever I can, to at least have some baseline income. And literally, the next day, after I went through this week of anxiety, I picked up my phone, I went to my personal contacts, and I found a friend who needed some programming help, and I started doing some freelancing with this friend. But it wasn't a lot; I was doing like 20 hours a month, barely enough to cover my mortgage... But it was a huge, huge improvement in my peace of mind. Now I have some income. And I knew that if I wanted to, I could maybe find another client, or maybe add some more hours... It was incredibly liberating. Now, I wasn't relying on this project succeeding anymore; it almost became like a bonus. Now, if it succeeded, it's great. If it doesn't, I'm not going to be desperate, as there's already something there.
+
+ And immediately, things started to -- one thing started to come after another. I was seeing these creators, the creator economy was flourishing, self-publishing, educational products, programming books, and whatever, and some of them making decent payoffs, some of them less so... And I said, "I have some technical content in my head. I worked at AWS for almost 10 years, I know certain products inside out... What if I do a brain dump of some of the things I know, and try to sell it online?" And this became another -- now I call them another small bet. I spent like a month on this project, not a long time; brain dump of everything in a Word doc, minor editing, put it out on the internet... And within a few weeks, it made low five figures, and it was, again, eye-opening to me. Like, why am I bothering with this huge, ambitious project, highly speculative, that might never -- yes, the upside might be huge. If all the stars align, I might make a big exit, or have a high income from it... But there are these small, much more achievable projects that are much more attainable.
+
+ And over time, I kept experimenting with different ideas, some things worked, some things didn't, and I started to believe that this is a sound philosophy, and then, as I kept researching it more, I realized "This is nothing new." The operators in this world, like book publishers, venture capitalists in Silicon Valley, movie studios in Hollywood, operate on this strategy, and some of them have been doing it for centuries, like book publishers, even VCs... I mean, not the modern kind, but investors in general, since probably the Middle Ages, have had some portfolio of activities; they only invest small amounts of their funds, they never go all-in on one thing... They treat ideas -- I like to call it treating ideas like cattle, not like pets, so you don't fall in love with your projects... Yes, you want them to succeed, you'll be disappointed if they don't... But if they don't, it's just business; you move your attention to something else.
+
+ \[17:57\] And I started to see these parallels with how these industries operate, and I think many of their principles apply even to us as individuals. Yes, we're not investing capital, unlike a book publisher, or a venture capitalist who's investing their capital; we're investing our time. But it's equally scarce and precious, probably more so... And I think we can sort of, again, reason the same way as a VC does with their funds, but in our case with how we invest our time.
+
+ So what's our time? We only get a few hours a day that are at our disposal for our projects... How should we invest them? Should we go all-in for many years on one project? Is that a wise strategy? That would be like a venture capitalist going all-in with all their funds on one idea. It's almost never going to happen, right?
+
+ **Gerhard Lazu:** Yeah, of course.
+
+ **Daniel Vassallo:** Or should we sort of build a portfolio of small, safe to fail ideas, and have a good selection criteria for things that have a decent fighting chance of giving us a small payoff, keep building on those payoffs, keep building reputation, building assets, building sort of knowledge over time, and benefit that way? And again, I think back to sort of the previous question - this is why things are resonating with people, because they see it almost subconsciously, almost instinctively, that this is a much more prudent and realistic way of succeeding in this randomness-laden world of business.
+
+ **Break:** \[19:24\]
+
+ **Gerhard Lazu:** How do you know when it's time to start a project or wind down a project? And I'm calling it a project, maybe that's the wrong word...
+
+ **Daniel Vassallo:** No, I like project. So I think when to start is -- I think you need to have some selection criteria. When I talk about these ideas, some people react with "You know, it's probably not a good idea to just do random things." And I agree; you shouldn't just be trying random things, with no clue how they're going to succeed, or how you're going to make them work. I think it's important to have some selection criteria that allows you to filter ideas and arrive at things that have some fighting chance of working; you have some hypothesis. You could say, "Okay, I could do this, and here's how I will find N number of customers, and presumably, some of those customers will pay so much, and that would result in this type of payoff. And this payoff would be good for the type of effort I'm putting in."
+
+ So for example, when I created my AWS book, I thought "If I make $10,000 out of this, it would be great." I spent a month on it; if I were to earn $10,000, great. And I thought "Okay, what would it mean --" You know, this is a $30 book. Back then it was like that; how many people do I need to sell to to make $10,000? You know, one divided by the other, and I said, "Okay, maybe if I get 5% conversion rate, or 3%", I don't remember what numbers I assumed, "I need so many visitors." I had some hypothesis, and I said, "Okay, I have a Twitter account", I had 5,000 followers, "Maybe if I tweet about this 50 times over the next year, I will get those views."
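+
+ For readers who want the back-of-the-envelope math spelled out, here is a minimal sketch in Go. The 3% conversion rate is an assumption for illustration - it's one of the figures Daniel floats, and he says himself he doesn't remember exactly what he assumed:
+
+ ```go
+ package main
+
+ import "fmt"
+
+ func main() {
+     // Illustrative numbers from the episode; the conversion rate is assumed.
+     target := 10000.0  // desired payoff, in dollars
+     price := 30.0      // book price, in dollars
+     conversion := 0.03 // assumed visitor-to-buyer conversion rate
+
+     sales := target / price        // ~334 sales needed to clear $10k
+     visitors := sales / conversion // ~11,111 visitors needed
+
+     fmt.Printf("sales needed: %.0f\n", sales)
+     fmt.Printf("visitors needed at %.0f%% conversion: %.0f\n", conversion*100, visitors)
+ }
+ ```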
+
+ \[22:20\] No guarantee that this was going to happen, but I had some idea that there's a chance, that there's a reasonable chance. So I think when you bump into an idea, and you see the potential for a small payoff, I think it's worth giving it a shot. And I think many people exclude ideas because they're not sustainable, or they're not recurring, or they don't have some potential to be something big, but I think that's a mistake. People leave a lot of good ideas on the table, and I think these small, one-off payoffs are very much underrated. They don't just give you the payoff, they also give you lots of lessons, lots of new skills, new information about yourself and what you like to do, or what you're good at; they build your relationships with customers, more reputation, testimonials... So many cumulative things that you can build on from these small wins.
+
+ So when do you choose to start something? I think it's when sort of something meets your selection criteria. Is it something you can bring to market quickly? Is it something that you think you can bring to market on your own, or by yourself? Is it something that has a fighting chance of working, you can see some line of sight of how that payoff could materialize? If it's yes, and you have the time, give it a shot.
+
+ When do you choose to wind something down? I try, if possible, to avoid even being in that position, right? Why do you need to shut something down? You need to shut something down presumably because it's costly to keep on the market, right? Either financially, or operationally, mentally, it's just a headache. If part of the selection criteria is that you choose projects that don't have that property, I think you're much better off. Then almost nothing fails; things just haven't succeeded yet... Which I think is a much more liberating way of looking at things. And then - yes, you don't have the obligation to give more attention to projects. You may realize you'd rather give your attention to other things that have more potential. But again, you don't need to necessarily make this hard decision of "I need to shut this down" or "This is a failure." So that's something as well that I sort of over time realized is important for me, that in my selection criteria I ask myself, "Would this be costly to keep on the market? Would I need to make the payoff within a certain time?" And as much as I can, I try to avoid those projects.
+
+ **Gerhard Lazu:** Interesting. That's a very interesting perspective, because you're right, when you put a book out there, a course, something that's finished, in a sense, then you see what happens with it. And there's no need to do a version two, and there's no need to maintain it or anything. Once it's done, it's done.
+
+ Now, I've been meaning to ask this for a long, long time... And the last cohort, the one in December - there were some very late nights, and I have to say, I was falling asleep around like 12 AM... I think it was like four for the Asian timezone... How do you go from the AWS book and the courses to chopping boards? How did you make that -- how does that fit?
+
+ **Daniel Vassallo:** Yeah, good question. I think it's this "cattle, not pets" idea again - I don't label myself anymore as a programmer, or as a writer, or anything. To me, I keep my eyes open for opportunities... And the cutting boards started -- I started woodworking relatively recently, as a hobby, purely. I had no intentions of commercializing anything. I wanted to sort of make some furniture for the house, and I started watching YouTube videos, got interested, started doing it, liked it, and so on and so forth.
+
+ \[26:03\] But as I was doing it more, I started thinking, "It would be nice if all the things that I'm figuring out, I could monetize some of these." And I was thinking, "Should it be an educational project? Should I make some physical products?" I was starting to get tempted into the eCommerce space, because I had no experience with it. I was curious. And long story short - again, I stumbled onto somebody on Twitter who was talking about the hygienic properties of wooden cutting boards, and how much better they are, and how many studies there are that show that they're more hygienic than even stainless steel, and glass, and whatever... And I thought I can make a cutting board; I happened to make some for my house, and what if I tried to sell them? They're relatively easy to ship. I mean, they're just a rectangle, and not that bulky... And I gave it a shot.
+
+ I created a few designs, I've set up a landing page, I shared it online, I got a few sales... To be completely honest, the cutting boards project didn't really pay off as I was expecting. I was hoping I'd get some recurring stream of orders over time. I did get a flurry of sales, I made a few thousand dollars the first few months, but sort of the sales stopped. So that expectation that I had, that somehow they will keep happening - it didn't really happen. And again, this cost me nothing to keep on the market. So if I get an order today, I could do it, and maybe if I get inspired to come up with some creative marketing campaign, I could always give it a shot, and so on and so forth.
+
+ But to me, again, this is like recognizing luck, as we mentioned in the beginning. Once you keep your eyes open to new things, you start seeing opportunities that, if you have the blinders on, you would fail to see. Personally, I made a course on how to sort of build an audience on social media. That was probably even more drastic to me, because until a year before, I was completely ignorant on social media. I had never had a social media account, I never had a Facebook account... I had Twitter, but had never tweeted; just occasional scrolling through things. I never did anything in public. I was completely green, clueless; I don't have a marketing background, or whatever... And to me, the fact that just a year later, literally 12 months, or even less, not only did I gain experience on social media, but I was actually educating people... \[unintelligible 00:28:22.25\] for a long time. That was my best-selling product. It made over $300,000 in sales, it's quite incredible to me... And to be honest, I don't even know where people are coming from... Even today; I still make a few sales a day. Like, it's bizarre. And again, if I restricted myself with the label of "I'm a programmer" - that's what I did for almost 16 years professionally, and before that, when I was a kid, as a hobby for a long time - I wouldn't have considered these opportunities.
+
+ So I highly encourage people, again, to let go of the labels... Yes, think of your skills, assets, everything you're into, what you like to do. There's opportunities in adjacent things. I like to say that your skills should feel like an asset; you have the option to use them, but not an obligation. It's a mistake I made myself as well; when I jumped into self-employment, I almost felt I had an obligation to use my programming skills, because otherwise it's a waste; it's the sunk cost fallacy. I have it, so I should use it.
+
+ **Gerhard Lazu:** Of course.
+
+ **Daniel Vassallo:** But why? I mean, it's irrational. I should not feel an obligation to use it. Yes, I have the option, but there's many other things I could do or I could try.
+
+ **Gerhard Lazu:** Yeah. So obviously, this approach, this mentality, and these ideas - they all start as ideas - they haven't worked just for you. They worked for many other people that are in the Small Bets community. Are there some examples that you can share with us?
+
+ **Daniel Vassallo:** \[30:03\] There are a few. I'm not sure how willing they are for me to share their numbers publicly, or whatever... And success, again, is very relative. To some people -- many people in our community still have a full-time job, and they're not necessarily looking to leave it; they just want to have a portfolio of side projects. And these might be making sometimes just a few hundred dollars, maybe $1,000 a month, and for them it's a big success, right? It's something supplementing their income, exposing them to new things, giving them a fallback, so that if they get laid off or something were to happen, they don't have to start from scratch. That is very, very reassuring.
+
+ There's a few others who, yes, have taken a similar path, of starting with educational projects, mixing in software projects, other things, and going in many different directions, and have had some highly unexpected successes in areas that they least expected... And to others -- this might sound maybe a bit... I don't know what to call it; maybe a bit bizarre, or maybe... I don't know. But for what I'm doing in the Small Bets community, I think even if people haven't had success yet, the fact that they've been exposed to a different way of doing things is already, I believe, a success. The fact that the inspiration -- "Here's another way of doing things", it's knowledge. It's some amount of guidance. And again, we have 2,000 members, and many people join in very different circumstances in their lives... Some people are curious, some people are tempted, some are already taking the plunge and working for themselves, and want to make things better... So there are many different arrangements. But having a place where you can go share ideas, get feedback, have a support network of people who can help you sort of see a strategy that is much more prudent, and reason about it on a very pure, fundamental level - I think it's very empowering. I try to measure success that way.
+
+ I like to use the example - and I shared it on our Discord recently - related to woodworking. Recently, I bought a book about how to make your own doors. Well, recently - about a year ago. And I read it all, I enjoyed it, I learned a lot, but I never made my own door. And maybe I never will. Was that a regretted purchase? No. Actually, I really enjoyed it. Actually, recently I was recommending the same book to other people; money well spent. It taught me a few techniques... But if the author of the book were to make a survey of who actually put these techniques in practice, I would have to answer no; I haven't, and maybe I never would. But it was still a very satisfying purchase; the time I spent reading it was very well spent, and I recommend it... But what it gave me - again, showing me how a door is made, I learned something, some techniques here and there... And also knowing that if I did want to make a door, I'm not going to start from scratch. I have a reference that I could refer to.
+
+ I think it's a similar thing over here - out of the 2000 members, I'm sure maybe only a very small percentage actually put everything in action, and are living off their projects. But there's a wide spectrum in between; some people are just getting started, some are just waiting for the right circumstances to happen, but they have something to fall onto if they wanted to.
+
+ **Gerhard Lazu:** Well, if anyone is interested to see who is out there applying these skills, these ideas, you can always go to Daniel's Twitter. Sometimes you retweet what other people post... So you can check it out, see what examples are out there. And that is a very real world view as well. Real time, what is happening, not just when we recorded this. So going back to AWS - you were a software engineer for 16 years.
+
+ **Daniel Vassallo:** I was at Amazon for nine of those, but before that, I worked for other companies. Yeah.
+
+ **Gerhard Lazu:** \[34:06\] So what was it like for you to be working at AWS as a software engineer? I'm assuming you also had -- was it always a software engineer? Or did you go into management?
+
+ **Daniel Vassallo:** No, I was in a bit of a hybrid role in the last couple of years, but my official title was always a software engineer.
+
+ **Gerhard Lazu:** Okay. What was that like for you?
+
+ **Daniel Vassallo:** I went in thinking it was temporary. I think I always had the inclination that eventually I would want to work for myself. I grew up in a family of self-employed people, and I always sort of felt it's also in me, that I want to control how I work, what I work on, and so on and so forth. When I joined, I thought this was temporary; I just wanted to see how the sausage is made in a company like AWS, and I wanted to learn, and so on and so forth. But I think I fell a little bit into the corporate trap of -- you know, I kept getting promoted, I kept being treated reasonably well... They offered to move me to Seattle, and I came... Again, they kept increasing my pay beyond my expectations; when I was about to leave at one point, they almost doubled my salary. It was very hard to think of an alternative that would have been better financially. Like, if I had left to work on my own, just the gap in time until I started making money probably would -- again, if you're thinking just about financial expectations, it made it very difficult.
+
+ And again, it wasn't -- despite Amazon's reputation, it wasn't harsh work, or super-long hours. Yes, it took all my creative energy, that's for sure, and it was always something on my mind... But for a while, there was a period when I started to think "This isn't for me. Maybe I'll move companies. Maybe I'll go work at a different company eventually", but I thought this was what I was going to be doing... Until I think at one point -- it was actually, to be honest, a post by DHH that I think triggered it in me... I think it was in 2015, November 2015; I was on vacation, and I was sort of somewhat, again, a bit confused about the discrepancy between me thinking I'm going to stay a full-time employee and the situation I was in... And there was this great post by DHH \[unintelligible 00:36:28.13\] The topic was mostly about, again, the all-in venture capital approach of making software, and DHH was recommending people reconsider that approach, and start something smaller, more prudent, more modest, and so on and so forth. And again, I felt it was much more empowering to me. As I was reading that, for some reason, even if it wasn't the intention of the post, it made me realize that a career as a full-time employee wasn't for me. I started thinking of all my peers, my bosses, my bosses' bosses, and I realized that they were living a lifestyle that I didn't envy at all. I don't want to become one of them. Probably they were worse off than I was.
+
+ So no matter how much more I was going to get promoted, or how much more money I was going to be making, I was still going to be working on somebody else's terms, on their schedule, without a lot of control over my time. And I'm not saying this is the right thing for everyone. I think it's important for us to recognize our personality. But for me, it dawned on me that this was wearing me out mentally, probably even physically, and I needed to find another arrangement.
+
+ So after that, it was mostly -- I spent another couple of years there, just figuring out when was the right time to leave, and at one point I wrapped up a project I was working on and I just took the plunge, without anything concrete lined up. Long story short - I did learn some things there, though in hindsight I probably should have left sooner...
+
+ \[38:03\] But my takeaway for me personally is that I realized almost for sure that the full-time work arrangement - nine to five, Monday to Friday, 40-hour workweek - wasn't going to work for me, even in the most ideal situations. Even when I had a reasonably cushy job, well respected, highly paid, getting promoted, without any major dramas or challenges. Even when everything was well, it was still a problem for me, right? And that sort of made me realize that I can't just keep trying to optimize this... Putting lipstick on the pig, or what do they call it?
+
+ **Gerhard Lazu:** Yeah, that's a good one... \[laughs\]
+
+ **Daniel Vassallo:** That I need to make a radical change.
+
+ **Gerhard Lazu:** Yeah. Okay. I think this conversation is really timely, because many people, with all the layoffs in the tech industry, may be worried about their job, or maybe they have lost their job. What would you say to those people?
+
+ **Daniel Vassallo:** I think it's important to recognize, again, what our true, real preferences are, and to be really, really honest with ourselves. It's very easy to get fooled by signals from the outside, and what our friends are doing... We're all slightly different, and I genuinely believe this - it doesn't really matter what other people are doing; if we feel deep down that highly structured work, for example, is not ideal for us, I think it's important to start treating full-time employment as temporary.
+
+ Yes, your circumstances right now might require you to keep at it for a while - maybe for immigration issues, maybe for family; you're just about to have a baby, it's not the right time to make a big change. I understand those things. But once you start to see things as temporary, it's beneficial, because now you start thinking of the next thing, and you start optimizing for that.
+
+ Probably the worst thing one can do is be in an arrangement that is incompatible with their preferences. I think that's the biggest opportunity cost. In the modern world, when we talk about opportunity cost, we tend to think about financial opportunity cost. "Oh, I was doing this, I was making $100,000 a year, but I could have been doing this, making $150,000 a year." I think the biggest opportunity cost in life is living a lifestyle that you dislike, that isn't the ideal one for you. So sort of understanding that, and reflecting on our experience - how do we like working? Do we like to have somebody who tells us exactly what we should be doing? I'm certain there are people who flourish in that arrangement. "You know, I'm just a soldier. Tell me what to do. I'll do it to the best of my abilities. I will be super-proud of it, super-fulfilled." Perfect. Nothing against that. I think if you recognize you're that kind of person, look for those kinds of opportunities. Don't look for ambiguity and lack of structure, but the opposite; look for structure, look for precision, and follow that.
+
+ But if you realize that you're the kind of person that will flourish when you have an open schedule, when you don't have somebody telling you exactly what to do... You want to wake up in the morning with nothing planned, and you just go aimlessly and you still sort of manage to do things - you probably would want something less rigid. And this could be the next step; it doesn't necessarily need to be super-radical. You could transition from full-time employment to freelancing. Maybe if you were just laid off, and you're that kind of person who eventually would want to take more control over their time, maybe the next thing to do is - don't try to find a full-time job. Try to do freelancing; use your existing skills. Maybe you even offer your freelancing services to your previous employer. Maybe you got laid off, but maybe your employer would be willing to bring you on for 10 hours a month to help with some ongoing things.
+
+ Yes, it's a trade-off, you might lose benefits, or whatever... But again, what you're gaining instead is you get much more flexibility with your time; you can add more clients now. You can take time off without asking anyone's permission, right? And you can start to, again, build reputation, build testimonials, get a well-oiled machine of how to charge clients... And so on, and so forth. So it needs to be an incremental step, and then over time, again, adjust along the way.
+
+ \[42:06\] And I mentioned the two extremes, and you could be in the middle. You like a bit of structure, you still would want to -- again, bringing back the DHH, Jason Fried people, I've read almost everything they've written; they seem to like some structure. Both of them, I think, they like to say, "I work from 10 to 5, and I go to my room, and after 5 I forget about work." So it's different than how I work. I don't have fixed times, and I'm probably even less structured than that. Totally fine. Again, if you think that's important for you to activate your creative energy, that you need to lock yourself into your office and say "This is work time", and so on and so forth. For me, it's just whenever I feel inspired. I almost can't make myself productive unless I feel there's something important. And then, whenever it is, wherever I am, I will be productive.
+
+ But I learned this over time, as I was living my life, and I think what I did right is I acted on it, instead of just saying "That's suboptimal." Eventually, I took the plunge and I changed things. And I keep changing them. There's still some things that aren't perfect, and I keep adjusting with it.
+
+ **Gerhard Lazu:** So I was thinking more along the lines of the embracing change philosophy, and change being constant, and randomness, where people - they want to have that safety, they want to have that structure... But structure more in terms of "This is where I work. I've been working here for nine years. I see myself working here for at least another nine years..." But things change. So for those people I think it comes as a shock when that happens, because they were not expecting that. And it's not a matter of it being right or wrong, it's a matter of something unexpected happening and them feeling somehow cheated... Like "It's not right, it's not fair that this is happening." So I think for those people it's important to realize that the only constant thing is change. It will keep happening, whether it's AI, whether it's automation, whether it's frameworks... They keep coming, right? And it has nothing to do with you yourself personally, right? People take it as like a personal failure. And I think they shouldn't. I think they should understand that things are very volatile right now; things are maybe even more random than they were last year. And the financial sector is in a bit of a turmoil, there's like some very real world-changing events happening right now, which have an impact on everyone. And as a result, that trickles down, or trickles up all the way to the tech industry. It's not immune to these changes. So I think there's a lot of wisdom in what you share around randomness, around uncertainty, around trying things out and seeing what sticks, and what makes sense... And not being afraid to try things out.
+
+ **Daniel Vassallo:** Yeah. And I think one technique that I really like is the idea of negative visualization, that I think applies to everyone, no matter on what end of the spectrum you lie, whether you prefer the structure, stability, whatever, or more chaotic sort of arrangements... You should be thinking of what would happen if what I'm living right now gets disrupted? What if I were to lose my job? What would the consequences be? Would I be able to recover easily? Will I be able to find another job quickly? Would I be able to keep my current lifestyle? Would it affect me significantly? And if you convince yourself that things are fine, then you're in a good spot. I think you sort of suddenly realize that there's not much to worry about, and whatever happens, happens. That you will be able to react.
+
+ \[45:53\] But if you realize that you're in a bad spot, that you're unlikely to get hired, maybe because your skills are now too specialized -- you know, something I realized, for example, when I was working at Amazon, in these senior positions at big companies... There's probably only maybe ten companies in the world that would want that kind of level or that speciality, because they're senior executive directors for some specific thing that maybe only Amazon, Google, Microsoft, and maybe Facebook would ever want. So what if those few companies were suddenly to realize they don't care about this exact function anymore? What would that mean for you? Would you be able to do something different? Would you want to? Would it affect your dignity, or your pride? Or would it affect your lifestyle, because you'd make significantly less money? Are you prepared?
+
+ So it's not a pleasant thing to think about the bad things that could happen, but I think it's a technique to, paradoxically, improve your peace of mind. In my course I like to use the example of the preppers, the survivalists, who worry about a volcano erupting... What do they do? They don't just worry about it and say, "Oh, hopefully it won't happen." They prep. They stock their bunker with food and medicine and water and fuel and whatever... So they don't want it to happen, but if it were to happen, they know that they have reasonably prepared.
+
+ So I would recommend everyone to do this... To think about the worst-case scenarios and how you would react to them. Sometimes you realize all you need is a little bit of planning; maybe you can buy some insurance, maybe if you have some emergency fund savings, you're in a much better spot... Maybe you realize you need to reskill yourself, to retrain yourself, you need to know some other things to be able to fall back to them if they were to happen.
+
+ Some of the people in our community like having some side hustles for this reason, to know that if I were to get disrupted, I have something independent that's going on. Yes, it might only be 10% of my income, not something significant, but if I were to get disrupted or affected significantly, there's something I could fall back to, and eventually I could \[unintelligible 00:48:02.29\] which is much better than going back to square one, starting from scratch.
+
+ **Gerhard Lazu:** Yeah. I think that what you do is really important; what you do in your job is really important. I think you have the same mentality, because I've seen you quoting Leonardo da Vinci... It was just a screenshot, where Leonardo - he talks about what he does, not who he is, or what he is.
+
+ **Daniel Vassallo:** Yeah, the labels thing. I really like that. So do not describe yourself with a label - it's not "I'm a programmer", it's "I can do this". Once you itemize your assets, your skills, I think, again, you start seeing opportunities. Leonardo da Vinci - he could have called himself a sculptor, or a painter. But there were a million other things he could do. And I think, of course, nobody's like Leonardo da Vinci, probably, and maybe nobody ever will be... But I think it's a good attitude to think of ourselves that way. "I can write. I can manage a team of people. I can do some basic web design. I can program. I can do some woodworking (completely unrelated). I can do some DIY around the house. I can fix leaking toilets." Of course, not everything is conducive to a commercial opportunity, and maybe I wouldn't want to pursue it... But once you start thinking in these terms, you start seeing opportunities that you would be blind to if you just think in labels.
+
+ **Gerhard Lazu:** Yeah. So to continue on that thought, I've read somewhere - I can't remember where - that you also happen to be head of product at Gumroad. Is that true?
+
+ **Daniel Vassallo:** I used to. So I did this for a couple of years. And this was another thing, actually, related to what I've been talking about. So long story short, I was using Gumroad to sell my own products for a while. I got to know the CEO of Gumroad, Sahil, over Twitter, just because - you know, business relationship; he used to ping me with new features, ask my feedback, so on and so forth... So it was a bit of a warm lead there.
+
+ \[50:04\] One fine day - a random day; it was August 2020, pandemic period - I remember I was just scrolling Twitter... And I saw Sahil had posted a "We're hiring" tweet. And it was mostly programming jobs in areas that I had no knowledge in; sort of Android development, and some other things... But I clicked out of curiosity, because it was part-time work, at a high hourly rate, and I was curious what Sahil was doing.
+
+ At the very, very bottom of the page - it was a long page, with lots of technical job openings - there was an asterisk, and Sahil said, "Eventually, I'm looking for somebody to help me with product management at Gumroad." And I thought - again, I never had a product management role in my life; specifically, it was never my title or my job description. Nevertheless, for my last two years at AWS I was almost a hybrid software engineer/product manager/team lead; I was doing a bit of a jack-of-all-trades thing. I was \[unintelligible 00:50:57.18\] team. Our real manager had left, and we couldn't fill the vacancy, so I was sort of doing lots of different things. And I enjoyed it, I thought I was reasonably good at it, and Gumroad happened to be a product I was using every day; I was recommending it to people, because I enjoyed the tool, very simple to use... And I got to know Sahil a little bit... And I sent him an email - and I wasn't looking for a full-time job or anything. And I told him -- and again, this was literally a couple of hours later; slack in the system. I didn't just sit on that "Eventually" - I reached out immediately. I said to him "Look, I don't know what you're looking for; I don't know if you're looking for something part-time or whatever for this, but I was wondering if you'd consider not even part-time, but quarter-time... I could maybe do (I said) like 10 hours a week on average." And I told him, "As you know, I use Gumroad every day, I have some good ideas. I worked as a de facto PM for a little bit, even though it was never my official job title. I love Gumroad", and so on and so forth.
+
+ We jumped on the phone the next day, and I started doing it. So I did it for a couple of years, until May of last year, I think. It was a freelancing gig, essentially - a contract-type job, helping the Gumroad engineering teams sort of prioritize things, prioritize features, scope features, talk to customers and figure things out, things like that.
+
+ One thing in my portfolio - again, in the meantime I was doing several, four or five other things... But this was a nice income stream; eventually Gumroad grew so much that it sort of became almost impossible to keep up with everything happening. That's why primarily I stopped, because it was quite a nice arrangement, but it sort of became too difficult. And again, I wasn't looking for a full-time job. That was very important for me. 10 hours a week was the most I could dedicate, I wanted to dedicate to a single activity like this. Once I started thinking about 20 hours or more, it started to feel "I don't want it to be that big." So yeah.
+
+ **Gerhard Lazu:** That's very nice. Thank you for sharing. Okay. So you were a software engineer, now you're a jack of all trades: whatever takes your fancy. No labels. We already covered that part. I'm wondering, how do you take your ideas and put them out in the world? I would call it "ship it", just to go with the show... But how do you go from idea to actually putting it out there? Gumroad is obviously one platform that you use to publish your content... But what does it take, for example, for you to get your website out there? I'm basically asking about your tech stack, and your setup, and what you run.
+
+ **Daniel Vassallo:** \[53:50\] To be completely honest, I think it's probably one of the least important things to me. Whatever is easiest. I like Gumroad because it has less friction. The fact that Gumroad has a well-defined style meant I didn't have to think about what fonts to choose, what colors to use, whether the title should be 24 pixels or 25 pixels... I liked those constraints. I'm not necessarily saying everyone should do this, but for me it's easiest. It's a tool I'm familiar with now, so it's probably even more so - I gravitate even more to what I'm familiar with.
+
+ To me personally, the most important thing when you launch something isn't your tech stack, or whatever; it's how you're going to get attention to your product, and how you build credibility with what you're offering... Which are obviously not engineering or technical topics. Because to me, if you put something online and nobody hears about it, it's as if it doesn't exist, right? And if you can't build trust - even if you managed to get attention - it's the same thing. People will bounce off the site, and nobody will close a transaction with you.
+
+ And by the way, actually, related to Gumroad, I think putting something on Gumroad helps a little bit; not a lot. But by placing something on Gumroad - many people use Gumroad every day. It has - I think Sahil shared some numbers recently - millions of buyers per month, or per year, I forgot. Many people already have their credit card saved on Gumroad. It's like buying something from Amazon. There's some small trust. Sure, it's not sufficient. Probably the creator needs to have much more trust, but it's something that helps a little bit. So it's one factor that might help me choose one tool versus another.
+
+ But what's most important is "How can I bring attention to this page? What am I going to use? Is it my Twitter audience? Is it my email list? Is it search engine results? Is it paid ads? Is it word of mouth? Is it talking on public forums, communities on Reddit, Hacker News?", whatever. And once I get people there, how can I make them trust that what I'm offering them is what they're looking for? You can do that with testimonials, success stories, with your own story, putting your own skin in the game a little bit if you can... Many different techniques. And I think that's what people should be focusing on, more than the tech stack. The tech stack - whether you use WordPress, or SquareSpace, or a static HTML file on some S3 bucket - whatever is easiest... Again, I would recommend people choose the simplest things, just because you don't want to be wasting time, or spending too much energy, or feeling that there's friction. You don't want anything to be daunting, so that you keep putting it off; you want to get it over with. And I love Gumroad for that. Like, you literally can start selling things by picking a name, setting a price, and you basically get a link, and if people visit it and pay, next week money will arrive in your bank account. It almost can't be easier. And of course, you can keep optimizing it, you can keep adding images, and demos, and whatever... But it's all optional.
+
+ **Gerhard Lazu:** Yeah, that's very wise advice, again. Do the simplest thing for you, without wasting too much time. If simple means wasting a lot of time, then reconsider that.
+
+ **Daniel Vassallo:** And remember, you always have the option of improving later, right? Rarely you paint yourself in a corner with these things. There are some cases where you could, but I think most of the time you can always optimize, and improve, migrate, or whatever.
+
+ **Gerhard Lazu:** So this year, to me, it feels like it started yesterday. Obviously, it's been almost a month now. So we're still in January, but by the time you're listening to this it'll be February. So even though one month out of 12 is gone, I'm wondering, how do you think of 2023? Do you have something in mind that you would like to do, something that you'd like to try this year?
+
+ **Daniel Vassallo:** Honestly, nothing very specific. I'm going to approach it like I approached 2022, 2021 and 2020 - with just an open mind, taking care of my downside. My main concern, my only goal, I would say, is to sustain my current lifestyle and to not go back to a nine-to-five job. I know it's not the end of the world if it were to happen, but for me it would be a significant lifestyle downgrade.
+
+ \[58:10\] So the attitude of doing whatever it takes to maintain this, I think it's a helpful attitude for me. As I mentioned before, I almost feel incapable of being highly creative, highly productive, unless I feel the need to. So right now things are going well, I have sort of this routine, I'm doing these regular cohort courses, I have the Small Bets community, there's a steady stream of people joining, things are going well; I don't feel I need to do any major changes... I have some plans to improve things, working on some small improvements here and there, but nothing radical.
+
+ But again, I recognize that this thing might not last. No idea whether by the end of 2023 it's still going to be working as it is today. It could be much worse, it could be even better, or anything in between. But I'm fine with that. I still have my eyes open, and if I run into some idea, some opportunity - I have lots of slack in the system right now, mostly in a state of wandering about, waiting to pounce on something... And to me, basically, at the start of the year I would have been incapable of predicting how it was going to end. If I had gotten this question at the end of 2021, I would certainly not have predicted how 2022 finished. And same thing for the previous year.
+
+ So I think I would be foolish to try to predict where 2023 will end, and again, for the type of personality that I have, I think I like it this way. Some people might say, "Oh my God, it would be a nightmare for me to not know what I would be doing, how much I would be making", but I realized actually I think I need this... Because if I don't have some variance, some unpredictability, some ups and downs, I lose almost the drive to actually do things. If things are too steady, I become incapable of working. So I recognize it, and when things are good, I take it easy; when things start to show some signs that they need some correction, I don't need to convince myself that I need to work harder, or try other things. Suddenly, almost another part of my brain activates, and I become super-motivated. Again, I start seeing better, new opportunities, new ideas, maybe new, creative ways of marketing my existing products and services, and so on and so forth.
+
+ **Gerhard Lazu:** Yeah.
+
+ **Daniel Vassallo:** So a vague answer, unfortunately, but --
+
+ **Gerhard Lazu:** No, no, no. That was spot on. Honestly, if people know you, and they've been following you for a while, this matches everything else you've been putting out there. This is you.
+
+ **Daniel Vassallo:** And I would add, I think this is a common theme that I've been hearing with many other creators and entrepreneurs, that what they're doing right now is something that a year ago, two years ago they would have never imagined to be doing. I think there's an important takeaway about that, because again, we tend to think that our imagination knows what's best for us, what's the ideal business, what's our dream job, and whatever... But then we stumble into random things, and we realize "This is it. We would have never imagined to be doing these kinds of things." So the openness to that I think is very important... And sometimes having these very specific visions can, again, harm us rather than help us... Because then we ignore everything else that's not conducive to that one specific thing.
+
+ I'd like to say - you know, New Year's was relatively recent, and many people make New Year's resolutions, some of them professionally related: "I want to finish/publish a book before the end of the year", or something like that. And I dislike these very specific things, because again - yes, it might be a noble goal to try to do something like that, but you condition yourself to ignore everything that's not related to publishing that book. But you could bump into something that's 100 times better for you, that you will probably enjoy much more, and it will probably give you a much better payoff, in the middle of the year... But if you're so focused on the book, you're almost going to ignore it, because it's distracting from your goal.
+
+ \[01:02:26.29\] So it's okay to have goals that are broad, for example "I want to be able to cover my expenses with my self-employment income this year", because there's a million ways to do that. "I want to continue to live my current lifestyle", or "I want to make $100,000 this year" - that's a decent goal, because there's many ways to do it. But the very specific ones, I think, tend to cause more harm than good. Yes, the specific goal might make you reach that goal, but the downside is that you might have missed out on many things that you could have stumbled on, that you couldn't even have imagined on your own.
+
+ **Gerhard Lazu:** Yeah. You mentioned $100,000 per year, which reminded me of something that you wrote not that long ago... "You only need to make six sales per day of $45 each to make $100,000 per year." I think that puts things a little bit in perspective for people that think it's a huge thing. It's not. With the right approach, with the right openness, with the right opportunities coming your way - you not being too busy to even see them, or not have time for them... Which is why, again, going back all the way to the beginning, having that slack, having that open mind, being relaxed... Not being stressed all the time, not rushing everywhere, will make things happen for you, in a way that nothing else will.
+
+ Okay, so as we were preparing to wrap up, for the people that stuck with us all the way to the end, is there a takeaway that you'd like them to have from our conversation?
+
+ **Daniel Vassallo:** Yeah, I think what I'd like to recommend to people is to recognize that life and business are much more random than they seem. A mistake that I think we all tend to make, and even I make sometimes, is just believing that things are much more linear... And I think, especially in business, the things that help us succeed in a much more predictable environment - like our full-time job, or when we were still at school; you're given the syllabus, you study those things, and if you work hard and study the right things, you will almost automatically move to the next level, so it's almost like a video game... In business, it's rarely like that, right? There are some business activities where that might be true, but in general, especially when you're creating products or services or doing things like that, the things that helped you in those domains not only won't help you, but will sometimes harm you. I think when you're dealing with something very randomness-laden, you need a completely different strategy. Rather than sort of just hard work, you need to tinker and experiment with different things; rather than optimizing, you need slack in the system. Rather than focusing on the goal, you need to focus on your downside, on staying in the game, because you don't really know what will end up paying off...
+
+ \[01:05:22.12\] So what's critical is just to avoid a game-over state. And then there are lots of small, different techniques that I think become obvious and much more clear once you realize that you're operating in a completely different domain. That what works isn't predictable, the payoffs are not very predictable... Whereas when we're working in a much more predictable environment, like a full-time job - yeah, you're told by your boss "If you do all these good things, you're very likely to get promoted." Yes, there's some randomness; maybe you won't get promoted this year, maybe you have to wait another year, or another quarter, or whatever... But the domain of unpredictability is very limited. And if you were to get promoted, you know more or less how much you're going to be making. It's probably not going to be a million times more; you'll probably get a 10% raise, or a 20% raise, or whatever... Things are easier to reason about in that world. But in most other business activities, all those things go out the window. You could work as hard as you can, put in all your time and effort, you could be seen as skilled, extremely motivated, do all the validation things and whatever, and yet your payoff might never materialize. And this could be very discouraging. Discouragement is actually a very ruinous thing, because it can lead you to that game-over state. Not necessarily financial; you don't need to go bankrupt to get ruined. It could be just mental discouragement.
+
+ So I think the antidote to this is, again, very prudent risk-taking, reducing the efforts we put, tinkering, small experiments, taking care of the downside, negative visualization, "What would happen if this project that I'm working on doesn't work out? Will I become depressed? Will it put me in a bad financial situation? Would it harm my reputation?" All these things are important things to cover, and make sure that you adjust your inputs to be compatible with your risk appetite.
+
+ **Gerhard Lazu:** Yeah.
+
+ **Daniel Vassallo:** So I hope that helps people, or at least give them some food for thought... Not necessarily to embrace everything, but at least to reconsider how to approach highly uncertain activities that are different than a project that your boss might give you.
+
+ **Gerhard Lazu:** The predictable world versus the stochastic world. That was one term that - even though I had heard it before, small bets - really stuck with me the way you presented it; the way you made people think about what type of game they are playing. Because it's all a game at the end of the day. We are Homo Ludens. We love to play. And if you don't see it that way, it's not as fun, I have to say. So what type of game are you playing? That's important.
+
+ **Daniel Vassallo:** Or you happen to be playing. Sometimes I think we don't recognize - are we operating in this world, or the other world? So recognizing it is important.
+
+ **Gerhard Lazu:** Alright. Daniel, it's been an absolute pleasure. Thank you very much.
+
+ **Daniel Vassallo:** Thank you, Gerhard. This was great. Good conversation.
+
+ **Gerhard Lazu:** Have a great day, everyone.
+
+ **Daniel Vassallo:** Thank you.
Why we switched to serverless containers_transcript.txt ADDED
@@ -0,0 +1,319 @@
+ **Gerhard Lazu:** One of the talks that I didn't have time to watch in-person... I think it was while I was giving my talk - this was Cloud Native Day in 2022 - was Florian's talk, "Why we switched to serverless containers." I have re-watched it on YouTube, twice. Not because I was preparing for this, but because I really liked it, you know? So Florian, welcome to Ship It.
+
+ **Florian Forster:** Well, thank you for having me.
+
+ **Gerhard Lazu:** What is the story behind the talk? What made you want to give it?
+
+ **Florian Forster:** It basically boils down to this: we have been huge fans of Kubernetes so far. And I still really like the ongoing effort behind Kubernetes. But while running a cloud service that should be kind of scalable, we hit some limits around the quickness of scaling in Kubernetes, and we rethought our stack to better address that, and to kind of get better scaling and better cost profiles as well. Because that's also a side of it - the economic side was also kind of a driver.
+
+ **Gerhard Lazu:** Yeah, yeah. So when you say serverless containers, what do you mean by that?
+
+ **Florian Forster:** Well, we labeled it that way because we thought, "Okay, Citadel is built as a Go binary; pack it into a container, and you can run it basically everywhere." And that's what our customers do. They normally use Docker Compose, or Kubernetes. And we wanted to keep that capability around, so that we actually eat our own dogfood, and not just create something new for our internal purposes, but instead rely on the same things we tell our customers to use. And so serverless containers is kind of the definition - in our heads, it's basically a platform where you can run some kind of OCI image. And you could call it Knative, you can call it AWS Fargate, you can call it Google Cloud Run, whatever fits your poison. I mean, it could even be like fly.io, or something like that. Basically, plug in a container and it should be run, scalably. That's kind of the definition we made right there.
+
+ **Gerhard Lazu:** Okay. Okay. So the container part - that is very clear. But why serverless?
+
+ **Florian Forster:** Because we don't want to handle and tackle the effort of running the underlying infrastructure in general. Because, yeah, if coming from Kubernetes, you are well aware you need to handle nodes to some degree, even with things like GKE Autopilot, there is still an abstract concept of compute nodes behind it. And the whole, let's say undertaking with Cloud Run and AWS Fargate and all of those - you basically plug in your container and tell it "Please run it from 0 to 1000", and you don't need to care whether it's one or 100,000 servers beneath. That's why we call it serverless to some degree, even though the word is absolutely -- it's wrong. \[laughs\] I don't like the word serverless, to be specific.
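+
+ As a concrete illustration of that "plug in a container and tell it to run from 0 to 1000" idea, here is what it looks like on one of the platforms Florian names - Google Cloud Run in this case. The service name, image and region are hypothetical placeholders; the scaling intent is literally two flags:
+
+ ```sh
+ # Deploy an OCI image as a serverless container; no nodes to manage.
+ gcloud run deploy my-service \
+   --image=gcr.io/my-project/my-app:latest \
+   --region=europe-west1 \
+   --min-instances=0 \
+   --max-instances=1000
+ ```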
+
+ **Gerhard Lazu:** Okay. So if you could use a different word, what would you use, instead of serverless? What do you think describes it better?
+
+ **Florian Forster:** You could call it container as a service, to some degree, I think... Although that's kind of plagued by the HashiCorp Nomads, and Kubernetes, and Docker Swarms of this world... Because people tend to think that a managed Kubernetes offering is container as a service. And I don't like that nuance - but that's a personal opinion - because I still, to some degree, see the underlying infrastructure. And yeah, serverless kind of reflects well that there is no server to be managed... But still, you can use your funny old OCI image and just throw it in, and it will keep working.
+
+ **Gerhard Lazu:** Yeah, yeah. So just by going on this very brief conversation, my impression is that you've been doing this Kubernetes dance for quite some time. Right?
+
+ **Florian Forster:** Yes... \[laughs\]
+
+ **Gerhard Lazu:** You're not in this for a few months and decided you don't like it... Or you've tried it and you said, "You know what? This is not for us." You've been using it for a while, because you have a very deep knowledge about the ins and outs of how this works. So tell us a little bit about that.
+
+ **Florian Forster:** Well, I mean, the first contact points with Kubernetes were around when the kube-up script was still there - kube-up.sh.
+
+ **Gerhard Lazu:** Wow... Okay. I don't remember that thing... So that's a really long time. Okay.
+
+ **Florian Forster:** \[05:43\] Yeah. It was way pre-Rancher, pre-K3s... Yeah, even OpenShift was like the classic OpenShift at that point in time. At the company I worked at at that time, we had a need for an orchestration system to run some things in a scalable fashion, and we started poking into the Kubernetes ecosystem, because everybody was kind of hyped... "Yeah, yeah, Google is behind it. Borg is being replaced by Kubernetes." Everybody was talking that way, and I thought it was worth looking into. And I liked some of the thinking, but still, to this day, it keeps getting more enterprisy, and that's a great thing if you are an enterprise. But if you're a startup, it's too much abstraction, too many things to care for. It doesn't feel like a hands-off operation. I mean, even just running observability in Kubernetes is like, "Yeah, pick your poison." There are like 20 different ways of doing things, and why should I even need to care about that too much if I just want to run one container? Please run that container for me. And so yeah, that's where my personal change from Kubernetes to something else started. But still, I totally like what they do on their end, even though a lot of complexity is involved.
32
+
33
+ **Gerhard Lazu:** So I think we finally did it. We've found the person with 20 years of Kubernetes experience, right? Everyone thinks they don't exist. We've just found him. \[laughter\]
+
+ **Florian Forster:** I definitely aged like 20 years around Kubernetes...
+
+ **Gerhard Lazu:** Because of it? I see. Okay, okay.
+
+ **Florian Forster:** Yeah, definitely. I mean, at one point, funnily enough, we decided to switch to Tectonic, like from CoreOS; it was at that point still around... With their self-hosted Kubernetes control plane, like running the control plane itself in containers... It was a great thing to do. And etcd as well. But yeah, it had some nifty tweaks and problems... And you aged quite a bit in that case.
+
+ **Gerhard Lazu:** Okay, okay. So what is it that you do now? What is your role?
+
+ **Florian Forster:** I'm not sure what the translation into English is, but let's call it person for everything that nobody else takes care of...
+
+ **Gerhard Lazu:** I see. Janitor? No. Handyman? No... \[laughter\]
+
+ **Florian Forster:** I label myself as CEO, though still I feel more like a CTO-ish, head of DevOps-ish kind of guy, but...
+
+ **Gerhard Lazu:** Whatever. Yeah. Whatever needs to happen, you know, you're there.
+
+ **Florian Forster:** Exactly. I do a wide range on the business side of things, the overall vision on how we want to shape Citadel, and also the things - how can we ease some stress in our operations part of the game... So normally, well-opinionated about many things, even though I'm not all the times able to talk in the whole depth.
+
+ **Gerhard Lazu:** Because time, right? And time constraints. You can't be everywhere, doing everything all the time, at 120%, right? You have to pick your battles.
+
+ **Florian Forster:** Exactly. I mean, if you asked me whether I like Go generics or something like that, I need to resort to the answer "I have not experienced them yet, because I haven't had time to look into it."
+
+ **Gerhard Lazu:** Okay...
+
+ **Florian Forster:** My engineers will tell you a different story, but I'm not opinionated on that end.
+
+ **Gerhard Lazu:** So what does Citadel do? And maybe I'm mispronouncing it, but I'm sure that you will give us the official pronunciation.
+
+ **Florian Forster:** I mean, we call it Citadel as well, so it's totally fine. And the logo kind of tries to reflect the word origin, because it comes from the French way of building fortresses, with the star design... The place where you commonly see this is in Copenhagen, for example; there's still like the fortress with the star design. That would be called Citadel, but in French, but nobody cares. So Citadel in English.
+
+ \[09:45\] What we basically try to do is we want to bring the greatness of Auth0, like a classic closed source proprietary cloud service, together with the greatness of Keycloak's run-it-yourself capabilities, and we want to combine them into one nice, tidy package, so that basically everybody who has at least a heart in engineering can solve some of the problems around the identity game in general. So that includes things like "I want to have a login site, I want to have authorization, I want to have single sign-on with different IDPs, I want to have tokens that I can send through the world..." And everything like that. So basically, you could call it a turnkey solution to solve user management and authentication in general.
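To make the turnkey claim concrete: from an application's side, any OIDC-compliant provider reduces to a discovery endpoint plus token verification. A minimal relying-party sketch in Go using the coreos/go-oidc library; the issuer URL, client ID and raw token are placeholders, not Citadel specifics:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-oidc/v3/oidc"
)

func main() {
	ctx := context.Background()

	// Hypothetical issuer and client ID; discovery fetches the
	// provider's signing keys and endpoints automatically.
	provider, err := oidc.NewProvider(ctx, "https://issuer.example.com")
	if err != nil {
		log.Fatal(err)
	}
	verifier := provider.Verifier(&oidc.Config{ClientID: "my-client-id"})

	// rawIDToken would normally arrive via the OAuth2 authorization
	// code flow; this placeholder will (correctly) fail verification.
	rawIDToken := "eyJ..."
	idToken, err := verifier.Verify(ctx, rawIDToken)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("token issued for subject:", idToken.Subject)
}
```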
+
+ **Gerhard Lazu:** Okay. Okay. So there's one word that comes to mind when you see that. Actually two. CPU and hashing. \[laughter\] So from all that description, all I'm thinking is "I hope your CPU is fast, and your hashes are even faster." But don't skimp on the cycles. Right? Like, make sure you do enough iterations. Okay? No cheating, please. Okay.
+
+ **Florian Forster:** Yeah, I mean, it comes down to hashing. We rely on Bcrypt normally, and yes, we need many CPU cycles... But also, there's like a second thought, that if you do signing tokens - so that's also quite exhaustive for CPUs nowadays. And so yes, we use a lot of CPU to make that happen.
+
+ But actually, the stress is somewhat alleviated going into the future, because the passkey concept and \[unintelligible 00:11:30.03\] in general, since that relies on public-private key cryptography, reduces quite well on our end the amount of CPU we need to use, because RSA signature verification normally takes around one millisecond. So it's not that much of a stress. But hashing a bcrypt password with a cost factor of 10 to 12 might be more in the realm of like 800 milliseconds to 1,000 milliseconds, around that \[unintelligible 00:11:57.27\] if you run like two to four CPUs.
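Those numbers are easy to get a feel for: bcrypt's cost factor is the base-2 log of the round count, so each +1 doubles the work. A minimal Go sketch with a throwaway password; absolute timings depend entirely on the CPU:

```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	// Throwaway password, purely for timing purposes.
	password := []byte("correct horse battery staple")

	// Cost 10-12 is the range discussed in the episode.
	for cost := 10; cost <= 12; cost++ {
		start := time.Now()
		hash, err := bcrypt.GenerateFromPassword(password, cost)
		if err != nil {
			panic(err)
		}
		fmt.Printf("cost %d: hashed in %v\n", cost, time.Since(start))

		// A login check re-runs the same key derivation, so
		// verifying costs roughly as much as hashing did.
		start = time.Now()
		if err := bcrypt.CompareHashAndPassword(hash, password); err != nil {
			panic(err)
		}
		fmt.Printf("cost %d: verified in %v\n", cost, time.Since(start))
	}
}
```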
+
+ **Gerhard Lazu:** Yeah, yeah. Okay. Do you use GPUs for any of this?
+
+ **Florian Forster:** No. At the moment, no. I mean, it comes down to a certain degree that our cloud provider does not really allow for that... And on-premise environments also are oftentimes really restricted in the kind of GPU they have around... And so we try and avoid for Citadel too much of integration and depth there. So we still rely on CPUs. But there might be a time where we want to retrain some analytics model with machine learning, and so we are looking into that space. But it's not yet there, to be honest.
+
+ **Gerhard Lazu:** So most of the workloads that you run, I imagine they're CPU-bound, because especially of the hashing. But I also think network is involved. So you need to have a good network. I'm gonna say good network - not high throughput, but low-latency network. Can you predict these workloads? I mean, people sign in whenever they need to sign in; they don't sign in all the time. Right? So can you tell us a little bit about the workloads specifically that you run?
+
+ **Florian Forster:** Yeah, we have a common pattern called double-dip traffic. So we often see like traffic starting like seven o'clock, until twelve, and then from one to around seven in the evening. So that's the major traffic phase. But if you spread that across the globe, you see certain regions ingest traffic during certain times, but globally, overarching, it's basically a big white noise line. It's like more flat if you watch all the regions, but if you really do a region, you really see like double-dips happening all the time.
+
+ \[13:45\] And yeah, having a network that can be easily and fast scalable, and really provides low latency to our customers, because they use our API, our login page, is kind of important to the service quality. And so we internally set our goal to "Okay, let's try and keep latency below 250 milliseconds for 80% of the calls." That's kind of the SLO we're aiming at, but that's not all the times possible. I mean, down under - yeah, you get bad latency. You get strange latency if you go for South America... So yeah. Normally, it does not -- it comes down not to the problem being Citadel, who you can easily run at the edge, but rather how can you move data along the journey, and that's kind of the thing that holds you back most.
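That SLO - 80% of calls under 250 milliseconds - is simple to evaluate over log-derived samples. A toy Go sketch with invented latencies, just to show the shape of the check:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

func main() {
	// Invented samples; in practice these come out of access logs.
	samples := []time.Duration{
		120 * time.Millisecond, 95 * time.Millisecond, 310 * time.Millisecond,
		130 * time.Millisecond, 240 * time.Millisecond, 90 * time.Millisecond,
		410 * time.Millisecond, 150 * time.Millisecond, 200 * time.Millisecond,
		180 * time.Millisecond,
	}

	// Share of calls within the 250 ms target.
	slo := 250 * time.Millisecond
	within := 0
	for _, s := range samples {
		if s <= slo {
			within++
		}
	}
	fmt.Printf("%.0f%% of calls under %v (target: 80%%)\n",
		100*float64(within)/float64(len(samples)), slo)

	// The equivalent view: the 80th-percentile latency itself.
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	fmt.Println("p80 latency:", samples[int(0.8*float64(len(samples)))-1])
}
```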
+
+ **Gerhard Lazu:** Yeah, that's right. Especially if you're in like SAML, or something like that, where you yourselves need to talk to another provider, which itself may have variable latency at specific periods of time... And then you can only guarantee what you get, basically. You can't make (I don't know) GitHub login, or Twitter login, or whatever you're using, faster than it is.
+
+ **Florian Forster:** Yes. It's bound to external services. And in our case, for the storage layer we mainly rely on CockroachDB, since they can move around and shift data closer to data centers that they are actually being used. But still, the data is the most heavy thing right there if you want to reduce latency.
+
+ **Gerhard Lazu:** Okay. What does your tech stack look like? Like, what do you have? What is Citadel running on?
+
+ **Florian Forster:** Basically, we use internally Cloud Run for our offering, so Google Cloud Run, because we can easily scale during traffic peaks, as well as CPU peaks, because Google will scale Cloud Run containers as well based on CPU usage. That fits our narrative quite well. And also, the ability to scale to zero in not frequently used regions is really a thing we really like.
+
+ Below that, we use CockroachDB dedicated, as like their managed service offering of Cockroach. In the past, we actually did run Cockroach on our own, but we figured the amount of money and time we need to invest is basically taken care of by Cockroach if they run it; it's even economically more valuable, because they include their license quite nicely into their cloud offerings, so... Basically, you have multiple reasons to do that.
+
+ And up front, we use Google CDN and Google Global Load Balancer with Cloud Armor to mitigate some of the risks around DDoS, and rate limits, and stuff, because we see malicious behaviors all the time... And yeah, the stack is really lean in that regard... Because the only thing we can append to that from an operational perspective is that we use Datadog to some degree for observability purposes, and also Google's suite... And we're kind of torn in between, because - yeah, why should I send logfiles to Datadog if they already have -- I already have them in Google's cloud offering. Why should I pay twice for it? But there are some catches in Cockroach's offering - you can only use Datadog to monitor the database - and so we're kind of bound to Datadog. So it's a not-so-funny in-between state there...
+
+ **Gerhard Lazu:** Interesting.
+
+ **Florian Forster:** ...because we try to reduce our amount of third-party processes whenever possible on that end.
+
+ **Gerhard Lazu:** Okay.
+
+ **Florian Forster:** That's kind of the stack we use.
+
+ **Gerhard Lazu:** I like that. It's simple, but it sounds modern, and it's complicated in the areas where it's sometimes by design; as you mentioned, this coupling between CockroachDB the dedicated one, which is managed for you, so you're consuming it as a service, and there's a coupling to Datadog. And I'm sure that choice was made for whatever reasons...
+
+ **Florian Forster:** Yes... \[laughs\]
+
+ **Gerhard Lazu:** Okay. So there's Google Cloud, there's CockroachDB, there's Datadog. Anything else?
+
+ **Florian Forster:** \[18:10\] We use Terraform to provision infrastructure, and we use GitHub Actions mainly to do that. We still have some stuff in Terraform Cloud, but we're constantly migrating into GitHub Actions and private repositories, because it fits better with our flow. 80% of the company knows how to deal with Git, and so we -- we did a lot of classic GitOps in the past, with like Argo and Flux and all of those tools... We feel comfortable with Git, and so we try to shift as many things as we can into that space. So from a source control perspective - yeah, it's GitHub with us. But that's about it on that end. There are more chat tools, and things around, or auxiliary services, but the core operations really revolve around that stuff.
+
+ **Break:** \[19:03\] to \[21:02\]
+
+ **Gerhard Lazu:** How much of this do you use for the actual Citadel software as a service, where it's all managed for you? I think you're calling it the cloud now?
+
+ **Florian Forster:** Yup.
+
+ **Gerhard Lazu:** \[unintelligible 00:21:11.10\] versus building the binaries that you make available to users? Is it like most of this is for the cloud service, or what does the combination look like?
+
+ **Florian Forster:** I think 80% is for the cloud service, because the whole CI side of the story comes down to GitHub Actions, some Docker files, and some custom-built shell scripts, because protoc and gRPC tend to be a little bit nifty when it comes to building clients. So that basically comes down to that from a CI perspective. And we can basically run all our components we have - like, we have a management GUI in Angular, we have like a cloud GUI in Next.js, we have like the backend of Citadel, which is written in Go... And we can cram everything into GitHub and GitHub Actions, into a Dockerfile, basically. And that's a process that is constantly evolving to reduce drag for additional committers... Because it kind of is -- if you don't make that understandable, it's a high entry bar to commit things.
+
+ **Gerhard Lazu:** Yeah, of course.
+
+ **Florian Forster:** \[22:23\] Yeah, and you need to reduce the drag. I think an honorable mention on that end is like Cypress for e2e test stuff... Because we test our management GUI with Cypress, which basically passes on to our API, so we can do that end to end from our perspective... Yeah, and then there's like automated things, like Dependabot, and CodeQL, and static analysis tools to make the security right... But that's about it, because we think the tool drag - I call it often this tool drag... I don't like having too many tools around to do the same job, because you lack focus if you do that.
+
+ **Gerhard Lazu:** Of course. Of course, that just makes sense. Yeah, that cognitive overload of having ten ways of doing the same thing, based on the part of the company that you're working in... Okay, okay. So how many engineers are there for us to better understand the scale of people that touch this code, and work on this code, and help maintain it? So there's the company, and then the community, because that's also an important element.
+
+ **Florian Forster:** Yeah. Currently, it's like eight dedicated engineers, like software engineers from our end, working on Citadel's code. Some of them also working on things like the Helm chart, and stuff, because it's mainly our engineering staff who does that... And then we have like 20, 25-ish external contributors across the field who do a variety of tasks. Because we have also some separated packages from Citadel. Like the OpenID Connect library, for example, is not in Citadel; we use it on our end, but we needed to create, for example, a Go library for OpenID Connect, and we wanted to have it certified, so we needed to do like an extra leg on that end. And we have separate maintainers on that end from Citadel, because many people are using \[unintelligible 00:24:17.17\] without our knowledge, so to say.
+
+ So it's multiple projects, multiple contributors, but the stack beneath is kind of the same. And with us it's basically the eight engineers who work on that, even though the company now has roughly 15 employees since we started to work more and more on technical writing things, API documentation... Because otherwise, it's not nice to work with--
+
+ **Gerhard Lazu:** Oh yes, the docs. Oh, yeah. Tell me about it. Yeah. Okay, okay. \[laughter\] Oh, yeah.
+
+ **Florian Forster:** Docs is like the conflicting major pain point all the times. And I don't say that lightly, but I think that you can use that on any project - as soon as it's kind of open source, it's hard to get good quality, consistent reading docs... But that's a topic - for example, we want to invest quite heavily over the coming few months, because we really feel like if you want to engage with developers and engineers, you need to have proper documentation. Otherwise, it feels like the impediment is too high.
+
+ **Gerhard Lazu:** Yeah, for sure. And I can't think of a better way to show the users that you care, you care about your product. I mean, very few would dig into the code and say, "Wow, this is amazing code. I'm going to use this product, because it's amazing." How often does it happen?
+
+ **Florian Forster:** \[laughs\]
+
+ **Gerhard Lazu:** But put like a nice docs site out there, easy to search, easy to understand, with good flows, and people go "Wow, they really spent some time", even though the code may not be that good, but the docs are, and then the perception is "This is amazing."
+
+ **Florian Forster:** \[26:02\] Yeah. I mean, I have recently heard the term "Content is oxygen for the community", and I think that's also applicable to the documentation side of things, because it's not only outreach content, but also, and rather important, the documentation side of things. Because even if you write like the best blog about something, at one point you will link to your documentation, and if that link is not nicely, tidy being done - yeah, it breaks the experience. And so if I need to point out one specific thing we need to improve over the coming months, it's really docs. They need to have a clear flow. "Where should I start? From where can I go to what?" So they need to reflect the user journey, basically, and that's kind of the biggest rework we will do, is restructure docs to better appeal for that.
+
+ **Gerhard Lazu:** Yeah. Okay, that makes a lot of sense. Yeah, for sure. For sure. I'm just taken aback by how simply you've put that. It's a very hard problem, right? And it all boils down to this. So if you don't get this, then forget everything else, about like making them easy to understand. I mean, that's important, but what are the flows? Where do you enter? What are the entry points? Where do you drop off? What happens next? What is the follow-up? What is the story that you get when you go to the docs? And if you just get references -- by the way, there are so many types, actually... Four, as far as I can remember; there's guidelines, there's references, there's...
+
+ **Florian Forster:** Examples, oftentimes...
+
+ **Gerhard Lazu:** Examples, exactly.
+
+ **Florian Forster:** API documentation...
+
+ **Gerhard Lazu:** Oh, yeah.
+
+ **Florian Forster:** In our case, we split out the whole self-hosting part, because in our cloud it's not applicable. But if you want to run it on your own, you need like production checklists, examples to deploy it to x, y, z, how to configure all the nifty details, how to configure a CDN, how to configure TLS... I mean, there's a whole array of topics just for the self-hosting stuff. And so you kind of need to figure out that flow, and it will take you a lot of time to do that. But if you figure it out, it will get beautiful. The only thing you can break at that point is basically to style how you write content, in what kind of language, and that's especially difficult if you have non-native speakers and engineers; they tend to write different documentations, as dev rels, or as content marketing guys or gals, because it's just a different way on thinking of it... That's the second thing to get right there.
+
+ **Gerhard Lazu:** For sure. For sure. Okay, so I'd like us to come back now to your talk, because one thing which I really liked in that video, and in your presentation, is you talked about why you haven't chosen Microsoft, and why you haven't chosen AWS. It's not like, you know, "We haven't even looked there." You did try them, you did consider them, but there were certain things which wouldn't work for you. So tell us a little bit about that, how you ended up with Cloud Run, which I don't think it was your first choice, but basically, you ended up there because of your requirements.
+
+ **Florian Forster:** Yeah, the first and most prominent thing that struck us was kind of having end-to-end HTTP/2 support. Because we provide gRPC APIs to our customers, and they need HTTP/2. And while verifying that with all the different offerings, it was kind of hard to either get proper documentation on whether they support it or not, or they oftentimes only supported it from their CDN to the customer's site, but not in the upstream... And so that was kind of really like bogging us down on that end.
+
+ \[29:51\] I mean, we could have chosen the route and said "Okay, we do not offer gRPC, but only gRPC-Web and REST, because we supply that as well." But we really wanted to have the HTTP/2 capabilities, because we think at one point there is a unique opportunity to be taken to use streams and stuff for identity-related things. So if something changes, we can notify you immediately, which can be an interesting way of thinking of it. And it reduces latency quite a lot. I mean, our own measurements state that it saves seven milliseconds each call, purely down to JSON serialization... Which is not like a bad thing, but it's seven milliseconds.
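The seven milliseconds is their measurement, not a general constant, but a toy loop gives a feel for where JSON overhead comes from. A Go sketch with an invented payload shape:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Invented stand-in payload; the real savings depend on the
// actual message shapes being (de)serialized.
type Claims struct {
	Sub    string   `json:"sub"`
	Email  string   `json:"email"`
	Roles  []string `json:"roles"`
	Expiry int64    `json:"exp"`
}

func main() {
	c := Claims{
		Sub:    "12345",
		Email:  "user@example.com",
		Roles:  []string{"admin", "viewer"},
		Expiry: 1700000000,
	}

	const n = 10000
	start := time.Now()
	for i := 0; i < n; i++ {
		b, _ := json.Marshal(c)
		var out Claims
		_ = json.Unmarshal(b, &out)
	}
	fmt.Printf("JSON round-trip: %v per call\n", time.Since(start)/n)
}
```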
+
+ **Gerhard Lazu:** Yeah. It adds up, right? Seven milliseconds here, three milliseconds there... Before you know it, it's a minute.
+
+ **Florian Forster:** No, really, if you have microservice architectures -- I mean, if you have like five services cascaded, and every time they call like our introspect endpoint to validate some tokens, it adds up. It's 50 milliseconds of serialization alone at that point. But that was just the decision they made there, and really, the major breaking point was the HTTP/2 stuff. It was just too confusing, not clearly stated. We started poking around, we saw that eventually you can make it work, to some degree. For example, Azure - what's it called now? Azure App Container Instances, I think... They now use Envoy as proxy, so you now get HTTP/2... But as soon as you want to use the web application firewall - yeah, well, you're in a bad spot, because that thing only supports HTTP/2 to the front, and not to the back. And the CDN as well. So it always did come down to friction on that end. And so yeah, we chose Google Cloud Run, exactly. That's one of the major reasons we chose Cloud Run, even though we don't like some of the limitations with Cloud Run. There are some which we don't like.
+
+ **Gerhard Lazu:** Tell us about them.
+
+ **Florian Forster:** I mean, I still to this day don't fully understand why Google Cloud Run needs to use their internal artifact registry... Like, you need to push your images into Google's registry, and from there on out, you can fetch it in Cloud Run. I don't know why that decision was made. There might be a technical reason to that. I mean, you could argue on availability; that would be a ground. But I don't like that fact. And the other thing that really is kind of a bad thing is if you want to use VPCs, you need to use the VPC connector, which now can be edited. I guess they released that like one month back, or something like that, but you need to have basically VMs in the background that handle connectivity from your Cloud Run to your VPCs. And since we use Cockroach, that traffic passes through a VPC, and we use like \[unintelligible 00:32:47.18\] gateways, so we pass traffic through that anyway, because we want to have control over what traffic leaves our site, and stuff. And that VPC connector thing is always there, so... yeah. It's there, I don't like it, because it does not scale down; it's just scaling up, and then I have like 10 VMs running in the background, doing nothing. But yeah, it's kind of a thing I don't like. But other than that, it's a great tool.
+
+ **Gerhard Lazu:** That's right. One thing that maybe we haven't done as a good job to convey this is that your service is global. So when you're saying VPC connect, you don't mean just in one region or in one zone; you mean across the whole world. So how many pops do you have worldwide, where Citadel runs?
+
+ **Florian Forster:** We have like a core pop region. It's like three regions we run constantly, that comes down because we run our storage there as well. And sometimes if we see different traffic profiles, we start regions without storage to them, just to get some of the business logic closer to customers. So that can range normally from three to nine regions during normal operations. But since Google's internal network is quite efficient from our perspective, and their connectivity is really great, we don't need to spread it to more regions than that normally.
+
+ \[34:13\] We did some internal experiments, we built like a small Terraform function where you basically can throw in a list of regions you want to deploy, and it will basically deploy to 26 regions in like one to two minutes...
+
+ **Gerhard Lazu:** Wow.
+
+ **Florian Forster:** That works really well. But you get strange problems if you do that, because sometimes you want to have like hot regions, because your application is anyway running, it can serve traffic quite easily... If you have to cold-start a region, it always takes a few milliseconds to do that. And it's not like a big thing, but it can influence customers' view on your service. Because if you hit the login page and it takes like two seconds to get everything spinning, and database connectivity set up, and everything, it gives you some drag. And so we're trying to keep hot regions, as we call them.
+
+ **Gerhard Lazu:** Yeah, that's crazy. Like, you say that two seconds is slow for a whole region to come up. It's like, "What?!" \[laughter\] Like, try booting something; it will take more than two seconds. Anything, really. Wow, okay...
+
+ **Florian Forster:** I feel it's like engineering ethos that you might at some time over-engineer certain things... But still, it feels right to do that, because it's more easy to just scale up an existing region and throw some more traffic into that. You can easily steer that around. The thing you most of the times will miss out is basically 50 milliseconds. There is edge cases with different regions... So if you live like in Australia - yes, we don't have like an immediate region in your vicinity... But yeah, that's a matter of - if you have enough traffic, you will actually open a region at one point, and then you try to keep it hot as long as you can. And I always think the classic engineering decision that comes down to that is the same thing that Cloudflare and Fastly are constantly arguing around... I mean, the last time I checked, Cloudflare was still "Let's build small regions across many places", and Fastly was like, "No, let's build like huge data centers, with huge compute to them." It's a matter of what your service needs to decide that, and we decided that "Yeah, two seconds feels bad."
+
+ **Gerhard Lazu:** Yeah. But it's interesting... Sometimes routing to a region which is further away, sending a request to that region, can be faster based on your workload. And even though your workloads are super-optimized, for something to be up and ready in two seconds, that's just crazy. Try doing that with Java... \[laughter\] Right? Or something which is like slow to start. Not picking on Java, but it's known for slow starts, which is why you wouldn't stop it. And that would be like a non-starter; like, you can't even consider that for what you're doing. Maybe GraalVM does things better. But the JVM is slow by design, because it runs optimizations... I mean, a lot of things need to happen. Again, nothing wrong with that, but not suitable for this workload.
+
+ **Florian Forster:** I mean, it's even small things involved into getting like fast startup latencies... One big drive of-- we hypothesize - we don't really have evidence, but we hypothesize - is that the image sizes of your containers influence that quite a lot. Even though I think Google does quite a lot of magic in caching things in Cloud Run to scale it quickly. But nonetheless, we see that bigger images take more time. So for example, we have a documentation page built with Docusaurus, and we normally use Netlify to deploy those things, and we are now currently testing to move that to Cloud Run as well. And that container is approximately 500 megs in size, because of the images and stuff, and it takes more time to start. So it takes like three to four seconds just to get like the node server started, even though everything is pre-built, so it's not like we are compiling stuff on the fly. It's really like start the Node server with static assets.
+
+ **Gerhard Lazu:** \[38:14\] Okay, okay. Yeah, you're right, you're right; that can make a big difference as well. So a few seconds is not bad, right? Especially if we have like a blue/green style of deployment, where -- and I know that Cloud Run supports that. So you're not taking the live version down, and that's okay... Something that needs dynamic requests - for that it's a bit more difficult, right? Because it needs to service it, it needs to keep the connection open... There's like a couple of things happening, rather than just static ones, where you can just cache them, use a CDN. So you said that you are using a CDN, right?
+
+ **Florian Forster:** Yup.
+
+ **Gerhard Lazu:** Okay. Okay. How is that like? How are you finding the Google CDN? Because I haven't used it in anger; I mean, only small projects... How does that work in practice?
+
+ **Florian Forster:** I actually quite like it. One of the things we like the most is that you can cache assets across multiple domains. So for example, each of our customers has their own domain name, and our management GUI is built with Angular, so we have a lot of static assets to that... And if one customer accesses that data in one region, we can cache it for basically every customer. And that's nice, because you can basically ignore the host name, and instead just cache the files. And that's a feature I have not easily found in Cloudflare's or Fastly's offering. I mean, you can always make it work to some degree, but that was basically -- yeah, just input a validation rule into the Google CDN and it will take care of all that stuff.
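Conceptually, the cross-domain trick is just a cache key that drops the host. A toy Go illustration of the idea - on Cloud CDN this is a matter of configuration, not code you write yourself:

```go
package main

import (
	"fmt"
	"net/url"
)

// cacheKey mimics the idea Florian describes: shared static assets
// are keyed by path only, so one customer's fetch warms the cache
// for every custom domain pointing at the same files.
func cacheKey(rawURL string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	return u.Path, nil // deliberately ignore u.Host
}

func main() {
	for _, raw := range []string{
		"https://login.customer-a.com/assets/main.js",
		"https://auth.customer-b.io/assets/main.js",
	} {
		key, _ := cacheKey(raw)
		fmt.Printf("%s -> %s\n", raw, key)
	}
}
```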
+
+ And the overall strategy with Google's pricing is more beneficial to our end, because we basically only pay usage, and we are not feature-locked. And with the Cloudflares, Fastlys, and everybody, you basically are always feature-locked until you get to their enterprise offering. And at that point - yeah, the cost is quite steep to be paid at that end... Even though they have great offerings, but it feels wrong to spend so much money on a CDN, when it's just -- it's basically caching some static assets. I mean, it's not doing the heavy-lifting, it's more quality of life improvement, I would call it.
+
+ **Gerhard Lazu:** I was reading something about this as well, enterprise features, feature locking, things like that. It was your blog, where you said you charge for requests, right?
+
+ **Florian Forster:** \[laughs\] Yes.
+
+ **Gerhard Lazu:** See, I have done a bit of research; not too much, but I did notice that, where you mentioned that... And to me, that is very reasonable. You don't have to upgrade to higher price tiers just to unlock certain features. I mean, why? Does it cost you more? I mean in development time sure, but you don't get the full experience of the service. Per seats - again, that pricing can work in some cases. So what influenced your decision to do that? I thought that was very interesting, for all the right reasons.
+
+ **Florian Forster:** We thought long and hard about our pricing, so many times in the past... We even had like a feature locked model, closely to what Cloudflare does... And what does not reflect well in the security area is if you want to provide your customers with a security service, you should give them the means to have a secure application, and not to tell them, "No, if you want to have 2-factor, you need to pay extra", because that kind of defeats the purpose of having an external specialized system of handling the security in the first place...
+
+ And the second thing there is like if you price by identities, customers will stop creating identities at one point, if they can choose. And we wanted to remove that sensation by telling them, "Hey, store as many things as you like, do as many things as you like. The only thing we want to have from you in return is we need to be able to finance our infrastructure to some degree, and so it comes down to pay us for the usage."
+
+ \[42:20\] That's really what it boils down to... Because it feels like a nice trade-off, even though - and I can be honest on that, and it's during sales meetings - it can sometimes be a problem or impediment, because people still think in users. "I want to have like a million users. I want to have a price for a million users." And I mean, we have a lot of data, we can do cost estimations for that; it's not like a big problem. We even do over H deals where we say "Okay, we'll give you like 10 million requests a month for a price XYZ, and we will use it for 12 months, and we will check after five months how it was working", because we want to reduce that friction out of the equation. But it's just a matter of different strategy, and we are committed to that end, because it feels like the right thing to do, even though it has some challenges.
+
+ **Gerhard Lazu:** Yeah, for sure. For sure. Yeah, I mean, to me, that sounds like a more sensible approach, a more honest approach, a more open approach. Everything is out there for you to use. There's like one requirement - that we are able to support all your requests, and we are able to give you the quality of service that we know you want... And these are the SLOs, and that's what it will cost. Okay. How long have you been running the cloud service for, your cloud service? Six months, 12 months?
+
+ **Florian Forster:** Now, looking at the date, it's like seven.
+
+ **Gerhard Lazu:** Seven months. Okay.
+
+ **Florian Forster:** Yeah. The thing we call Citadel Cloud now is now seven months in age... But we had a service we called Citadel v1, with kind of a different sensation to it, with the old pricing I just mentioned... And that was started in mid-2021. But we learned so many things across that journey that we needed to reconsider some of the things, like pricing, deployment strategy, locations where we deploy, because customers actually care sometimes about that... The API needed to be reshaped to a degree... And so yeah, it comes down to an evolution of Citadel, like from version one to version two, and our cloud service changed as well. And so the new service is basically seven months in age right now.
+
+ **Gerhard Lazu:** Okay. What are the things that worked well for the cloud service in the last seven months? Good decisions, that proved to be good in practice?
+
+ **Florian Forster:** I think it's not only directly the cloud service, but the overall change in our messaging, what we actually want to sell, and why we recommend that you use our cloud service - that message is being picked up better since like the seven last months. So that's a thing we constantly improved. So many people now use the free offering we have, because it provides already a lot of value, and we are even considering increasing the amount of things we give you, the amount of requests as well, to get developers an easygoing free tier that they can actually start building software. Because nobody really likes to run things. And I think that's the most -- let's call it the biggest change I experienced so far in behavioral things... Because everybody's always shouting, "I want to run system XYZ on my own hardware", but in the end everybody turns to some kind of free hosted offering, because everybody just knows "Oh, no, I don't want to take care of backups. Oh no, it just runs. Oh no, I need to start it again." So --
+
+ **Gerhard Lazu:** "Upgrades? Again?! Oh, my goodness..." When did you last upgrade your phone? Serious question.
+
+ **Florian Forster:** My phone... I'm quite pedantic, so I will catch up on releases in one to two days. \[laughter\]
+
+ **Gerhard Lazu:** \[46:14\] Okay, that's a great answer... But for me, the updates just happen, right? I mean, unless it's like a major update, your applications on your phone - they just update. It's not a problem that people think about, or should think about. So if you run it yourself, guess what? You have to think about that. And then you say, "Oh, dammit, I want this free auto update feature. Why don't you give it to me?" Okay, well, it's not that easy. I mean, the easiest auto-update - like, delete it, and then deploy it again. And then you're good. But people don't want that. So the point being is, you want the service, because this stuff should be seamless, and someone needs to put in the effort for it to be seamless every single time.
+
+ **Florian Forster:** It's hands-off. We really call it hands-off. I mean, we take care of the TLS stuff, we will take care of updates, of backups, of rate limits, of malicious traffic... Everything is just handled for you, and that's a value I think is going great with the community... Even though Citadel's open source version is really like -- there is like 99% of the things that we have in our cloud service is in the open source code, and then you can run it on your own. You can even get like an enterprise license with us to get support, and stuff. So we really encourage you to do that. But the main reason we still encourage doing that is we see many customers having special requirements on data residency, or data access... And we always tell them "We don't want to do like a managed service offering for you guys, because it feels wrong. Because we still have access." And if your reason is, "Nobody else should have access", well then you need to run the system on your own.
+
+ **Gerhard Lazu:** Yeah. Yeah, that makes sense. I forgot, you're a security company, right? \[laughter\] And security has this very important requirement. "No, sorry, I have to run this." I mean, I understand... I can pay you to run it for me, but it has to be in the specific locations, with these restrictions, and... Yeah, that makes sense.
+
+ **Florian Forster:** Yeah. So that's the thing I think went great with the cloud service. So free tier is definitely a thing we will reiterate on, even though you could run it on your own. I mean, it's not like there is like a feature gap, or something like that.
+
+ **Gerhard Lazu:** Okay. What about the things that you wish you knew, before we \[48:29\]? \[laughter\] The "Oh, f\*\*k!" moments. And that will be bleeped, but...
+
+ **Florian Forster:** \[laughs\] Yeah, there are many. Honestly, there are so many. I'm going to choose to focus on the operation side of things for the first moment. It's really like, don't try and build many funny things, even though there are great open source tools around, and everything is ready. Just try and relax a little and use ready-made services in the beginning. Because we thought "Okay, let's run our own Kubernetes on GCP", and stuff. "Let's get more control of it." Yeah, it was going great, but the added cost and the added slowness you have while maintaining your own Kubernetes and stuff - it's not worth it. So that's really -- just use turnkey services to begin with. And at one point, be ready to make decisions to change that stuff to the more enterprisy side of things... Because \[unintelligible 00:49:36.04\] application to Netlify, and Vercel, and calls it a day... But I think that's only worth it for the start. At one point you want to get more control, more flexibility... You want to create rate limits, you want private IPs, you want to have like the enterprisy things... And you get that way easier if you start focusing on using like Google Cloud, AWS, Azure, whatever is your poison, basically... Just use an infrastructure provider for that.
+
+ \[50:10\] While reflecting back, I feel like that would have helped us decrease some of the drag along operation efforts. And as well, don't run Cockroach on your own; just use the cloud service. Why not? You must really have valid and specific requirements that do not allow you to do that before you make that decision. So that's really a big thing on that end.
+
+ **Gerhard Lazu:** Okay.
+
+ **Florian Forster:** Other things? Yeah, our startup lesson is like "Don't assume things. Always validate things. Talk to your customers, talk to potential customers whether a feature is really needed." And we built, for example, too many features in Citadel. We assumed too many things in the past, and so we now strip some of the things that nobody actually needs. And you need to have like data to make that judgment, but other than that, it really comes down to "Check first whether somebody needs something", and not only one guy, but also multiple guys, and then build it, and not the other way around... Because you get a lot of dead code that you need to maintain, and it can have bugs... And yeah, so that's really a lesson we're mentioning.
+
+ **Gerhard Lazu:** Yeah. It's a good thing that you are pruning some of this stuff. Because what usually happens - you never touch it. You add it, but then you never touch it again. And that too often ends up in some very big messes, that no one wants to touch, ever... And then things just die like that, you know?
+
+ **Florian Forster:** I mean, as soon as you see somebody raising a concern, or a bug over something, and you think, "Is that feature even being used?" And you can't really validate to yourself, "Yes, that's actively being used." You should invoke a discussion if you want to rather remove it, because just one person is using it. It's just maintenance effort.
+
+ **Gerhard Lazu:** I'm wondering, on the operational side of your cloud service, how do you know when something doesn't work as expected? How do you know that there is a problem, basically, is what I'm asking?
+
+ **Florian Forster:** Let's say I am a huge fan of using existing data to get a sensation whether something's healthy or not... By which I mean we try and avoid active health checks; like, we don't ping Citadel from the outside world and figure out whether a service is available or not. I'd rather use the logfiles and throw them into some kind of analytics engine to figure out "Okay, how many status codes do we have, in what variety? How many calls? What's the latency?" Because that gives you a broad understanding of the health of your service across regions. Because if an error rate is growing, and that's being tied into a region, you'll normally know it's more an infrastructure-related problem. If the error rate is growing across the globe, it might be more on the storage side of things, because that's a global thing... And so you get quite fast a broad sensation of how things are.
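A toy Go sketch of that passive approach - deriving per-region error rates from request logs instead of pinging the service; the records here are invented:

```go
package main

import "fmt"

// A log-derived record; in their setup this would come out of an
// analytics engine fed by request logs, not a hard-coded slice.
type logEntry struct {
	Region string
	Status int
}

func main() {
	entries := []logEntry{
		{"europe-west1", 200}, {"europe-west1", 500}, {"europe-west1", 200},
		{"us-east1", 200}, {"us-east1", 200}, {"us-east1", 200},
	}

	total := map[string]int{}
	errs := map[string]int{}
	for _, e := range entries {
		total[e.Region]++
		if e.Status >= 500 {
			errs[e.Region]++
		}
	}

	// An error rate climbing in one region points at infrastructure;
	// climbing everywhere points at the shared storage layer.
	for region, n := range total {
		fmt.Printf("%s: %.0f%% errors\n", region, 100*float64(errs[region])/float64(n))
	}
}
```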
+
+ And the other hugely important thing to us is like having traces - like OpenTracing. We use Google Cloud Tracing for that stuff, to get a sensation on whether releases change things, or smallish changes, or A/B tests also... Because sometimes we create a new branch in our open source repository, and refactor some of the code - for example, get-user-by-ID being reworked with a new SQL statement - and then we deploy it to production to get some traffic to it, and see how latency shifts around.
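A minimal sketch of that pattern using OpenTelemetry's Go API (the episode mentions Google Cloud Tracing; the function and span names here are invented): wrap the query being A/B-tested in a span and compare latencies across deploys.

```go
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel"
)

// Invented function name standing in for the refactored query.
func getUserByID(ctx context.Context, id string) {
	// Comparing span durations before and after a deploy shows
	// how the latency of this one operation shifted.
	ctx, span := otel.Tracer("citadel-demo").Start(ctx, "get-user-by-id")
	defer span.End()

	_ = ctx // the reworked SQL statement would run here
	fmt.Println("looked up", id)
}

func main() {
	// With no exporter registered this falls back to a no-op
	// tracer, which is fine for a sketch; wiring an exporter to a
	// backend like Google Cloud Tracing is deliberately omitted.
	getUserByID(context.Background(), "user-123")
}
```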
+
+ So it really comes down to observability based on the data you have around, but not checking it actively, because that gives you a wrong sensation. Because you're always checking happy paths there. I mean --
+
+ **Gerhard Lazu:** \[54:03\] Yeah, true. Yeah, that's a good point.
+
+ **Florian Forster:** Yeah, you can check a login page. I mean, filling in fields, pressing a button. It will work. But what happens if it's not working because the user has strange extensions in the browser, because they have strange proxies in their environment, because mobile connections are reset... So it does not reflect the natural world.
+
+ **Gerhard Lazu:** Yeah, that's a good point. Okay. So we're still at the beginning of 2023... What are the things that you're looking forward to? ...for Citadel, for your cloud service, for what you do, for the space, for the wider security space. You can answer this whichever way you want. That's why I'm giving you a wide berth. Pick whatever resonates with you the most.
+
+ **Florian Forster:** We have identified some needs for our customers who want to have more control over their general user experience... Because currently, people use the hosted login page from Citadel, which can be white-labeled, and you can basically change everything, and customize it... But still, it's like the \[unintelligible 00:55:11.09\] for you. Even though it's a good approach, because we can include all the security policies into it, and verifications, and stuff... But there is a huge demand in the developer space for having like only authentication APIs. So they basically can send us a username, password, and we will respond by true or false. So people want to create their own login experiences, as well as their own register experiences. And that's a thing we will tackle in the next few months, in the next coming months. We want to extend our APIs so that people can build their own login pages.
+
+ And during that phase, we will also change our login, because now it's built with Go, and Go HTML templates, and that's kind of not so beautiful as it could be. There might be a point where we change that to Next.js, to get like a more SDK-y approach, so that we kind of build our login with Next.js, and we will provide you the template that you can clone and create your own login without our intervention.
+
+ So Citadel might become more of a headless identity system in one place, where we just provide you the means to have different components that you can deploy, or if you don't want them, you can get rid of them. So that's kind of a natural evolution path we see there.
+
+ And the other big thing we will change in 2023 is we will extend our actions concept more. Basically, actions are -- you can invoke customized JavaScript code in Citadel at certain points. Like, if a user is created, you can call a CRM system with some JavaScript code, you can fetch some information from there, and the whole action engine will be reworked so that we can allow for more flexibility. So you basically can think of it like a GitHub Action; you can subscribe to events, and then execute something. And that's a thing we encourage quite heavily, even though it has a steep cost to be paid in regard to runtime security. I mean, running foreign code in a system is always not so funny to do...
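As a sketch of what embedding such hooks involves, here is a toy Go program running a user-supplied JavaScript hook with the goja interpreter - goja is an assumption on my part; the episode does not say which engine Citadel embeds:

```go
package main

import (
	"fmt"
	"log"

	"github.com/dop251/goja"
)

func main() {
	// A user-supplied hook, in the spirit of the actions concept.
	userScript := `
		function onUserCreated(user) {
			return "welcome, " + user.name;
		}
	`

	vm := goja.New()
	if _, err := vm.RunString(userScript); err != nil {
		log.Fatal(err)
	}

	var onUserCreated func(map[string]string) string
	if err := vm.ExportTo(vm.Get("onUserCreated"), &onUserCreated); err != nil {
		log.Fatal(err)
	}

	// The hard part Florian points at is not calling the hook, but
	// sandboxing it: timeouts, memory limits, and no ambient I/O.
	fmt.Println(onUserCreated(map[string]string{"name": "Ada"}))
}
```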
+
+ **Gerhard Lazu:** Oh, yeah...
+
+ **Florian Forster:** We do a lot of pen testing, and testing in general... And that's the reason why you can't really run it everywhere currently in Citadel, because we want to reduce the threat surface, to get our bearings whether the engine works, and everything... So that will be a subject to be changed in 2023. So those are the two biggest, prominent points I think you will see from us on that end.
+
+ There is more underlying stuff, especially around machine learning and things, because we -- I mean, Citadel is built on an event sourcing concept, so we have a lot of data available... And we want to give our customers the option to train threat prevention models with their own data, to compensate for signal-based risks... I mean, that's kind of a little bit on the academic side of things, and we are working with research partners on that end. But bringing value from the data is a huge thing that we want to provide our customers.
+
+ **Gerhard Lazu:** \[58:28\] Yeah, okay. Okay. What about the cloud? Anything for the cloud that you have planned?
+
+ **Florian Forster:** Some things... \[laughs\]
+
+ **Gerhard Lazu:** Some things. Great. That's a great answer. We can move on. All good.
+
+ **Florian Forster:** I'm not sure how much I should already disclose. \[laughs\]
+
+ **Gerhard Lazu:** No, no, all good. Let's move on. Not a problem. All good. \[laughs\]
+
+ **Florian Forster:** Let's see... A thing I can definitely disclose right there is the -- we are strongly considering opening additional regions, because we see now where traffic is originating from, and we are considering to expand our footprint on that end... And also a thing that will land in our cloud eventually is a feature I did not disclose yet. It's basically an event API. It's a simple API; you can basically fetch everything that changed in Citadel, with the whole body to it, like first name changed to whatever, because that gives the developer a great way of backpressure processing of things that changed, so they get a proper change track of everything.
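Since the event API was not public at recording time, the following Go sketch is purely hypothetical; it only illustrates the cursor-based, consumer-paced ("backpressure") shape Florian describes, and the endpoint and field names are invented:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Hypothetical change event: a sequence number to resume from,
// an event type, and the full changed body.
type event struct {
	Sequence int64           `json:"sequence"`
	Type     string          `json:"type"`
	Payload  json.RawMessage `json:"payload"`
}

func fetchEvents(after int64) ([]event, error) {
	url := fmt.Sprintf("https://api.example.com/events?after=%d", after)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var evs []event
	if err := json.NewDecoder(resp.Body).Decode(&evs); err != nil {
		return nil, err
	}
	return evs, nil
}

func main() {
	var cursor int64
	evs, err := fetchEvents(cursor)
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	for _, e := range evs {
		fmt.Printf("#%d %s\n", e.Sequence, e.Type)
		cursor = e.Sequence // advance only after processing
	}
}
```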
+
+ And that thing - I mean, the event API will land at one point in our cloud service, but that needs some rigorous testing to be sure that all the inner fencings of our cloud service work out well, not that customers see the wrong data, and stuff. So that's a thing. And we are experimenting with different CPU profiles in our cloud service to reduce some of the latency we see, especially during hashing operations... it's one small \[unintelligible 00:59:55.17\] But there is like a limitation in Cloud Run's offering; you can't have really high CPU containers without high memory, and we don't need so much memory. Normally, Citadel uses 500 megs of memory; we don't use more. The rest is being handled by the storage layer and caching.
+
+ So yeah, that's a thing that needs untangling... So either we can use resources better, or we can somehow influence to have more CPUs in there, to get better latency. Yeah, that's an ongoing experiment. We always try to wiggle out some things.
+
+ **Gerhard Lazu:** Yeah. Okay. Well, that sounds like a very good plan for 2023. Let's see how much of that actually happens... As we know, the plans are the best.
+
+ **Florian Forster:** \[laughs\] Yeah, yeah,
+
+ **Gerhard Lazu:** Everything's gonna be amazing, and then reality hits, and then you realize that half the stuff you thought would work will be impossible. So that's my favorite - like, get it out there, see what sticks, see what doesn't, keep making those changes, those improvements, drop what doesn't make sense... Whatever.
+
+ **Florian Forster:** It's really testing things. And there is also some discussions around a reiteration of the pricing. For example, a thing we are currently testing, as well as deciding, is whether we want to give away a domain for free, for example. Because currently, if you want to have like a production domain with Citadel Cloud, you need to pay 25 bucks a month. But we are strongly considering whether we want to provide that domain name for free to developers, because it reduces drag along their journey. I mean, they want to start poking around, they want to use it... And if our cost attached to that is not so high, it's no real problem. And since Google changed some of their offering in the TLS space, you can get like customer certificates quite easily, without a huge cost profile. So a certificate would cost you only like 10 cents per month, per customer... So it feels like the right way to do it, but it's not yet a done deal.
+
+ **Gerhard Lazu:** \[01:02:06.03\] Okay, that sounds exciting. Not as exciting as your takeaway, because I'm sure that you had the time to think, as we were discussing this, about the one key takeaway that you want listeners that got as far as this in the conversation with us to leave with. So honestly, this was very eye-opening for me, to see how maturely you're thinking about some very hard problems, like distributing code globally, latency, and the scale that you're thinking... You're saying a few seconds is too slow. And to me, it's like "Whoa, what?" Sometimes requests take longer than a few seconds, because it happens, because maybe you're on a phone, or on a watch, or whatever the case may be. And then your CPU isn't as fast, or the cellular doesn't work as well as you may think.
+
+ So to me, from what I'm hearing is operationally, you're very advanced. And you've tried a couple of things, and you've seen a bunch of things that didn't work out very well in practice, even though the promise is there, and the marketing is working well for certain things... So you have a lot of like -- I think street-wise, it's called; you're street-wise. You've been out there, you've tried a few things, and you know what works and what doesn't for you. So, again, for the listeners that made it thus far with us - and thank you all for sticking - what would you like them to take away from our conversation?
+
+ **Florian Forster:** I think the most important thought - in my space, to be specific - is don't think of the authentication system as two input fields and a button. Because that's plainly under-estimating the amount of effort and depth that goes into such topics. And so I would encourage every listener to always think thoroughly across the reasoning, whether you want to use just a framework in your application, or just create your own login thing... Because there is a huge attached cost to that in regard to operational security. Because you need to maintain it, you need to pen test it, you need to prove the security of the processes, and all that - let's call it dirty plumbing work, to make it happen.
+
+ And so I really encourage everybody, please use some kind of turnkey solution that's battle-tested... Even if you just use a framework from your language-specific ecosystem - don't build it on your own; you will hurt yourself at one point. And you don't need to take my word for it, but a general agreement in the industry is that it's better to have somebody deeply committed to the topic of authentication and authorization, to have them working on that, and not you. Just use something, and build great features and great products. So that's really the one thing I want to get across to everybody.
+
+ **Gerhard Lazu:** It just shows that your head is where your heart is, right? ...which is authentication, authorization... I mean, if you build a company around it, you really have to believe in that thing... So you're committed through and through. And that continues to be top of your mind, which is important.
+
+ **Florian Forster:** Yeah, I mean, it's such a huge -- authentication and the identity space in general, it has a huge depth to it. And it always feels like you can easily do that. But as soon as you start poking into the space, you will see there is like a huge amount of time flowing into it. I mean, the OAuth threat framework is like 60 pages. I recently did something for the Swiss government, which will come out in a few months... It's like 120 pages on just things you should consider when building something like that... And it starts with things like XSS, and CSPs, and... You need to care for that. It's just -- the depth is the problem, basically.
+
+ **Gerhard Lazu:** Yeah. Okay. So it may seem simple when you consume it as a service, but there's a lot that goes into it. And if you think you can do it - sure, but there are dragons there... So at least just be aware of them. Just don't get eaten without knowing what you're getting yourself into.
+
+ **Florian Forster:** Yes. I mean, you can always poke... If you want to make funny games, just go to a login page where you have a user, throw in your username, throw in your password, and if the response is coming back faster than 500 milliseconds, I can tell you something is broken already, because no solid password hashing algorithm will return a result that fast. Unless they run huge CPUs - the Xeons and stuff, I call it. Otherwise, you can't get that hashing through so fast. But yeah, that's just a sensation. That's a thing I always do - I poke around login pages. \[laughs\]
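Florian's poke-the-login-page test fits in a few lines of Go. The endpoint and form field names below are placeholders - and, obviously, only point this at systems you are allowed to test:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"time"
)

func main() {
	// Deliberately wrong password; we only care about timing.
	form := url.Values{
		"username": {"someone@example.com"},
		"password": {"wrong"},
	}

	start := time.Now()
	resp, err := http.PostForm("https://login.example.com/password", form)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	elapsed := time.Since(start)
	fmt.Printf("status %d in %v\n", resp.StatusCode, elapsed)
	if elapsed < 500*time.Millisecond {
		fmt.Println("suspiciously fast - hashing may be too cheap (or absent)")
	}
}
```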
+
+ **Gerhard Lazu:** Well, Florian, thank you very much for joining us today. It was an absolute pleasure hosting you. Thank you. It was a great conversation, and I look forward to the next one.
+
+ **Florian Forster:** Thank you. Likewise. I really liked your questions. And you see, I'm still laughing...
+
+ **Gerhard Lazu:** Yeah, exactly. So I've done something right. Thank you very much for that. Until next time. See you.