Columns: url (string), transcription (string), title (string), duration (float64), uploader (string), upload_date (string), description (string), datetime (string)
https://www.youtube.com/watch?v=15DqlUTgYYM
Our speaker today is a Silicon Valley veteran who has forecasted tech trends, launched companies, and leads a community of AI founders and investors. Currently, he's leading the AI fund at Blitzscaling Ventures, investing in early-stage AI startups. The firm is based on the famous book Blitzscaling by Reid Hoffman and Chris Yeh. Please welcome Jeremiah Owyang. Thank you. What an honor to be here in the center of Silicon Valley. This is where innovation is born and where we think about the future. The future we're going to talk about today is the future of AI agents, agents, and agents. I'm Jeremiah, general partner at Blitzscaling Ventures. I started tech research firms here in Silicon Valley. I was a Forrester industry analyst forecasting the future, and I've worked at tech companies all over the valley. What I love to do is think about how this next generation of technology impacts the next wave, and what it means for business, society, and us. I lead a community called Llama Lounge. Hundreds of AI founders assemble from around the globe, and I get to meet the top founders and understand what they're working on, which gives me the ability to see what's around the corner. And what we know is that the biggest industry right now in AI is the AI agent space. It's destined to grow 10 times every five years. Right now it's estimated at $5 billion, which will grow to over $65 billion, and then to an estimated $500 billion, all within the next several years. This is the hot market growing within the AI space. But what the heck is an AI agent? Well, let's start with the basics. You probably use an AI copilot today. It's inside of your apps, integrated and working with you in real time. Next, you've probably noticed that some tools, like Microsoft's, have AI assistants that run through different apps, working synchronously or asynchronously within a suite of applications. The next phase is AI agents. Now, they're different.
They can work while you're sleeping. They can work across multiple apps, and they have a few characteristics. First of all, they can self-task: they don't always need a prompt. They have a memory, they can recruit other agents, and they learn and improve over time. You've got to think of them as living creatures, young children, junior employees, assistants. Pretty soon you might have 10, 20, or 100 different AI agents working for each of you, like you're a billionaire with all these assistants. AI agents are the future, and they lead us to the next phase, which would be artificial general intelligence, equal to human capability and thinking, and then eventually to something like superintelligence, which Nick Bostrom's book describes as over a thousand times human intelligence. So it's all on the evolutionary path of these AI entities. Now I'm here to share with you our thesis. Thesis number one: right now, people use the internet. You physically go out and find different websites to get information. You complete tasks: you order flights, you order e-commerce, you get things done. Even inside your enterprise, you have to fill out expense reports and time cards. All of these things are wasteful and not great experiences. In the future, your AI agents are going to go out and do those tasks for you. The dominant entity on the internet and the intranet will be AI agents, not humans. This is a big change, because it means they will go out there, complete those tasks, and bring the information back to you. With AI agents, we humans no longer need to go to the internet; they will bring it back to us. Thesis number two: the information will be reassembled in the way that you want, when you want, in a multimodal way, whether it's text, video, AR, or VR, at whatever time of day you want, and in the amount that you want.
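The four traits named above (self-tasking, memory, recruiting other agents, learning over time) can be sketched as a toy class. Everything here is invented for illustration; it does not model any real agent framework or vendor API.

```python
# Toy sketch of the four agent traits: self-tasking, memory, recruiting
# other agents, and improving over time. All names are hypothetical.
class ToyAgent:
    def __init__(self, name):
        self.name = name
        self.memory = []   # persists across tasks
        self.skill = 0     # crude stand-in for "learns over time"

    def self_task(self):
        # Self-tasking: derive the next task from memory instead of a prompt.
        if not self.memory:
            return "scan inbox"
        return f"follow up on: {self.memory[-1]}"

    def recruit(self, helper, task):
        # Recruiting: delegate a task to another agent.
        return helper.run(task)

    def run(self, task):
        self.memory.append(task)
        self.skill += 1    # "improves" with every completed task
        return f"{self.name} did '{task}' (skill={self.skill})"

planner, worker = ToyAgent("planner"), ToyAgent("worker")
print(planner.self_task())                   # "scan inbox" -- no memory yet
print(planner.recruit(worker, "book flight"))
```

The point of the sketch is only that "agent" implies state and delegation, not just a single prompt-response turn.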
It's no longer up to the web designer. It's no longer up to the website or the app designer. It's up to you and your 100 agents. So those are the theses. But let me give you some examples of what's happening today. You've probably heard about Agentforce, which Salesforce launched last month in Silicon Valley. They have three different AI agents for enterprises to interact with end consumers: a marketing agent, a customer care agent, and a sales agent. And they all work together in orchestration with a tool they call Atlas. So Salesforce really brought this to life for every enterprise to see how they can use their own agents on the business side. Of course, HubSpot as well. They have agent.ai, a marketplace of different agent tasks that's being integrated into their CRM. And we'll see more things emerge in this space as well. Also, we'll see new forms of payment happening. This is Skyfire, one of our portfolio companies, where agents can hire other agents to get tasks done. Imagine that: a new economy born with a new type of identity. You've heard of the term "know your customer," but in the near future we'll have a new identity for agents called "know your agent," and it'll be certified to you. And it can conduct its own transactions on your behalf. Another example is the leader in the enterprise space, CrewAI, an open-source platform that enables developers to build AI agents that connect and can do numerous tasks to solve business problems. On the consumer side, let's take a look at this real-world example from Multion, a Palo Alto-based consumer AI agent company. Let's kick off that video. So in this example, this browser-based extension is doing all the tasks for the human. The human does enter the actual prompt, which could be text or voice: go to Amazon and order these two books for me. And what happens is amazing. Now the human is hands-off.
The human can drink tea, do some yoga, sit on their hands. All of these actions that you see right in front of you, finding that book, are being done by the AI agent serving the human. The human doesn't have to go to amazon.com; the AI agent does it for them. So it's grabbing those books, and you can monitor the transactions, and it's putting them into the shopping cart right there. So that's an example of it conducting commerce on your behalf. And you've got to think about that: if you run marketing or e-commerce, your future customer is not a human. It's going to be an AI agent. And what type of agent is it? Is it representing a human? Is it an identity that says, "I'm part of a human," an abstracted identity? Or is it a completely anonymous agent? We'll see all these different levels of agents. So you can see it's completing that transaction. Of course, we need to have review systems to ensure it doesn't buy a thousand copies of one book and fill your front porch with books. So here's another example. Go to Google Calendar and invite Eric Schmidt to an AI brunch. All right, so go do that, Multion. And it's doing that. So you can use a text-based prompt or a voice-based prompt. Then it goes to Google Calendar, web-based, and sets it up. It fills it in using LLM technology with natural language, finds the email address, inserts it, and then sends that invite. Boom! Maybe Eric will show up. So that's the second use case. Now let's look at another use case.
Hey, Multion, check my Google Calendar, find out what my next appointment is, and book me an Uber. All right, so here it goes. The human is hands-free yet again. These are tasks that humans just don't need to do; let's get our agents to do them. So it finds out that the next event is V's birthday in Palo Alto, and it identifies that information. Then it opens up a new browser tab, goes to uber.com, fills in the information, and boom, orders that car into the physical world. So those are some examples. But what does this actually mean? Well, Gartner forecasts that search engine traffic is going to drop 25 percent because humans will turn to chatbots and agents to get things done, and that's going to happen by 2026. That's devastating news for search engines. Also, this quote from Bill Gates is a fantastic one. He says that AI could really change the way that the internet works: "Whoever wins the personal agent, that's the big thing, because you will never go to a search site again, bang, Google. You'll never go to a productivity site, bang, IBM. You'll never go to amazon.com." I think Bill's probably been waiting to say that for like 25 years. But here we go. So to bring things to a close, here's how I see this world changing. This is my forecast. This right here is a decision funnel. Every day you make decisions for personal use. Every day you make decisions at work. Every day we are collectively making decisions, and the way that we do it today is broken. We have to be exposed to ads. We use Google search, we use other search engines, and they know the answers, but they don't tell us. They show us 10 blue links, and we're exposed to ads. Then we go to these websites. The information that you want is at the bottom of the website, so now you're exposed to more ads. You leave that website, and your information has been tracked.
Now you've got retargeting ads following you around. It's not really serving you. And then finally you collect that information, you compare the products, and you make a purchase, which might be on that site or another website. It's a mess. The internet is actually not a great experience. And this is going to change. In the near term, we're going to collaborate with AI agents: we'll ask them to find this information and bring it to us. You can already do this in Perplexity and ChatGPT, and soon those creatures will become unbounded from those apps, become living creatures like AI agents, and you'll ask them to compare these things. And then finally, what we're going to see is that AI agents are going to be proactive. They're going to connect to your SMS and your email. You might enable one to listen to your conversations, with permission, to understand what you need and proactively seek out the things that you do. Proactively fill out your expense report. Proactively book your Uber ride to your next meeting. It's like every human on the planet will have their own personal assistant and executive assistant. And it means we're going to shift the way we make decisions: from a manual way, to a hybrid way where we're collaborating with AI agents, to a future way where it actually starts to become autonomous. Wow, that was a lot of information. Let me summarize it for you in five final points. AI agents will bring the information to humans. You no longer need to traverse the internet; it'll bring it back to you and assemble it in the way that you want. Have it your way. Oh boy, it means that media models, revenue models, and e-commerce will be embedded right into agents and into LLMs. We're going to see a complete restructuring of the internet as we know it. Some of these billion-dollar companies here in Silicon Valley will topple.
AI agents will completely change the media, marketing, and advertising space. Now, big companies aren't going to sit by the wayside while this disruption happens. They're going to launch their own AI agents, like the examples I showed you from Salesforce, HubSpot, and CrewAI. And they're also going to launch their own APIs that communicate directly with your consumer-side agents for the fastest transactions possible. We'll also see something happen inside of companies that we really haven't been able to think about. AI agents will help us with our productivity. And then soon, just watch, they will start to become your colleagues, and eventually, potentially, your manager, and even your customer. And someday the agents could become your competitor, and that breeds this final example, where the AI agents become like their own living entities: they can autonomously trade among each other, buying and selling using Skyfire and other technologies, and then they learn and govern and reproduce like their own species, without human intervention. So I came today to talk to you about the future of AI agents. What does it actually mean? Agents, agents, agents will be throughout all of our lives. Thank you so much. I'm Jeremiah.
AI Agents: The Next Digital Workforce | Jeremiah Owyang
808
Speak About AI
20241115
AI agents are revolutionizing the digital workplace. Silicon Valley veteran Jeremiah Owyang reveals what's really happening with autonomous AI agents and why they matter for your business. From his unique position running Llama Lounge, Silicon Valley's premier AI startup community, Jeremiah shares insider insights on: - Why AI agents are different from chatbots - Which industries AI agents are disrupting first - How companies are already using AI agents - What's coming next in autonomous AI - Real examples of AI agents at work As a General Partner at Blitzscaling Ventures and advisor to Fortune 500 companies, Jeremiah offers a rare glimpse into the future of AI agents - straight from Silicon Valley's frontlines. Perfect for: Business leaders, entrepreneurs, and anyone interested in staying ahead of the AI revolution. 🔔 Subscribe for insights on AI's future Book Jeremiah for your next event. #AIAgents #FutureOfWork #ArtificialIntelligence #DigitalTransformation
2024-12-11T14:47:25.706917
https://www.youtube.com/watch?v=uAwjR4scCLY
What happened when I was at Yahoo many years ago, in the mobile iteration, is that they recognized that this is not just a revamping of the existing technology, which is what they did to us at Yahoo at first: oh, just use the web APIs. No, it's going to be a transition. It's going to be a shift. That's why you have Material Design now. That's why you have these teams at Apple that want to make sure that the experience meets expectations, so that developers and designers can design applications that are going to work really, super well. But we don't have that defined yet. In the future, yes, we are going to have something using AI in that development process, so that all of my software will match well with whatever design language the designers come up with. And that's not just the UI, it's the UX, so that I can define an experience from the product management perspective and have a way of implementing it quickly and deploying it. I hope. That's sort of my dream. But in the meantime, you know, I've been waiting for something like this paper. This is a great paper to start with. Well, I'm going to get off my soapbox. No, I really enjoyed that, Mike. And you said a couple of really key things that I've noticed as well. The first is that the chatbot era will be looked back on like the DOS era, you know, before we had GUIs. That's the first thing. But there was a phrase you used over and over again in that discourse, and that was "with me." I want this done with me. And that really gets to the crux of that paper in many respects, because it was, you know, that bi-directional communication of, and this is coming from that paper, what should this agent achieve? How should it achieve it? What tools should it be calling? And more than that is the education part as well. What can this agent actually do? What is it currently doing?
Is it doing it with me? And ultimately, did it achieve its goal at the end? And am I happy with that? And that all takes some kind of UX design. And as you rightly said, this is so nascent. The paper is extremely readable, and I'm very glad that they've made this call for design patterns and principles, because nobody is an expert in this yet. So someone's got their hand up, it looks like. Simon. Yeah, just a few thoughts, maybe to get people thinking. From my perspective, I'm extremely skeptical about this whole thing. I think it's just a hype cycle; we've all been here before. But the integration piece, I think, is a lot harder than we make it out to be. And also, there's this fundamental problem with interacting with an agent that can kind of do whatever, which is intent. Intent resolvers are extremely difficult. And even if you get it mostly right, it's a terrible UX; it needs to be 100% accurate. And there is no way to know if you've gotten it right or wrong until you've made that guess. And so when we think about UX, getting this right is a lot harder than we think, even in a very narrow-scope context. And I think it's a human thing: unless you are a human talking to a human, where you understand the context and you understand the people, places, products, and processes, which is human, you are unable to understand the intent. The same way that everybody on this call would describe the same task differently, and I probably wouldn't be able to guess what you're talking about until you started talking about it. So I don't think you can just say we'll figure it out eventually. It's a lot harder than we think it is. No, I think you hit on some really important points there. One is that the user experience I'm going to expect is going to be much different from yours.
And I'm expecting that these systems are going to be able to learn how I interact. That, you know, at 8:25 in the morning, this is the way that you can expect me to be communicating, and in the rush to get the kids to school, the way I interact right now is going to be very different from, let's say, five o'clock tonight. And the agents should be able to customize my interaction and work dynamically: yes, by the way, you have a flight to catch. Like you said, it's understanding the intent, and having some way of collecting feedback on it. I mean, in mobile, we did that for many years. It was, you know, analytics. Every single phone, every single app is just spewing tons of analytic information that can be used to provide feedback to the agent. Well, Mike, I'm going to push back on analytics for this agent stuff, exactly because it's non-deterministic. And maybe I'm in the wrong roundtable, and I'll stop talking after this, but fundamentally, you do not know what the person is going to say. You do not know how your user is going to interact with it. You also don't know how the processing is happening, because it's kind of a black box. And so you effectively do not know what your product is outputting. And from a product management perspective, that means you don't know what the user experience is, which is just such a wacky premise to come up with.
And then also, if you want to try to analyze that, you're unable to, just because of the variability and the non-deterministic nature of these interactions. You can't have a really nice flow, because maybe this person is using the same five words, but in a different way than another person who used the same five words. So you need the intent. It's just so complicated. And these aren't sci-fi problems. I am having this problem right now at work, and it's not with an agent; it's just with an intent-based chatbot. And so it scales in this crazy way, and I think we're underestimating how hard it is. Absolutely complex. Yeah, I must admit, having given this some thought, I'm not underestimating it, that's for sure. It's pretty scary stuff, to be honest. And I agree with you, because I'm dealing with large language models and I'm trying to analyze intent, and the only way I can do it is if people start repeating the same question; then I know I haven't nailed it. I'm moving away from the deterministic: put in X, get out Y every time. Now it's like, I'm not sure. So much of it is an education piece as well, but the whole world's kind of trying to engage with this. You know, hey, you're dealing with probability now, so yeah, it may or may not work. But, I mean, as I say, Stripe released an SDK so that large language models can do financial transactions on your behalf. And, sorry to toot my own horn, but this is how I concluded the post on the Microsoft thing: building powerful AI agents is one thing. Ensuring they're transparent, trustworthy, and aligned with human needs and expectations is another.
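The intent problem Simon describes, that a resolver has to guess and can't know whether it guessed right, has one common mitigation: score the utterance against known intents and fall back to a clarifying question below a confidence threshold. The keyword scorer and intent names below are a toy stand-in for a real classifier, invented purely for illustration.

```python
# Toy intent resolver: score an utterance against known intents; below a
# confidence threshold, ask a clarifying question instead of guessing.
INTENTS = {
    "book_travel": {"book", "flight", "hotel", "trip"},
    "expense_report": {"expense", "report", "receipt", "reimburse"},
}

def resolve_intent(utterance, threshold=0.5):
    words = set(utterance.lower().split())
    # Fraction of each intent's keywords present in the utterance.
    scores = {name: len(words & kws) / len(kws) for name, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    if scores[best] < threshold:
        return ("clarify", "Sorry, did you want travel booking or expenses?")
    return ("resolved", best)

print(resolve_intent("please book my flight and hotel"))  # resolved: book_travel
print(resolve_intent("can you sort that thing out"))      # falls back to clarify
```

The threshold trades precision for friction: lower it and the agent guesses more (and is wrong more); raise it and the user answers more clarifying questions, which is exactly the UX tension being discussed.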
And I think that everybody going "AI agents, AI agents" is not considering this aspect. Probably most have gone off to do a technical talk; they'd probably prefer to be there than consider, hold on a second, the thing that might really hold us back is the human aspect. So I'm there with you, is what I'm trying to say, Simon. I'm dealing with the same stuff, and it's tough. It's hard. Yeah. Okay, anybody else? Anybody got any comments, anything they want to add? Hi, everyone. Hi. I come from a very different background. I only started diving into AI a couple of years ago; I come from an operations background, finance, trading, and all that. So I built my first AI agent, which was very static, not built on any framework, not AutoGen, not any other framework. That was some time ago. And listening to what everyone was saying right now, I was wondering: what we expect from an AI agent differs for each of us. Everyone has different expectations. What's an AI agent? What does it do? Because it also depends on what features we give this AI agent. And once we give an AI agent some features, once we program it to do something, our expectations of AI agents will differ. Let's say today I gave it some features. Okay, I see some potential. I expect something different from, let's say, someone who was using the same AI agent yesterday, or an older version of the same AI agent, or a different AI agent developed by someone else with different capabilities. So we don't have unified expectations. So for our experience, there are no KPIs or anything to measure it, no standard or benchmark for the experiences we get from AI agents, because each AI agent is built differently and for a different purpose as well.
So there are so many variables going on, and there are no unified frameworks. Every now and then there's a new framework coming up, and some frameworks are getting updated, and all of that, you know? And also, AI agents are being implemented for different reasons. For example, with some AI agents I've been working on, there's one project where I was trying to prove a solopreneur can work: one person, a one-man show, surrounded by AI agents, multi-agents, even using multiple models, you know? So my expectations when I'm using those AI agents, which I'm using for internal purposes to handle work and do things autonomously, would be different from someone, let's say, using a custom GPT with some features, or an AI agent developed as a front-office functionality to handle customer service, you know? So how exactly do we define a good experience versus a bad experience with AI agents? I think it's very difficult to narrow down or set a benchmark or something like that. This is my input on this. Thank you. Oh yeah, I totally agree with that. But ultimately, in the end, you're either going to have a good experience, a mediocre experience, or a bad experience, whether it's a program that is a mobile app, or it's on your computer, or it's an agent with an interface that you're working with. And there's your feedback. I mean, a one-star review, when I had mobile apps, oh my goodness, the VP would be at my desk. You are going to have, and that was my point about analytics, you are still going to have a way of providing feedback to whoever the provider of the agent is, to say, this is not working. I didn't get my vacation trip to LA set up properly with this agency. I don't want to use this agency anymore. Perhaps I should use another travel agency.
So yes, you're right, there is no clear definition, no frameworks, KPIs, or ways of managing this. And yes, because behind a lot of this is a non-deterministic LLM that is making decisions, it's not something where somebody can sit down and create a test suite with 1,500 different interactions, knowing exactly what words will be said every single time. We're not doing that. And again, the expectation is also: I want my experience to be different from Stuart's. We have different ways that we'd like to work with a travel agent, and that's good, because now we can do that, right? If anyone's ever tried to create a customer support chatbot before, from scratch or from any of the frameworks that give you this tree structure where you have to put in all the questions and try to help your users, it's horrible. But if you have a way of leveraging LLMs in an agentic mechanism, where you're using the reasoning and getting those types of responses, where you can build in guardrails as well as ways of modifying the inference and the response that you're giving to the person, then you can have that experience. And in the end, I'm either going to give the guy a five-star review or a one-star review, and that's going to determine whether or not that agency is working. Thanks, Mike. Gil, did you want to say something? Yeah, thank you. I just want to add to what has been said so far. I think we do have KPIs and metrics to measure the user experience. As Mike said, it's irrelevant to the end user how we create this magic behind the curtains, right? Maybe we can't measure the output, maybe that is the tricky part, but the outcome, we do have metrics for. We have task success rate, time on task, error rate.
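The guardrails Mike mentions, checks layered on the model's response before it reaches the user, can be sketched very simply. The rules here (blocked phrases, a length cap, a handoff fallback) are hypothetical examples, not any product's actual policy.

```python
# Toy output guardrail: inspect a model's draft reply before it reaches the
# user, and substitute a safe fallback when it trips a rule. Rules are
# invented for illustration.
BLOCKED_PHRASES = {"guaranteed refund", "legal advice"}
MAX_CHARS = 280

def apply_guardrails(draft: str) -> str:
    lowered = draft.lower()
    if any(p in lowered for p in BLOCKED_PHRASES):
        # Promises the business can't keep get routed to a human instead.
        return "Let me connect you with a human agent for that."
    if len(draft) > MAX_CHARS:
        # Keep replies short; truncate overly long generations.
        return draft[:MAX_CHARS].rstrip() + "..."
    return draft

print(apply_guardrails("Sure, that is a guaranteed refund!"))
print(apply_guardrails("Your order shipped yesterday."))
```

Real systems usually chain many such checks (safety classifiers, policy filters, formatting fixes), but the shape is the same: a deterministic layer wrapped around a non-deterministic model.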
Those are outcomes that we can measure to see if this system or tool that we are creating is providing value to an end user, right? So I just wanted to point that out. And then, and this is more kind of a question, sometimes I get confused when we talk about AI agents versus an AI assistant, right? In my understanding, AI agents are part of an ecosystem around the model that enables the model to do more things, right? It's sort of like plugins. It's not something that the end user needs to be aware of or that brings value directly to the end user. There is a middle tier in there, and in my opinion, that's what I would call an assistant. And the assistant is the one that is going to interface with the user. It could be voice, it could be an actual chatbot UI, or it could be just automating tactical tasks that are repetitive and boring so that the end user doesn't have to take care of them anymore; then it doesn't have a UI, right? And the AI agents are helping with that. So from the assistant point of view, that's the tricky part, in my opinion. That's where we need to create these experiences that connect to the end user, that relate to the end user. But again, I don't feel that that is something related to AI agents. AI agents are a technique, an architecture, something we use when we develop the solution.
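The three outcome metrics Gil names, task success rate, time on task, and error rate, are straightforward to compute once sessions are logged. The session log below is fabricated sample data for illustration.

```python
# Computing task success rate, average time on task, and error rate from a
# hypothetical session log (the numbers are made up for the example).
sessions = [
    {"succeeded": True,  "seconds": 42.0,  "errors": 0},
    {"succeeded": True,  "seconds": 65.5,  "errors": 1},
    {"succeeded": False, "seconds": 120.0, "errors": 3},
    {"succeeded": True,  "seconds": 38.5,  "errors": 0},
]

n = len(sessions)
success_rate = sum(s["succeeded"] for s in sessions) / n
avg_time_on_task = sum(s["seconds"] for s in sessions) / n
error_rate = sum(s["errors"] for s in sessions) / n  # errors per session

print(f"success rate:   {success_rate:.0%}")       # 75%
print(f"time on task:   {avg_time_on_task:.1f}s")  # 66.5s
print(f"errors/session: {error_rate:.2f}")         # 1.00
```

These measure the outcome without needing to explain the non-deterministic "magic behind the curtains," which is exactly Gil's point.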
But that front end, that presentation layer, that interaction layer, is what becomes more relevant: taking all that work that has been done by the model and the agents and presenting it in a way that solves somebody's problem, right? And one last thought, and this is a really personal take: the UX industry, and especially its process, its methodology, is broken, in my opinion. My background is in architecture, and when you are learning how to become an architect, there is a lot of technical learning that comes with that. You need to understand construction techniques, you need to understand structural analysis, right? It takes years to become an architect, and the same goes for industrial designers. But when it comes to UX, people can do a boot camp in three months, right? And then they are able to interview users and create documentation based on that, and maybe some models. But most of them don't understand the technical side of things. They don't know how to manipulate this medium, software, for which they are designing. And I think that's another issue. There was always a gap between design and development.
And now, with these new technologies, if designers are not more aware of how these things work, for instance agents, and if they don't understand that interaction design is the UX skill that overlaps a hundred percent with AI agent workflow design, and that things like finite state machine theory are relevant because they're the glue between interaction design and the tech that supports it, then we have a problem, right? Then this gap is just going to become bigger. So we keep talking about the user-centric side of things, and in my opinion, most of the time it's virtue signaling. Why? Because if I spend hours interviewing people but I don't understand how to take advantage of this technology, and its technical constraints too, in order to provide a solution that nails their problem, then I am just making drawings up in the air and having a lot of back and forth with developers. So I wish we could go back to what we had before. You know, back in the 2000s, designers knew how to code. They would create amazing animations in Flash and things like that. They were aware of HTML and CSS when they were designing. And we came from something called human-computer interaction, right? And I feel that that's the part that is missing. So designers should get a bit more involved in understanding computer science, understanding how a model works, why guardrails are important, because that's part of the user experience, how we can get that done, how agents work, and how they can manipulate these different workflows. You don't have to design just user flows; you can design agent flows, right? And that's a responsibility of the designers. So anyway, I just wanted to share those thoughts and see what you guys think. But I think that's what is making this hard to solve. As we've only got 15 seconds left, two things that I really do agree on.
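Gil's point that finite state machine theory is the glue between interaction design and agent workflow design can be made concrete with a tiny FSM. The states and events below describe a hypothetical clarify-confirm-execute booking flow; none of this is taken from a real framework.

```python
# Minimal finite-state-machine sketch of an agent interaction workflow.
# States and events are invented for illustration.
TRANSITIONS = {
    ("idle",             "user_request"):   "gathering_intent",
    ("gathering_intent", "intent_clear"):   "confirming",
    ("gathering_intent", "intent_unclear"): "clarifying",
    ("clarifying",       "user_reply"):     "gathering_intent",
    ("confirming",       "user_approves"):  "executing",
    ("confirming",       "user_rejects"):   "idle",
    ("executing",        "task_done"):      "idle",
}

def step(state, event):
    # Unknown (state, event) pairs keep the machine in place
    # instead of crashing the flow.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["user_request", "intent_unclear", "user_reply",
              "intent_clear", "user_approves", "task_done"]:
    state = step(state, event)
    print(event, "->", state)
# Ends back at "idle" after a full clarify-confirm-execute loop.
```

The transition table is exactly the artifact an interaction designer and a developer can review together: every state the user can be in, and every event that moves them.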
The first is that where there's no definition of AI agents, it's difficult. And the second one is users won't care how we architect it, just like you said, they just want a result. So. Right. Thanks, everybody. Thank you.
Agent UX // Roundtable // Agent Hour
1,309
MLOps.community
20241209
A bi-weekly "Agent Hour" event to continue the conversation about AI agents. Sponsored by Arcade Ai (https://www.arcade-ai.com/) This Roundtable is facilitated by Stuart Winter-Tear, Head of Product - Genaios
2024-12-11T14:48:55.459481
https://www.youtube.com/live/6Ivmza-3mVM
Either works for me, but I'm not a stickler about either. Look at the background and the music, man. Hold on, are we live? We are live. All right, cool. So it's on and it is public and things are live. Nice, I like it. So today, what are we doing? We're going to be talking about SQL Mesh, and we're using DuckDB. Just before I hit go live, Ben was talking about how hard it is to use DuckDB in production, so can you tell us exactly why that is? Uh-oh. I don't hear you, Ben. I don't hear you anymore. Oh, shoot. Sorry. Oh, there it is. I thought it was the music. It's not a classic live stream without that. It's not necessarily that it's hard to use in production. It's hard to use as your production database, right? Because DuckDB makes it incredibly easy to query anything from anywhere under any circumstances, and we're going to do a lot of that today. But fundamentally, based on at least the open source DuckDB, there's no really great way to be writing to locations that aren't your local file system, unless you're just writing arbitrary Parquet files. For some teams it might be sufficient to just write Parquet files as your database, but probably not. You probably want either a database, like a duckdb.db file, or an Iceberg lakehouse or a Delta lakehouse, something like that. And DuckDB can read from all those places remotely. It can read from a remote DuckDB file, but it can't write to it. And so when you want to start writing, you sort of move to MotherDuck, which I don't intrinsically have a problem with. I think that's awesome. I think that's a great business model, and I'm happy to pay them if they've made my life so easy to get started and to get moving. So I think that's a cool model of how open source can work. And you tried to hack around it, right? I tried to hack around it. I tried to mount S3 as a local file system and it worked. It created the lock and it worked fine.
It was just obviously incredibly slow. I also mounted DuckDB. I mounted a volume to Modal and then made DuckDB an endpoint where I could send POST and GET requests for writes and reads. And that also worked, but then you can't hook it up to anything like SQL Mesh or Prefect or SQLAlchemy, because it's just an arbitrary REST endpoint that you created that's not an official thing. So that was a little hacky too. So you're saying DuckDB can't write to a remote DuckDB file, and it can't write to a remote Iceberg or Delta Lake table, and that's a problem when you want to collaborate with people? Is that when that becomes a problem? The second two of us want to share the same DuckDB, we can't, because we can't both make writes to it. Exactly, exactly. But a model that I've been thinking about a lot, which I think is really, really cool, is maybe you're using MotherDuck, or maybe you're using Snowflake with a Delta backend or an Iceberg backend. With all of those tools, you're paying for compute seconds. So when you write the query with Snowflake, you're paying for the compute for Snowflake to do it, which is fine on the writes. But on the reads, you can also just connect a local DuckDB instance to your Iceberg lakehouse and actually query, as a data scientist, just using DuckDB against that dev environment. And now all of your analytics are free, as long as your MacBook can handle it, which all of them can at this point. So that's a really cool model too. Wait, and I think I've heard about this, and when MotherDuck came out they were talking about how that was something that people loved. So basically you're saying... say that again, because I'm not sure I fully understood it, but I like that it's free. Yeah, like if you have your prod and your dev Iceberg lakehouses, or maybe you have a virtual one using SQL Mesh, that's maybe for a different conversation or later in this conversation.
Is that what we're doing today, by the way? I mean, hopefully. That would be a good start if we got there. If you are using, let's just say, Snowflake for conversation's sake, and Snowflake is not using internal tables but external lakehouse tables, and you're able to, from an authentication perspective, connect to your dev lakehouse environment, you can query that environment using DuckDB, because DuckDB has native Iceberg and native Delta extensions. The Delta one is built in. You don't even have to install it. You could just select from delta_scan and pass in your Delta lakehouse S3 URL. And so now as a data scientist, if I'm doing analytics, or if I'm the business analytics team, or the data analytics team, any other team that's a read-only system building dashboards with Superset, for example. In theory, they could all go through DuckDB, and you're not paying for that even for a moment. And just to go crazy, I don't know if this is legitimate. It's not, because obviously you're going to run out of storage and memory really quickly. But just to be crazy, you could use PyCafe, which runs in the user's web browser, and install DuckDB with Wasm or Pyodide, if that works, because DuckDB has an example of this. And now it's free. Now you have a Solara app running on PyCafe, querying with DuckDB against a lakehouse that is essentially free, because it's just S3 bucket storage. It crashes your browser. It crashes your browser for sure. But it's just a cool model. It's cool to think about. So like you said, if you have some Iceberg files or some Delta Lake files sitting in S3, you can point DuckDB to a remote URL. What is that remote URL? Is there a single file that is the catalog and that's what you're pointing to? Or do you have to point it to some set of all the tables in the bucket?
Yeah, I've only done it so far against a table, which would be the path in S3, like the bucket slash the next prefix, which would be the table name, and then it has all the part files. But now you're just selecting from tables, and you treat it like any other set of tables, right? You're joining them. So select star from 's3://my-bucket/my-table' as A, join another table B, on A.id equals B.id, something like that. That's insane. Just treating it like a table. So I guess, in the simple case, in the naive setup, you'd have to know the location in S3 of the tables, but I don't think that's unreasonable. And you could hard-code that. Like, I have these 10 tables that my data engineering team made that are gold, and really I just have the names of those folders. And then maybe you have an S3 bucket, and each of the tables is in the root. Yeah, I don't even know if that's any different from any other database. I mean, you have to know your tables at some point. Yeah, I'm not sufficiently a Delta expert, or a DuckDB expert maybe, to know: can you register a path as a table name, based on how much DuckDB does? I kind of think, I don't know, maybe? I have no idea. But it's the same kind of thing, even if you're putting in paths. With Snowflake and whatever, you can introspect, you can type SHOW TABLES, and it'll know how to list them. I'm kind of doubting that would exist the way we're talking about it. Maybe it does. You can get the schema of a Parquet file.
But if we're talking about 10 tables that are like 10 sets of Parquet files, you know? Yeah. We should bring on someone like Toby or Simba, because they've become such experts in lakehouses. I would like to bring someone in who really understands catalogs super well and can tell us the benefits. I know the Iceberg catalog has benefits, where it has things like locking and schema evolution. Delta doesn't have that external catalog, and why you might want to choose one or the other. And I would imagine, though, with those abstractions of the lakehouse over Parquet files, you're getting information where you can introspect. I guess the hard thing here is there's no collaboration. Let's say you have these 10 tables, and a data scientist can use DuckDB to read them and write whatever SQL queries they want. They can't give anyone the results of those queries. So it's like... Screenshots. That's right. Well, I think that's where you create dashboards, right? That's where you talk about what data scientist outputs should be. Some are apps, PyCafe apps or private PyCafe apps, or just Weights & Biases reports. I think Weights & Biases reports are super underused. They're so cool. I mean, I guess Hyperquery got bought by Deepnote. I don't know if Deepnote does this, but building those kinds of reports and showing the queries that you ran, live in the report, to get there... that's super valuable, even if you're not creating a view on top of it. Just that analytics is useful. Or you train a model, and part of your artifact of that model is the queries that you ran to get there. But where does the data live for those reports? Is it on someone's laptop, or is it like...?
Well, the resulting data is in the report, but the resulting data could be tiny, because it's just showing the results of the query and not necessarily the data in the query. Like a bunch of aggregations. So it's like, I'm going to show you prices. For an item on the RuneScape market, which we're going to look at today, I'm going to show you its highest price every week, which is like eight data points if we look at eight weeks in the past. Okay, that makes sense. What are Weights & Biases reports, Ben? Weights & Biases reports are pretty cool. They're pretty old at this point. If you train a model or you run a sweep, you have in Weights & Biases all of your pretty graphs and things. And those are really great for very technical people who know what they're looking at, but they can be incredibly overwhelming for people who don't. And so you can click a button in Weights & Biases and it'll create a report, which is kind of like a live PDF with the tables and the graphs that you want, and you can fill that in with Markdown. It almost looks like a Notion page, but with all the charts live and interactable and embedded in the way that you want them, to showcase the model that you've developed. It's like moving from PowerPoint slides and screenshots to something live: you go to the report and you have the graph, and you can click on that graph and get back to the run. Or at the bottom, maybe you can even do sample predictions, and you can click on the model and see the model and the run. It's all live and connected in a way that's really intuitive. That's cool. What are we doing today? Sorry, go ahead. What are we doing today? Okay, so Ben knows more about SQL Mesh than I do. So I think Ben is going to kind of teach me SQL Mesh. This is the second time. He already tried to teach it to me once.
And we were looking for some fun live data that we could pull in with SQL Mesh and do stuff with. It's already fairly clean, which is a little sad, because if the data is clean, what transforms can we show? But maybe we'll just make up some dummy transforms just to show you how they'd even be done. So the data we have today is from RuneScape, which is a game that a lot of people have played. It's kind of like World of Warcraft, and in the game there's a marketplace. So in the game you're leveling up: you can level up your mining by going and mining a bunch of bronze, and you can level up your smithing by smelting the bronze in a furnace. And as you go and acquire these items, there's a place in the game where you can sell the items, just like Facebook Marketplace. You can sell them to other players. So if I have a bunch of bronze items, I can stick them on this marketplace called the Grand Exchange, and I can list what price I'm selling them for. And then other people can come and either buy or not buy my bronze at my price. And so this is hilarious because it's a real marketplace. I mean, it's basically like Facebook Marketplace. And for all these transactions that are happening from all these players playing RuneScape, RuneScape exposes an API where you can get aggregated values, like the high and the low price for every item in the last five minutes. And so the same kind of system you could build from this, you could use on the actual stock market in the real world. If this thing can accurately forecast the prices of items on the RuneScape market, it's not unreasonable that you could take something very similar and predict the price of stocks on the stock market.
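A sketch of what ingesting one of those five-minute aggregate responses might look like. The payload below is hypothetical: the field names and item IDs are assumptions for illustration, not the API's documented schema.

```python
import json

# Hypothetical payload shaped like the five-minute aggregate endpoint the
# transcript describes: a map of item_id -> price stats plus a timestamp.
raw = """
{"timestamp": 1733760000,
 "data": {"2":    {"avgHighPrice": 166,    "avgLowPrice": 160,    "highPriceVolume": 12, "lowPriceVolume": 9},
          "1042": {"avgHighPrice": 190000, "avgLowPrice": 185000, "highPriceVolume": 3,  "lowPriceVolume": 1}}}
"""
payload = json.loads(raw)

# Flatten the {item_id: stats} map into rows ready for a raw table.
rows = [
    {"item_id": int(item_id), "ts": payload["timestamp"], **stats}
    for item_id, stats in payload["data"].items()
]
print(rows[0])
```

Flattening the nested map into one row per item per timestamp is the shape a raw-layer table would want before any SQL Mesh transforms run on it.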
So it's like... Not financial advice. Exactly. Do not use any of this on the stock market and then blame us if you do lose money. Is there like a RuneScape analog of that disclaimer? Like, don't blame losing all your gold on us. Yeah. So, Ben and I set up a Live Share, so Ben's mostly going to drive. We set up a uv environment here, so we can start by just showing off the two endpoints we'll hit from RuneScape, and then maybe start to pull in some of that JSON and get it into SQL Mesh. Cool, share your screen, Ben. Yeah, and since we were talking about it a second ago, in case there's actually anybody listening to this who knows a lot about DuckDB and is yelling at us: you can create a table through a delta scan in DuckDB, and so you can just create tables as queries and then just start querying your tables, which is very cool. Okay, let's maybe even start with nothing. I think one thing also to call out is that hopefully this makes sense, what we're doing, and is able to be extended, and we can find potentially a more interesting data set that has everything: GitHub CI/CD, model training, retraining, monitoring, orchestration, all of that stuff, over the course of however many streams we do and however long that takes. We're starting with this data set because this is an intro to a bunch of things. We're going to intro creating an MLOps-focused repo, Python package management, CI/CD, SQL Mesh, maybe Prefect. In a future one, we'll introduce Featureform and ZenML. There's a bunch of tools. And so while the data is important to make it interesting, it's not everything. And so we might change the data over time, but the tooling... the point is that it should be easy to plug any data into this and kind of model what you're looking for.
I want to add, too, first off, reiterating what Ben just said: we're going to build an entire ML platform end to end, multiple times. Like, if the ML platform has an orchestrator, we'll swap it out with a few different orchestrators. I love that. Prefect, and then we'll do ZenML, and then we'll do Metaflow. We'll do all of them, or a bunch of them. But also, I feel like a lot of MLOps tutorials and stuff that you find will skip straight to training and serving, maybe with a Kaggle data set or something. But they completely ignore the data engineering side. And I think that's actually led to a lot of MLOps engineers being completely weak at data management and data modeling, which is not good, because feature creation is probably where the biggest value add in ML projects is, I think. So true. And not just feature creation. You don't necessarily need to be the one... like, neither Eric nor I are great data scientists; neither of us are the ones who are coming up with phenomenal features. But if you're going to be an MLOps engineer, it's really important to know that process and to understand how to support that process in an easy way: column-level lineage when things are breaking, how to run backfills, how to scale those backfills. That kind of stuff is in that world of being in the middle of those two spaces. That's a pretty important one to be good at, even if you're not the one creating the features. And in every job I've had, that's been part of my responsibility, even if I'm not the one training the models. Last comment too: I think data engineering in general is a black box to data scientists and MLOps people, and that has the problems I was saying. And that's where SQL Mesh comes in today for this project. SQL Mesh is this tool that at least analytics engineers would use, but
also probably data engineers would use. You'll bring in raw data, and you'll use a tool like SQL Mesh or dbt to bring that data from raw to bronze, silver to gold. And now it's this clean, modeled data, and that's kind of the handoff point to the ML folks: these nicely made tables. And so anyway, we're starting today with the data engineering side first. So yeah, I'll explain SQL Mesh maybe a tiny bit more. Like I said, we're going to look at other tools as well. Another one I'm stoked about is Featureform. They're very different in so many ways, but if you boil it down, they're both helping you build a unified data platform kind of thing. SQL Mesh is focusing on making sure you don't run queries multiple times, and how you're sharing that environment across multiple people. They talk about virtual environments, and we'll look at that. And it's really cool the way they handle it. Featureform is more focused on: all right, in reality, things need to scale pretty aggressively, so we're just going to bake that in from the beginning. Right now, I'm going to start with a local DuckDB instance, and the idea is you can switch to another one easily. But nonetheless, I am on a DuckDB instance. So it's local, so it's not necessarily scalable. But okay, so we have a bunch of things here. I feel like a newer engineer might look at this repo and say, I don't even know how you got to this step. This in and of itself feels pretty overwhelming. It has gotten so easy to create Python projects that are well-formed. So if we look at the base repo, this was an empty GitHub repo. Eric made it an hour ago. There was nothing in it. It was a readme with no text. All you really have to do to get started with a repo is just run uv init, with --lib if it's going to be a library. It creates your pyproject.
It creates your lock file. It creates your Python version. And it creates your src, and then the name of your project, and then an __init__ file and a py.typed for type enforcement. Why don't we just start from scratch? Because I think we just ran two commands to generate all this boilerplate. Sure. Still don't want to lose it. Nice. Cool. All good. Wait, I don't have push access to the repo you created. Oh, I'll fix that. Here we go. I was clearly trying to make it so you couldn't save your boilerplate code. And then, yeah, I'll start over. You just commit and stash it locally, and then, yeah. All right, fine, fine, fine. We have an empty repo. I'm even going to remove the virtual environment. Okay, so if you don't have uv, go to the website and figure out how to install it. You want to install it globally, with Brew or pipx or whatever. They even have a curl command, I think, to install it. You just want to install it globally. uv init --lib creates a repo, creates an src and an __init__ inside my project, a pyproject, and the readme is already there. The Python version is going to be 3.10. They default to 3.12, but Eric's afraid of Python 3.12, so we're using Python 3.10. And it's pretty empty, but you don't have to think about anything. It filled everything out for you. So when you're using uv, a really easy way to do it is just prefix everything with uv, and then it'll find the right environment. You don't have to think about anything. So we want a couple of libraries in here. We want to add... what did I add? We want to add SQL Mesh with their web extension, we want to add DuckDB, we want to add Polars. Right now that's probably enough. SQL Mesh will update your pyproject, create the lock file, install your things. You said SQL Mesh will update it? uv will update. uv will update everything here. And then we want to add... I like having IPython in here. You can have mypy, you can have pytest, et cetera. And it'll add it in a group dependency.
So when you install from your lock file to build your Docker image that you push to prod, these won't get installed, but when you're local, you can install them. And then if you ever want to sync, you just run uv sync. There's a bunch of extra CLI commands, but now we have an environment. We have a bunch of code here... nothing is here. So we'll move into our project, and we'll move into src/sqlmesh_runescape, and there's nothing here. There's a py.typed, which means if you're distributing this project and you type-enforce all your code, which you should be doing, those types will propagate and other people will get the type safety that you worked on. And then you're in it. So now we're in here, and we want to create a SQL Mesh project. That's quite simple as well. SQL Mesh is a CLI, and we're going to use the SQL Mesh from our uv virtual environment to make sure we're running the right one. We're going to init, and the only parameter we're going to provide is duckdb. That could have been Postgres, that could have been BigQuery, that could have been Spark. They have a bunch of different backends, and the idea is that it's very straightforward to switch between them. So we'll run uv run sqlmesh init duckdb, and I did this inside of our project, so it lives inside there, and it created a bunch of things. It created audits, macros, models, seeds, tests. We'll go through all of them in a moment, but maybe not yet. We have a config.yaml, which tells us the type of connection. We can have multiple connections. We have our model defaults, which is our DuckDB dialect. And in here you can add other things: global variables, secrets, other configurations. We'll get to that later in this session or the next one. SQL Mesh has the coolest GitHub Actions CI/CD flow I've ever seen. And it's quite simple to add it.
In another project, I have it, and we can bring it in if we get to that point. But it's super cool. Okay. Any questions, Eric or Demetrios, maybe before we try to model some data? I think this is a good time to step back and introduce SQL Mesh and what it'll be doing. dbt came out onto the data engineering scene a couple of years ago, and basically what it let you do is render Jinja-templated strings into SQL queries. And it got really popular. And I think the reason it became popular is because it took people who used to write a ton of SQL queries and it got them into Git. So now all these analytics teams who were writing SQL queries all day weren't just losing those queries to the ether because they were typing them in some sort of UI and clicking a save button, and then if the UI goes down, you lose it all. Suddenly, all their SQL queries are version controlled, which is great. And then also, now there's a standard way to collaborate across teams and even across companies. If you move from company to company, and all those companies are using dbt, suddenly you find yourself knowing to ask, where's our Git repo with our SQL queries, and where's our dbt command to render these things? So basically it standardized the process across a lot of analytics teams: writing and storing SQL and collaborating on it. And then there's what the founders of SQL Mesh say... I mean, every time anyone mentions dbt on social media, you can expect you've basically just summoned them to your thread. So I think rendering SQL with a templated-string thing is cool, but it's still pretty limited and has a bunch of problems. And I think one of the reasons that this took off is because I don't think that the folks who originally were the target audience of dbt were software literate enough to realize that what dbt was giving them actually wasn't that amazing.
Using Jinja to render SQL is something any of us could do. Any of us could have done it at any time. So it's just kind of funny. I'm not the biggest dbt fan. It does offer other things, right? It allows you to reference models with other models, and then it builds out the entire DAG for you. It also is a DAG orchestrator of SQL queries, which SQL Mesh also does, but it is worth calling out that dbt certainly paved the way. Yes. In the same way that Prefect added so much on top of Airflow, and I am a huge fan of Prefect and we'll use Prefect here, I think SQL Mesh is adding a whole bunch of stuff on top of dbt. Loads of things, my favorite of which is definitely virtual environments and the way they let you handle backfilling and forward-filling data at different increments. Yeah, once there was a standard way to render SQL queries from Jinja, then dbt started building things on top. They made a cool little UI. You could visualize the flow of queries running into each other. So they were able to add some cool features on top of that. But I think there were still some, and this is what Ben just brought up, there were still some fundamental problems that dbt wasn't solving.
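The "render SQL from a template" idea being discussed can be sketched in a few lines. dbt uses Jinja; string.Template stands in here so the sketch stays stdlib-only, and the table and column names are invented:

```python
from string import Template

# A toy dbt-style model: a SQL string with placeholders to fill in.
template = Template(
    "SELECT item_id, max(high) AS weekly_high\n"
    "FROM $source_table\n"
    "WHERE event_date >= '$start_date'\n"
    "GROUP BY item_id"
)

# "Compiling" the model is just substituting values into the string.
query = template.substitute(source_table="raw.prices", start_date="2024-12-01")
print(query)
```

Which is the point being made: the rendering itself is trivial. The value dbt and SQL Mesh add is everything around it, including model references, the DAG, and (for SQL Mesh) state.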
So one was, people were kind of upset with their queries being filled with Jinja syntax. That can lead to some messiness and painting yourselves into some corners. And also, Airflow is a tool that a lot of people don't love, but one thing that Airflow does really well is it manages state. So I could have an Airflow DAG that runs every day. Like, I could have a streaming system, and we're going to look at this RuneScape data, and there's transactions happening on the RuneScape marketplace every day. So it's reasonable that I could want to pull in a fresh batch of data from the RuneScape marketplace every day. And once I pull in that fresh batch of data, I might want to do some standard transforms on it to clean it up. So it would be nice if I brought in some data on Monday and cleaned it up, that on Tuesday I didn't have to reprocess Monday's data. It would be nice if I could just run my transformations on the data from that day. Because if I'm processing all my data and all my history at once, then every day my queries to clean all this data are going to get more and more expensive. So Airflow is really good at this. Airflow can remember: if I run a transform DAG on data from a specific date range, Airflow will remember the date range that I ran this transform on. And that's great, because then in the future when I want to run Airflow on all my data, it'll remember that this date range has already been done and it won't reprocess it. So that's idempotency. Airflow was great for that. dbt doesn't have any inherent concept of state, and so dbt doesn't help you not reprocess yesterday's data or the data from the day before. SQL Mesh does. So there's two things, minimum, that I can say SQL Mesh is already doing better than dbt.
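A toy sketch of the state-keeping Eric describes: remembering which date windows have already been processed so reruns are idempotent. This is plain Python for illustration, not how Airflow or SQL Mesh actually store state:

```python
from datetime import date, timedelta

# The "state": which daily windows have already been processed.
processed: set[date] = set()

def run_transform(day: date) -> bool:
    """Run the daily transform once per day; repeat calls are no-ops."""
    if day in processed:
        return False
    processed.add(day)  # remember the window so it is never redone
    return True

window = [date(2024, 12, 10) + timedelta(days=i) for i in range(3)]
first = [run_transform(d) for d in window]   # every day does work
second = [run_transform(d) for d in window]  # nothing is reprocessed
print(first, second)
```

In the real tools that `processed` set lives in a database (SQL Mesh's state tables), which is exactly what dbt lacks.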
So what is it? It got rid of Jinja and actually added some plugins that are really nice, so software engineers tend to like it. Secondly, it manages state. It remembers what data has already been processed. That alone can save you a bunch of money. But then there's some other cool things too. So that is where SQL Mesh lives. We are going to be using SQL Mesh to do what I just said: bring in some data and run some transforms. And then I think we'll show you today how you can do this thing to not reprocess old data. Yeah, I think that's a good start. So if I run the SQL Mesh UI, I want to see if there's anything in here in the default project, because I think they give you three models, and I think they tie into each other. And so as we change them, we can see how they're changing. But I actually haven't run anything yet. I don't have a database, so we'll see if there's anything here. So what you can see is this is just our repo. Very nice. You have an environment. You can plan. You have... wait a second, did I not run uv? This is our other SQL Mesh project, and I do not know... oh, these are just... I actually don't know how that's here. Did your browser cache this somehow? Is it using your local cache? I haven't opened this in forever. I don't know what these tabs are. Maybe these tabs are something special. I have literally no idea. I don't have an answer to that. But for these three models, which are actually the models in this project, you can see the code, which just maps directly to the code in these models. So for example, let's look at one of them. The seed model... you don't need to have a seed model, but what it will do is essentially point to where data is; it won't actually run a query. And then if you look at the incremental model, which inherits from the seed model, you even get column-level lineage like that, which is pretty awesome.
And you can see where the event_date column came from. You can see that all we're doing is, kind of in the same way as dbt, we're referencing not a table but a SQL Mesh model that we're pulling from, and you can reference six or seven or a million, and it'll build that out. And then again with the full model, you can see the full model is pulling from the incremental model, and again you get the column-level lineage, if there is any. In this case, there's item_id, and so that builds that here. So the item_id maps to the table above it. So that's what you start a project with in SQL Mesh. You get stuff out of the box. We're going to replace a bunch of these. Now, the seed model comes from some CSV seed data. That's fine. My complaint with that would be: how realistic is the situation that you're going to have some CSV data inside of your GitHub repo that really is going to seed your entire project? It feels a little bit unrealistic. So I'm actually just going to delete this model entirely, and I'm going to remove the CSV data entirely. What I will do is quickly maybe talk about full models versus incremental models. This goes back to what Eric was just talking about, with how SQL Mesh maintains state. By default, it will store the state in the same database as your actual data. So right now all of our data is in DuckDB, but also our state is in DuckDB. The state is super, super small. Any production Postgres environment will be able to support it.
The SQLMesh people will definitely suggest that you take that state data out of your analytics database, out of DuckDB, and put it into a Postgres table, a transactional system. It's pretty easy to configure that. We're not going to do that here for simplicity, but that's what they recommend for production. In that state database, we will have all the facts about all of our models and when they ran. So SQLMesh has the concept of a kind, and we're going to look at two or three today: full and incremental. Full is as you'd expect: it runs this query in full on whatever schedule you give it. An incremental model will not run it in full. When you define an incremental-by-time-range model (there are other incremental kinds, but I think time is the simplest to think about, because you might be getting data every day), you define your time column, and then you can optionally define when to start. You might have data back to 1980; you might not want it from 1980. You don't have to do anything special in your code, right? The query stays really simple. What happens is SQLMesh gives you start date and end date globals, and those globals update over time based on SQLMesh's state. This just ensures that your query is only incrementally pulling data for each cron window it's scheduled for. One thing that's really cool about this: if you use the enterprise SQLMesh, you can have it schedule all of your models and run them. But if you're using the open source, and maybe you're scheduling them with Prefect or running them on your own, you have a daily cron, but this doesn't live anywhere off my laptop.
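As a rough illustration of the two kinds being described, here is a hedged sketch, not the exact models from the stream (table and column names are invented): a full model just reruns its query, while an incremental-by-time-range model declares its time column and optional start, then filters on the `@start_ts` / `@end_ts` macros that SQLMesh fills in from its state.

```sql
-- FULL kind: the whole query reruns on its cron.
MODEL (
  name demo.full_example,
  kind FULL,
  cron '@daily'
);
SELECT id, name FROM demo.raw_items;

-- INCREMENTAL_BY_TIME_RANGE: only missing cron windows get queried.
MODEL (
  name demo.incremental_example,
  kind INCREMENTAL_BY_TIME_RANGE (time_column event_ts),
  start '2024-12-13',
  cron '@daily'
);
SELECT id, event_ts, value
FROM demo.raw_events
WHERE event_ts BETWEEN @start_ts AND @end_ts;
```

The `@start_ts` / `@end_ts` macros are the "start date and end date globals" mentioned above: SQLMesh substitutes the bounds of each interval it decides needs processing.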
If you don't run it, it won't run. So what does that mean? It means that if I don't run this for three days, and then I do a `sqlmesh run` to catch up all my data three days later, it knows that I want this to happen in a daily window, as opposed to, for example, an hourly window or a weekly window. And so it will run this model once for every full cron window of time. It's super cool. So if today is December 13th, and the last time I ran this was December 10th, and I run `sqlmesh run` today, it will run for all of December 11th and all of December 12th, but it won't run for December 13th, because our cron is daily and December 13th has not completed yet. If this was hourly, it would run through December 13th up to 10 a.m. Eastern Standard Time, but not 10 to 11, because that hour hasn't finished yet. At 11 o'clock, if I run it again, it will know, and it will fill in the gap. Does that make sense? Yeah, 100%. What's up? So I was thinking, and now I've started talking, so I'll just say it: I wanted to restate what Eric said. What was cool to me about this is when I saw this cron thing, I was like, oh, so where's the place in the cloud that this is running? Because clearly that cron statement is me telling this DAG that I want to process a group of data every day. And yeah, I think if you were using their cloud, or if you were using something like Prefect, you could schedule a process that would run this SQL query every day. But you don't actually have to schedule this thing in the cloud to run every day. You could run this on your laptop once a month, and then your laptop would break that last month into one-day ranges and just go do 31 versions of this query all at once. Or maybe it wouldn't actually do 31; maybe it would just do the last month's query intelligently.
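The "only fully completed windows" behavior described above can be sketched in plain Python. This is an illustration of the idea, not SQLMesh's actual implementation: given the last processed day and today, only days that have fully elapsed qualify.

```python
from datetime import date, timedelta

def completed_daily_windows(last_processed: date, today: date) -> list[date]:
    """Return the daily windows that have fully elapsed since last_processed.

    A day's window only qualifies once the whole day is over, so `today`
    itself is never included, just like a daily cron whose window hasn't
    closed yet.
    """
    windows = []
    day = last_processed + timedelta(days=1)
    while day < today:
        windows.append(day)
        day += timedelta(days=1)
    return windows

# Last processed window Dec 10, today Dec 13: Dec 11 and Dec 12 qualify,
# Dec 13 does not, because it hasn't finished yet.
print(completed_daily_windows(date(2024, 12, 10), date(2024, 12, 13)))
# → [datetime.date(2024, 12, 11), datetime.date(2024, 12, 12)]
```

Running the catch-up again the next day would yield exactly one more window, which is the gap-filling behavior the hosts describe.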
But I think it could break it up into the day ranges for you. That's cool because you don't have to use a cloud scheduler, at least in the early phases of bringing this onto a team, which is kind of fun. It's very cool, and you can even take it a step further, which we're not going to get to today, but we'll get to another time. Imagine you are doing this with DuckDB, or say you really have to do some serious stuff and you can't do it with DuckDB, and you have to do it with Polars or even pandas. Let's talk about Polars. You might be worried because that brings all the data into memory. I mean, Polars has a bunch of scanning capabilities and lazy frames, and that's awesome, but nonetheless, you might have to do something, and that thing might pull in just way too much data. And say your cron is daily; you're like, I cannot fit a day of data on my machine. There are even more parameters when you're doing incremental by time range, which is definitely my favorite model kind to use, where not only do you run on the cron level of a day, you can tell it to break that day up into n hours or minutes or seconds, and say how many you want to run in parallel. So say it's incremental by time and it can be parallelized: I can tell it to do the daily cron but break it up into one-hour windows and run 12 at a time, and now it will do 12 one-hour windows at a time. That's so cool. And yeah, it infills the data correctly back into my database, which is super cool. Oh yeah, I forgot to say: in the Airflow world, when people talk about backfilling, this is what they're talking about. Backfilling is processing previous time windows.
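If I'm recalling SQLMesh's knobs correctly (treat the exact property names here as assumptions to verify against the SQLMesh docs, not something shown on the stream), the split-a-daily-cron-into-parallel-hourly-batches idea lives in the model definition, roughly like this:

```sql
MODEL (
  name demo.hourly_batched,
  kind INCREMENTAL_BY_TIME_RANGE (
    time_column event_ts,
    batch_size 1,         -- intervals processed per query
    batch_concurrency 12  -- batches allowed to run in parallel
  ),
  cron '@daily',
  interval_unit hour      -- split each daily run into hour-sized intervals
);
SELECT id, event_ts, value
FROM demo.raw_events
WHERE event_ts BETWEEN @start_ts AND @end_ts;
```

With settings like these, a single daily catch-up run becomes many small one-hour queries executed a dozen at a time, which is the memory-relief trick described above.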
If the last time I ran this query was in January and now it's March and I haven't run this DAG since, backfilling is the process of taking January to March, splitting it into smaller time ranges, and then processing all of those in their own little batches, as though we had run one batch at each step. This is very imprecise language. No, that's not wrong. I mean, when I used to use Airflow, I always found backfills unbelievably confusing; I could not figure out how to do them in a reasonable way. Honestly, even with Prefect (I haven't tried 3, but on 2) I struggle to think about how to handle backfills. But SQLMesh just does it. It does it for me. And it breaks down each model into such a simple unit that it makes it so easy to do the backfill. That's something I definitely appreciate. And once you do it in dev, you don't have to redo it in prod. We'll talk about that. So let's pull in some data. Let's start in an IPython shell. So here's why we're starting with DuckDB.
We are looking at some RuneScape data, and for right now we'll see if we can pull in the mappings, the five-minute, and the one-hour data, create three different models for those, and then create a Python model that aggregates those together in some useful way and spits out a table that we will call train_data. Nice. Do you want to show people what the mapping is? Yeah, exactly. We're starting with this data because, look, it's as easy as it can be to get some stuff out of it. It's JSON data; it looks like JSON lists of dictionaries. Certainly we could pull this in with Python requests, parse the JSON, and put it into a pandas DataFrame, but because DuckDB is just awesome, we can do a `duckdb.query`. I grabbed the column names here, but originally I just did `SELECT *`, and that's how I got the column names; it's the same thing whether I use star or not. You do `SELECT * FROM read_json_auto(...)`, literally pass in the GET request URL, it runs, and you very quickly get the table and the DataFrame you're looking for. So that's about as easy as it can get. You just ran a SQL query on an HTTP request. That was awesome. Yeah, it's ridiculous. And someone might very reasonably say: this is so silly, why would you do this, there's no reason to do this. And I will actually push back and say there's totally a valid reason to do this. This is literally the valid reason: this is a mapping table. This table does not change. Unless they add a new item to the game. Sorry, yes, unless they add a new item to the game. What is this data? In the RuneScape marketplace, there are about 3,700 items that you can buy and sell.
And so this endpoint we just hit gets us the list of all the items that exist in RuneScape today. Sometimes the game adds new items; they come out with a new kind of armor and add an item to this list, this mapping. But basically this is a table that has the names and the IDs together. And then another endpoint we're going to hit later is: for a given item ID, what is the price in a certain time range? Right, exactly. And so we're going to set it up like this. We're going to create our model and call it runescape_mapping. This is a full model, because if we change the query, we want to rerun it in full, right? This is our base mappings table, and it's really easy to pull when we do it like this. So we'll grab these columns from read_json_auto, and we won't even group anything; we'll just select these columns from here. And when we do `sqlmesh ui` or plan or apply, it'll know that this model maps to these columns in the metadata table, and it won't rerun this query every day, because it knows that it hasn't changed. If I change it and remove a column, it will know that it's a breaking change, it will tell me I have to rerun it, and it will tell me which tables are going to break as a result, because it knows which tables query the members column of this model. We'll get to that in a bit. Let's get to running this as soon as we can. So this is a full model running daily. The grain, which is effectively the primary key, is the id. And we have an audit, which asserts positive ids. What is an audit? You get one by default, and you can write as many as you want. An audit is just a SQL query, and your job will fail if it fails. So we're asserting that all of our ids are greater than zero: if this query returns anything, that means an id is less than zero,
and our model will fail. Great. Let's build our incremental model. Incremental makes sense. We're going to start with the hourly data, the one-hour endpoint, because I know that it's hourly and that makes it really easy for me. We're going to set the start to 2024, December 13, today. I know what you're building to, and this is going to be unreal when you see this. It's going to be so cool. I do not know what you're building. I also don't know what I'm building. Expectations are so high. Maybe. Yeah, if you don't end up building what I thought, then I'll just tell you and it'll be awesome. Okay, hold on, let's see what this table looks like, because I've never actually seen it. Unfortunately, it does not map it out as we want; I was hoping it would map it out as an array. There's got to be a way. Oh, is it JSONB? It's just a bunch of JSON objects one after another. I wonder, can I do like `data.`... is that a thing? No, I don't know the syntax for getting to the next level. You said you didn't need ChatGPT. I'll search for it. Let's start with this. Oh, what if we... wait. Oh no, because it's the one timestamp for all this data. What happens if I do `SELECT data`? Is it going to be smarter now? Okay. All right, in our incremental model we're going to select id. Maybe this is actually fine, because it'll give us a reason to do the Python model. We'll do this from here as `a`, join sqlmesh_example dot... what did I call this? It matches the name, makes it easier: runescape_mapping. sqlmesh_example.runescape_mapping, so I get that autocomplete, which is nice. `b` on a.id. a.data, b dot... what do we have from our RuneScape data that we want? Let's say b.lowalch, b.highalch, b.examine. Let's start with that. Okay, so now we have our SQLMesh model, and this is not going to be called incremental_model.
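Stepping back, the mapping model and its positive-id audit assembled over the last few steps might look roughly like this. This is a reconstruction from the conversation, not the file from the stream, so treat the column list, URL, and audit wording as assumptions; the `AUDIT` block would live in its own file under `audits/`, and any rows it returns fail the run.

```sql
MODEL (
  name sqlmesh_example.runescape_mapping,
  kind FULL,
  cron '@daily',
  grain id,
  audits (assert_positive_ids)
);
SELECT id, name, members, lowalch, highalch, examine
FROM read_json_auto('https://prices.runescape.wiki/api/v1/osrs/mapping');

-- audits/assert_positive_ids.sql
AUDIT (name assert_positive_ids);
SELECT * FROM @this_model WHERE id <= 0;
```

The audit query is written to select the *bad* rows: an empty result means every id is positive and the audit passes.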
This is going to be called something like hourly_scrape, so we'll rename it to hourly_scrape. Okay, now we have our two models. If we run `uv run sqlmesh ui` and go to our UI, we can see hourly_scrape in our models. "event_date is missing in the model." Oh, I must have... oh, it's not event_date, it was timestamp. This is the hourly scrape. Okay, cool. Yeah, I just didn't update the name from the example. Still have an error: partitioned-by key event_date. Oh, thank you. Timestamp. Does it do a reload? I wonder if you need to put those quotes on line 8 too, in the grain. Maybe so. You hate it when column names map to keywords. Keywords, yeah. Anything? Where's event_date? Oh, on line 4: time_column event_date. What is that? Oh, let me see. The stream gets so grainy sometimes; it's hard to see. Maybe make it bigger too. Yeah, I'm on my 13-inch MacBook, so I don't have that much space. But how's this? Can you see this? That's great. It's got to be gigantic on your side. Yeah, that was good. So here we go. Check this out: we have the full thing. Our read_json_auto is giving us our highalch column, which is then mapping to this highalch column here. That's so cool. And you can also see it does not know the data types of a lot of these columns, which makes sense, because it's not coming from a table, it's coming from JSON, and so SQLMesh has no way to figure that out. We could tell it, and we're not going to do that right now because we have no time, but we could. So just a note. And once you tell it, if something changes data types, it'll know. So we could do a plan in the UI, but I think it's better to do it in the CLI. So we'll run `uv run sqlmesh plan dev`. Full model, full model... where did I leave that name? Okay, so we have tests. We're going to comment out the tests, because they're based on the old models that we just updated.
But what's cool is that you can write tests where you define input and output data and assert what the output looks like, and you'll get an error if it doesn't match. Oh, come on. Okay, great. So `uv run sqlmesh plan dev` figures out everything that's going on. What you'll notice is that it creates some schemas from the schema name we created, sqlmesh_example, plus `__dev`, because we just did a dev plan. You could have named the environment anything, but it's reasonable to name it dev. It knows which models need backfills, because it stores that in the metadata. And since we set the start time to December 13th (this is what I was talking about before), the runescape_mapping backfill goes from December 12 to December 12, because it is a daily cron and December 13th hasn't finished, so it won't go to December 13th. The hourly_scrape is an hourly cron, so it goes up to the most recently completed hour, 14:00 UTC or whatever, and it won't do the next hour, because that hour hasn't finished. So we're going to do a full backfill. "Enter the backfill start date. Could be a year, or blank to backfill from the beginning of history." We'll backfill from the beginning of history, and we'll backfill up until right now just by hitting enter, and we'll apply the backfill to all of our tables. And it failed, because we're doing things live: the table does not have a column id. Let's fix it. read_json_auto does not have an id. What did it have? Do you remember? Oh, right, it didn't have an id. Oh, we're not going to be able to combine them. Okay, we're not going to be able to join them in SQL; we're going to have to parse that JSON and combine them in Python. Okay, no problem. We'll just grab these columns: select data, timestamp. Let's see if we have time to finish all this. I put a query in Slack you could try. ChatGPT gave it to me.
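The ChatGPT query itself isn't shown on stream, but the problem it solves ("reach into the JSON object and grab each key as a row") can be demonstrated with the standard library. The payload shape here is an assumption about the prices endpoint: one object whose keys are item ids. On the SQL side, DuckDB's JSON extension has table functions (e.g. `json_each`) that can do the same explosion, which is plausibly what the generated query used.

```python
import json

# Assumed shape of the hourly-prices response: one object keyed by item id.
payload = '{"data": {"2": {"avgHighPrice": 180}, "6": {"avgHighPrice": 190000}}}'

# Explode each key of the object into its own row.
rows = [
    {"item_id": item_id, **stats}
    for item_id, stats in json.loads(payload)["data"].items()
]
print(rows)
```

Each resulting row carries the item id plus that item's stats, which is the tabular form the join with the mapping table needs.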
The response of the API is a single JSON object, and this query that ChatGPT gave me can, I think, reach into the JSON object and grab each key as a row. Okay, we can try that in a second. But I want to get through building the dev environment just to showcase one really quick thing. So what we just did right now, to keep it simple, is we pulled data from the hourly scraper through the timestamp windows that we wanted. And so we have two tables that, if we went back to the UI, are no longer connected, and we can have a Python model that merges them. So I reran `sqlmesh plan dev` from history to now. It created the tables, evaluated the models, created them, and then virtually updated dev. So what does that mean? It means we can do `uv run sqlmesh fetchdf` on sqlmesh_example__dev.runescape_mapping, and what it will do is reach into the DuckDB table and grab it for us, and now this exists. Obviously we haven't done any joins yet, so it's a little unfortunate. But if I now run `uv run sqlmesh plan` without giving it an environment, meaning production, what it's going to do is figure out what prod is missing, which is our two tables. And you can see it says "apply a virtual update." So all it's doing... it did not run a single query. We didn't pay a single dollar here, because it found that one of the virtual environments in our SQLMesh world was already up to date with the code. It just did a pointer swap: it grabbed those `__dev` schemas and renamed them to the non-`__dev` schema. And so now prod exists and is up to date with dev. What that means in practice is: you're building tables.
Let's say we build a new model now that joins them together and creates a new feature, and that feature is used for modeling. We can run all of that locally with SQLMesh, connected to wherever our server database is; do the plan; make sure the query runs; open up a PR; and set up CI/CD such that when that PR is merged, SQLMesh does a plan on prod. Since it was already planned on dev and was up to date with the database in one of your virtual environments, it just does a pointer swap. And now prod is immediately up to date with the things you were doing locally, once the PR is merged and the tests all pass. And so you don't have to pay again for any of those queries, because they've already been run and validated, and you trained your model, which I think is one of the coolest components of SQLMesh. There's one question coming through, which is: how comparable are the tests to something like Great Expectations? I think they are a lot alike. So there are two things: there are audits and there are tests. (While we're doing this, I'm grabbing a model that I've already built in Python, just to keep the syntax easy.) Audits let you check very simple things, like a column being positive, something like that. Tests literally let you define full input-output tests: given this input, SQLMesh knows how to query from it and run the incremental model, and these are literally the outputs I expect. So I would say an audit is like Great Expectations, in that you define a thing you expect to hold. And tests, I would say, are closer to pytest, where given this input, when you run this model or this function, this is the output you expect. I would say that's a good comparison.
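For context on the tests half of that comparison, a SQLMesh unit test is a YAML file of fixture inputs and expected outputs, roughly like the sketch below. Model, table, and column names here are invented for illustration, not taken from the stream.

```yaml
test_incremental_example:
  model: demo.incremental_example
  inputs:
    demo.raw_events:
      rows:
        - id: 1
          event_ts: '2024-12-12'
          value: 10
  outputs:
    query:
      rows:
        - id: 1
          event_ts: '2024-12-12'
          value: 10
```

SQLMesh loads the input rows as stand-ins for the upstream tables, runs the model's query, and fails the test if the result differs from the `outputs` rows, which is the pytest-like given-input-expect-output flow described above.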
Let's see if we can do this really quickly. Get rid of this; this is the batch size and batch concurrency I talked about, but we'll do that another time. Make this hourly. The columns are... I'll do this quickly in my head. This is speed coding. Oh, I have them in here. Yeah. Let's grab lowalch and highalch. So in our Python model, get rid of this, get rid of this; we'll return a pandas DataFrame. Mapping. This is hourly. Great. And now we have our SQLMesh in Python. You can create these Python models as well, and they're very similar: you name them the same, you define the columns that will be returned, then the same kind of model kind and cron, and then you just write Python. You could do whatever you want. So the mapping table was sqlmesh_example.runescape_mapping; the hourly scrape was sqlmesh_example.hourly_scrape. The WHERE clause, with Python, you do actually have to insert yourself, which is a little unfortunate, but it is what it is. You connect to your database... actually, I don't even think you have to do that; you can do a context.fetchdf. Yeah, context.fetchdf. Great, it's simpler. So mapping_df: this will give you a pandas DataFrame. And you'll do like this: select... what do we want to grab? lowalch, highalch, id from the mapping, plus the WHERE clause. And then the hourly scrape will be a fetchdf of timestamp and data. This one has the timestamp; the mapping doesn't. Now we have these two DataFrames, and we can merge them and maybe do other things with them. I don't know if I'm going to be able to do this quickly enough, because I don't know the data well enough. But should we stop here, or should I keep going? Eric, what do you think? I feel like I'm moving too quickly.
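The merge step the Python model is heading toward can be sketched in plain pandas. The column names follow the discussion (lowalch, highalch, timestamp), but joining on the item id, and these literal values, are assumptions for illustration; inside a real SQLMesh Python model the two DataFrames would come from `context.fetchdf` calls.

```python
import pandas as pd

# Stand-ins for the two context.fetchdf(...) results.
mapping_df = pd.DataFrame(
    {"id": [2, 6], "lowalch": [3, 114000], "highalch": [5, 190000]}
)
hourly_df = pd.DataFrame(
    {
        "id": [2, 6],
        "timestamp": [1734100000, 1734100000],
        "avg_high_price": [180, 191000],
    }
)

# Join the hourly prices onto the item metadata by item id.
train_df = hourly_df.merge(mapping_df, on="id", how="left")
print(train_df)
```

The merged frame is the kind of thing the hosts' train_data table would hold: one row per item per hour, enriched with the static mapping columns.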
I feel like it's fine to keep going a little bit. Okay. If you want to jump, that's fine, because to be honest this whole stream was kind of our first test run anyway. That's true. One thing I want to try is that endpoint where you can fetch the hourly prices. It gives you the most recent hour whenever you hit it, but it can accept a timestamp as a query parameter. It'd be kind of awesome to have SQLMesh create timestamps for each range to pass in, and then call that endpoint a bunch of times. Yeah, I agree. Okay, we have our df1. And then df2 equals duckdb.query: select the timestamp and data from sqlmesh_example.hourly_scrape. Oh no. Is it because... 12? Oh, that's kind of confusing. It's because this route already only gives me the last hour, so this isn't really an incremental model; this is actually just a full model, which is what I was saying. So what you can do is: there's a parameter, I think it's just `timestamp=` a value, and then this could be the start date. Let's do that. Okay. Do we need to wrap this in double quotes or something? No, no, I think it's fine. Wait. Oh, it's not a format string. This isn't Python, so we don't even need these curly braces. But this has to be UTC time. Is there a way we can cast... I think it's already in UTC. Oh, is it? Okay. But I don't think this is going to work, because it's in a string. I see what you're saying. Do we need a... oh, okay. I have no idea. Cool. Let's just try running it. There's a trailing "a". All right, do we need this WHERE clause? I don't think it does anything. Well, yeah, that was my point about it: it's not a full model, it's an incremental model. Yeah. This is the equivalent.
Running this HTTP request is the equivalent of adding this WHERE condition to selecting the data. I think this is an incremental model. Yeah, this is an incremental model. That's fair. No, but this would actually be, I think, the end date. Let me read what the API says, because I think it's the start of the time range. Yeah: "timestamp, if provided... the timestamp field represents the beginning of the one-hour period being averaged." So this timestamp parameter gets us the beginning of a time range. I think this does need to be the start date; it's the beginning of the range. Well, it seems like that might work. All right, let's try it. If this works, this is so cool. That would be very funny. Failed processing. Oh, okay. Because how does that hourly parameter actually work? Is it supposed to be a number? Yeah, it's a UTC epoch thing. I can give you a sample: if you go to that RuneScape docs page, they have a sample one you can just click on. Yeah, see there. Yep. Okay. No. Can we cast that as UTC or something? Cast. I just don't think that's how you concatenate strings. It's very Pythonic, and I don't think it's legitimate; it doesn't have a plus operator for strings. Yeah, and I don't know how to do it. I'll run this through ChatGPT, because I bet it's a quick one. Because if we get this, I feel like this is a good stopping place. Like, check it out: we have an API, and we can use SQLMesh to break up the date ranges for us. Super cool. Yeah, I agree. But actually, right now you can just do this to get the latest hour, just to see it complete, and then we can always edit it back. Sweet. Oh, it's pipe pipe. Let's see. Yeah, to concatenate strings, you use double pipes. Yeah, drop that in there. Sorry, one sec.
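The pipe-pipe discovery generalizes: in standard SQL (DuckDB included), strings concatenate with `||`, not `+`, and numbers are coerced to text. A quick check with the stdlib's sqlite3, which supports the same operator; the URL is a placeholder, not the real endpoint from the stream.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# '||' is SQL's string-concatenation operator; the integer is coerced to text.
(url,) = con.execute(
    "SELECT 'https://example.invalid/1h?timestamp=' || 1734066000"
).fetchone()
print(url)  # → https://example.invalid/1h?timestamp=1734066000
```

This is exactly the construction needed to splice a computed epoch timestamp into the query-string of the hourly-prices URL.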
It's given it to me as integers, too. Live coding. What is happening? It doesn't look like... Snapshot? Where did I write snapshot? Nowhere. I need to yield... I need to return mapping. Yes. I think I just added it. Do you want to try running it? Something's casting the start date as Unix time. Yeah, so this is a good example: it knows what we changed, right? You can see it sees that we changed this, and it sees... is this a breaking change? I mean, this is not a breaking change, which is kind of funny. And it knows that we need to update the hourly scrape, and it knows that we need to update dev, because dev is dependent on the hourly scrape. Nice. Cool. Backfill? Yes. No... what happened? What is this? It's using the DuckDB SQL dialect, I think. So that's another thing to call out: you can write these models in any SQL dialect and they'll work, because they have SQLGlot under the hood. Yeah, the SQLMesh folks built the ultimate SQL transpiler that can translate any SQL dialect to any other SQL dialect. What if we just do this? Oh, here we go. Oh, did you get it? Oh, was it already Unix time? I just want to cast a date, please. I know, it's so silly. Okay, that worked. No way... wait, I just lost the... No, it worked. So now we just have to figure out what's wrong with the Python model. Nice. It says "execute got an unexpected..." Oh, snapshot. Where is snapshot? I don't know where that's coming from. src slash... These are logs. For people watching, I'm sorry, I can't see the YouTube comments, so you might be asking things and we're just ignoring you. But next time... I think Demetrios is watching it, right? He left the call. Oh no. Okay. Let's stop here, come back, and then we'll figure it out.
I don't know what it's actually talking about with respect to snapshot. Oh, wait a second. Do you have to define it? Is it built in? Yeah, maybe. Do you want to use `snapshot=None` or something? Yeah, let's see. Oh, maybe you need to provide keyword arguments. Okay, that's unfortunate. Okay, let's try it. What a lovely SQLMesh plan we have there. So timestamp is bad again. Why? I think we need to... that's why. Yeah, what a pain. I know, terrible column names. All right. What SQL dialect is being used in this fetchdf? Is it DuckDB? Yes, because our engine is DuckDB; I think it knows to use DuckDB. Oh, context.fetchdf is a SQLMesh API. Yeah, it comes with your Python model. That's super convenient, actually. From mapping? Oh. Oh my gosh. "timestamp referenced in the select call." Timestamp from hourly_scrape... isn't that timestamp? "Can't be referenced..." Did hourly_scrape execute successfully? Yeah. Okay. It's cool that even though we're getting an error in the Python model, it doesn't seem to be rerunning the upstream model. So even as we're developing and running into errors, it's saving us from spending tons and tons of money on our lakehouse. Yeah, that's the whole thing. And also just time wasted. Because this is a pretty real bit of development; I would totally be in the same debug cycle if I were working right now. Let's see what happens if I do a select star. Oh, what's interesting is that this returned nothing. It seems like that returned nothing. Oh yeah, look at the DataFrame. Let's comment it out and just do this for a moment, and we'll come back to it. Okay. Oh, you know, I think this hourly fetch, that HTTP endpoint, is returning us one single JSON object at every... Yeah, but that shouldn't matter.
It does return two tables. Like if I do... oh, it does work. Okay, cool. But if I do `uv run ipython` and `import duckdb`, then `duckdb.query` select data, timestamp... You use the IPython terminal a lot. Oh, I love it, yes. So that worked fine, but it only returns us a single row, which is this giant object containing the entire table. Well, that's why I wanted to do... Oh, dude. Oh, is it? Oh my God. It's because it's double quotes for... oh my God, I hate myself. I think it's this. I love that you don't just click on the token and add a quote; you always manually put the quote on both the start and the end. Yeah, I'm not an expert. Everyone's got their own little weird habits. Not a VS Code expert. Let's see. Oh yeah. Okay. You could do... wait, what failed? Hold on. I think it's in your... oh, it's in the model. I think you're single-quoting timestamp in the model config in hourly. Yeah, go up. Timestamp is in single quotes there. On line 8 as well: there, single quotes. Yeah, I would think this one shouldn't matter, but maybe it does. I don't know; it is a bit of a DSL. Yeah, a little bit. "Column timestamp referenced"... it exists, but it cannot be... What? Seeking sqlmesh_example.train... Well, maybe we're not qualified to be doing this stream. Maybe we should have had a SQLMesh person on. Oh, we certainly should have had a SQLMesh person on.
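The bug being chased here is a classic SQL quoting trap: single quotes make a string literal, double quotes make an identifier. So `'timestamp'` is just the text "timestamp", while `"timestamp"` refers to the reserved-word column. A stdlib sqlite3 demonstration of the difference (same rule applies in DuckDB):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE t ("timestamp" INTEGER)')
con.execute("INSERT INTO t VALUES (1734066000)")

# Double quotes: the column named after a keyword. Single quotes: a literal.
(col_value,) = con.execute('SELECT "timestamp" FROM t').fetchone()
(literal,) = con.execute("SELECT 'timestamp' FROM t").fetchone()
print(col_value, literal)  # → 1734066000 timestamp
```

A single-quoted `'timestamp'` in a model config or query silently compares against a constant string instead of the column, which produces exactly the confusing "column referenced" errors and empty results seen on stream.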
There's no question about that. This feels a lot like the other one I built; I don't know what the difference is. Well, this is where, if someone's in the YouTube chat (and we can't see the comments), they're going, you idiots, and they probably have the answer. Let's find out. I don't know how to see the comments, though. I feel like we're way over; we should just kill it. Yeah, kill it. I mean, we'll come back. If people want to drop, they can, so I feel like it's fine if we just stay on the stream, and we can go as long as we want; anyone can drop off. In the future we won't do them this long, but this is kind of a practice round. There was a comment; it's very funny. Okay. Yeah, I think we should call it. We can go back, do a little bit of prep, figure out what we were doing wrong here, and then explain it on a fresh call. Sick. Well, see you guys. Whoever joined, thanks for being on our first one. See you later.
Exploring SQLmesh
4,738
MLOps.community
20241214
SQLmesh is an up-and-coming tool that addresses some of the shortcomings of dbt. Ben has been playing with it to see how it might fit into an ML Platform. Join us live to ask questions and hear Ben's perspective after doing a short PoC with prefect and SQLmesh.
2024-12-17T01:38:50.210832