"## Speaking style of Sreeram Kannan ## Speech pattern of Sreeram Kannan uh good morning everybody Welcome to The reaking Summit it's my uh real pleasure to welcome you all here uh I'm sham uh I started this uh Ian layer project you know two and a half years back and it has been uh really exciting for us to see all the progress from there um what I'm going to do in today's uh talk is try to set the context for why we doing what we're doing what is our vision for what we want to build and how we can all partake in actually okay um how we can all work together in actually making this happen um as you can see the subtitle of this talk is the coordination engine for open Innovation um really this is where uh I want to give a little bit of uh orientation around this uh title um to start with uh you know if you think about it if you zoom out there are only certain kind kind of positive some games what's a positivism game you know when we're engaging in Collective action certain kinds of games are win- win or positive sum so that there's a net positive created out of these games and if you think about it fundamentally there are only two kinds of prototypical postive sum games number one is innovation Innovation is when you take something and make a resource out of a non-resource right you can take air and you make spectr from you take oil you make energy you take sand and you make silicon like these are innovations that transform like one kind of a resource to another you know a non-resource into a resource really um highly positive for everybody and Innovation is also like this that if I have an idea and you have an idea and we exchange it both of us have two ideas clearly positive sum unlike other resources which are finite and you cannot create more of so Innovation is one stere typical prototypical or postive some game there's another one which is also very powerful coordination coordination is when many parties come together and create something which is greater than the sum of the parts right if they work together if we all work together we can do something which is much bigger than what we can all do just going our own separate ways um and if you look at these two different types of posm games there's Echo of this structure everywhere that you look you know in working systems so you can think of like you know these two structures interlay and work together with each other in very powerful ways in different examples um and one example is let's say you know you take a country like the United States um on the you know or any you know successful country for that matter you'd see that basically the government acts as like a coordinatEigenLayer on top of which there is a free market a competitive economy which can be built on top of it which is you know akin to open Innovation and what we're aspiring to here is for to to do this for digital platforms essentially coordination via bringing decentralized trust who brings trust you know trust trust is created through like this decentralized Collective you know in our vision ethereum and EigenLayer work together to actually create this on top of which anybody can build arbitrary new digital platforms which can compose with each other and you know in our in our ecosystem we call this AVS you know actively validated services or you can also think of these as uh like a decentralized version of software as a service that we um we see in the cloud okay before I go in into explaining uh you know what all we can do with this kind of a platform I'm going to 
In EigenLayer, what we do is bring together a variety of parties. I mentioned being a coordination layer: to be a coordination layer, you need to bring together different kinds of parties that work together to achieve a certain goal. In our case, the first side of this is stakers. In Ethereum, you go and stake your ETH. What does it mean to stake your ETH? You put it into a contract and make a promise that you will hold to the conditions and covenants of the Ethereum protocol. What EigenLayer does is make this much more expansive. We call this restaking: you stake your ETH and then add on additional conditions, take on additional covenants, make additional promises. That's what is now popularly called restaking, and in fact we're calling this the Restaking Summit, but if you want to be really precise you would call it permissionless programmable staking. What do I mean by that? You take the ETH that's staked in Ethereum and subject yourself to additional programmable sets of conditions. When you restake into EigenLayer, you're saying: normally when I stake, I promise that I run the Ethereum protocol correctly; now I'm also promising that I correctly run any of these new middleware services, actively validated services, whatever you want to call them.

When somebody wants to build an AVS, we're talking about building two things. Number one, arbitrary software: a container in which they can house and deploy arbitrary software. Number two, a smart contract. EigenLayer itself is a smart contract on Ethereum, but it allows anybody to build new smart contracts that talk to the EigenLayer contracts. Any new middleware or AVS can deploy a contract that talks to the EigenLayer contracts, and this AVS contract can specify the payment conditions, the slashing conditions, and the registration conditions: who can register, how much they get paid, and how much they get slashed. That's the overall structure of how we are able to use EigenLayer to take the underlying decentralized trust from Ethereum and supply it to any kind of new middleware or service built on top. You can think of this as the open innovation layer: anybody can build these new kinds of services. In the last slide I called this permissionless programmable staking. Why programmable staking? Because you're staking, and then other people can permissionlessly create middlewares and services that consume your staking and build new kinds of services on it. So you can think of EigenLayer as a paradigm for programmable trust.
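To make that two-part structure concrete, here is a minimal sketch in Python of the three condition types an AVS contract specifies. This is my own illustration, not EigenLayer's actual contract interface; all names and numbers (`AVSSpec`, `min_stake`, the toy violation predicate) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class AVSSpec:
    name: str
    min_stake: float                      # registration condition: who can register
    payment_per_task: float               # payment condition: what operators earn
    slash_fraction: float                 # slashing condition: what misbehavior costs
    is_violation: Callable[[dict], bool]  # predicate: does this evidence prove a fault?

@dataclass
class Operator:
    address: str
    restaked_eth: float
    services: Dict[str, AVSSpec] = field(default_factory=dict)

    def opt_in(self, avs: AVSSpec) -> None:
        # Registration condition: the operator must meet the AVS's stake floor.
        if self.restaked_eth < avs.min_stake:
            raise ValueError(f"{avs.name}: need {avs.min_stake} ETH restaked")
        self.services[avs.name] = avs

    def slash(self, avs_name: str, evidence: dict) -> float:
        # Slashing condition: remove a fraction of restaked ETH if the
        # AVS-defined violation predicate holds for the submitted evidence.
        avs = self.services[avs_name]
        if not avs.is_violation(evidence):
            return 0.0
        penalty = self.restaked_eth * avs.slash_fraction
        self.restaked_eth -= penalty
        return penalty

op = Operator("0xabc", restaked_eth=32.0)
op.opt_in(AVSSpec("toy-da", min_stake=1.0, payment_per_task=0.01,
                  slash_fraction=0.5,
                  is_violation=lambda e: e.get("withheld", False)))
print(op.slash("toy-da", {"withheld": True}))  # 16.0 ETH slashed
```

The design point is simply that registration, payment, and slashing are programmable hooks that each AVS defines for itself against the common pool of restaked ETH.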
At the base of all of this we have the EigenLayer shared security system. Why do we call it shared security? Because the same stake, the same pool of validators, is sharing its security with a variety of different applications. There are really two things that power this shared security system. On one side we have ETH staking: people can stake ETH, and this provides a certain amount of economic security. Economic security means that if your service is not run correctly, you will be able to slash a certain amount of ETH. There's also a certain amount of decentralization: because you're borrowing the same set of node operators that run something like Ethereum, you can borrow that decentralization, and it gives you collusion resistance, since a distinct, neutral set of operators is participating to validate your service. These are the two dimensions of programmable trust created by the EigenLayer ecosystem.

Now, what can you do with this? One way to root the idea is an analogy from the pre-crypto, web2 world. Think back to 1995: if you wanted to build a web application, you had to build your own server stack, your own authentication, payments, database, everything, as well as whatever application you actually wanted. In 2023 that's not what you would do. You would use a cloud service, plus a bunch of software-as-a-service (SaaS) solutions on top, like Auth0, MongoDB, and Stripe, and when you want to build an end-user application you just concatenate these pieces correctly and build whatever you want, leading to a much higher velocity of innovation. How can we see an echo of this in the crypto world? You can think of the middlewares and AVSs, the actively validated services built on top of EigenLayer, as akin to those SaaS services, with end-user applications building on top of them.

So what I'll do next is give you an idea of what kinds of services can be built on top of EigenLayer. You can categorize them in many different ways; here are a few. Number one is rollup services. If you think about the Ethereum roadmap, one of the biggest things going on is the rollup-centric roadmap: the idea that there are going to be lots of rollups, which offload computation from Ethereum and are therefore able to scale the whole Ethereum stack. In the rollup era there are lots of rollup-adjacent services that may be interesting, and we're seeing a bunch of them being built. We're building the first one ourselves: EigenDA, the data availability service. The way to think about it is that when you offload computation, you still need a place to publish the inputs and outputs of that computation, because if I publish the inputs and outputs, anybody else can verify that I'm doing the computation correctly. That's called a data availability, or data publishing, system, and we're building EigenDA as a data availability system using EigenLayer.
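To see why publishing the inputs and outputs is what makes verification possible, here is a toy sketch. It is my own illustration under a single-hash-commitment assumption; the actual EigenDA design is far more involved (erasure coding and data spread across many operators) than this.

```python
import hashlib

def commit(data: bytes) -> str:
    # On-chain, a rollup would post only this short commitment; the bulk
    # data itself goes to the data availability layer.
    return hashlib.sha256(data).hexdigest()

def verify_availability(commitment: str, fetched: bytes) -> bool:
    # Anyone who can fetch the published data can check that it matches
    # the on-chain commitment, and then re-execute the rollup's state
    # transition on it to verify the offloaded computation.
    return hashlib.sha256(fetched).hexdigest() == commitment

batch = b"tx1;tx2;tx3"   # a rollup's transaction batch (toy data)
c = commit(batch)
assert verify_availability(c, batch)
assert not verify_availability(c, b"tampered")
```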
But there are lots of other rollup services emerging in the EigenLayer ecosystem. For example, rollups today have a single central sequencer which orders all the transactions; can we instead build decentralized sequencing infrastructure on top of EigenLayer? Rollups also take a certain lag before they settle onto Ethereum, so you may want faster bridges, and a variety of different bridges are being built on EigenLayer. Then there's the question of how to handle the MEV that occurs in the rollup ecosystem; you may want to build all kinds of interesting MEV services. For example, say I want to build an encrypted mempool for a rollup. That means you need a bunch of nodes participating in some kind of threshold cryptography, so that when you send a transaction, no single node is able to see it; it stays encrypted, and only after the transaction is included can it be decrypted. So you can build MEV services on EigenLayer. Another category that we've seen emerge is watchtowers. If you have not one or two or three optimistic rollups but thousands of them, which is where we're going, you have to make sure there are people actually watching what's going on in these rollups and triggering a fraud alert or a fault alert when something goes wrong. You need a neutral set of nodes to do this, so that's again a new category we're seeing on EigenLayer.

So that's rollup services. Another category, which I'm personally quite excited about, is the family of co-processors. How do you think about a co-processor? You're sitting on Ethereum, in the EVM programming environment, but you want access to other kinds of computation. Let's say you want to run an AI application and get its output back onto Ethereum; that would be an example of a co-processor. Maybe you want to run a program on a Linux machine, a program for which you made a commitment, and assert that running this program gives this output, and bring it all back to Ethereum. Or you want to run a SQL query on a major database and bring the inputs and outputs of that query back to Ethereum. Or you want to run a ZK service and bring the outputs of that cryptography back. All of these are examples of co-processors, and we're seeing many of them show up on EigenLayer.
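Here is a minimal sketch of the co-processor pattern, which is also essentially what a watchtower does: an operator posts a claimed result bound to its inputs, and anyone can re-execute to check it. The function and field names are hypothetical illustrations, not any specific AVS's API.

```python
import hashlib, json

def run_coprocessor(program, inputs):
    # Off-chain: run an arbitrary computation (here, a toy "SQL-like"
    # aggregation) and return a claim the chain can hold operators to.
    return {
        "input_hash": hashlib.sha256(json.dumps(inputs).encode()).hexdigest(),
        "output": program(inputs),
    }

def watchtower_check(program, inputs, claim):
    # A watchtower (or fraud prover) re-executes and raises an alert if
    # the claimed output does not match honest re-execution. Restaked
    # operators who signed a false claim would be slashable.
    return run_coprocessor(program, inputs) == claim

avg_balance = lambda rows: sum(r["balance"] for r in rows) / len(rows)
rows = [{"balance": 10}, {"balance": 30}]
claim = run_coprocessor(avg_balance, rows)
assert watchtower_check(avg_balance, rows, claim)
```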
The next category is new kinds of cryptographic methods. I'll talk later about a new service we're building in this area, but there are things like trusted execution environments: say I want to run a TEE committee. A trusted execution environment is a hardware device that carries a small trust assumption in its manufacturer; Intel, AMD, and the other hardware manufacturers all ship different TEE environments. Being able to access TEE networks from Ethereum is a very interesting use case. There are things like secret sharing: I want to take a secret, encode it, and send it through the network so that nobody has access to the secret, but it's spread all through the network. More general versions of that are secure multiparty computation and fully homomorphic encryption. We're seeing all of these new categories emerge on EigenLayer.

There are also proofs of various kinds that one can bring into the Ethereum ecosystem. What kinds of proofs am I talking about? Suppose you want to know where a node operator is located: a proof of location. You may want an attestation of the location of a certain node operator, and one way to do it is to have a decentralized group of nodes ping each other through the native peer-to-peer network and figure out what the ping latencies are; there are systems like this being built. Proof of machinehood, a new idea from Automata, is the idea that I want to know that somebody is logging in from a distinct device: a distinct machine, a distinct Apple phone, a distinct Android phone. You may want proofs of identity: I want to log into an HTTPS server and get the authenticated certificate into Ethereum; there's a bunch of protocols like Reclaim building this. All of these services are things you would want irrespective of the fact that the operators are Ethereum stakers; they just need a certain amount of economic security and a certain amount of decentralization.

But there's also the fact that, because we're restaking ETH, it's the Ethereum block proposers who are participating in the ecosystem, and you can start doing interesting things on the Ethereum side itself. For example, managing MEV on the Ethereum L1. You can also start thinking about event-driven actions: whenever a certain condition triggers, an action has to be taken. For example, whenever a position becomes liquidatable, that liquidation has to be executed. These kinds of event-driven actions improve the usability of these platforms massively. Imagine you're running a DeFi platform and you need to decide how much overcollateralization to require: it's basically the time to liquidation that determines the overcollateralization factor, so by reducing the time to liquidation you can run very tight systems.
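As a sketch of what such an event-driven action could look like, here is a toy lending protocol with a single collateral-ratio rule. All names and thresholds are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class Position:
    collateral: float   # value of collateral, in USD
    debt: float         # value of borrowed assets, in USD

def needs_liquidation(pos: Position, min_ratio: float = 1.5) -> bool:
    # Event-driven trigger: fire as soon as the collateral ratio drops
    # below the protocol's threshold.
    return pos.collateral / pos.debt < min_ratio

def on_price_update(positions, min_ratio=1.5):
    # In the talk's framing, a set of restaked operators watches for this
    # condition and is slashable if it fails to act, which is what lets
    # the protocol shrink its time-to-liquidation and therefore its
    # overcollateralization factor.
    return [p for p in positions if needs_liquidation(p, min_ratio)]

book = [Position(200, 100), Position(140, 100)]
assert on_price_update(book) == [Position(140, 100)]
```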
Another newly proposed system is the idea of based sequencing, from Justin Drake: the idea that the Ethereum L1 itself can order transactions for rollups. When you're doing that, one of the things you may want is fast pre-confirmations. If the block proposers on Ethereum have ETH staked, and they restake on EigenLayer, then they can start doing things like pre-confirmation: they make a certificate saying "I am going to include your transaction" and send it to you right away, in an instant, and if they later don't include it, they get slashed.

So these are different examples. This is not an exhaustive list, but it's the type of thing we're starting to see on EigenLayer. The way we think about it is that the systems built natively on EigenLayer are like the SaaS services, meaning they are infrastructure pieces, and end-user applications will concatenate a bunch of these pieces to build usable applications. When we talk about taking crypto to a billion users, one of the things you have to think about is what set of functionalities they need. That's where we think EigenLayer will play a role: as the core functionality layer, with applications mixing and matching these different pieces to get the end-user functionality you want.

So that's a brief overview of the scope of the project, and when we talk about being the coordination layer for open innovation, this is really what we mean. Many of these things we had no idea could be done on EigenLayer; they are all emergent. Lots of people here have come up with many of these different ideas, and it's amazing for us to just sit and see what can emerge once you put this coordination layer in place.

In the next couple of minutes, I'll briefly touch on the fundamentals of the shared security system. When people think about restaking, they're thinking something like: I'm reusing the same ETH, so isn't this some kind of leverage? I want to dispel some of these myths. What is EigenLayer actually doing? The first point is that shared security is strictly better. What do I mean by that? Forget that we're restaking from Ethereum to EigenLayer; just imagine that inside EigenLayer there's a certain amount of ETH staked, and it's supplied to all these services simultaneously. Say you have $1 billion restaked across 1,000 services; that's one world. In another world, each service has its own $1 million of stake. Which world is better? To attack any one service in the segregated world, you need only $1 million of capital, whereas when the same pool is restaked across all the services, you need $1 billion of capital as an attacker to attack any one of them. There is a certain rigidity, a certain hardening of security, when you pool security together. We see this everywhere: it's why nations coordinate. Cities don't have armies; nations have armies, and sometimes many nation-states coordinate to create alliances that work together. It's exactly the same phenomenon. Shared security is strictly better.

There is a little bit of a downside, though. With segregated security, you have something attributable to yourself: each service has its own $1 million. With pooling, things get mixed together, which is good, but you may also want attributable security. In the upcoming version of EigenLayer (not the version that is already live and launched) we are working on a design where you can also get attributable security. What do I mean by that? If you have $1 billion staked, there's potentially $1 billion to be slashed, and some service, maybe a bridge, says: I'm very security-critical, and if my service goes down or gets compromised, I need at least $100 million of insurance out of the slashed portion. Right now, what we do with the slashed portion is just burn it, like Ethereum does. In our V2, we can instead give a service a claim on a portion of those slashed funds; the ability to buy this is called insurance, and you pre-buy it. Now you not only have the pooled security, where attacking any one service requires acquiring the $1 billion of capital, but if your service does get attacked, you know exactly how much you can slash for yourself. This insurance is not over-provisioned, so it is always guaranteed that you will be able to slash that much money. That's a superpower: you get both the benefits of pooled security and the benefit of attributable security.
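Here is the arithmetic from the pooled-versus-segregated example above as a small worked sketch. The bridge figure is the talk's own $100 million example; the `oracle` entry is a hypothetical second buyer added for illustration.

```python
total_restake = 1_000_000_000   # $1B pooled across all services
n_services = 1_000

# Segregated world: each service has its own $1M, so attacking any one
# of them costs only $1M.
segregated_attack_cost = total_restake / n_services   # 1_000_000

# Pooled world: the same $1B backs every service, so corrupting any one
# service means acquiring the whole pool.
pooled_attack_cost = total_restake                    # 1_000_000_000

# Attributable security (the planned V2): services pre-buy a claim on
# slashed funds. Because claims must never be over-provisioned, they can
# sum to at most the total restake.
insurance = {"bridge": 100_000_000, "oracle": 50_000_000}
assert sum(insurance.values()) <= total_restake
```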
You can also start to see that there are economies of scale. If an application uses several AVSs built on EigenLayer, you don't have to pay 5x: if you're using five services, you don't have to buy insurance separately for each of the five; you just buy insurance once on EigenLayer. And finally, there is elastic scaling of security. Amazon calls EC2 "elastic compute": I don't know how much compute I'm going to need, so I go and buy it from a common pool, and because there's randomness in how much compute different people need, everyone buys the portion they want. There is a similar phenomenon in EigenLayer, the elastic scaling of security. There is a large pool, $1 billion or whatever the total amount is, sitting as security, and each of the many different services needs a randomly varying amount of it. Why? If I'm running an ETH-to-USD bridge, then during ETH/USD price volatility people might want to use it more; a BTC-to-SOL bridge or something else is also sitting on top of EigenLayer and wants a different amount of security at different times. Having a single layer through which you can buy shared security makes all of this much better.

From here, let me just run through the rest and talk about our timeline: what we're building and when we're going to deploy it. Earlier, we had divided the EigenLayer launch roadmap into three stages: stage one was stakers, stage two was going to be operators, and stage three was going to be services. We've now re-divided it in our current launch plan. Stage one, which is already live, is EigenLayer staking: you can restake your ETH natively or using liquid staking tokens. In stage two, instead of launching only for operators, we're going to launch the entire EigenLayer ecosystem: stakers, operators, people launching services, and EigenDA, our data availability service. All of it will go live except the economic part, payments and slashing, so that all the different sides can come together and start creating useful services. In stage three, we'll add payments and slashing. That's our current roadmap: stage one is already on mainnet; a stage two testnet is coming soon, definitely this quarter and hopefully much earlier, and it will go to mainnet next quarter; stage three follows that. We're really excited about all the new things that can be done across the different sides of the ecosystem: stakers, operators, people building new AVSs, rollups consuming DA. Lots of interesting things are happening there, and you'll hear about some of them today. Thank you so much for listening to the first talk.
## Summary

The speaker, Sreeram Kannan, introduces EigenLayer as a coordination engine for open innovation. He explains that EigenLayer brings together various parties to achieve a common goal, similar to how governments coordinate to enable secure, competitive markets. Kannan discusses the two prototypical positive-sum games: innovation and coordination. Innovation turns a non-resource into a resource (e.g., turning air into spectrum, or sand into silicon). Coordination is when multiple parties work together to achieve something greater than the sum of their individual parts. He uses the analogy of a country's government acting as a coordination layer on top of which a free-market economy operates, and he wants to apply this concept to digital platforms by supplying decentralized trust. EigenLayer allows users to restake their staked Ethereum (ETH), committing it to the additional conditions of new middleware services (AVSs). This enables a wide range of new services that can build upon each other, increasing innovation and security within the ecosystem. Kannan argues that shared security is strictly better than segregated security because it pools many services' security resources, making each service more resistant to attack; the trade-off is that individual services lose directly attributable security. He explains that EigenLayer's upcoming version will address this with a feature called insurance: instead of burning all slashed funds, as is done today, a service can pre-buy a guaranteed, never over-provisioned claim on a portion of them. Kannan outlines the three stages of EigenLayer's launch roadmap:

1. Stage one (already live): EigenLayer staking; users can restake their ETH natively or using liquid staking tokens.
2. Stage two: launching the entire ecosystem, including operators and services, except for payments and slashing.
3. Stage three: adding payments and slashing.

The speaker is excited about the new possibilities EigenLayer offers and invites the audience to explore the potential of this coordination engine for open innovation.
## Questions

1. What is the main focus of Sreeram Kannan's talk at the Restaking Summit?
2. What is a positive-sum game, and how do innovation and coordination fit into this concept?
3. How does the EigenLayer project bring together different parties to achieve a certain goal?
4. What does "stakers" refer to in the context of the EigenLayer ecosystem?
5. What is the difference between staking ETH on Ethereum and restaking it on EigenLayer?
6. How does the shared security system harden security by pooling stake across different services?
7. What is attributable security, and how will it be implemented in upcoming versions of EigenLayer?
8. What are the benefits for an application that uses multiple AVSs built on EigenLayer?
9. What is the current launch roadmap for EigenLayer, and what stages does it consist of?
10. When is stage two expected to go live on mainnet, according to the EigenLayer launch plan?
"## Speaking style of Sreeram Kannan ## Speech pattern of Sreeram Kannan uh good morning everybody Welcome to The reaking Summit it's my uh real pleasure to welcome you all here uh I'm sham uh I started this uh Ian layer project you know two and a half years back and it has been uh really exciting for us to see all the progress from there um what I'm going to do in today's uh talk is try to set the context for why we doing what we're doing what is our vision for what we want to build and how we can all partake in actually okay um how we can all work together in actually making this happen um as you can see the subtitle of this talk is the coordination engine for open Innovation um really this is where uh I want to give a little bit of uh orientation around this uh title um to start with uh you know if you think about it if you zoom out there are only certain kind kind of positive some games what's a positivism game you know when we're engaging in Collective action certain kinds of games are win- win or positive sum so that there's a net positive created out of these games and if you think about it fundamentally there are only two kinds of prototypical postive sum games number one is innovation Innovation is when you take something and make a resource out of a non-resource right you can take air and you make spectr from you take oil you make energy you take sand and you make silicon like these are innovations that transform like one kind of a resource to another you know a non-resource into a resource really um highly positive for everybody and Innovation is also like this that if I have an idea and you have an idea and we exchange it both of us have two ideas clearly positive sum unlike other resources which are finite and you cannot create more of so Innovation is one stere typical prototypical or postive some game there's another one which is also very powerful coordination coordination is when many parties come together and create something which is greater than the sum of the parts right if they work together if we all work together we can do something which is much bigger than what we can all do just going our own separate ways um and if you look at these two different types of posm games there's Echo of this structure everywhere that you look you know in working systems so you can think of like you know these two structures interlay and work together with each other in very powerful ways in different examples um and one example is let's say you know you take a country like the United States um on the you know or any you know successful country for that matter you'd see that basically the government acts as like a coordinatEigenLayer on top of which there is a free market a competitive economy which can be built on top of it which is you know akin to open Innovation and what we're aspiring to here is for to to do this for digital platforms essentially coordination via bringing decentralized trust who brings trust you know trust trust is created through like this decentralized Collective you know in our vision ethereum and EigenLayer work together to actually create this on top of which anybody can build arbitrary new digital platforms which can compose with each other and you know in our in our ecosystem we call this AVS you know actively validated services or you can also think of these as uh like a decentralized version of software as a service that we um we see in the cloud okay before I go in into explaining uh you know what all we can do with this kind of a platform I'm going to 
start with the basics like for some of you who may not be aware uh just a couple of minutes so I in I lay what we do is we bring together a variety of parties so I mentioned being a coordinatEigenLayer what does it mean to be a coordination layer you need to bring together different kinds of parties that work together to actually achieve a certain goal and in our case uh it is mainly the first side of this is stakers so what happens is in ethereum right what you do is you go and stake your e stake your e what does it mean to stake your e you put it into a contract and then make a promise that you will hold to the conditions and the Covenant of the ethereum protocol what EigenLayer does is to make this much more expansive so we call this reaking reaking is you stake your wreath and then you're adding on a additional conditions taking on additional covenants making additional promises uh that's what you you know it's now popularly called reaking in fact we're calling this the reaking summit but if you want to be really precise you would call it permissionless programmable staking that's really what it is what do I mean by that so you take the e that's staked in ethereum and then subject yourself to additional programmable sets of conditions so when you take it into I layer you're basically saying hey I'm going to run any kinds of new middleware Services actively validated Services whatever you want to call it but essentially what what you're doing is you're saying hey I'm I'm taking my eat and normally when I'm staking I'm promising that I'm running the ethereum protocol correctly but now I'm going to promise that I run all these Services correctly okay um and when somebody wants to build an ABS essentially they're talking we're talking about building two things number one they can build arbitrary software you know a container in which they can house and deploy arbitary software and a smart contract so I itself is a smart contract in ethereum but it allows anybody to build new smart contracts that talk to the IG ler contract any new middleware or AVS can build a new smart contract that talks to the a l contracts and this the AVS contract can SP ify the payment condition the slashing conditions and the registration conditions who can register how much do they get paid and how much should they get slashed so that's the overall structure of how you build uh how we are able to use Aon layer to actually take the underlying decentralized trust from ethereum and then Supply it to any kinds of new middlewares or services that can then be built on top you can think of this as the kind of open InnovatEigenLayer anybody can build these new new kinds of services okay so in the last slide I call this permissionless programmable staking right why is it programmable staking because you're staking and then other people permissionless can create these middlewares and services that can consume your staking and then create new kinds of services based on that so you can think of Ian lir as being a paradigm for programmable trust okay so you know at the base of all of this we have the I layer Shad security system they're calling it you another way of thinking about it is a Shad security system why are we calling it Shad security the same stake or the same pool of validators are actually sharing that security to a variety of different applications so that's another like model for thinking about this there are really two things that power this Shad security system on the one side we have the eat staking people can 
In the last slide I called this permissionless programmable staking. Why is it programmable staking? Because you're staking, and then other people can permissionlessly create middlewares and services that consume your staking and build new kinds of services on it. So you can think of EigenLayer as a paradigm for programmable trust. At the base of all of this we have the EigenLayer shared security system; that's another way of thinking about it. Why are we calling it shared security? Because the same stake, the same pool of validators, is sharing that security across a variety of different applications. There are really two things that power this shared security system. On one side we have ETH staking: people can stake ETH, and this provides a certain amount of economic security. Economic security means that if your service is not run correctly, you will be able to slash a certain amount of ETH. There's also a certain amount of decentralization: because you're borrowing the same set of node operators that run something like Ethereum, you can borrow the decentralization, and this gives you a certain amount of collusion resistance, since these are distinct operators, a neutral set, participating to validate your service. These are the two dimensions of programmable trust created by the EigenLayer ecosystem.

Now what can you do with this? You can start building a variety of different things, and one way to root this is an analogy from the pre-crypto, web2 world. Think back to 1995: if you wanted to build an application, you had to build your own server stack, your own authentication, payments, and database, everything yourself, as well as whatever application you actually wanted. That's what web application development looked like in 1995. In 2023 that's not what you would do: you would use a cloud service, with a bunch of software-as-a-service (SaaS) solutions on top, like Auth0, MongoDB, and Stripe, and when you want to build an end-user application you just concatenate these pieces correctly and build whatever you want, leading to a much higher velocity of innovation. How can we see an echo of this in the crypto world? You can think of the middlewares and AVSs, the actively validated services that can be built on top of EigenLayer, as something akin to these SaaS services, with end-user applications building on top of them.

So what I'll do next is give you a little bit of an idea of what kinds of services can be built on top of EigenLayer. You can categorize them in many ways; here are a few. Number one is rollup services. If you think about the Ethereum roadmap, one of the biggest things going on is the rollup-centric roadmap: the idea that there are going to be lots of rollups. These rollups offload computation from Ethereum and are therefore able to scale the whole Ethereum stack, and in the rollup era there are lots of rollup-adjacent services that may be interesting; we're seeing a bunch of them being built. We're building the first one ourselves: EigenDA, the data availability service. The way to think about this is that when you're offloading computation, you still need a place to publish the inputs and outputs of said computation; if I publish the inputs and outputs of the computation, anybody else can then verify that I'm doing the computation correctly. That's called a data availability, or data publishing, system. We're building EigenDA as a data availability system using EigenLayer.
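To see why publishing inputs and outputs is enough, here is a toy Python sketch of verification by re-execution; the same principle underlies the watchtower category mentioned below. The state-transition function and all names are invented for illustration.

```python
import hashlib
import json

# Intuition sketch: if the inputs and claimed outputs of an offloaded
# computation are published (data availability), any third party can
# re-execute and compare. All names here are illustrative.

def state_transition(state: dict, tx: dict) -> dict:
    # Toy "rollup" computation: apply a balance transfer.
    new_state = dict(state)
    new_state[tx["from"]] = new_state.get(tx["from"], 0) - tx["amount"]
    new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["amount"]
    return new_state

def commitment(state: dict) -> str:
    # Stand-in for a state root: hash of the canonicalized state.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def verify_claim(published_state: dict, published_txs: list, claimed_root: str) -> bool:
    # Anyone holding the published data can recompute and compare the claim.
    state = published_state
    for tx in published_txs:
        state = state_transition(state, tx)
    return commitment(state) == claimed_root

# A watchtower re-executes and raises a fault alert on mismatch.
genesis = {"alice": 10, "bob": 0}
txs = [{"from": "alice", "to": "bob", "amount": 3}]
honest_root = commitment(state_transition(genesis, txs[0]))
assert verify_claim(genesis, txs, honest_root)        # honest claim checks out
assert not verify_claim(genesis, txs, "0xdeadbeef")   # fraudulent claim detected
```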
But there are lots of other rollup services that we're seeing emerge in the EigenLayer ecosystem. For example, rollups today have a single central sequencer that orders all the transactions; can we instead build decentralized sequencing infrastructure on top of EigenLayer? Rollups take a certain lag before they settle onto Ethereum, so you may want faster bridges, and there's a variety of different bridges being built on EigenLayer. Or consider how to handle the MEV that occurs in the rollup ecosystem: you may want to build all kinds of interesting MEV services. For example, say I want to build an encrypted mempool for a rollup. That means you need a bunch of nodes participating in some kind of threshold cryptography, so that when you send a transaction no single node is able to see it; it stays encrypted, but after the transaction is included it can be decrypted. So you can build MEV services on EigenLayer. Another category that we've seen emerge is watchtowers. If you have not one or two or three optimistic rollups but thousands of them, which is where we're going, you have to make sure there are people actually watching what's going on in these rollups and triggering a fraud alert or a fault alert when something happens. You need a neutral set of nodes to do this, so that's again a new category we're seeing on EigenLayer. So that's rollup services.

Another category, which I'm personally quite excited about, is the family of co-processors. How do you think about a co-processor? You're sitting on Ethereum and, say, you want to run an AI application and get the output of that AI application onto Ethereum; this would be an example of a co-processor. On Ethereum you're in the EVM programming environment, but you may want access to all kinds of other outputs. Maybe you want to run a program on a Linux machine, a program for which you made a commitment, and then assert: if you run this program, this is the output, and bring it all back to Ethereum. That would be an example of a co-processor. Or you want to run a SQL query on a major database and bring the inputs and outputs of said query back to Ethereum. Or you want to run a ZK service and bring the outputs of such cryptography back. All of these could be examples of co-processors, and we're seeing many of them show up on EigenLayer.

The next category is new kinds of cryptographic methods. I'll talk later about a new service that we're building, but there are things like trusted execution environments: I want to run a trusted execution environment committee. A trusted execution environment is a hardware device that carries a little bit of a trust assumption in the manufacturer; Intel, AMD, and ARM all have different TEE environments, but being able to access TEE networks on Ethereum is a very interesting use case. There are things like secret sharing: I want to take a secret, encode it, and send it through the network so that nobody has access to the secret, yet it's spread all through the network. More general versions of that are secure multiparty computation and fully homomorphic encryption. We're seeing all of these new categories emerge on EigenLayer.
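As a concrete instance of the secret-sharing idea above, here is a minimal Shamir secret-sharing sketch over a small prime field. This is illustrative only (toy field size, Python's non-cryptographic `random` module); a real committee, e.g. for an encrypted mempool, would use vetted cryptographic libraries.

```python
import random

# Minimal Shamir secret sharing: a k-of-n threshold scheme, the building
# block behind threshold-encrypted mempools and MPC committees.
# Toy parameters; not production cryptography.

P = 2**61 - 1  # a Mersenne prime, our field modulus

def split(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    # Encode the secret as the constant term of a random degree-(k-1)
    # polynomial; each share is one evaluation point. Any k shares
    # reconstruct the secret; k-1 shares reveal nothing.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=42, n=5, k=3)   # spread the secret across 5 nodes
print(reconstruct(shares[:3]))        # any 3 of them recover 42
print(reconstruct(shares[1:4]))       # 42 again
```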
There are also other kinds of things one can do: bring proofs of various kinds into the Ethereum ecosystem. What kinds of proofs am I talking about? Suppose you want to know where a node operator is located: a proof of location. You may want an attestation that vouches for the location of a certain node operator, and one way to do it is to have a decentralized group of nodes ping each other through the native peer-to-peer network to figure out what the ping latencies are; there are systems like this being built. There's proof of machinehood, a new idea from Automata, which is basically about knowing how many distinct devices somebody is logging in from: a distinct machine, a distinct Apple phone, a distinct Android device. You may want proofs of identity: I want to log into an HTTPS server and get the authenticated certificate into Ethereum; there's a bunch of protocols like Reclaim building this. All of these services are things you would want irrespective of the fact that the operators are specifically Ethereum stakers; they just need a certain amount of economic security and a certain amount of decentralization.

But there's also the fact that, because we're restaking ETH, it's the Ethereum block proposers who are participating in the ecosystem, and you can start doing interesting things on the Ethereum side, for example managing MEV on the Ethereum L1. You can start thinking about event-driven actions: whenever certain conditions are triggered, an action has to be taken. For example, whenever there's a liquidation, that liquidation has to be executed. These kinds of event-driven actions can improve the usability of these platforms massively: imagine you're running a DeFi platform and you need to calculate how much overcollateralization you need. It's basically the time to liquidation that determines the overcollateralization factor, so by reducing the time to liquidation you can get very tight systems. Another newly proposed system, from Justin Drake, is the idea of based sequencing: the Ethereum L1 itself ordering transactions for rollups. When you're doing that, one of the things you may want is fast pre-confirmations, and if there is ETH staked by the block proposers on Ethereum, and it's restaked on EigenLayer, then you can start doing things like pre-confirmations: a proposer issues a certificate saying "I am going to include your transaction" and sends it to you right away, in an instant, and if they later don't include it, they get slashed.
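A sketch of that pre-confirmation flow, assuming a simple commit-then-check protocol; HMAC stands in for the real proposer signature (which would be ECDSA or BLS), and all names and message formats here are invented.

```python
import hmac
import hashlib

# Sketch of the pre-confirmation flow: a proposer signs a commitment to
# include a transaction, and anyone can later check the published block
# against that commitment to justify slashing. HMAC is a stand-in for a
# real signature scheme; all names are illustrative.

def sign(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def issue_preconf(proposer_key: bytes, tx_hash: str, slot: int) -> dict:
    # Proposer instantly commits to including tx_hash in the given slot.
    msg = f"{tx_hash}:{slot}".encode()
    return {"tx_hash": tx_hash, "slot": slot, "sig": sign(proposer_key, msg)}

def should_slash(preconf: dict, block_txs: list[str], proposer_key: bytes) -> bool:
    # A valid commitment plus a missing transaction is a slashable fault.
    msg = f"{preconf['tx_hash']}:{preconf['slot']}".encode()
    valid = hmac.compare_digest(preconf["sig"], sign(proposer_key, msg))
    return valid and preconf["tx_hash"] not in block_txs

key = b"proposer-secret"
promise = issue_preconf(key, tx_hash="0xabc", slot=100)
print(should_slash(promise, block_txs=["0xabc", "0xdef"], proposer_key=key))  # False: promise kept
print(should_slash(promise, block_txs=["0xdef"], proposer_key=key))           # True: slash
```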
So these are the different examples. It's not an exhaustive list, but these are the types of things we're starting to see on EigenLayer. The way we think about it is that the systems built natively on EigenLayer are like the SaaS services, which means they are infrastructure pieces, and end-user applications will then concatenate a bunch of these pieces to build usable applications. When we talk about how to take crypto to a billion users, one of the things you have to think about is what set of functionalities they need, and that's where we think EigenLayer will play a role: as the core functionality layer, with applications mixing and matching these different pieces to get the end-user functionality you want. So that's a brief overview of the scope of the project, and when we talk about being the coordination layer for open innovation, this is really what we mean. Many of these things, we had no idea they could be done on EigenLayer; they are all emergent. Lots of people here have come up with many of these different things, and it's amazing for us to just sit and see, once you allow this coordination layer, what all can emerge out of it.

In the next couple of minutes I'll briefly touch on the fundamentals of the shared security system. When people think about restaking, they're thinking something like: I'm reusing the same ETH, some kind of leverage, or some other concept, and I want to dispel some of these myths here. What is the core functionality of what EigenLayer is actually doing? The first point is that shared security is strictly better. What do I mean by that? Forget that we're restaking from Ethereum to EigenLayer; just imagine that inside EigenLayer there's a certain amount of ETH staked, and it's supplied to all these services simultaneously. One way to think about it: say you have $1 billion restaked to 1,000 services; that's one world. In another world, each service has its own $1 million of stake. Which world is better? To attack any one service in the second world you just need $1 million, whereas when the same pool is restaked across all these services, you need $1 billion of capital as an attacker to attack any one service. There is a certain rigidity, a certain hardening of security, when you pool security together. We see this everywhere; this is why nations coordinate. Cities don't have armies, nations have armies, and sometimes many nation-states coordinate to create alliances that work together. It's exactly the same phenomenon: shared security is strictly better.

There is a little bit of a downside, which is that with segregated security you have something attributable to yourself: each service has that $1 million, whereas here you get a mixing together, a pooling, which is good, but you may also want attributable security. In EigenLayer, in the upcoming version, not the version that is already live and launched, we are working on a design where you can also get attributable security. What do I mean by that? If you have $1 billion of stake, there's potentially $1 billion to be slashed, and some service, maybe a bridge, says: I'm very security-critical; if my service goes down or gets compromised, I need at least $100 million of insurance out of the slashed portion. Right now what we do with the slashed portion is just burn it, like Ethereum does. In our V2, we can instead give a service a portion of those slashed funds, and the ability to buy this is called insurance; you pre-buy it. Now you not only have the pooled security, where attacking any one service requires acquiring the $1 billion of capital, but if your service does get attacked, you know how much you can slash uniquely for yourself. This insurance is not over-provisioned, so it is always guaranteed that you will be able to slash that much money. That's a superpower: you get both the benefits of pooled security and the benefit of attributable security.
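The arithmetic behind both claims can be made explicit. A small sketch using the talk's illustrative numbers ($1 billion pooled across 1,000 services versus $1 million each, and a $100 million insurance purchase), plus the invariant that keeps attributable insurance sound:

```python
# Cost-of-attack comparison for pooled vs. segregated security, plus the
# invariant that makes attributable insurance sound: the sum of insured
# amounts can never exceed the total slashable stake. All numbers are the
# talk's illustrative figures.

TOTAL_STAKE = 1_000_000_000  # $1B pooled and restaked
N_SERVICES = 1_000

segregated_attack_cost = TOTAL_STAKE / N_SERVICES  # $1M buys an attack on one service
pooled_attack_cost = TOTAL_STAKE                   # must overwhelm the whole pool
print(f"segregated: ${segregated_attack_cost:,.0f}, pooled: ${pooled_attack_cost:,.0f}")

def buy_insurance(policies: dict[str, int], service: str, amount: int) -> dict[str, int]:
    # Attributable security: a service pre-buys a uniquely slashable amount.
    # Sound only if the total insured never exceeds what can be slashed.
    if sum(policies.values()) + amount > TOTAL_STAKE:
        raise ValueError("over-provisioned: insured amounts would exceed slashable stake")
    return {**policies, service: policies.get(service, 0) + amount}

policies: dict[str, int] = {}
policies = buy_insurance(policies, "critical-bridge", 100_000_000)  # the $100M example
print(policies)
```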
You can also start to see that there are economies of scale: if an application is using several AVSs built on EigenLayer, it doesn't have to pay five times over. If you're using five services, you don't have to buy insurance separately for each of those five services; you just buy insurance once on EigenLayer. So there is an economy of scale. And finally, there is elastic scaling of security. Amazon calls EC2 "Elastic Compute": I don't know how much compute I'm going to need, so I go and buy it from a common pool, and because there is randomness in how much compute different people need, each buys just the portion of compute they want. There is a similar phenomenon in EigenLayer: the elastic scaling of security. There is a large pool, $1 billion or whatever the amount, sitting as security, and there are lots of different services, each needing a randomly varying amount of security. Why? Say I'm running an ETH-to-USD bridge: when there's ETH/USD price volatility, people might want to use more of it. Meanwhile a BTC-to-SOL bridge or something else is also sitting on top of EigenLayer, so you want different amounts of security for each of these across time, and having a single layer through which you can buy shared security makes this much better.
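A sketch of that elastic buying, with hypothetical demand numbers: the shared pool only has to cover the peak of the aggregate demand, not the sum of each service's individual peak.

```python
# Elastic scaling of security: services buy time-varying amounts from one
# shared pool. The pool must cover the peak of the *sum* of demands, not
# the sum of each service's individual peak. Demands are hypothetical.

demand_by_epoch = {
    # epoch: {service: security demanded, in $M}
    1: {"eth-usd-bridge": 300, "btc-sol-bridge": 100},
    2: {"eth-usd-bridge": 700, "btc-sol-bridge": 150},  # ETH/USD volatility spike
    3: {"eth-usd-bridge": 200, "btc-sol-bridge": 600},  # demand shifts to the other bridge
}

# Segregated world: each service must provision its own all-time peak.
segregated = sum(
    max(epoch[svc] for epoch in demand_by_epoch.values())
    for svc in ("eth-usd-bridge", "btc-sol-bridge")
)

# Shared pool: only the peak aggregate demand must be covered.
pooled = max(sum(epoch.values()) for epoch in demand_by_epoch.values())

print(f"segregated provisioning: ${segregated}M")  # 700 + 600 = $1300M
print(f"shared pool needed:      ${pooled}M")      # max(400, 850, 800) = $850M
```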
What I'll do from here is not walk through all of these things; I want to talk about our timeline, what we're building and when we're going to deploy it. Earlier, we had divided the EigenLayer launch roadmap into three stages: stage one was stakers, stage two was going to be operators, and stage three was going to be services. We've now redivided it in our current launch plan. Stage one, which is already live, is EigenLayer staking: you can restake your ETH natively or using liquid staking tokens. In stage two, instead of launching only for operators, we're going to launch the entire EigenLayer ecosystem: stakers, operators, and services, including EigenDA, our data availability service. All of them will go live except the economic part, payments and slashing, so all the different sides can come together and start creating useful services. In stage three, we'll add payments and slashing. That's our current roadmap: stage one is already on mainnet, a stage two testnet is coming soon, definitely this quarter and hopefully much earlier, which will go to mainnet next quarter, with stage three following that. So that's the current launch roadmap of EigenLayer. We're really excited about all the new things that can be done across the different sides of the ecosystem: stakers, operators, people building new AVSs, rollups consuming EigenDA. Lots of interesting things are happening there, and you'll hear about some of them today. Thank you so much for listening to the first talk.

## Q&A

1. What is the title of Sreeram Kannan's talk? The title of Sreeram Kannan's talk is "The Coordination Engine for Open Innovation".
2. What kind of game is innovation, and how does it work? In the context of Sreeram Kannan's talk, innovation is described as a "positive sum game," or a "prototypical positive sum game." This means it is a type of game where all participants can benefit and create value together, rather than competing against each other in a zero-sum game. According to Sreeram, innovation works by taking something that was not previously valuable (a "non-resource") and transforming it into a resource that has value.
He gives the examples of turning air into spectrum or oil into energy as illustrations of this concept. Innovation is also described as a process where ideas are exchanged and combined to create new and better solutions, with both parties benefiting from the exchange. This is an example of a positive sum game because it creates more value overall than what existed before, rather than simply redistributing existing value. The idea that innovation is a positive sum game has implications for how we think about innovation and its potential impact on society. It suggests that by working together to create new solutions and ideas, individuals and organizations can collectively create more value and achieve greater success than they could alone.
open Innovation and what we're aspiring to here is for to to do this for digital platforms essentially coordination via bringing decentralized trust who brings trust you know trust trust is created through like this decentralized Collective you know in our vision ethereum and EigenLayer work together to actually create this on top of which anybody can build arbitrary new digital platforms which can compose with each other and you know in our in our ecosystem we call this AVS you know actively validated services or you can also think of these as uh like a decentralized version of software as a service that we um we see in the cloud okay before I go in into explaining uh you know what all we can do with this kind of a platform I'm going to start with the basics like for some of you who may not be aware uh just a couple of minutes so I in I lay what we do is we bring together a variety of parties so I mentioned being a coordinatEigenLayer what does it mean to be a coordination layer you need to bring together different kinds of parties that work together to actually achieve a certain goal and in our case uh it is mainly the first side of this is stakers so what happens is in ethereum right what you do is you go and stake your e stake your e what does it mean to stake your e you put it into a contract and then make a promise that you will hold to the conditions and the Covenant of the ethereum protocol what EigenLayer does is to make this much more expansive so we call this reaking reaking is you stake your wreath and then you're adding on a additional conditions taking on additional covenants making additional promises uh that's what you you know it's now popularly called reaking in fact we're calling this the reaking summit but if you want to be really precise you would call it permissionless programmable staking that's really what it is what do I mean by that so you take the e that's staked in ethereum and then subject yourself to additional programmable sets of conditions so when you take it into I layer you're basically saying hey I'm going to run any kinds of new middleware Services actively validated Services whatever you want to call it but essentially what what you're doing is you're saying hey I'm I'm taking my eat and normally when I'm staking I'm promising that I'm running the ethereum protocol correctly but now I'm going to promise that I run all these Services correctly okay um and when somebody wants to build an ABS essentially they're talking we're talking about building two things number one they can build arbitrary software you know a container in which they can house and deploy arbitary software and a smart contract so I itself is a smart contract in ethereum but it allows anybody to build new smart contracts that talk to the IG ler contract any new middleware or AVS can build a new smart contract that talks to the a l contracts and this the AVS contract can SP ify the payment condition the slashing conditions and the registration conditions who can register how much do they get paid and how much should they get slashed so that's the overall structure of how you build uh how we are able to use Aon layer to actually take the underlying decentralized trust from ethereum and then Supply it to any kinds of new middlewares or services that can then be built on top you can think of this as the kind of open InnovatEigenLayer anybody can build these new new kinds of services okay so in the last slide I call this permissionless programmable staking right why is it programmable 
staking because you're staking and then other people permissionless can create these middlewares and services that can consume your staking and then create new kinds of services based on that so you can think of Ian lir as being a paradigm for programmable trust okay so you know at the base of all of this we have the I layer Shad security system they're calling it you another way of thinking about it is a Shad security system why are we calling it Shad security the same stake or the same pool of validators are actually sharing that security to a variety of different applications so that's another like model for thinking about this there are really two things that power this Shad security system on the one side we have the eat staking people can stake eat and this provides a certain amount of of Economic Security Economic Security means if you know that if your service is not run correctly you you will be able to slash a certain amount of e there's also a certain amount of decentralization you because you're borrowing the same set of node operators that you know run something like ethereum you can borrow the decentralization and this gives you a a certain amount of collusion resistance that these are distinct operators you know neutral set which which is actually participating to validate your service so these are the two dimensions of programmable trust that are uh created from the I lay ecosystem and now what can you do with this you can actually start uh building a variety of different things and one way to like root this thing is to take an analogy from like the pre crypto or the web2 world and you know you can think of in the cloud era the you know if if you think back to 1995 and you want to build an application you have to build your own like you know server stack you have to build your own you know uh authentication payments database everything yourself as well as building whatever application you want this is what you would have done if you wanted to do web application development in 1995 in 2023 that's not what you would do you would go basically use a cloud service you there is a bunch of software as a service solutions SAS Solutions on top like o like mongodb like um you know stripe all these things and then you know when you want to build an end user application you just concatenate these pieces correctly and then you can build whatever application you want leading to much higher velocity of innovation how can we kind of see an echo of this in the uh crypto world so you know one can start thinking about what kinds of you know the the middlewares and avss the actively validated services that can be built on top of won layer as something Akin into these SAS services and then end user applications can then build on top of these services so what I'll do next is give you like a little bit of idea of what kinds of services can be built on top of wag so you can categorize them in many different ways here are a few so number one is rollup services like categories of services so if you think about the ethereum road map one of the biggest things going on in the theum road map is the rollup Centric road map the idea that that is going going to be lots of rollups these rollups offload computation from ethereum and are able to therefore scale the whole ethereum uh stack and in the rollup era there's lots of rollup adjacent services that you know may be interesting and we're seeing a bunch of them being built you know we're building the first one ourselves Igan da the data availability 
service the way to think about this is when you're offloading computation you still need a place to publish the inputs and outputs of said computation you know if I publish the inputs and outputs of the computation anybody else can then verify that I'm doing the computation correctly so that's called a data availability or a data publishing system we're building Igan da as a data availability system on using IG layer but there's lots of other rollup services that we're seeing emerging in uh on the I lay ecosystem for example uh rollups have a single Central sequencer which orders all the transactions can we instead build a decentralized sequencing infrastructure on top of Ian layer rollups take a certain lag before they settle onto ethereum you may want faster Bridges and there's a variety of different Bridges being built on Ian lir when how to handle the me that occurs in the um rollup ecosystem you may want to build all kinds of interesting me services for example I want to say build an encrypted mol for a rollup so which means you need a bunch of nodes these node needs needs to participate in some kind of threshold cryptography so that when you send a transaction no one node is able to actually see the transaction it's encrypted but then after the transaction is included then it you can actually decrypt it so you can build me services on EigenLayer and another category that we've seen emerge is watchtowers you know if you have not one or two or three optimistic rollups but thousands of optimistic rollups which is there we going towards you have to make sure that there are people who are actually watching what's going on in these rollups and trigger a fraud alert or a fault alert when such a thing happens you need a neutral set of nodes to do this so again you know a new category that we're seeing on ier so this is rollup Services another category which I I'm quite excited about personally is the family of co-processors how do you think about a co-processor you're sitting on ethereum and then let's say you want to run an AI application and then get the output of such AI application onto ethereum this would be an example of a co-processor you know you on ethereum you're in the evm programming environment but I want to access running all kinds of other outputs you know maybe you want to run a Linux machine uh and a program you know for which you made a commitment and then you want to say that hey if you run this program then this is the output and then bring it all back to ium to be an example of a co-processor you want to run a database a SQL query on a major database and then you want to say the inputs the outputs of said SQL query you want to bring it back to ethereum you want to run like a ZK service and then you want to bring you know the outputs of such cryptography all of these could be examples of co-process we're seeing many of these uh show up on ION layer the next category is you know new kinds of cryptographic methods um you know I'll talk about the ioner service which is a new uh service that we are building later but there are things like trusted execution environments I want to run like a trusted execution environment committee a trusted execution environment is a hardware device which has certain kinds of you know uh there is a little bit of trust assumption in the manufacturer like Intel and uh AMD and and Android all of these different Hardware manufacturers have different te environments but you know to be able to access te networks on you know on ethereum is a very 
interesting use case you know things like secret sharing I want to take a secret and encode it and send it through the network so that nobody has access to the secret but it's spread all through the network um you know more more General version of that is the secure multiparty computation or you know fully homomorphic encryption we're seeing all of these new categories emerge on EigenLayer um there's also other kinds of things that one can do um you know bring proofs of various kinds into uh the ethereum ecosystem what kinds of proofs am I talking about suppose you want to know like where a node operator is located a prove of location you may want to get uh an attestation that basically promises what the prove of location of a certain uh node operator is and and one way to do it is have a decentralized group of nodes which ping each other through the native peer-to-peer Network to actually then figure out what the Ping latencies are you you know there are systems like this being built proof of mashhood which is a new kind of idea from automata which is basically the idea that I want to know like how many distinct devices that you know somebody is logging in from a distinct machine a distinct Apple phone or a distinct Android uh you want to have proofs of identity I want to log into a https server and then you know get the authenticated certificate into ethereum you know there's a bunch of protocols like reclaim building this um there's also you know so all these other services are things you would want irrespective of the fact that these are particularly ethereum stakers right they need a certain amount of Economic Security they need a certain amount of decentralization but there's also the fact that because we're doing restating of e it's the ethereum block proposal that are participating in the ecosystem and you can start doing interesting things on the uh on the ethereum side for example managing me on the ethereum L1 you can start thinking about event driven actions whenever certain sets of things are triggered you have to actually you know for example whenever there's a liquidation then that liquidation has to be taken and these these kinds of event driven actions for example improve the usability of these platforms massively because you know imagine that like you're running a def platform and you need to calculate the time to uh you know you need to calculate how much over collateralization you you need it's basically the time to liquidation which is actually determining the over collateralization factor and by reducing the time to liquidation you can actually get very tight systems um another system which is you know new newly proposed is the idea of based sequencing where like you know from Justin Drake the idea that ethereum L1 itself can actually do uh ordering transactions for rollups but when you're doing that one of the things you may want to do is how do you get like fast pre-confirmation and if there is ethereum Stak by the block proposers on ethereum then and and they're restak on agal then you could basically start doing things like pre-confirmation they make a certificate that hey I am going to include your transaction and send it to you right away in an instant and then later if they don't they get slashed so these are the different examples I this is not an exhaustive list but the but the types of things that we starting to see on on Ion lay and the way we think about it is the systems that build natively on EigenLayer are like the SAS Services which means they are 
Another newly proposed system is based sequencing, an idea from Justin Drake: Ethereum L1 itself can do the ordering of transactions for rollups. When you're doing that, one of the things you may want is fast pre-confirmations. If the block proposers on Ethereum have ETH staked, and that stake is restaked on EigenLayer, they can issue a certificate saying "hey, I am going to include your transaction" and send it to you right away, in an instant, and if they later fail to include it, they get slashed.
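Here is a toy Python sketch of that pre-confirmation flow under stated assumptions: the proposer signs a promise that a transaction will appear in a given slot, the user verifies it instantly, and a valid certificate plus a block without the transaction is exactly the evidence a slashing rule would act on. The keyed HMAC is a stand-in for a real BLS or ECDSA signature, and all names here are hypothetical.

```python
import hashlib
import hmac

PROPOSER_KEY = b"proposer-secret"  # stand-in for the proposer's signing key

def sign_preconf(tx_hash: str, slot: int) -> str:
    msg = f"{tx_hash}:{slot}".encode()
    return hmac.new(PROPOSER_KEY, msg, hashlib.sha256).hexdigest()

def verify_preconf(tx_hash: str, slot: int, cert: str) -> bool:
    return hmac.compare_digest(cert, sign_preconf(tx_hash, slot))

def check_promise(tx_hash: str, slot: int, cert: str, block_txs: set[str]) -> str:
    if not verify_preconf(tx_hash, slot, cert):
        return "invalid certificate"
    # A valid certificate for a transaction missing from the promised block
    # is the slashing evidence.
    return "kept" if tx_hash in block_txs else "broken: slash the proposer"

cert = sign_preconf("0xabc", slot=123)           # handed to the user instantly
print(check_promise("0xabc", 123, cert, set()))  # block arrives without the tx
```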
So these are the different examples; it's not an exhaustive list, but it shows the types of things we're starting to see on EigenLayer. The way we think about it is that the systems built natively on EigenLayer are like the SaaS services: they are infrastructure pieces, and end-user applications will concatenate a bunch of these pieces to build usable applications. When we talk about how to take crypto to a billion users, one of the things you have to think about is what set of functionalities those users need. That's where we think EigenLayer will play a role as the core functionality layer, with applications mixing and matching these different pieces to get the end-user functionality they want.

So that's a brief overview of the scope of the project. When we talk about being the coordination layer for open innovation, this is really what we mean. Many of these things we had no idea could be done on EigenLayer; they're all emergent. Lots of people here actually came up with many of these different ideas, and it's amazing for us to just sit and watch what can emerge once you allow this coordination layer to exist.

Okay, so in the next couple of minutes I'll briefly touch on the fundamentals of the shared security system. When people think about restaking, they're thinking of something like reusing the same ETH, some kind of leverage, or some other such concept, and I want to dispel some of these myths and explain what EigenLayer is actually doing.

The first point is that shared security is strictly better. What do I mean by that? Forget for a moment that we're restaking from Ethereum to EigenLayer; just imagine that inside EigenLayer there's a certain amount of ETH staked, and it's supplied to all these services simultaneously. Say you have $1 billion restaked across 1,000 services; that's one world. In another world, each service has $1 million staked on its own. Which world is better? To attack any one service in the segregated world you need only $1 million of capital, whereas when the same pool is restaked across all the services, you need $1 billion of capital as an attacker to attack any one service. There is a certain rigidity, a certain hardening of security, when you pool security together. We see this everywhere: this is why nations coordinate. Cities don't have armies; nations have armies, and sometimes many nation states coordinate to create alliances that work together. It's exactly the same phenomenon. Shared security is strictly better.

There is a little bit of a downside, which is that with segregated security you have something attributable to yourself: each service has its own $1 million, whereas with pooling you get a certain amount of mixing together. Pooling is good, but you may also want attributable security. In the upcoming versions of EigenLayer, not the version that is already live and launched, we are working on a design where you can also get attributable security. What do I mean by that? If you have $1 billion staked, there's potentially $1 billion to be slashed, and some service, maybe a bridge, says: "I'm very security-critical; if my service goes down or gets compromised, I need at least $100 million of insurance out of the slashed portion." Right now, we simply burn the slashed portion, like Ethereum does; in our V2 we can instead give a service a portion of the slashed funds. The ability to buy this is called insurance, and you pre-buy it. Now you not only have pooled security, where attacking any one service requires acquiring the $1 billion of capital, but if your service does get attacked, you know exactly how much you can slash for yourself. This insurance is not over-provisioned, so it is always guaranteed that you will be able to slash that much money. That's a superpower: you get both the benefits of pooled security and the benefit of attributable security.

You can also start to see that there are economies of scale. If an application uses five AVSs built on EigenLayer, it doesn't have to pay five times over: it doesn't buy insurance separately for each of the five services, it buys insurance once on EigenLayer. And finally there is elastic scaling of security. Amazon calls EC2 "elastic compute": I don't know how much compute I'm going to need, so I go and buy it from a common pool, and because there's randomness in how much compute different people need, everyone buys the portion they want when they want it. There's a similar phenomenon in EigenLayer, the elastic scaling of security. There's a large pool, $1 billion or whatever the total, sitting as security, and lots of different services each need a randomly varying amount of it. Why? If I'm running an ETH-to-USD bridge, people may want to use it more when there's ETH/USD price volatility; a BTC-to-SOL bridge or something else sitting on top of EigenLayer will want different amounts of security across time. Having a single layer through which you can buy shared security makes this much better.
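A minimal Python sketch of the insurance accounting described above, under the assumption that sales from the shared pool are capped so coverage is never over-provisioned: the pool can always honor every service's attributable, slashable claim. `InsurancePool` is an illustrative model, not an EigenLayer contract.

```python
class InsurancePool:
    def __init__(self, total_stake: float):
        self.total_stake = total_stake    # e.g. $1B of pooled restaked value
        self.sold = 0.0                   # slashable coverage already committed
        self.coverage: dict[str, float] = {}

    def buy_coverage(self, service: str, amount: float) -> None:
        # Invariant: total coverage sold never exceeds the pool, so every
        # service is guaranteed to be able to slash what it bought.
        if self.sold + amount > self.total_stake:
            raise ValueError("pool cannot guarantee this much slashable value")
        self.sold += amount
        self.coverage[service] = self.coverage.get(service, 0.0) + amount

pool = InsurancePool(total_stake=1_000_000_000)
pool.buy_coverage("critical-bridge", 100_000_000)   # the $100M bridge example
print(pool.coverage, "uncommitted:", pool.total_stake - pool.sold)
```

Elastic scaling would be the same pool with coverage bought and released over time as each service's demand varies.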
Okay, from here I'm not going to talk through all of these slides; I want to talk about our timeline, what we're building and when we're going to deploy it. Earlier we had divided the EigenLayer launch roadmap into three stages: stage one was stakers, stage two was going to be operators, and stage three was going to be services. We've now re-divided it in our current launch plan. Stage one, which is already live, is EigenLayer staking: you can restake your ETH natively or using liquid staking tokens. In stage two, instead of launching only for operators, we're going to launch the entire EigenLayer ecosystem: stakers, operators, and services, including EigenDA, our data availability service, will all go live, except for the economic part, payments and slashing. All the different sides can come together and start creating useful services. In stage three we'll add payments and slashing. That's our current roadmap: stage one is already on mainnet; a stage two testnet is coming soon, definitely this quarter and hopefully much earlier, and it will go to mainnet next quarter; stage three follows after that. We're really excited about all the new things that can be done across the different sides of the ecosystem, stakers, operators, people building new AVSs, rollups consuming DA; lots of interesting things are happening there, and you'll hear about some of them today. Thank you so much for listening to the first talk.

3. What is another type of positive-sum game that exists in coordination?

In addition to innovation, another type of positive-sum game that exists in coordination is collaboration. Collaboration is when multiple parties work together toward a common goal, and the collective effort creates a more valuable result than any individual party could have achieved alone. It resembles innovation, but instead of turning a non-resource into a resource, collaborators combine their efforts so that the whole exceeds the sum of the parts. Examples of collaborative positive-sum games include:

* Open-source software development: individuals and organizations contribute code and expertise to develop a product that benefits everyone.
* Crowdsourced problem-solving: a community comes together to solve a complex problem, sharing knowledge and resources to achieve a common goal.
* Cooperative business ventures: multiple businesses partner to create a new market or product, sharing risks and rewards.

In these examples, the collective effort of multiple parties creates a positive-sum outcome that benefits everyone involved: the value created is greater than what any individual party could have achieved on its own, making collaboration another example of a positive-sum game in coordination.
4. How does Ethereum act as a coordinator for digital platforms?

According to the talk, Ethereum acts as a coordinator for digital platforms through its staking mechanism, which EigenLayer expands upon. In this context, "coordination" refers to bringing together different parties that work toward a shared goal. For Ethereum and EigenLayer, coordination is enabled by letting stakers participate in a decentralized trust system on top of which new kinds of services can be created. This enables new digital platforms that compose with each other, leading to a higher velocity of innovation. The talk compares this to the cloud era, where software-as-a-service (SaaS) solutions let applications be developed and deployed without building everything from scratch. In particular, EigenLayer enables a range of services on top of Ethereum, such as rollup services, co-processors, and new cryptographic methods. End-user applications can then build on these services, which interact with each other through the underlying infrastructure provided by Ethereum and EigenLayer. Overall, Ethereum acts as a coordinator for digital platforms by providing a decentralized trust system that enables the creation of new services and platforms on top of its network.
"## Speaking style of Sreeram Kannan ## Speech pattern of Sreeram Kannan uh good morning everybody Welcome to The reaking Summit it's my uh real pleasure to welcome you all here uh I'm sham uh I started this uh Ian layer project you know two and a half years back and it has been uh really exciting for us to see all the progress from there um what I'm going to do in today's uh talk is try to set the context for why we doing what we're doing what is our vision for what we want to build and how we can all partake in actually okay um how we can all work together in actually making this happen um as you can see the subtitle of this talk is the coordination engine for open Innovation um really this is where uh I want to give a little bit of uh orientation around this uh title um to start with uh you know if you think about it if you zoom out there are only certain kind kind of positive some games what's a positivism game you know when we're engaging in Collective action certain kinds of games are win- win or positive sum so that there's a net positive created out of these games and if you think about it fundamentally there are only two kinds of prototypical postive sum games number one is innovation Innovation is when you take something and make a resource out of a non-resource right you can take air and you make spectr from you take oil you make energy you take sand and you make silicon like these are innovations that transform like one kind of a resource to another you know a non-resource into a resource really um highly positive for everybody and Innovation is also like this that if I have an idea and you have an idea and we exchange it both of us have two ideas clearly positive sum unlike other resources which are finite and you cannot create more of so Innovation is one stere typical prototypical or postive some game there's another one which is also very powerful coordination coordination is when many parties come together and create something which is greater than the sum of the parts right if they work together if we all work together we can do something which is much bigger than what we can all do just going our own separate ways um and if you look at these two different types of posm games there's Echo of this structure everywhere that you look you know in working systems so you can think of like you know these two structures interlay and work together with each other in very powerful ways in different examples um and one example is let's say you know you take a country like the United States um on the you know or any you know successful country for that matter you'd see that basically the government acts as like a coordinatEigenLayer on top of which there is a free market a competitive economy which can be built on top of it which is you know akin to open Innovation and what we're aspiring to here is for to to do this for digital platforms essentially coordination via bringing decentralized trust who brings trust you know trust trust is created through like this decentralized Collective you know in our vision ethereum and EigenLayer work together to actually create this on top of which anybody can build arbitrary new digital platforms which can compose with each other and you know in our in our ecosystem we call this AVS you know actively validated services or you can also think of these as uh like a decentralized version of software as a service that we um we see in the cloud okay before I go in into explaining uh you know what all we can do with this kind of a platform I'm going to 
start with the basics like for some of you who may not be aware uh just a couple of minutes so I in I lay what we do is we bring together a variety of parties so I mentioned being a coordinatEigenLayer what does it mean to be a coordination layer you need to bring together different kinds of parties that work together to actually achieve a certain goal and in our case uh it is mainly the first side of this is stakers so what happens is in ethereum right what you do is you go and stake your e stake your e what does it mean to stake your e you put it into a contract and then make a promise that you will hold to the conditions and the Covenant of the ethereum protocol what EigenLayer does is to make this much more expansive so we call this reaking reaking is you stake your wreath and then you're adding on a additional conditions taking on additional covenants making additional promises uh that's what you you know it's now popularly called reaking in fact we're calling this the reaking summit but if you want to be really precise you would call it permissionless programmable staking that's really what it is what do I mean by that so you take the e that's staked in ethereum and then subject yourself to additional programmable sets of conditions so when you take it into I layer you're basically saying hey I'm going to run any kinds of new middleware Services actively validated Services whatever you want to call it but essentially what what you're doing is you're saying hey I'm I'm taking my eat and normally when I'm staking I'm promising that I'm running the ethereum protocol correctly but now I'm going to promise that I run all these Services correctly okay um and when somebody wants to build an ABS essentially they're talking we're talking about building two things number one they can build arbitrary software you know a container in which they can house and deploy arbitary software and a smart contract so I itself is a smart contract in ethereum but it allows anybody to build new smart contracts that talk to the IG ler contract any new middleware or AVS can build a new smart contract that talks to the a l contracts and this the AVS contract can SP ify the payment condition the slashing conditions and the registration conditions who can register how much do they get paid and how much should they get slashed so that's the overall structure of how you build uh how we are able to use Aon layer to actually take the underlying decentralized trust from ethereum and then Supply it to any kinds of new middlewares or services that can then be built on top you can think of this as the kind of open InnovatEigenLayer anybody can build these new new kinds of services okay so in the last slide I call this permissionless programmable staking right why is it programmable staking because you're staking and then other people permissionless can create these middlewares and services that can consume your staking and then create new kinds of services based on that so you can think of Ian lir as being a paradigm for programmable trust okay so you know at the base of all of this we have the I layer Shad security system they're calling it you another way of thinking about it is a Shad security system why are we calling it Shad security the same stake or the same pool of validators are actually sharing that security to a variety of different applications so that's another like model for thinking about this there are really two things that power this Shad security system on the one side we have the eat staking people can 
stake eat and this provides a certain amount of of Economic Security Economic Security means if you know that if your service is not run correctly you you will be able to slash a certain amount of e there's also a certain amount of decentralization you because you're borrowing the same set of node operators that you know run something like ethereum you can borrow the decentralization and this gives you a a certain amount of collusion resistance that these are distinct operators you know neutral set which which is actually participating to validate your service so these are the two dimensions of programmable trust that are uh created from the I lay ecosystem and now what can you do with this you can actually start uh building a variety of different things and one way to like root this thing is to take an analogy from like the pre crypto or the web2 world and you know you can think of in the cloud era the you know if if you think back to 1995 and you want to build an application you have to build your own like you know server stack you have to build your own you know uh authentication payments database everything yourself as well as building whatever application you want this is what you would have done if you wanted to do web application development in 1995 in 2023 that's not what you would do you would go basically use a cloud service you there is a bunch of software as a service solutions SAS Solutions on top like o like mongodb like um you know stripe all these things and then you know when you want to build an end user application you just concatenate these pieces correctly and then you can build whatever application you want leading to much higher velocity of innovation how can we kind of see an echo of this in the uh crypto world so you know one can start thinking about what kinds of you know the the middlewares and avss the actively validated services that can be built on top of won layer as something Akin into these SAS services and then end user applications can then build on top of these services so what I'll do next is give you like a little bit of idea of what kinds of services can be built on top of wag so you can categorize them in many different ways here are a few so number one is rollup services like categories of services so if you think about the ethereum road map one of the biggest things going on in the theum road map is the rollup Centric road map the idea that that is going going to be lots of rollups these rollups offload computation from ethereum and are able to therefore scale the whole ethereum uh stack and in the rollup era there's lots of rollup adjacent services that you know may be interesting and we're seeing a bunch of them being built you know we're building the first one ourselves Igan da the data availability service the way to think about this is when you're offloading computation you still need a place to publish the inputs and outputs of said computation you know if I publish the inputs and outputs of the computation anybody else can then verify that I'm doing the computation correctly so that's called a data availability or a data publishing system we're building Igan da as a data availability system on using IG layer but there's lots of other rollup services that we're seeing emerging in uh on the I lay ecosystem for example uh rollups have a single Central sequencer which orders all the transactions can we instead build a decentralized sequencing infrastructure on top of Ian layer rollups take a certain lag before they settle onto ethereum you may 
want faster Bridges and there's a variety of different Bridges being built on Ian lir when how to handle the me that occurs in the um rollup ecosystem you may want to build all kinds of interesting me services for example I want to say build an encrypted mol for a rollup so which means you need a bunch of nodes these node needs needs to participate in some kind of threshold cryptography so that when you send a transaction no one node is able to actually see the transaction it's encrypted but then after the transaction is included then it you can actually decrypt it so you can build me services on EigenLayer and another category that we've seen emerge is watchtowers you know if you have not one or two or three optimistic rollups but thousands of optimistic rollups which is there we going towards you have to make sure that there are people who are actually watching what's going on in these rollups and trigger a fraud alert or a fault alert when such a thing happens you need a neutral set of nodes to do this so again you know a new category that we're seeing on ier so this is rollup Services another category which I I'm quite excited about personally is the family of co-processors how do you think about a co-processor you're sitting on ethereum and then let's say you want to run an AI application and then get the output of such AI application onto ethereum this would be an example of a co-processor you know you on ethereum you're in the evm programming environment but I want to access running all kinds of other outputs you know maybe you want to run a Linux machine uh and a program you know for which you made a commitment and then you want to say that hey if you run this program then this is the output and then bring it all back to ium to be an example of a co-processor you want to run a database a SQL query on a major database and then you want to say the inputs the outputs of said SQL query you want to bring it back to ethereum you want to run like a ZK service and then you want to bring you know the outputs of such cryptography all of these could be examples of co-process we're seeing many of these uh show up on ION layer the next category is you know new kinds of cryptographic methods um you know I'll talk about the ioner service which is a new uh service that we are building later but there are things like trusted execution environments I want to run like a trusted execution environment committee a trusted execution environment is a hardware device which has certain kinds of you know uh there is a little bit of trust assumption in the manufacturer like Intel and uh AMD and and Android all of these different Hardware manufacturers have different te environments but you know to be able to access te networks on you know on ethereum is a very interesting use case you know things like secret sharing I want to take a secret and encode it and send it through the network so that nobody has access to the secret but it's spread all through the network um you know more more General version of that is the secure multiparty computation or you know fully homomorphic encryption we're seeing all of these new categories emerge on EigenLayer um there's also other kinds of things that one can do um you know bring proofs of various kinds into uh the ethereum ecosystem what kinds of proofs am I talking about suppose you want to know like where a node operator is located a prove of location you may want to get uh an attestation that basically promises what the prove of location of a certain uh node operator 
is and and one way to do it is have a decentralized group of nodes which ping each other through the native peer-to-peer Network to actually then figure out what the Ping latencies are you you know there are systems like this being built proof of mashhood which is a new kind of idea from automata which is basically the idea that I want to know like how many distinct devices that you know somebody is logging in from a distinct machine a distinct Apple phone or a distinct Android uh you want to have proofs of identity I want to log into a https server and then you know get the authenticated certificate into ethereum you know there's a bunch of protocols like reclaim building this um there's also you know so all these other services are things you would want irrespective of the fact that these are particularly ethereum stakers right they need a certain amount of Economic Security they need a certain amount of decentralization but there's also the fact that because we're doing restating of e it's the ethereum block proposal that are participating in the ecosystem and you can start doing interesting things on the uh on the ethereum side for example managing me on the ethereum L1 you can start thinking about event driven actions whenever certain sets of things are triggered you have to actually you know for example whenever there's a liquidation then that liquidation has to be taken and these these kinds of event driven actions for example improve the usability of these platforms massively because you know imagine that like you're running a def platform and you need to calculate the time to uh you know you need to calculate how much over collateralization you you need it's basically the time to liquidation which is actually determining the over collateralization factor and by reducing the time to liquidation you can actually get very tight systems um another system which is you know new newly proposed is the idea of based sequencing where like you know from Justin Drake the idea that ethereum L1 itself can actually do uh ordering transactions for rollups but when you're doing that one of the things you may want to do is how do you get like fast pre-confirmation and if there is ethereum Stak by the block proposers on ethereum then and and they're restak on agal then you could basically start doing things like pre-confirmation they make a certificate that hey I am going to include your transaction and send it to you right away in an instant and then later if they don't they get slashed so these are the different examples I this is not an exhaustive list but the but the types of things that we starting to see on on Ion lay and the way we think about it is the systems that build natively on EigenLayer are like the SAS Services which means they are infrastructured pieces and end user applications will then concatenate a bunch of these pieces to actually build usable applications and we talking about how do you take crypto to a billion users one of the things you have to think about is what what set of like functionalities do they need and that's that's where we think that IG layer will play a role is the core functionality layer and then applications will just mix and match these different pieces to then get the end us of functionality that you want okay so that's a brief overview of what the scope of the project is and we're talking about to be the coordinatEigenLayer for open Innovation this is really what we mean many of these things we had no idea that these could be done on EigenLayer so these 
are all emergent you know lots of people here have actually come up with many of these different things and it's it's amazing for us to just sit and see that once you allow this coordinatEigenLayer what all can then emerge out of it okay so um in the next couple of minutes what I'll do is briefly touch upon what is the fundamentals of the shad secur system um when when people think about reaking they're thinking about something like hey I'm reusing the same e some kind of Leverage or some other concept and I just want to dispel some of these myths here so what is the core functionality of what is actually uh what i l is actually doing the first point is that Shad security is strictly better what do I mean by that so let's forget that we're reaking from ethereum to EigenLayer let's just imagine that inside I there's a certain amount of each stake but it's supplied to all these Services simultaneously right so one way to think about it is let's say you have $1 billion restak to 1,000 services this is One World another world in which each service has $1 million state which world is better right to to attack anyone service in the other world you just need one uh 1 million whereas to attack anyone service B the the same pool is restak across all these Services you need 1 billion Capital as an attacker to go and attack any one service there is a certain rigidity a certain hardening of security when you pull security together we see this all all through the place right like this is why Nations coordinate you know you don't have City cities don't have armies Nations have armies sometimes even like many nation states coordinate to create alliances that actually work together it's exactly the same phenomenon Shad security is strictly better there is a little bit of downside in that which is in if you had segregated security you have something attributable to yourself each service has that 1 million whereas in this you get a little bit of mixing together of pooling which is good but you know if you also wanted attributable security what we're doing in ier in in the upcoming version not in the version that is already live and launched uh but in the upcoming versions we are working on a design where you can also get attributable security what do I mean by that you know if you have $1 billion stake there's potentially $1 billion to be slashed and some service you know maybe there's a bridge which says hey I'm very very security critical you know if my service goes down or something gets compromised I need at least $100 million of insurance of the slash uh portion so instead of taking the slash portion right now what we do is we just burn it like ethereum does in our V2 what we'll do is we can actually give you a portion of that slash funds and the ability to buy this is called insurance and you pre- buby it and now you not only have the pooled security to attack any one service you need to be able to acquire the $1 billion of capital but to if your service gets attacked you you know how much you can slash uniquely this this insurance is not over provisioned so you it is always guaranteed that you will be able to slash that muchoney so that's a superpower so you can both get the benefits of pool security and the benefit of attributable security you can also start seeing that there are economies of scale which is if you're if you're using an application the application is using several avss built on EigenLayer then you can you don't have to pay 5x if you're using five Services then you don't have to buy 
5. What is permissionless programmable staking, and why is it called so?

Permissionless programmable staking is a concept introduced in the context of the EigenLayer project. It refers to a staking mechanism in which users take on custom conditions and promises on top of their existing staked ETH. The term "permissionless" indicates that anyone can participate in this staking process without needing approval or permission from anyone else.
5. What is permissionless programmable staking, and why is it called so?

Permissionless programmable staking is a concept introduced in the context of the EigenLayer project. It refers to a staking mechanism where users can create custom conditions and promises on top of their existing staked Ethereum (ETH) balance. The term "permissionless" indicates that anyone can participate in this staking process without needing approval or permission from anyone else, in contrast to traditional staking mechanisms that may require specific permissions or authorizations. "Programmable" refers to the fact that users can define custom conditions and promises using smart contracts, which specify additional rules or constraints on top of the standard staking protocol. This allows a high degree of flexibility and customization, enabling users to create tailored staking arrangements that suit their specific needs. In essence, permissionless programmable staking lets users take their staked Ethereum balance and use it as a foundation for more complex and customized applications, while still benefiting from the security and decentralization provided by the underlying protocol.
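As a rough illustration of the "programmable" part, here is a sketch in Python pseudocode rather than Solidity. The `Service` and `Restaker` classes, their fields, and the toy oracle are hypothetical; they model the idea of opt-in, service-defined slashing conditions, not EigenLayer's actual contracts.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Service:
    """An AVS-like service: a custom slashing condition plus a penalty size."""
    name: str
    slashing_condition: Callable[[dict], bool]  # arbitrary, service-defined code
    slash_fraction: float                       # share of stake slashed on violation

@dataclass
class Restaker:
    staked_eth: float
    commitments: List[Service] = field(default_factory=list)

    def opt_in(self, service: Service) -> None:
        # "Permissionless": any restaker may take on any service's conditions;
        # no approval step exists in this model.
        self.commitments.append(service)

    def slash(self, service: Service, evidence: dict) -> float:
        # "Programmable": the penalty logic is code supplied by the service,
        # layered on top of the base staking rules.
        if service in self.commitments and service.slashing_condition(evidence):
            penalty = self.staked_eth * service.slash_fraction
            self.staked_eth -= penalty
            return penalty
        return 0.0

# Usage: a toy oracle service that slashes provably false price reports.
oracle = Service("price-oracle",
                 lambda ev: ev["reported"] != ev["actual"],
                 slash_fraction=0.5)
alice = Restaker(staked_eth=32.0)
alice.opt_in(oracle)
print(alice.slash(oracle, {"reported": 100, "actual": 101}))  # 16.0 ETH slashed
```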
6. What are the two dimensions of programmable trust created from the EigenLayer ecosystem?

The two dimensions of programmable trust created from the EigenLayer ecosystem are:

1. Economic security: provided by the staking mechanism, where users stake their ETH and commit to running a certain set of rules or conditions. If a service is not run correctly, a certain amount of ETH can be slashed, so an attacker must put a significant amount of ETH at risk to misbehave.
2. Decentralization: achieved by borrowing the same distinct, neutral set of node operators that run Ethereum, which provides collusion resistance. EigenLayer can supply this decentralization as a service to other applications.

These two dimensions are distinct from each other and together make up the programmable trust created through the EigenLayer ecosystem.
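A small sketch of how one might quantify those two dimensions when sizing a service's requirements. The `Operator` class, the 2/3 quorum, and the figures are hypothetical modeling choices, not an EigenLayer API.

```python
from dataclasses import dataclass

@dataclass
class Operator:
    stake_eth: float  # slashable stake backing the service (dimension 1)
    identity: str     # distinct party, for collusion resistance (dimension 2)

def economic_security(ops: list, quorum: float = 2 / 3) -> float:
    """Stake an attacker must control, and expose to slashing, to reach
    a corrupting quorum: the cost of corruption along dimension 1."""
    return sum(o.stake_eth for o in ops) * quorum

def decentralization(ops: list) -> int:
    """Number of distinct operators that would have to collude: more
    distinct, neutral parties means higher collusion resistance."""
    return len({o.identity for o in ops})

ops = [Operator(32.0, f"operator-{i}") for i in range(100)]
print(economic_security(ops))  # ~2133 ETH at risk to corrupt a 2/3 quorum
print(decentralization(ops))   # 100 distinct operators would need to collude
```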
7. What kinds of services can be built on top of EigenLayer, and how are they categorized?

According to the talk, EigenLayer enables a variety of services, which can be categorized as follows:

1. **Rollup services**: services adjacent to rollups, which offload computation from Ethereum to scale the stack. Examples include:
   * Data availability services (e.g., EigenDA): publish the inputs and outputs of offloaded computation so anyone can verify it
   * Decentralized sequencing infrastructure: replace a rollup's single central sequencer
   * Faster bridges: reduce the lag before rollups settle onto Ethereum
   * MEV services (e.g., an encrypted mempool using threshold cryptography): manage the MEV that occurs in the rollup ecosystem
   * Watchtowers: a neutral set of nodes that monitors many optimistic rollups and raises fraud or fault alerts (see the sketch after this list)
2. **Co-processors**: run computations off-chain and bring the results back to Ethereum. Examples include:
   * AI applications whose outputs are delivered on-chain
   * SQL queries against a major database, with the inputs and outputs attested back to Ethereum
   * ZK services whose cryptographic outputs are brought back on-chain
3. **New kinds of cryptographic methods**, such as:
   * Trusted execution environment (TEE) committees, with a residual trust assumption in hardware manufacturers such as Intel or AMD
   * Secret sharing: encode a secret and spread it through the network so no single node can read it
   * More general versions such as secure multiparty computation and fully homomorphic encryption
4. **Proofs of various kinds**:
   * Proof of location: attest to where a node operator is located, e.g., by measuring peer-to-peer ping latencies
   * Proof of machinehood (an idea from Automata): prove how many distinct devices someone is logging in from
   * Proofs of identity: bring an authenticated HTTPS certificate into Ethereum (e.g., protocols like Reclaim)
5. **Ethereum-side services**, enabled because restakers include Ethereum block proposers:
   * Managing MEV on the Ethereum L1
   * Event-driven actions: trigger an action automatically, e.g., executing a liquidation as soon as it becomes available, which shortens time to liquidation and so tightens over-collateralization requirements
6. **Based sequencing** (proposed by Justin Drake): the Ethereum L1 itself orders transactions for rollups, and restaked block proposers can issue fast pre-confirmations, getting slashed if they fail to include the transactions they certified.

These categories are not exhaustive, but they represent some of the services emerging on top of EigenLayer.
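As one concrete illustration, here is a hedged sketch of the monitoring loop a watchtower service from the first category might run. The helpers and the fault at block 2 are toy stand-ins; a real watchtower would read posted state roots from L1 and re-execute the rollup's published transaction data.

```python
# Toy watchtower loop. Everything here is a hypothetical illustration of
# the category described above, not any real rollup's API.

def fetch_claimed_state(rollup: str, block: int) -> str:
    # Stand-in: pretend one rollup posts a bad state root at block 2.
    return "bad-root" if (rollup == "rollup-b" and block == 2) else f"root-{block}"

def recompute_state(rollup: str, block: int) -> str:
    # Stand-in for re-executing the rollup's published transaction data.
    return f"root-{block}"

def watch(rollups: list, blocks: int) -> None:
    for block in range(blocks):
        for rollup in rollups:
            if fetch_claimed_state(rollup, block) != recompute_state(rollup, block):
                # A neutral, staked node attests to the fault; in the talk's
                # framing, false or missing alerts would be slashable.
                print(f"FAULT ALERT: {rollup} block {block} state-root mismatch")

watch(["rollup-a", "rollup-b"], blocks=4)  # alerts on rollup-b at block 2
```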
"## Speaking style of Sreeram Kannan ## Speech pattern of Sreeram Kannan uh good morning everybody Welcome to The reaking Summit it's my uh real pleasure to welcome you all here uh I'm sham uh I started this uh Ian layer project you know two and a half years back and it has been uh really exciting for us to see all the progress from there um what I'm going to do in today's uh talk is try to set the context for why we doing what we're doing what is our vision for what we want to build and how we can all partake in actually okay um how we can all work together in actually making this happen um as you can see the subtitle of this talk is the coordination engine for open Innovation um really this is where uh I want to give a little bit of uh orientation around this uh title um to start with uh you know if you think about it if you zoom out there are only certain kind kind of positive some games what's a positivism game you know when we're engaging in Collective action certain kinds of games are win- win or positive sum so that there's a net positive created out of these games and if you think about it fundamentally there are only two kinds of prototypical postive sum games number one is innovation Innovation is when you take something and make a resource out of a non-resource right you can take air and you make spectr from you take oil you make energy you take sand and you make silicon like these are innovations that transform like one kind of a resource to another you know a non-resource into a resource really um highly positive for everybody and Innovation is also like this that if I have an idea and you have an idea and we exchange it both of us have two ideas clearly positive sum unlike other resources which are finite and you cannot create more of so Innovation is one stere typical prototypical or postive some game there's another one which is also very powerful coordination coordination is when many parties come together and create something which is greater than the sum of the parts right if they work together if we all work together we can do something which is much bigger than what we can all do just going our own separate ways um and if you look at these two different types of posm games there's Echo of this structure everywhere that you look you know in working systems so you can think of like you know these two structures interlay and work together with each other in very powerful ways in different examples um and one example is let's say you know you take a country like the United States um on the you know or any you know successful country for that matter you'd see that basically the government acts as like a coordinatEigenLayer on top of which there is a free market a competitive economy which can be built on top of it which is you know akin to open Innovation and what we're aspiring to here is for to to do this for digital platforms essentially coordination via bringing decentralized trust who brings trust you know trust trust is created through like this decentralized Collective you know in our vision ethereum and EigenLayer work together to actually create this on top of which anybody can build arbitrary new digital platforms which can compose with each other and you know in our in our ecosystem we call this AVS you know actively validated services or you can also think of these as uh like a decentralized version of software as a service that we um we see in the cloud okay before I go in into explaining uh you know what all we can do with this kind of a platform I'm going to 
start with the basics like for some of you who may not be aware uh just a couple of minutes so I in I lay what we do is we bring together a variety of parties so I mentioned being a coordinatEigenLayer what does it mean to be a coordination layer you need to bring together different kinds of parties that work together to actually achieve a certain goal and in our case uh it is mainly the first side of this is stakers so what happens is in ethereum right what you do is you go and stake your e stake your e what does it mean to stake your e you put it into a contract and then make a promise that you will hold to the conditions and the Covenant of the ethereum protocol what EigenLayer does is to make this much more expansive so we call this reaking reaking is you stake your wreath and then you're adding on a additional conditions taking on additional covenants making additional promises uh that's what you you know it's now popularly called reaking in fact we're calling this the reaking summit but if you want to be really precise you would call it permissionless programmable staking that's really what it is what do I mean by that so you take the e that's staked in ethereum and then subject yourself to additional programmable sets of conditions so when you take it into I layer you're basically saying hey I'm going to run any kinds of new middleware Services actively validated Services whatever you want to call it but essentially what what you're doing is you're saying hey I'm I'm taking my eat and normally when I'm staking I'm promising that I'm running the ethereum protocol correctly but now I'm going to promise that I run all these Services correctly okay um and when somebody wants to build an ABS essentially they're talking we're talking about building two things number one they can build arbitrary software you know a container in which they can house and deploy arbitary software and a smart contract so I itself is a smart contract in ethereum but it allows anybody to build new smart contracts that talk to the IG ler contract any new middleware or AVS can build a new smart contract that talks to the a l contracts and this the AVS contract can SP ify the payment condition the slashing conditions and the registration conditions who can register how much do they get paid and how much should they get slashed so that's the overall structure of how you build uh how we are able to use Aon layer to actually take the underlying decentralized trust from ethereum and then Supply it to any kinds of new middlewares or services that can then be built on top you can think of this as the kind of open InnovatEigenLayer anybody can build these new new kinds of services okay so in the last slide I call this permissionless programmable staking right why is it programmable staking because you're staking and then other people permissionless can create these middlewares and services that can consume your staking and then create new kinds of services based on that so you can think of Ian lir as being a paradigm for programmable trust okay so you know at the base of all of this we have the I layer Shad security system they're calling it you another way of thinking about it is a Shad security system why are we calling it Shad security the same stake or the same pool of validators are actually sharing that security to a variety of different applications so that's another like model for thinking about this there are really two things that power this Shad security system on the one side we have the eat staking people can 
So in the last slide I called this permissionless programmable staking. Why is it programmable staking? Because you're staking, and then other people can permissionlessly create middlewares and services that consume your staking and build new kinds of services on top of it. So you can think of EigenLayer as a paradigm for programmable trust. At the base of all of this we have the EigenLayer shared security system; another way of thinking about it is as a shared security system. Why are we calling it shared security? Because the same stake, the same pool of validators, is sharing that security across a variety of different applications. There are really two things that power this shared security system. On one side we have ETH staking: people can stake ETH, and this provides a certain amount of economic security. Economic security means that if your service is not run correctly, you will be able to slash a certain amount of ETH. There's also a certain amount of decentralization: because you're borrowing the same set of node operators that run something like Ethereum, you can borrow the decentralization, and this gives you a certain amount of collusion resistance, since these are distinct operators, a neutral set, participating to validate your service. These are the two dimensions of programmable trust created by the EigenLayer ecosystem.

Now, what can you do with this? You can start building a variety of different things, and one way to ground this is to take an analogy from the pre-crypto, web2 world. Think back to 1995: if you wanted to build an application, you had to build your own server stack, your own authentication, payments, database, everything yourself, as well as building whatever application you wanted. In 2023 that's not what you would do. You would use a cloud service: there's a bunch of software-as-a-service (SaaS) solutions on top, like Auth0, MongoDB, and Stripe, and when you want to build an end-user application, you just concatenate these pieces correctly and build whatever application you want, leading to a much higher velocity of innovation. How can we see an echo of this in the crypto world? One can think of the middlewares and AVSs, the actively validated services that can be built on top of EigenLayer, as something akin to these SaaS services, and end-user applications can then build on top of them.

So what I'll do next is give you a little bit of an idea of what kinds of services can be built on top of EigenLayer. You can categorize them in many different ways; here are a few. Number one is rollup services. If you think about the Ethereum roadmap, one of the biggest things going on is the rollup-centric roadmap: the idea that there are going to be lots of rollups. These rollups offload computation from Ethereum and are therefore able to scale the whole Ethereum stack, and in the rollup era there are lots of rollup-adjacent services that may be interesting. We're seeing a bunch of them being built, and we're building the first one ourselves: EigenDA, the data availability service. The way to think about this is that when you're offloading computation, you still need a place to publish the inputs and outputs of said computation. If I publish the inputs and outputs of the computation, anybody else can then verify that I'm doing the computation correctly. That's called a data availability, or data publishing, system, and we're building EigenDA as a data availability system using EigenLayer.
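A minimal sketch of why published inputs and outputs enable verification: once the data is available, any third party can re-execute the agreed-upon computation and compare results. This illustrates only the verification idea from the talk, not EigenDA's actual protocol; the function names are made up for illustration.

```python
from hashlib import sha256
from typing import Callable

def verify_offloaded_computation(
    compute: Callable[[bytes], bytes],   # the agreed-upon program
    published_input: bytes,              # input published to the DA layer
    claimed_output: bytes,               # output the rollup claims
) -> bool:
    """Anyone holding the published data can re-run the computation and
    check the claim; without data availability this check is impossible."""
    return compute(published_input) == claimed_output

# Toy "state transition": hash the input batch.
toy_transition = lambda batch: sha256(batch).digest()
batch = b"block 42 transactions"
assert verify_offloaded_computation(toy_transition, batch, toy_transition(batch))
```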
But there are lots of other rollup services that we're seeing emerge in the EigenLayer ecosystem. For example, rollups today have a single central sequencer which orders all the transactions; can we instead build decentralized sequencing infrastructure on top of EigenLayer? Rollups also take a certain lag before they settle onto Ethereum, so you may want faster bridges, and there's a variety of different bridges being built on EigenLayer. Then there's the question of how to handle the MEV that occurs in the rollup ecosystem; you may want to build all kinds of interesting MEV services. For example, say I want to build an encrypted mempool for a rollup. That means you need a bunch of nodes, and these nodes need to participate in some kind of threshold cryptography, so that when you send a transaction, no single node is able to see it (it's encrypted), but after the transaction is included, it can be decrypted. So you can build MEV services on EigenLayer. Another category we've seen emerge is watchtowers. If you have not one or two or three optimistic rollups but thousands of them, which is where we're going, you have to make sure there are people actually watching what's going on in these rollups and triggering a fraud or fault alert when something happens, and you need a neutral set of nodes to do this. So again, a new category that we're seeing on EigenLayer. That's rollup services.

Another category, which I'm personally quite excited about, is the family of co-processors. How do you think about a co-processor? You're sitting on Ethereum, and let's say you want to run an AI application and get its output onto Ethereum; that would be an example of a co-processor. On Ethereum you're in the EVM programming environment, but you may want access to all kinds of other outputs. Maybe you want to run a program on a Linux machine, a program for which you made a commitment, and you want to attest that if you run this program, this is the output, and bring it all back to Ethereum. You may want to run a SQL query on a major database and bring the inputs and outputs of said query back to Ethereum. You may want to run a ZK service and bring the outputs of such cryptography back. All of these could be examples of co-processors, and we're seeing many of them show up on EigenLayer.

The next category is new kinds of cryptographic methods. I'll talk later about a new service that we're building, but there are things like trusted execution environments: you might want to run a TEE committee. A trusted execution environment is a hardware device which carries a bit of a trust assumption in the manufacturer; Intel, AMD, and the other hardware manufacturers each have different TEE environments. But being able to access TEE networks on Ethereum is a very interesting use case. There are things like secret sharing: I want to take a secret, encode it, and send it through the network so that nobody has access to the secret, yet it's spread all through the network. More general versions of that are secure multiparty computation and fully homomorphic encryption. We're seeing all of these new categories emerge on EigenLayer.
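A toy illustration of the secret-sharing idea just mentioned, using a simple XOR-based n-of-n scheme in which every share is needed to reconstruct and any strict subset of shares reveals nothing. Real deployments would use a threshold scheme such as Shamir's, where any t of n shares suffice; this sketch only demonstrates the "spread the secret through the network" property.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """n-of-n XOR sharing: each share alone is uniformly random."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    final = reduce(xor_bytes, shares, secret)   # secret XOR all random shares
    return shares + [final]

def reconstruct(shares: list[bytes]) -> bytes:
    """XORing all shares together recovers the secret."""
    return reduce(xor_bytes, shares)

payload = b"encrypted tx payload"
parts = split_secret(payload, 5)        # spread across 5 nodes
assert reconstruct(parts) == payload    # all 5 together recover it
```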
There are also other kinds of things one can do: bringing proofs of various kinds into the Ethereum ecosystem. What kinds of proofs am I talking about? Suppose you want to know where a node operator is located, a proof of location. You may want an attestation that vouches for the location of a certain node operator, and one way to do it is to have a decentralized group of nodes ping each other through the native peer-to-peer network to figure out what the ping latencies are; there are systems like this being built. Proof of machinehood, a new idea from Automata, is the idea that I want to know that somebody is logging in from a distinct device: a distinct machine, a distinct Apple phone, a distinct Android. You may want proofs of identity: I want to log into an HTTPS server and get the authenticated certificate into Ethereum; there's a bunch of protocols, like Reclaim, building this. All of these services are things you would want irrespective of the fact that the operators happen to be Ethereum stakers; they just need a certain amount of economic security and a certain amount of decentralization.

But there's also the fact that, because we're restaking ETH, it's the Ethereum block proposers who are participating in the ecosystem, and you can start doing interesting things on the Ethereum side, for example managing MEV on the Ethereum L1. You can start thinking about event-driven actions: whenever certain conditions trigger, an action has to be taken. For example, whenever there's a liquidation, that liquidation has to be executed. These kinds of event-driven actions improve the usability of these platforms massively. Imagine you're running a DeFi platform and you need to work out how much overcollateralization you need: it's basically the time to liquidation that determines the overcollateralization factor, and by reducing the time to liquidation you can get very tight systems. Another newly proposed system is the idea of based sequencing, from Justin Drake: the idea that the Ethereum L1 itself can order transactions for rollups. When you're doing that, one of the things you may want is fast pre-confirmations. If there is ETH staked by the block proposers on Ethereum, and they're restaked on EigenLayer, then you could start doing things like pre-confirmations: a proposer issues a certificate saying "I am going to include your transaction" and sends it to you right away, in an instant, and later, if they don't include it, they get slashed.
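A sketch of that pre-confirmation flow: a restaked proposer commits to inclusion, and anyone can later check the proposer's block and slash on a broken promise. Signature handling is reduced to a stub, and all names are illustrative assumptions rather than a real protocol.

```python
from dataclasses import dataclass

@dataclass
class Preconfirmation:
    proposer: str
    tx_hash: str
    slot: int     # the slot in which inclusion is promised

def check_and_slash(cert: Preconfirmation,
                    block_txs: set[str],
                    restaked: dict[str, int],
                    penalty: int) -> bool:
    """If the promised tx is missing from the proposer's block, slash the
    proposer's restaked ETH. Returns True when a slash occurred."""
    if cert.tx_hash in block_txs:
        return False                      # promise kept, nothing to do
    restaked[cert.proposer] -= min(penalty, restaked[cert.proposer])
    return True

stakes = {"proposer_1": 32}
cert = Preconfirmation("proposer_1", tx_hash="0xabc", slot=42)
assert check_and_slash(cert, block_txs={"0xdef"}, restaked=stakes, penalty=1)
assert stakes["proposer_1"] == 31         # the broken promise cost 1 ETH
```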
So these are the different examples. This is not an exhaustive list, but these are the types of things we're starting to see on EigenLayer. The way we think about it is that the systems building natively on EigenLayer are like the SaaS services: they are infrastructure pieces, and end-user applications will then concatenate a bunch of these pieces to build usable applications. When we talk about taking crypto to a billion users, one of the things you have to think about is what set of functionalities those users need, and that's where we think EigenLayer will play a role: as the core functionality layer, with applications mixing and matching these different pieces to get the end-user functionality you want. So that's a brief overview of the scope of the project, and when we talk about being the coordination layer for open innovation, this is really what we mean. Many of these things we had no idea could be done on EigenLayer; these are all emergent. Lots of people here have come up with many of these different things, and it's amazing for us to just sit and see what can emerge once you allow this coordination layer.

Okay, so in the next couple of minutes I'll briefly touch upon the fundamentals of the shared security system. When people think about restaking, they're thinking something like: hey, I'm reusing the same ETH, some kind of leverage, or some other concept, and I want to dispel some of these myths here. What is EigenLayer actually doing at its core? The first point is that shared security is strictly better. What do I mean by that? Let's forget that we're restaking from Ethereum to EigenLayer, and just imagine that inside EigenLayer there's a certain amount of ETH staked, supplied to all these services simultaneously. Say you have $1 billion restaked across 1,000 services; that's one world. In another world, each service has its own $1 million of stake. Which world is better? To attack any one service in the segregated world, you need just $1 million, whereas when the same pool is restaked across all these services, you need $1 billion of capital as an attacker to attack any one service. There is a certain rigidity, a certain hardening of security, when you pool security together. We see this everywhere; it's why nations coordinate. Cities don't have armies; nations have armies, and sometimes many nation-states coordinate to create alliances that work together. It's exactly the same phenomenon: shared security is strictly better.

There is a little bit of a downside, which is that with segregated security you have something attributable to yourself (each service has its own $1 million), whereas here everything gets pooled together. Pooling is good, but you might also want attributable security. In the upcoming versions of EigenLayer (not the version that is already live and launched) we are working on a design where you can also get attributable security. What do I mean by that? If you have $1 billion staked, there's potentially $1 billion to be slashed, and some service, maybe a bridge, says: I'm very security-critical; if my service goes down or gets compromised, I need at least $100 million of insurance out of the slashed portion. Right now, when we slash, we just burn the funds, like Ethereum does. In our V2, we can instead give your service a portion of those slashed funds; the ability to buy this is called insurance, and you pre-buy it. Now you not only have the pooled security, where attacking any one service requires acquiring the full $1 billion of capital, but if your service does get attacked, you know exactly how much you can slash for yourself. This insurance is not over-provisioned, so it is always guaranteed that you will be able to slash that much money. That's a superpower: you get both the benefits of pooled security and the benefits of attributable security.
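The arithmetic behind "strictly better", as a sketch: compare the capital an attacker needs against any single service in the segregated and pooled worlds, and note the no-over-provisioning invariant for attributable insurance. The numbers mirror the talk's example; the model itself is an illustrative assumption, not EigenLayer's accounting.

```python
TOTAL_STAKE = 1_000_000_000      # $1B pooled restake
N_SERVICES = 1_000

# Segregated world: the same capital is split one slice per service.
segregated_per_service = TOTAL_STAKE // N_SERVICES      # $1M
cost_to_attack_segregated = segregated_per_service

# Pooled world: attacking any one service means corrupting the shared
# operator set, which puts the entire pool at stake.
cost_to_attack_pooled = TOTAL_STAKE                     # $1B

assert cost_to_attack_pooled == N_SERVICES * cost_to_attack_segregated

# Attributable security (the V2 design): services pre-buy slices of the
# slashable pool, and insurance sold must never exceed the pool.
insurance_sold = {"bridge": 100_000_000, "oracle": 50_000_000}

def can_sell(amount: int) -> bool:
    return sum(insurance_sold.values()) + amount <= TOTAL_STAKE

assert can_sell(200_000_000)     # plenty of unallocated slashable stake left
```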
You can also start to see that there are economies of scale. If an application is using several AVSs built on EigenLayer, say five services, you don't have to pay 5x: you don't have to buy insurance separately for each of the five services, you just buy insurance once on EigenLayer, so there is an economy of scale. And finally, there is elastic scaling of security. Amazon calls EC2 "elastic compute": I don't know how much compute I'm going to need, so I buy it from a common pool, and since there is randomness in how much compute different people need, each buys the portion of compute they want. There's a similar phenomenon in EigenLayer, the elastic scaling of security. There's a large pool, $1 billion or whatever the total amount, sitting as security, and each of the many different services needs a randomly varying amount of security. Why? Say I'm running an ETH-to-USD bridge: when there's ETH/USD price volatility, people might want to use it more. A BTC-to-SOL bridge or something else is also sitting on top of EigenLayer, and you want different amounts of security for each of these across time. Having a single layer through which you can buy shared security makes this much better.

Okay, so from here let me just run through the rest; I'm not going to talk about all of these things. I want to talk about our timeline: what we're building and when we're going to deploy it. Earlier we had divided the EigenLayer launch roadmap into three stages: stage one was stakers, stage two was going to be operators, and stage three was going to be services. Instead, we've now re-divided our launch plan in a different way. Stage one, which is already live, is EigenLayer staking: you can restake your ETH natively or using liquid staking tokens. In stage two, instead of launching only for operators, we're going to launch the entire EigenLayer ecosystem: stakers, operators, people launching services, EigenDA (our data availability service), all of it will go live except the economic part, payments and slashing. All the different sides can come together and start creating useful services. And in stage three we'll add payments and slashing. That's our current roadmap: stage one is already on mainnet; we will have a stage two testnet coming soon, definitely this quarter and hopefully much earlier, which will go to mainnet next quarter, and stage three follows that. So that's the current launch roadmap of EigenLayer. We're really excited about all the new things that can be done across the different sides of the ecosystem: stakers, operators, people building new AVSs, rollups consuming DA. Lots of interesting things are happening there, and you'll hear about some of them today. Thank you so much for listening to the first talk.

8. How does shared security differ from segregated security, and what benefits does it offer?

Shared security is strictly better than segregated security. Shared security is achieved when the same pool of validators (in this case, Ethereum stakers) supplies its security to multiple applications simultaneously. In contrast, segregated security provides attributable security, where each application has its own unique security, represented by a certain amount of stake.
The benefits of shared security include:

* **Rigidity and hardening of security**: When the same pool of validators supplies security to multiple applications, it creates a harder target for attackers, as they would need to compromise the entire pool rather than just one application.
* **Economies of scale**: By sharing the same pool of validators across multiple applications, there is no need to purchase separate insurance for each application. This results in economies of scale and reduced costs.
* **Elastic scaling of security**: Shared security allows the same pool of validators to provide varying amounts of security to different applications as needed. This ensures that security is available when required.

Overall, shared security offers a more robust and scalable approach to security, providing increased security, reduced costs, and flexible scaling options.
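A sketch of the elastic-scaling benefit, in the spirit of the EC2 analogy from the talk: services lease fluctuating amounts of security from one common pool instead of each reserving its own peak amount. The allocator below is purely an illustrative assumption, not EigenLayer's mechanism.

```python
class SecurityPool:
    """One shared pool of slashable stake; services lease slices that can
    grow or shrink over time."""
    def __init__(self, total: int):
        self.total = total
        self.leases: dict[str, int] = {}

    def request(self, service: str, amount: int) -> bool:
        # Grant the new lease only if it fits alongside everyone else's.
        others = sum(v for k, v in self.leases.items() if k != service)
        if others + amount > self.total:
            return False                  # pool exhausted, request denied
        self.leases[service] = amount
        return True

pool = SecurityPool(total=1_000_000_000)
assert pool.request("eth_usd_bridge", 300_000_000)   # quiet market
assert pool.request("btc_sol_bridge", 200_000_000)
assert pool.request("eth_usd_bridge", 600_000_000)   # volatility spike: scale up
assert pool.leases["eth_usd_bridge"] == 600_000_000
```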
9. What is the current launch plan for EigenLayer, and how has it been divided into stages?

The launch plan has been revised from its original three-stage division (stakers, then operators, then services). Here's an overview of the current stages:

**Stage 1: EigenLayer staking (already live)**

* Users can restake their ETH natively or using liquid staking tokens.

**Stage 2: Entire ecosystem launch**

* This stage will include:
  + Stakers
  + Operators
  + Services (EigenDA, the data availability service, etc.)
* All of these components will launch except the economic part (payments and slashing).
* Everyone can come together and start creating useful services.

**Stage 3: Add payments and slashing**

* This stage adds payment and slashing functionality to the ecosystem. The talk gives little further detail, but it completes the economic side of the EigenLayer system.
this structure everywhere that you look you know in working systems so you can think of like you know these two structures interlay and work together with each other in very powerful ways in different examples um and one example is let's say you know you take a country like the United States um on the you know or any you know successful country for that matter you'd see that basically the government acts as like a coordinatEigenLayer on top of which there is a free market a competitive economy which can be built on top of it which is you know akin to open Innovation and what we're aspiring to here is for to to do this for digital platforms essentially coordination via bringing decentralized trust who brings trust you know trust trust is created through like this decentralized Collective you know in our vision ethereum and EigenLayer work together to actually create this on top of which anybody can build arbitrary new digital platforms which can compose with each other and you know in our in our ecosystem we call this AVS you know actively validated services or you can also think of these as uh like a decentralized version of software as a service that we um we see in the cloud okay before I go in into explaining uh you know what all we can do with this kind of a platform I'm going to start with the basics like for some of you who may not be aware uh just a couple of minutes so I in I lay what we do is we bring together a variety of parties so I mentioned being a coordinatEigenLayer what does it mean to be a coordination layer you need to bring together different kinds of parties that work together to actually achieve a certain goal and in our case uh it is mainly the first side of this is stakers so what happens is in ethereum right what you do is you go and stake your e stake your e what does it mean to stake your e you put it into a contract and then make a promise that you will hold to the conditions and the Covenant of the ethereum protocol what EigenLayer does is to make this much more expansive so we call this reaking reaking is you stake your wreath and then you're adding on a additional conditions taking on additional covenants making additional promises uh that's what you you know it's now popularly called reaking in fact we're calling this the reaking summit but if you want to be really precise you would call it permissionless programmable staking that's really what it is what do I mean by that so you take the e that's staked in ethereum and then subject yourself to additional programmable sets of conditions so when you take it into I layer you're basically saying hey I'm going to run any kinds of new middleware Services actively validated Services whatever you want to call it but essentially what what you're doing is you're saying hey I'm I'm taking my eat and normally when I'm staking I'm promising that I'm running the ethereum protocol correctly but now I'm going to promise that I run all these Services correctly okay um and when somebody wants to build an ABS essentially they're talking we're talking about building two things number one they can build arbitrary software you know a container in which they can house and deploy arbitary software and a smart contract so I itself is a smart contract in ethereum but it allows anybody to build new smart contracts that talk to the IG ler contract any new middleware or AVS can build a new smart contract that talks to the a l contracts and this the AVS contract can SP ify the payment condition the slashing conditions and the 
registration conditions who can register how much do they get paid and how much should they get slashed so that's the overall structure of how you build uh how we are able to use Aon layer to actually take the underlying decentralized trust from ethereum and then Supply it to any kinds of new middlewares or services that can then be built on top you can think of this as the kind of open InnovatEigenLayer anybody can build these new new kinds of services okay so in the last slide I call this permissionless programmable staking right why is it programmable staking because you're staking and then other people permissionless can create these middlewares and services that can consume your staking and then create new kinds of services based on that so you can think of Ian lir as being a paradigm for programmable trust okay so you know at the base of all of this we have the I layer Shad security system they're calling it you another way of thinking about it is a Shad security system why are we calling it Shad security the same stake or the same pool of validators are actually sharing that security to a variety of different applications so that's another like model for thinking about this there are really two things that power this Shad security system on the one side we have the eat staking people can stake eat and this provides a certain amount of of Economic Security Economic Security means if you know that if your service is not run correctly you you will be able to slash a certain amount of e there's also a certain amount of decentralization you because you're borrowing the same set of node operators that you know run something like ethereum you can borrow the decentralization and this gives you a a certain amount of collusion resistance that these are distinct operators you know neutral set which which is actually participating to validate your service so these are the two dimensions of programmable trust that are uh created from the I lay ecosystem and now what can you do with this you can actually start uh building a variety of different things and one way to like root this thing is to take an analogy from like the pre crypto or the web2 world and you know you can think of in the cloud era the you know if if you think back to 1995 and you want to build an application you have to build your own like you know server stack you have to build your own you know uh authentication payments database everything yourself as well as building whatever application you want this is what you would have done if you wanted to do web application development in 1995 in 2023 that's not what you would do you would go basically use a cloud service you there is a bunch of software as a service solutions SAS Solutions on top like o like mongodb like um you know stripe all these things and then you know when you want to build an end user application you just concatenate these pieces correctly and then you can build whatever application you want leading to much higher velocity of innovation how can we kind of see an echo of this in the uh crypto world so you know one can start thinking about what kinds of you know the the middlewares and avss the actively validated services that can be built on top of won layer as something Akin into these SAS services and then end user applications can then build on top of these services so what I'll do next is give you like a little bit of idea of what kinds of services can be built on top of wag so you can categorize them in many different ways here are a few so number one is 
rollup services like categories of services so if you think about the ethereum road map one of the biggest things going on in the theum road map is the rollup Centric road map the idea that that is going going to be lots of rollups these rollups offload computation from ethereum and are able to therefore scale the whole ethereum uh stack and in the rollup era there's lots of rollup adjacent services that you know may be interesting and we're seeing a bunch of them being built you know we're building the first one ourselves Igan da the data availability service the way to think about this is when you're offloading computation you still need a place to publish the inputs and outputs of said computation you know if I publish the inputs and outputs of the computation anybody else can then verify that I'm doing the computation correctly so that's called a data availability or a data publishing system we're building Igan da as a data availability system on using IG layer but there's lots of other rollup services that we're seeing emerging in uh on the I lay ecosystem for example uh rollups have a single Central sequencer which orders all the transactions can we instead build a decentralized sequencing infrastructure on top of Ian layer rollups take a certain lag before they settle onto ethereum you may want faster Bridges and there's a variety of different Bridges being built on Ian lir when how to handle the me that occurs in the um rollup ecosystem you may want to build all kinds of interesting me services for example I want to say build an encrypted mol for a rollup so which means you need a bunch of nodes these node needs needs to participate in some kind of threshold cryptography so that when you send a transaction no one node is able to actually see the transaction it's encrypted but then after the transaction is included then it you can actually decrypt it so you can build me services on EigenLayer and another category that we've seen emerge is watchtowers you know if you have not one or two or three optimistic rollups but thousands of optimistic rollups which is there we going towards you have to make sure that there are people who are actually watching what's going on in these rollups and trigger a fraud alert or a fault alert when such a thing happens you need a neutral set of nodes to do this so again you know a new category that we're seeing on ier so this is rollup Services another category which I I'm quite excited about personally is the family of co-processors how do you think about a co-processor you're sitting on ethereum and then let's say you want to run an AI application and then get the output of such AI application onto ethereum this would be an example of a co-processor you know you on ethereum you're in the evm programming environment but I want to access running all kinds of other outputs you know maybe you want to run a Linux machine uh and a program you know for which you made a commitment and then you want to say that hey if you run this program then this is the output and then bring it all back to ium to be an example of a co-processor you want to run a database a SQL query on a major database and then you want to say the inputs the outputs of said SQL query you want to bring it back to ethereum you want to run like a ZK service and then you want to bring you know the outputs of such cryptography all of these could be examples of co-process we're seeing many of these uh show up on ION layer the next category is you know new kinds of cryptographic methods um you know 
I'll talk about the ioner service which is a new uh service that we are building later but there are things like trusted execution environments I want to run like a trusted execution environment committee a trusted execution environment is a hardware device which has certain kinds of you know uh there is a little bit of trust assumption in the manufacturer like Intel and uh AMD and and Android all of these different Hardware manufacturers have different te environments but you know to be able to access te networks on you know on ethereum is a very interesting use case you know things like secret sharing I want to take a secret and encode it and send it through the network so that nobody has access to the secret but it's spread all through the network um you know more more General version of that is the secure multiparty computation or you know fully homomorphic encryption we're seeing all of these new categories emerge on EigenLayer um there's also other kinds of things that one can do um you know bring proofs of various kinds into uh the ethereum ecosystem what kinds of proofs am I talking about suppose you want to know like where a node operator is located a prove of location you may want to get uh an attestation that basically promises what the prove of location of a certain uh node operator is and and one way to do it is have a decentralized group of nodes which ping each other through the native peer-to-peer Network to actually then figure out what the Ping latencies are you you know there are systems like this being built proof of mashhood which is a new kind of idea from automata which is basically the idea that I want to know like how many distinct devices that you know somebody is logging in from a distinct machine a distinct Apple phone or a distinct Android uh you want to have proofs of identity I want to log into a https server and then you know get the authenticated certificate into ethereum you know there's a bunch of protocols like reclaim building this um there's also you know so all these other services are things you would want irrespective of the fact that these are particularly ethereum stakers right they need a certain amount of Economic Security they need a certain amount of decentralization but there's also the fact that because we're doing restating of e it's the ethereum block proposal that are participating in the ecosystem and you can start doing interesting things on the uh on the ethereum side for example managing me on the ethereum L1 you can start thinking about event driven actions whenever certain sets of things are triggered you have to actually you know for example whenever there's a liquidation then that liquidation has to be taken and these these kinds of event driven actions for example improve the usability of these platforms massively because you know imagine that like you're running a def platform and you need to calculate the time to uh you know you need to calculate how much over collateralization you you need it's basically the time to liquidation which is actually determining the over collateralization factor and by reducing the time to liquidation you can actually get very tight systems um another system which is you know new newly proposed is the idea of based sequencing where like you know from Justin Drake the idea that ethereum L1 itself can actually do uh ordering transactions for rollups but when you're doing that one of the things you may want to do is how do you get like fast pre-confirmation and if there is ethereum Stak by the block 
There are also other kinds of things one can do, such as bringing proofs of various kinds into the Ethereum ecosystem. What kinds of proofs am I talking about? Suppose you want to know where a node operator is located: a proof of location. You may want an attestation that vouches for the location of a certain node operator, and one way to do it is to have a decentralized group of nodes ping each other through the native peer-to-peer network and figure out what the ping latencies are; there are systems like this being built. Proof of Machinehood is a new kind of idea from Automata: the idea that I want to know how many distinct devices somebody is logging in from, whether that's a distinct machine, a distinct Apple phone, or a distinct Android device. You may want proofs of identity: I want to log into an HTTPS server and then get the authenticated certificate into Ethereum, and there's a bunch of protocols like Reclaim building this. All of these services are things you would want irrespective of the fact that the operators are Ethereum stakers; they just need a certain amount of economic security and a certain amount of decentralization.

But there's also the fact that because we're restaking ETH, it's the Ethereum block proposers that are participating in the ecosystem, and you can start doing interesting things on the Ethereum side. For example, managing MEV on the Ethereum L1. You can also start thinking about event-driven actions: whenever a certain set of conditions triggers, an action has to be taken; for example, whenever there's a liquidation, that liquidation has to be executed. These kinds of event-driven actions improve the usability of these platforms massively, because if you're running a DeFi platform, the over-collateralization factor you need is basically determined by the time to liquidation, so by reducing the time to liquidation you can run much tighter systems. Another newly proposed system is the idea of based sequencing, from Justin Drake: the idea that the Ethereum L1 itself can order transactions for rollups. When you're doing that, one of the things you may want is fast pre-confirmations, and if the block proposers on Ethereum have their ETH staked and restaked on EigenLayer, they can issue a certificate saying "I am going to include your transaction," send it to you right away, in an instant, and then later, if they don't include it, they get slashed.
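A minimal sketch of that pre-confirmation flow, with HMAC standing in for the proposer's real staking signature and a hypothetical slashing check; none of these names are actual EigenLayer interfaces:

```python
import hmac
import hashlib

PROPOSER_KEY = b"proposer-signing-key"  # stand-in for a real staking key

def sign_preconfirmation(tx_hash: str, slot: int) -> str:
    """Proposer's instant promise: 'I will include tx_hash in this slot'."""
    msg = f"{tx_hash}:{slot}".encode()
    return hmac.new(PROPOSER_KEY, msg, hashlib.sha256).hexdigest()

def should_slash(tx_hash: str, slot: int, signature: str,
                 included_txs: set[str]) -> bool:
    """After the slot: a valid promise plus a missing transaction is a
    slashable offense."""
    msg = f"{tx_hash}:{slot}".encode()
    valid = hmac.compare_digest(
        signature, hmac.new(PROPOSER_KEY, msg, hashlib.sha256).hexdigest())
    return valid and tx_hash not in included_txs

sig = sign_preconfirmation("0xabc", slot=42)
print(should_slash("0xabc", 42, sig, included_txs={"0xdef"}))  # True: promise broken
print(should_slash("0xabc", 42, sig, included_txs={"0xabc"}))  # False: promise kept
```

The promise only means something because breaking it, a valid signature plus a missing transaction, is provably slashable against restaked ETH.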
So these are different examples. This is not an exhaustive list, but it's the type of thing we're starting to see on EigenLayer. The way we think about it is that the systems built natively on EigenLayer are like the SaaS services: they are infrastructure pieces, and end-user applications will then concatenate a bunch of these pieces to build usable applications. When we talk about how to take crypto to a billion users, one of the things you have to think about is what set of functionalities those users need, and that's where we think EigenLayer will play a role: as the core functionality layer, with applications mixing and matching these different pieces to get the end-user functionality they want. OK, so that's a brief overview of the scope of the project, and when we talk about EigenLayer being the coordination engine for open innovation, this is really what we mean. Many of these things we had no idea could be done on EigenLayer; they are all emergent. Lots of people here came up with many of these different ideas, and it's amazing for us to just sit and see what can emerge once you allow this coordination layer to exist.

OK, so in the next couple of minutes I'll briefly touch upon the fundamentals of the shared security system. When people think about restaking, they're thinking about something like, "Hey, I'm reusing the same ETH," some kind of leverage, or some other concept, and I want to dispel some of these myths here. What is the core of what EigenLayer is actually doing? The first point is that shared security is strictly better. What do I mean by that? Forget that we're restaking from Ethereum to EigenLayer; just imagine that inside EigenLayer there's a certain amount of ETH staked, and it's supplied to all these services simultaneously. One way to think about it: say you have $1 billion restaked across 1,000 services; that's one world. In another world, each service has its own $1 million of stake. Which world is better? To attack any one service in the segregated world, you need just $1 million, whereas when the same pool is restaked across all these services, you need $1 billion of capital as an attacker to attack any one service. There is a certain rigidity, a certain hardening of security, when you pool security together. We see this everywhere: it's why nations coordinate. Cities don't have armies; nations have armies, and sometimes many nation states coordinate to create alliances that work together. It's exactly the same phenomenon. Shared security is strictly better.

There is a little bit of a downside, which is that with segregated security you have something attributable to yourself: each service has that $1 million to itself, whereas with pooling you get some mixing together. The pooling is good, but if you also want attributable security, then in the upcoming version of EigenLayer, not the version that is already live and launched, we are working on a design where you can get attributable security too. What do I mean by that? If you have $1 billion staked, there's potentially $1 billion to be slashed, and some service, maybe a bridge, says, "I'm very security critical; if my service goes down or gets compromised, I need at least $100 million of the slashed portion as insurance." Right now we just burn the slashed portion, like Ethereum does. In our V2, we can instead give a portion of the slashed funds to the affected service; the ability to buy this is called insurance, and you pre-buy it. Now you not only have the pooled security, where attacking any one service requires acquiring $1 billion of capital, but if your service does get attacked, you know exactly how much you can slash uniquely. This insurance is not over-provisioned, so it is always guaranteed that you will be able to slash that much money. That's a superpower: you get both the benefits of pooled security and the benefits of attributable security.
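The arithmetic behind both claims fits in a few lines; this is a toy model of the accounting, not EigenLayer's actual mechanism:

```python
# Pooled vs. segregated security: cost to attack any ONE service.
total_stake = 1_000_000_000   # $1B restaked across the pool
n_services = 1_000

segregated_attack_cost = total_stake // n_services   # $1M per service
pooled_attack_cost = total_stake                     # the whole pool
print(segregated_attack_cost, pooled_attack_cost)    # 1000000 1000000000

# Attributable security: services pre-buy a uniquely slashable portion.
# The sum of sold insurance may never exceed the slashable pool, so every
# claim is guaranteed to be payable.
insurance_sold: dict[str, int] = {}

def buy_insurance(service: str, amount: int) -> None:
    if sum(insurance_sold.values()) + amount > total_stake:
        raise ValueError("insurance would be over-provisioned")
    insurance_sold[service] = insurance_sold.get(service, 0) + amount

buy_insurance("bridge", 100_000_000)   # the security-critical $100M bridge
buy_insurance("oracle", 25_000_000)
```

The cap in buy_insurance is the "not over-provisioned" guarantee: a given dollar of slashable stake is never promised uniquely to two services at once.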
You can also start to see that there are economies of scale. If an application uses several AVSs built on EigenLayer, it doesn't have to pay five times over: if you're using five services, you don't have to buy insurance separately for each of them; you buy insurance once on EigenLayer. And finally, there is elastic scaling of security. Amazon calls EC2 "elastic compute": I don't know how much compute I'm going to need, so I buy it from a common pool, and because demand varies randomly across customers, everyone just buys the portion of compute they want. There's a similar phenomenon on EigenLayer, the elastic scaling of security. There's a large pool, $1 billion or whatever the total amount is, sitting as security, and each of the many different services needs a randomly varying amount of it. Why? If I'm running an ETH-USD bridge, people might want to use it more when there's ETH-USD price volatility, while a BTC-SOL bridge sitting on top of EigenLayer will want different amounts of security at different times. Having a single layer through which you can buy shared security makes all of this much better.

OK, so from here let me just run through the timeline of what we're building and when we're going to deploy it. Earlier we had divided the EigenLayer launch roadmap into three stages: stage one was stakers, stage two was going to be operators, and stage three was going to be services. We've now redivided it in our current launch plan. Stage one, which is already live, is EigenLayer staking: you can restake your ETH natively or using liquid staking tokens. In stage two, instead of launching only for operators, we're going to launch the entire EigenLayer ecosystem: stakers, operators, people launching services, and EigenDA, our data availability service, will all go live, except for the economic part, payments and slashing, so that all the different sides can come together and start creating useful services. In stage three, we'll add payments and slashing. So that's our current roadmap: stage one is already on mainnet; we'll have a stage two testnet coming soon, definitely this quarter and hopefully much earlier, which will go to mainnet next quarter; and stage three follows that. We're really excited about all the new things that can be done across the different sides of the ecosystem: stakers, operators, people building new AVSs, rollups consuming DA. Lots of interesting things are happening, and you'll hear about some of them today. Thank you so much for listening to the first talk.

## Interview with Sreeram Kannan

Q: I saw Vitalik's blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take?

Kannan: One of the things I think he wants to lay out is, "Hey, don't externalize, and don't create something assuming that if the protocol goes wrong, Ethereum is going to fork around it." I think that is a pretty reasonable position from Ethereum: you build protocols, and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as: do not overload Ethereum social consensus, which is used only for forking the chain, and don't assume that you can build a protocol that, because it's too big to fail, Ethereum will fork around. So that's how I read it. And I think it's a pretty obvious statement in our view. But it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is calls to deploy code that is not properly audited and doesn't have internal security controls, so that the Ethereum community then has to work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, liquid staking can already do, so I view restaking as a lesser risk than liquid staking.

Q: Can you expand on that?

Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle, or any of these things. So anything that restaking can do, liquid staking can already do, because you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but one that actually reduces the risk of that particular use case.

Q: Why do you think restaking is having a moment in the news?

Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting.
I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did, by creating this new category of validation, is say: if I can borrow the Ethereum trust network to do new things, I can build a new layer 1, I can build a new oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network. So it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive.

Q: I was just reading the news about EIP-7514, which is a short-term solution for solving the overcrowdedness of validators by limiting entries of new validators. How does that affect EigenLayer?

Kannan: I think mostly it means the same thing for EigenLayer that it means for liquid staking protocols: there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum, to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is that the total yield or return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar.

Q: You were making the point that EigenDA is just like an in-house AVS (actively validated service). Explain what it is.

Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is: a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first-principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story; of course, EigenLayer enables that. But then you go beyond that, and you see it's not just that you want the Ethereum nodes to participate. Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build a data availability system which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail are basically chains that have to do their own ordering, we built a system which doesn't have internal ordering; all ordering is done on Ethereum.
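To illustrate the "no internal ordering" point, here is a minimal sketch, assuming a hypothetical design in which a blob certificate simply references the Ethereum block and log index where it was confirmed, so ordering between blobs is inherited from Ethereum rather than decided by the DA nodes; the names are illustrative, not the EigenDA API:

```python
from dataclasses import dataclass
import hashlib

@dataclass(frozen=True)
class BlobCertificate:
    """Attestation that DA nodes stored a blob; it carries no sequence
    number of its own."""
    blob_hash: str       # commitment to the stored data
    eth_block: int       # Ethereum block that confirmed the certificate
    eth_log_index: int   # position within that block

def blob_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def ordering_key(cert: BlobCertificate) -> tuple[int, int]:
    """Blobs are ordered by where Ethereum confirmed them, so the DA
    layer never runs its own consensus on ordering."""
    return (cert.eth_block, cert.eth_log_index)

certs = [
    BlobCertificate(blob_hash(b"tx-batch-2"), eth_block=101, eth_log_index=0),
    BlobCertificate(blob_hash(b"tx-batch-1"), eth_block=100, eth_log_index=7),
]
for cert in sorted(certs, key=ordering_key):
    print(cert.eth_block, cert.blob_hash[:8])
```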
Q: Liquid restaking tokens: once your liquid staking tokens are locked on EigenLayer, they become illiquid?

Kannan: That's correct. The problem that the liquid restaking tokens are trying to solve is: can I have a restaked position and still keep it liquid? So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking, but other people are building liquid restaking on top of EigenLayer.

Q: I think your comment was that you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there are also people building on the back of what y'all are doing, is there a natural limit to how much Ethereum can support?

Kannan: This is similar to a question one could already ask at the application layer of Ethereum: how many applications, how many smart contracts, can be built on top of Ethereum? It's the same thing with EigenLayer, because people staking and running new applications, now much more flexibly and programmably with these AVSs on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, and ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.
"Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. 
And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","Why do you think restaking is having a moment in the news? According to Kannan, the person being interviewed in the provided text: ""I don't know. 
I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting.""" "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? 
Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. 
Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. You seem to be referring to the current discussion around restaking and its potential impact on Ethereum. You're expressing a neutral sentiment, indicating that you're glad people are engaged in the conversation but don't have a strong opinion one way or the other." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. 
I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. 
Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","Anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. Kannan's statement implies that EigenLayer and Liquid Staking Tokens (LSTs) share similar capabilities. However, people were unaware of how to utilize LSTs effectively. This suggests that LSTs had potential but required further innovation or awareness to be fully utilized." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. 
Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? 
Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. Kannan is explaining that users were finding creative ways to utilize the EigenLayer protocol, much like people use Ether (the cryptocurrency native to the Ethereum network) in DeFi applications such as lending and borrowing. The ""same set of DeFi parameters"" refers to the various financial instruments and strategies that users employ on these platforms." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. 
Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. 
So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. According to Kannan's statement, EigenLayer creates a new category by allowing validation to be done on top of the Ethereum trust network. This enables builders to create various new systems and applications, such as: 1. New layer 1 networks 2. Oracle networks 3. Data availability systems 4. Any other system on top of the Ethereum trust network By doing so, EigenLayer internalizes all the innovation back into Ethereum or aggregates it into Ethereum, rather than each innovation requiring a whole new system. In essence, this allows for a more efficient use of resources and a stronger ecosystem, as all these new systems and applications can benefit from the existing infrastructure and trust established by Ethereum." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. 
So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. 
Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there are also people building on the back of what y'all are doing, is there a natural limit to how much Ethereum can support? Kannan: This is a similar kind of question that one could ask at the application layer of Ethereum: applications on Ethereum are smart contracts, and how many smart contracts can be built on top of Ethereum? It's the same thing with EigenLayer, because the people staking and running new applications (now much more flexibly and programmably, as AVSs on top of EigenLayer) all contribute back to Ethereum. Their ETH staking increases rewards, and ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.
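His earlier yield point can be shown with back-of-the-envelope arithmetic (the numbers below are made up for illustration, not protocol data): if total Ethereum staking is capped, staking-only returns are bounded by issuance, while restaking layers AVS fee streams on top.

```python
# Back-of-the-envelope arithmetic with made-up numbers (not protocol data):
# if total Ethereum staking is capped, staking-only yield is bounded by
# issuance, while restaking adds AVS rewards on top of it.

base_eth_apr = 0.035                 # assumed Ethereum staking yield
avs_reward_aprs = [0.010, 0.006]     # assumed extra rewards from two AVSs

restaked_apr = base_eth_apr + sum(avs_reward_aprs)
print(f"staking only: {base_eth_apr:.1%}")   # -> staking only: 3.5%
print(f"restaked:     {restaked_apr:.1%}")   # -> restaked:     5.1%
```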
Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. 
But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","What makes EigenDA unique compared to other data availability systems? EigenDA is a highly scalable data availability system built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. 
What makes EigenDA unique compared to other data availability systems such as Celestia and Avail is that it doesn't require its own ordering; all ordering is done on Ethereum. In contrast, existing protocols like Celestia and Avail are built to be chains by themselves, which means they have their own ordering mechanism. EigenDA, on the other hand, leverages the existing consensus layer of Ethereum to handle ordering, allowing it to focus solely on providing high scalability for data availability. This design choice enables EigenDA to achieve greater decentralization and flexibility while minimizing the need for additional infrastructure, making it a more lightweight and efficient solution compared to traditional data availability systems." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. 
I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. 
Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. According to the text, EigenDA (Ethereum Data Availability system) is a highly scalable data availability system that is designed as the backbone of EigenLayer. It is built on the same ideas that underpin the Ethereum roadmap, particularly ""danksharding""." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. 
So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. 
Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. This quote from the conversation highlights a key distinction between EigenLayer's approach to building a data availability system and that of other protocols like Celestia and Avail. In essence, this means that: 1. **Ethereum-adjacent**: EigenLayer is designed to be an extension or complement to the Ethereum network, rather than a separate blockchain in its own right. 2. **First principles thinking**: EigenLayer's approach involves starting from scratch and designing a data availability system that leverages the strengths of Ethereum, such as its consensus mechanism and ordering of transactions. 3. **Not building a chain by itself**: Unlike Celestia and Avail, which are designed to be standalone blockchains with their own ordering mechanisms, EigenLayer focuses on providing a data availability layer that can tap into the existing ordering and consensus provided by Ethereum. This approach allows EigenLayer to benefit from the established trust and security of the Ethereum network, while also providing a more flexible and scalable solution for data availability." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. 
And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. 
Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. In the context of building a data availability system adjacent to Ethereum, Kannan explains that one key requirement is for Ethereum validators to participate in the system." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? 
Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. 
But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” You're referring to the additional benefits of using EigenLayer in conjunction with Ethereum. 
When you go beyond just getting Ethereum nodes to participate, you open up new possibilities for building more complex and innovative systems that can leverage the shared security and trust of the Ethereum network. In this context, Kannan mentions ""first principles thinking"" which implies a deep understanding of the underlying technologies and their implications. He's saying that by designing EigenLayer with the principles of Ethereum in mind, you can create a data availability system that doesn't require its own ordering mechanism because it relies on the existing consensus layer of Ethereum for ordering transactions. This allows for more efficient and scalable systems to be built, as they don't have to duplicate the same functionality (ordering) that's already provided by Ethereum. Instead, they can focus on providing other valuable services, such as data availability, that complement the capabilities of the Ethereum network. By doing so, EigenLayer becomes a powerful tool for building innovative applications and services that can tap into the collective security and trust of the Ethereum community, without having to reinvent the wheel or duplicate functionality." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. 
I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. 
So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. This statement by Kannan suggests that the consensus layer of Ethereum provides a shared security mechanism, which enables ordering and validation of transactions. This allows for building data availability systems (like EigenDA) that leverage this existing infrastructure, rather than requiring their own internal ordering mechanisms. In essence, instead of rebuilding or replicating the consensus layer within each new application or protocol built on top of Ethereum (such as Celestia and Avail), you can utilize the existing consensus layer to achieve data availability. This reduces redundancy, complexity, and the overhead associated with building separate consensus layers for every individual use case. This approach also enables greater scalability, as it allows multiple applications and protocols to share the same underlying infrastructure (the Ethereum consensus layer) and take advantage of its ordering and validation capabilities." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. 
I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. 
If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. This quote from Kannan suggests that EigenDA (the highly scalable data availability system developed by the EigenLayer team) has a unique architecture that leverages Ethereum's ordering and consensus mechanisms, rather than trying to replicate them. In other words, unlike protocols like Celestia and Avail, which are designed as standalone blockchain systems with their own ordering and consensus rules, EigenDA is built as an ""Ethereum-adjacent"" data availability layer that relies on Ethereum for its ordering and security guarantees. This design choice allows EigenDA to inherit the scalability and security benefits of Ethereum's architecture, while still providing a highly scalable and flexible data availability system that can be used in a variety of applications." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? 
Q: You were making the point that EigenDA is just like an in-house AVS (actively validated service). Can you explain what it is?

Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is: a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first-principles thinking, whereas Celestia and Avail are built to be chains by themselves.

If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and you see: it's not just that you want to get the Ethereum nodes to participate. Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build a data availability system which doesn't need its own ordering. Whereas all the other existing protocols, like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum.
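A hypothetical sketch of that division of labor, with invented names rather than EigenDA's real interfaces: the DA side only certifies that a blob of data is available, and the certificate's position in an Ethereum contract's append-only log supplies the ordering, so the DA layer never runs consensus of its own.

```python
# Sketch of "ordering lives on Ethereum", assuming made-up types throughout.

import hashlib
from dataclasses import dataclass, field

def commit(blob: bytes) -> str:
    """Stand-in commitment to a data blob."""
    return hashlib.sha256(blob).hexdigest()

@dataclass
class Certificate:
    blob_commitment: str
    signers: list[str]          # operators who attested the blob is available

@dataclass
class EthereumContract:
    """Stand-in for an L1 contract; appending to its log IS the ordering."""
    log: list[Certificate] = field(default_factory=list)
    def submit(self, cert: Certificate) -> int:
        self.log.append(cert)
        return len(self.log) - 1   # the blob's order, assigned by Ethereum

# A standalone DA chain (Celestia/Avail style) would have to run its own
# consensus to agree on this log; here the DA layer only signs availability.
l1 = EthereumContract()
cert = Certificate(commit(b"rollup batch #1"), signers=["op1", "op2", "op3"])
print(l1.submit(cert))   # 0: first in Ethereum's transaction order
```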
Q: Liquid restaking tokens: once your liquid staking tokens are locked on EigenLayer, they become illiquid?

Kannan: That's correct. The problem that the liquid restaking tokens are trying to solve is: can I have a restaked position and still keep it liquid? So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking ourselves, but other people are building liquid restaking on top of these restaked positions.
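A toy model of the liquid-restaking pattern he describes (built by third-party protocols, not EigenLayer itself; all names here are made up): the LST is locked into a restaked position, and a transferable receipt is minted against it so the position stays liquid.

```python
# Hypothetical vault illustrating the receipt-token idea, not any real LRT.

class LiquidRestakingVault:
    def __init__(self):
        self.locked = {}     # owner -> restaked LST amount (illiquid)
        self.receipts = {}   # owner -> transferable receipt balance

    def deposit(self, owner: str, lst_amount: float):
        # The LST itself is now locked in the restaked position...
        self.locked[owner] = self.locked.get(owner, 0.0) + lst_amount
        # ...but the receipt minted against it can move freely.
        self.receipts[owner] = self.receipts.get(owner, 0.0) + lst_amount

    def transfer_receipt(self, src: str, dst: str, amount: float):
        assert self.receipts.get(src, 0.0) >= amount
        self.receipts[src] -= amount
        self.receipts[dst] = self.receipts.get(dst, 0.0) + amount

vault = LiquidRestakingVault()
vault.deposit("alice", 10.0)
vault.transfer_receipt("alice", "bob", 4.0)   # position stays restaked
print(vault.locked["alice"], vault.receipts["alice"], vault.receipts["bob"])
# 10.0 6.0 4.0
```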
Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there are also people building on the back of what y'all are doing, is there a natural limit to how much Ethereum can support?

Kannan: This is a similar kind of question that one could already ask at the application layer of Ethereum: how many smart contracts can be built on top of Ethereum? It's the same thing with EigenLayer, because people staking and running new applications, now much more flexibly and programmably with these AVSs on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, and ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.
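The yield point from the EIP-7514 answer reduces to simple arithmetic, sketched below with purely illustrative numbers: if Ethereum's total stake is capped, the base staking yield is bounded, but restaking can layer AVS rewards on top of it.

```python
# Illustrative only: both rates below are assumptions, not market data.

def staker_apr(base_staking_apr: float, avs_rewards_apr: list[float]) -> float:
    """Total yield = protocol staking yield + whatever AVSs pay on top."""
    return base_staking_apr + sum(avs_rewards_apr)

base = 0.035                    # assumed ~3.5% protocol staking yield
avs = [0.010, 0.005]            # rewards from two hypothetical AVSs
print(f"{staker_apr(base, []):.3f}")    # 0.035: bounded by Ethereum staking
print(f"{staker_apr(base, avs):.3f}")   # 0.050: restaking adds AVS rewards
```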
## Annotated quotes from the interview

"Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum."

This quote suggests that EigenDA (the data availability system developed by the EigenLayer team) has an architecture that leverages Ethereum's ordering and consensus mechanisms rather than trying to replicate them. Unlike Celestia and Avail, which are designed as standalone blockchains with their own ordering and consensus rules, EigenDA is built as an "Ethereum-adjacent" data availability layer that relies on Ethereum for its ordering and security guarantees. This design choice lets EigenDA inherit the security of Ethereum's architecture while still providing a highly scalable and flexible data availability system.
"What happens if new validators can no longer enter the network as quickly as they used to?"

According to Kannan, if new validators can no longer enter the network as quickly as they used to (because EIP-7514 limits entries of new validators), the growth of LSTs (liquid staking tokens) will slow. There is currently a long entry queue for validators, and slowing the rate at which new validators can enter makes that queue drain more slowly. In the long term, if the total staking of Ethereum cannot grow, the yield that stakers earn is bounded by Ethereum staking rewards, whereas restaking offers the possibility of additional rewards on top.
"I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter."

Kannan's point is that EIP-7514, which limits the rate at which new validators enter Ethereum, affects EigenLayer much as it affects liquid staking protocols: the inflow of new validators, and therefore of new restakeable stake, slows down. It is a nuanced position: he acknowledges the growth limit EIP-7514 imposes while still seeing room for continued innovation on top of Ethereum's shared security, which he believes can accommodate many additional use cases over time, though not without limit.
"There's a long entry queue right now, and people don't want to wait that long."

Kannan notes that the long entry queue reflects high demand to run new validators; would-be validators have to wait in line to enter, which makes staking ETH slower and less appealing.
"And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer."

According to Kannan, limiting entries of new validators under EIP-7514 slows how quickly new validators can join, which in turn slows the growth of liquid staking tokens (LSTs) and, by extension, the stake available for restaking. He acknowledges, however, that this is a necessary conservative measure to prevent an overflow of validators that could strain the Ethereum consensus layer and compromise its security.
"Is EigenDA just like in-house AVS, actively validated service?"

Yes. In the interview, EigenDA is described as EigenLayer's own in-house AVS (actively validated service): a highly scalable data availability system the team built to serve as the backbone of EigenLayer's shared-security system.
"What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding."

Kannan is explaining that the team behind EigenLayer made a conscious decision to prioritize decentralization and shared security for the system's backbone. To achieve this, they built EigenDA, a highly scalable data availability system based on ideas from Ethereum's roadmap, specifically danksharding, an approach to scaling the chain's data capacity. The goal is a robust, flexible infrastructure for publishing and sharing data within the system, one that draws on the collective power of Ethereum validators while maintaining the decentralization and shared security that underpin EigenLayer.
"Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves."

This statement highlights a key difference in design philosophy. EigenDA is meant to be an Ethereum-adjacent data availability layer, leveraging the existing Ethereum network's consensus mechanism and transaction ordering. Celestia and Avail, by contrast, are designed as separate blockchains with their own consensus and ordering protocols; that gives them more control over how data is processed and stored, but it also requires a separate set of nodes and validators to operate. By building adjacent to Ethereum, EigenDA aims to:

1. Leverage the existing Ethereum network's security
2. Reduce the need for additional resources (e.g., nodes, validators) to bootstrap a new chain
3. Take advantage of Ethereum's established developer ecosystem and infrastructure

The trade-off is less control over ordering than a standalone chain like Celestia or Avail would have. The "first principles thinking" in the quote refers to designing such a system from scratch so that it works seamlessly with the existing Ethereum network while still delivering a scalable and secure data availability layer.
"If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that."

According to Kannan, a data availability system built adjacent to Ethereum benefits from having Ethereum validators participate, and EigenLayer is what enables that participation: restaked Ethereum validators can opt in to operating the new data availability layer.
Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. 
But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. 
But there's absolutely a limit.","But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Kannan explains that beyond just wanting Ethereum nodes to participate in a data availability system, there's more to consider. The idea is not just about using the Ethereum network for ordering and consensus, but also about building a system that leverages the existing infrastructure and security of Ethereum. The key point being made here is that Ethereum already has a built-in consensus mechanism and transaction ordering, so why build another system that needs to replicate those functions? By leveraging Ethereum's existing capabilities, it becomes possible to focus on other aspects of data availability, such as storage and retrieval, without needing to reinvent the wheel. In essence, Kannan is suggesting that instead of building a new, standalone data availability system, it's more efficient and effective to build one that integrates seamlessly with Ethereum, allowing users to tap into the existing security and infrastructure of the network. This approach can help create a more streamlined and scalable data availability solution, which aligns with the goals of the EigenDA project." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. 
I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. 
So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. This statement by Kannan highlights a key aspect of the EigenLayer's design philosophy. By leveraging Ethereum's existing consensus mechanism and transaction ordering, the EigenLayer data availability system can focus on providing efficient data storage and retrieval without duplicating functionality. In essence, this approach allows the EigenLayer to build upon the trust network and security that comes with being an Ethereum-adjacent protocol, rather than trying to establish its own separate ordering mechanisms. This strategy not only simplifies the development process but also enables the EigenLayer to benefit from Ethereum's existing scalability solutions, such as sharding (as mentioned in the context of ""danksharding""). By reusing and leveraging these existing components, Kannan suggests that the EigenLayer can create a more efficient and scalable data availability system without introducing additional complexity or duplicating efforts." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. 
I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. 
Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. According to the text, Kannan states that EigenDA (EigenLayer's data availability system) doesn't have internal ordering. Instead, all ordering is done on Ethereum, leveraging the existing consensus mechanism and transaction ordering provided by the Ethereum network. This design decision allows EigenDA to build upon the trust network and scalability of Ethereum, while avoiding the need for its own separate ordering mechanism." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. 
So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. 
Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","What happens if liquid staking tokens are locked on EigenLayer? If liquid staking tokens are locked on EigenLayer, they become illiquid. According to Kannan's response, the problem that the liquid restaking tokens are trying to solve is having a restaked position that remains liquid. However, if these tokens are locked on EigenLayer, their liquidity is lost." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? 
Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. 
But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. 
To clarify, when Kannan says ""liquid restaking tokens are trying to solve"", he is referring to the concept of maintaining liquidity while still participating in staking. The goal of liquid restaking tokens is to allow users to maintain a liquid position even after staking their assets. In this context, Kannan's statement implies that EigenLayer does not support this type of liquidity preservation. When users lock their liquid staking tokens on EigenLayer, they become illiquid and cannot be transferred or exchanged as easily. However, it's worth noting that the development of liquid restaking solutions is still ongoing in the broader Ethereum ecosystem, with other protocols building upon top of existing infrastructure to address this issue." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. 
I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. 
Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. This statement from Kannan suggests that the EigenLayer protocol is not directly involved in creating liquid restaking tokens, but rather, other protocols or developers are building on top of EigenLayer to create their own versions of liquid restaking. In essence, EigenLayer is providing a foundation for others to build upon, rather than developing it themselves." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. Q: Why do you think restaking is having a moment in the news? 
Kannan: I don't know. I'm glad people are talking about it. Of course, anything that adds new rewards to stakers is something interesting. I said anything that could be done with EigenLayer could be done with LSTs, but people didn't know what to do with these LSTs. They were doing exactly the same thing that people are doing with ether, which is lending, borrowing, the same set of DeFi parameters. I think one thing that EigenLayer did is by creating this new category, that validation, if I can borrow the Ethereum trust network to do new things: I can build a new layer 1, I can build a new like oracle network, I can build a new data availability system, I can build any system on top of the Ethereum trust network, so it internalizes all the innovation back into Ethereum, or aggregates all the innovation back into Ethereum, rather than each innovation requiring a whole new system. So I think that narrative is quite attractive. Q: I was just reading the news about EIP-7514, which is a short term solution for solving the overcrowdedness of validators, by limiting entries of new validators. How does that affect an EigenLayer? Kannan: I think mostly, it means the same thing for EigenLayer that it means for liquid staking protocols, that there is going to be a smaller rate at which new validators can enter. There's a long entry queue right now, and people don't want to wait that long. And making it slower is going to just make the new growth of LSTs slower. But I understand fully that this is a super important thing for Ethereum to be conservative and not have an overflow of validators that may not be able to be handled by the consensus layer. But in the long term, if the total staking of Ethereum cannot grow, one of the things that happens is the total yield or the return that stakers are getting is bounded by the Ethereum staking, whereas in the presence of restaking there is a possibility for them to get some of these additional rewards. Other than that, it's pretty similar. Q: You were making the point that EigenDA is just like in-house AVS (actively validated service) – explain what it is: Kannan: What we decided is, in order to keep this system of shared security, in order to keep EigenLayer as decentralized as possible, we want to make sure that there is a highly scalable data system at its backbone. And that's what EigenDA is, it's a highly scalable data availability system, built on the same ideas that underpin the Ethereum roadmap, particularly what is called danksharding. Our view is that building an Ethereum-adjacent data availability layer requires first principles thinking, whereas Celestia and Avail are built to be chains by themselves. If you're building a data availability system adjacent to Ethereum, you'd want Ethereum validators to participate. So that's just one part of the story. Of course, EigenLayer enables that. But then you go beyond that, and then you see, “Oh, it's not just you want to get the Ethereum nodes to participate.” Ethereum already has consensus built in, and Ethereum gives you the ordering of the various transactions. So you should build the data availability system, which doesn't need its own ordering. Whereas all the existing other protocols like Celestia and Avail, are basically chains that have to do their own ordering; we built a system which doesn't have internal ordering; all ordering is done on Ethereum. Q: Liquid restaking tokens - once your liquid staking tokens are locked on EigenLayer, they become illiquid? 
Kannan: That’s correct, the problem that the liquid restaking tokens are trying to solve is, can I just have a restaked position, and then still keep it liquid. So you can take that receipt token of liquid restaking and then transfer it. We are not building this kind of liquid restaking but other people are building liquid restaking on top of them. Q: I think your comment was, you want to use the Ethereum shared security for as many things as possible. I'm curious, now that there's also people building on the back of what y'all are doing, is there a natural limit to how much that you know, Ethereum can support? Kannan: This is a similar kind of question that one could ask already at the applicatEigenLayer of Ethereum: How many applications on Ethereum are smart contracts and how many smart contracts can be built on top of Ethereum? So it's the same thing with EigenLayer because people staking and running new applications, but now they do it much more flexibly and programmably with these aliases on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, ETH itself potentially increases in value because of all these additional use cases. So over time, this can start to accommodate more and more. But there's absolutely a limit.","Are there limits to how much Ethereum shared security can support? Yes, according to Kannan, there is a limit to how much Ethereum shared security can support." "Q: I saw Vitalik’s blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take? Kannan: One of the things I think he wants to kind of lay out is that, “Hey don't externalize, and don't create something that, assuming that if the protocol goes wrong, Ethereum is going to fork around it.” I think that is a pretty reasonable position from Ethereum, that you build protocols and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as to not overload Ethereum social consensus, which is used only for forking the chain. And don't assume that you can build a protocol that, and because you're too big to fail, Ethereum can fork around that. So that's how I read it. And I think it's a pretty obvious statement in our view. But I think it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for calls to deploy code that is not properly audited, doesn't have internal security controls, and then the Ethereum community has to now work hard to figure out how to retrieve it. I think a lot of people after reading the article have been talking a lot about restaking risks. I want to make it super clear: anything that restaking can do, already liquid staking can do, so I view restaking as a lesser risk than liquid staking. Q: Can you expand on that? Kannan: Basically, you can take a liquid staking token and then deposit it into complex DeFi protocols, or you could just deposit it into validating a new layer 2, or a new oracle or any of these things. So anything that restaking can do, liquid staking can already do. Because you know, you have the LSD [short for liquid staking derivative] token, and you can do anything with it. And one particular thing you could do with that is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, but actually reducing the risk of that one particular use case. 
Q: Liquid restaking tokens: once your liquid staking tokens are locked on EigenLayer, they become illiquid?

Kannan: That's correct. The problem the liquid restaking tokens are trying to solve is: can I have a restaked position and still keep it liquid? So you can take that receipt token of liquid restaking and transfer it. We are not building this kind of liquid restaking ourselves, but other people are building it on top of EigenLayer.
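A toy model of that receipt-token idea, with names and mechanics that are illustrative assumptions rather than any live protocol's contract: the restaked LST stays locked, while a 1:1 receipt balance can move freely between holders.

```python
# Toy liquid-restaking wrapper: the underlying position is illiquid,
# while the receipt token minted against it is transferable.

class LiquidRestakingWrapper:
    def __init__(self) -> None:
        self.locked_lst = 0.0      # LST locked into a restaked position
        self.receipts: dict = {}   # transferable receipt balances per holder

    def deposit(self, user: str, lst_amount: float) -> None:
        """Lock LST into restaking; mint receipt tokens 1:1."""
        self.locked_lst += lst_amount          # this leg is now illiquid
        self.receipts[user] = self.receipts.get(user, 0.0) + lst_amount

    def transfer(self, sender: str, recipient: str, amount: float) -> None:
        """The receipt moves even though the underlying stake cannot."""
        if self.receipts.get(sender, 0.0) < amount:
            raise ValueError("insufficient receipt balance")
        self.receipts[sender] -= amount
        self.receipts[recipient] = self.receipts.get(recipient, 0.0) + amount

pool = LiquidRestakingWrapper()
pool.deposit("alice", 10.0)
pool.transfer("alice", "bob", 4.0)   # alice's stake stays restaked throughout
print(pool.receipts)                  # {'alice': 6.0, 'bob': 4.0}
```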
Q: I think your comment was that you want to use Ethereum's shared security for as many things as possible. Now that people are building on the back of what y'all are doing, is there a natural limit to how much Ethereum can support?

Kannan: This is a similar question to one you could already ask at the application layer of Ethereum: how many applications, how many smart contracts, can be built on top of Ethereum? It's the same with EigenLayer, because people staking and running new applications, now much more flexibly and programmably as services on top of EigenLayer, all contribute back to Ethereum. Their ETH staking increases rewards, and ETH itself potentially increases in value because of all these additional use cases. So over time this can start to accommodate more and more. But there's absolutely a limit.
Q: I saw Vitalik's blog post about overloading the consensus layer, and how restaking, in his view, could pose systemic risks to Ethereum. I'm curious to hear your take on his take.

Kannan: One of the things I think he wants to lay out is: don't externalize, and don't create something built on the assumption that if the protocol goes wrong, Ethereum is going to fork around it. I think that's a pretty reasonable position from Ethereum: you build protocols, and the protocols have to internalize social consensus rather than externalize it to Ethereum. So I read it as: do not overload Ethereum's social consensus, which is used only for forking the chain, and don't assume that because you're too big to fail, Ethereum will fork around your protocol. That's how I read it, and I think it's a pretty obvious statement in our view. But it has to be said, somebody has to say it, so it's good that Vitalik went out and said it. Because what we don't want is for people to deploy code that is not properly audited and doesn't have internal security controls, and then the Ethereum community has to work hard to figure out how to rescue it. A lot of people, after reading the article, have been talking about restaking risks. I want to make it super clear: anything that restaking can do, liquid staking can already do, so I view restaking as a lesser risk than liquid staking.

Q: Can you expand on that?

Kannan: Basically, you can take a liquid staking token and deposit it into complex DeFi protocols, or you could deposit it into validating a new layer 2, or a new oracle, or any of these things. So anything that restaking can do, liquid staking can already do, because you have the LSD [liquid staking derivative] token and you can do anything with it. One particular thing you could do with it is, of course, go and validate another network. So I view restaking as just one particular use case of liquid staking, one that actually reduces the risk of that particular use case.
...the most successful outcome for EigenLayer would be... web3 offers a brand new petri dish... our job is to buy great tech at great prices... AI is also libertarian... enable use cases that people haven't been able to do today... the next hundreds of millions of players will come to the market through mobile... my personal reason why I could be bullish in the next five months is...

Q: Hey everybody, welcome back to another episode of The Blockcrunch podcast. I'm your host Jason Choy, founder of the angel fund Tangent, and everything we discuss on this show is not financial advice and not reflective of our respective companies' opinions. One of the biggest buzzwords of this year is restaking, and the go-to project in this vertical is obviously EigenLayer. Today, even before the launch of its token, EigenLayer has already attracted almost $8 billion in deposits, which makes it the fourth-largest protocol by total value locked, according to DefiLlama. Already I'm getting a lot of messages from people saying it's either the most transformative thing to happen to Ethereum or, increasingly, that the concept of restaking carries potential systemic risk. As an angel investor in EigenLayer, these are questions I've also thought a lot about over the past few months, so I'm really excited and grateful that Sreeram, the founder of EigenLayer, is coming on the show to chat with us today. Welcome to the show, man.

Kannan: Hey, thank you so much, Jason. Really excited to be here, looking forward to diving into the depths of the project.

Q: Absolutely. I actually stayed up last night and went through a lot of your tweets and a lot of the transcripts of your podcast interviews, and I realized there's so much to go over.
But before we dive into the meat, I'd love to dial the clock back a little and start with the origin, because I know you were interested in P2P networks back before 2008, during your PhD, then you moved on to computational genomics, and then stumbled into crypto in 2017. And I heard you almost began your journey by building on Cardano first. So I'm curious: how did you go from that to deciding to build this restaking idea?

Kannan: Actually, we didn't begin by trying to build on Cardano; the story goes back even further. Around the end of 2017, when I first got to know about blockchain, my first reaction was: is this some kind of speculative bubble? The last time around I had worked on peer-to-peer wireless networks, and that didn't turn out well, so I was a bit skeptical of the whole premise, because centralized systems are more efficient and have a way of outrunning decentralized systems. That was the starting point. Where it turned for me is when we realized that blockchains can help us coordinate in the absence of trust: just like the internet is the information superhighway, blockchains and crypto could be our coordination superhighway. That was the operating model. Once I could see that, okay, if trust is so central, and if we can have a neutral, decentralized source of trust, that can be really valuable. But the particular problems we were working on at that time were: how do you scale a consensus protocol, what properties can a consensus protocol have, how would you build data availability or an oracle or other systems with various properties, what is the game theory for these things, what are the mathematical guarantees. That was how we got introduced to the space, and the obvious thing we wanted to do was to get Bitcoin or Ethereum to use some of these ideas. But we were so far away from the space, and had no real connections, that we found it quite difficult to understand and interact with the crypto world. It appeared that the only way people from an academic background were getting into blockchain and crypto was by saying, "Here's a new consensus protocol, I'm going to build a layer 1 blockchain around it." And we thought, okay, there's no other way; this is the only way to take an idea at the level of a consensus protocol, because you can't build it as a smart contract on Ethereum when the whole point is to change the way the nodes come to consensus. So I said, okay, maybe we should build our own layer 1. We took a very purist approach and said, let's first actually build the whole system before we even do a fundraise. So we had a project called Trifecta where we built a blockchain; we were running it on 100 nodes at the time, and there's an MIT cryptoeconomics summit video where we demonstrated it running at hundreds of thousands of transactions per second. But that project never launched, and that's why, if you're a listener wondering where this Trifecta blockchain is: it doesn't exist. It only had a short lifetime as a proof of concept.
Because the questions we got asked by investors and others were: okay, you've got these 100,000 transactions per second, but where is the user base, where is the ecosystem, how are you going to build all of this out? And I said, I don't know; if blockchains are a thing, fast blockchains are a thing, and here is the thing that actually does it. From that point, one of the things I realized is that a fundamental problem for starting a new decentralized protocol is the source of trust. I couldn't just take it and throw it on top of Ethereum; if it were like that, it would have been amazing, right? You just write a smart contract, throw it on Ethereum, and let the blockchain supply the trust. This is when I came up with this model, which is very simple once you state it: a general-purpose smart contract blockchain is selling trust, and earning fees in return for selling trust. Selling trust to whom? To applications. Applications choose to live on top of a blockchain because they are consuming its trust. Once I understood this economy, the question was: how can we do this more generally, beyond just smart contracts? Can we take the same set of nodes and the same economics and transfer them? We tried all kinds of things: can you build it on Bitcoin, can you build it on Ethereum? The original set of ideas we had were not really related to restaking; it was some kind of complex cryptoeconomic game. But the core idea was: now that I know I can run a different virtual machine on top of a different blockchain, what would I do with that? This was in 2021, and one of the things we knew we could do was run the EVM on non-EVM blockchains. So that was the starting point of the EigenLayer project: we said, let's bring the EVM to all the non-EVM blockchains. And as a starting point we took Cardano, because Cardano probably had a barely functional smart…
"the most successful outcome for i l would be web 3 offers a brand new petri dish our job is to buy Great Tech at great prices AI is also libertarian enable use case that people haven't been able to do today the next hundreds of millions of players they will come to the market through mobile my personal reason why I could be bullish in the next five months is hey everybody welcome back to another episode of The Block runch podcast I'm your host Jason Choy I'm the founder of the angel fun tangent um and everything we discuss on this show is not Financial advice and not reflective of our respective company opinions now one of the biggest busws of this year is reaking and the go-to project in this vertical is obviously EigenLayer now today even before the launch of the token Ian leer has already attracted almost $8 billion in funds deposited this makes it the fourth largest protocol by total value locked according to defi Lama so already I'm getting a lot of messages from people saying it's either the most most transformative thing to happen to ethereum but also increasingly more people are talking about potential systematic risk that comes with the concept of reaking so as an angel investor in igen ler these are the questions that I've also thought a lot about over the past few months so I'm really really excited and grateful for siram the founder of EigenLayer to come on the show to chat with us today so welcome to the show man hey thank you so much Jason really excited to be here looking forward to dive into uh the depths of the project absolutely so I actually stayed up last night and I went through a lot of your tweets and a lot of the transcripts for your podcast interviews and I realized there's actually so so much to go over but I guess before we dive into the meat there i' love to kind of di the clocks back a little bit with start with the origin because I know you've been interested in PP networks before 2008 uh and doing your PhD there and then you moved on to computational genomics I think later and then stumbled in the crypto in 2017 and I heard that you almost began your journey by building on cardano first so I'm curious you know how did you go from that to deciding to build this kind of reaking idea yeah no actually like we didn't you know we didn't Begin by trying to build on cardano uh the story goes back even further so you know in in 2017 2018 you know around end of 2017 actually when I got to uh know about blockchain my first reaction was you know is this some kind of like a speculative bubble that you know and last time you know I worked on peer-to-peer wireless networks that didn't turn out well so I I was a bit skeptical of the whole premise because you know centralized systems are more efficient and they have a way to like outrun decentralized systems so that was the kind of starting point but you know the when it turned for me is when we realize that uh blockchains can basically help us coordinate in the absence of trust and just like the internet is the information super iway blockchains and crypto could be our coordination super iway way that was the kind of like operating model and and once I could see that okay actually if if trust is so Central if we can have a neutral decentralized source of trust that can be like really valuable but you know the the particular problems we are working on at that time was you know how do you scale a consensus protocol what properties can a consensus protocol have you know how would you build uh data availability or an oracle or 
other systems with various properties what is the game theory for some of these things what are the mathematical guarantees this was kind of how we got introduced into it and the obvious kind of thing we wanted to do was to get like you know Bitcoin or ethereum to use some of these ideas and you know we were so far away from the space and had no real uh connections that we found it quite difficult to kind of understand like and and interact with the crypto space and uh you know it appeared that the most kind of like uh the the only way we were seeing that people from the academic background where getting into blockchain and crypto was actually going and starting hey you know here's a new consensus protocol I'm going to build a layer one blockchain around it and we thought okay there's no other way this is the only way to like take an idea at the level of a consensus protocol because you can't build it as a smart contract on ethereum because the whole point is to change the way that the the nodes come into consensus and so I said okay maybe we should build our own like you know uh layer one and we were actually you know we took a very purist approach and we said like let's first actually build the whole system before we even do a fund raise so we had a project called Trifecta where we built the block built a blockchain you know we were running it on 100 nodes at that time there's an MIT crypto economic summit video where we demonstrated running at like you know hundreds of thousands of transactions per second but that project never launched and that's why you know if if you're a listener and wondering where is this trifecto blockchain it doesn't exist it was uh it it it it only has had a uh short Lifeline you know lifetime as a kind of proof of concept so I because you know we the questions that we got asked from investors and other people was hey okay you you got these 100,000 transactions per second where is the user base where is the ecosystem how are you going to build all of this out and I said I don't know you know if blockchain's a thing you know fast blockchain is a thing and here is the thing that actually does it and from from that point actually you know one of the things that I realized is kind of like a fundamental problem for starting a new like you know decent L protocol is the source of trust right I couldn't just take it and throw it on top of ethereum if it was like that that would have been so amazing right like I just just like you write a smart contract and throw it on ethereum Let the blockchain like supplies the trust so this is when I came up with this model that you know it's very simple once you state it basically like a general purpose smart contract blockchain is like selling trust and earning fees in return for selling trust and selling trust to whom to Applications right so applications are choosing to live on top of a blockchain because it's consuming the trust okay so once I understood this economy the question was like how can we do this more generally like you know more than just smart contracts can we kind of like take the same set of nodes same economics and some more transfer it and we Tred all kinds of things can you build it on bitcoin you know can you build it on ethereum uh and the the original set of ideas we had were not really related to reaking it was some kind of like a complex cryptoeconomic game but the the core idea was hey now I know I can run a different virtual machine on top of a different different blockchain what would I do with that 
How do you scale a consensus protocol, and what are some properties it can have? Scaling a consensus protocol is a crucial aspect of designing a decentralized system, as it determines the throughput and reliability of the network. In blockchain and cryptocurrency systems, consensus protocols ensure that all nodes on the network agree on the state of the ledger. To scale a consensus protocol, you need to consider the following dimensions:

1. **Throughput**: the number of transactions that can be processed per unit time.
2. **Latency**: the time it takes for the network to verify and confirm a transaction.
3. **Scalability**: the ability to increase throughput without compromising latency or security.
4. **Security**: resistance to attacks, such as 51% attacks or Sybil attacks.

Core properties of consensus protocols:

1. **Fault tolerance**: the network continues functioning even if some nodes fail or behave maliciously.
2. **Agreement**: all honest nodes agree on the state of the ledger.
3. **Validity**: the agreed-upon state is valid and consistent.
4. **Liveness**: new transactions are eventually confirmed.

Some popular consensus mechanisms and their trade-offs:

1. **Proof of Work (PoW)**: energy-intensive, which makes a 51% attack costly to mount; throughput is low because significant computational work is required per block.
2. **Delegated Proof of Stake (DPoS)**: faster than PoW, with higher throughput; security relies on the stake held by a small set of elected validators.
3. **Proof of Capacity (PoC)**: more energy-efficient than PoW; throughput varies with available disk space.
4. **Byzantine Fault Tolerance (BFT) protocols**: can tolerate arbitrary (Byzantine) faults in a bounded fraction of nodes.

Game-theoretic approaches to consensus consider **rational behavior** (nodes act selfishly to maximize their utility) and **mechanism design** (structuring the protocol so that desired behavior is incentivized). Mathematical guarantees typically rest on consensus algorithms (e.g., PBFT, Raft), game-theoretic analyses (e.g., Nash equilibria), and probabilistic models (e.g., Markov chains). The choice of consensus protocol ultimately depends on the specific use case and the trade-offs among security, throughput, and latency. As Sreeram's research questions suggest, designing a new consensus protocol means reasoning jointly about its properties, its game theory, and its mathematical guarantees.
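The answer above name-checks BFT protocols alongside the fault-tolerance and agreement properties. A minimal sketch can make the classic partially synchronous bound n >= 3f + 1 and quorum intersection concrete; the Python below is a toy model with illustrative function names and vote counts, not any production protocol.

```python
# Toy illustration of the classic BFT bound (n >= 3f + 1) and quorum-based
# agreement. Helper names are hypothetical; this is not a real protocol's API.

def max_byzantine_faults(n: int) -> int:
    """Largest f such that n >= 3f + 1 holds for n validators."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Votes needed (2f + 1) so that any two quorums overlap in an honest node."""
    return 2 * max_byzantine_faults(n) + 1

def reaches_agreement(votes: list[str]) -> str | None:
    """Return the value that gathered a quorum of votes, if any."""
    quorum = quorum_size(len(votes))
    for value in set(votes):
        if votes.count(value) >= quorum:
            return value
    return None  # no quorum this round; a real protocol would keep running

if __name__ == "__main__":
    # 10 validators tolerate f = 3 Byzantine nodes and need a quorum of 7.
    print(max_byzantine_faults(10), quorum_size(10))  # 3 7
    print(reaches_agreement(["A"] * 7 + ["B"] * 3))   # A
```

The quorum of 2f + 1 out of 3f + 1 validators guarantees that any two quorums share at least one honest node, which is where the agreement property comes from.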
"the most successful outcome for i l would be web 3 offers a brand new petri dish our job is to buy Great Tech at great prices AI is also libertarian enable use case that people haven't been able to do today the next hundreds of millions of players they will come to the market through mobile my personal reason why I could be bullish in the next five months is hey everybody welcome back to another episode of The Block runch podcast I'm your host Jason Choy I'm the founder of the angel fun tangent um and everything we discuss on this show is not Financial advice and not reflective of our respective company opinions now one of the biggest busws of this year is reaking and the go-to project in this vertical is obviously EigenLayer now today even before the launch of the token Ian leer has already attracted almost $8 billion in funds deposited this makes it the fourth largest protocol by total value locked according to defi Lama so already I'm getting a lot of messages from people saying it's either the most most transformative thing to happen to ethereum but also increasingly more people are talking about potential systematic risk that comes with the concept of reaking so as an angel investor in igen ler these are the questions that I've also thought a lot about over the past few months so I'm really really excited and grateful for siram the founder of EigenLayer to come on the show to chat with us today so welcome to the show man hey thank you so much Jason really excited to be here looking forward to dive into uh the depths of the project absolutely so I actually stayed up last night and I went through a lot of your tweets and a lot of the transcripts for your podcast interviews and I realized there's actually so so much to go over but I guess before we dive into the meat there i' love to kind of di the clocks back a little bit with start with the origin because I know you've been interested in PP networks before 2008 uh and doing your PhD there and then you moved on to computational genomics I think later and then stumbled in the crypto in 2017 and I heard that you almost began your journey by building on cardano first so I'm curious you know how did you go from that to deciding to build this kind of reaking idea yeah no actually like we didn't you know we didn't Begin by trying to build on cardano uh the story goes back even further so you know in in 2017 2018 you know around end of 2017 actually when I got to uh know about blockchain my first reaction was you know is this some kind of like a speculative bubble that you know and last time you know I worked on peer-to-peer wireless networks that didn't turn out well so I I was a bit skeptical of the whole premise because you know centralized systems are more efficient and they have a way to like outrun decentralized systems so that was the kind of starting point but you know the when it turned for me is when we realize that uh blockchains can basically help us coordinate in the absence of trust and just like the internet is the information super iway blockchains and crypto could be our coordination super iway way that was the kind of like operating model and and once I could see that okay actually if if trust is so Central if we can have a neutral decentralized source of trust that can be like really valuable but you know the the particular problems we are working on at that time was you know how do you scale a consensus protocol what properties can a consensus protocol have you know how would you build uh data availability or an oracle or 
other systems with various properties what is the game theory for some of these things what are the mathematical guarantees this was kind of how we got introduced into it and the obvious kind of thing we wanted to do was to get like you know Bitcoin or ethereum to use some of these ideas and you know we were so far away from the space and had no real uh connections that we found it quite difficult to kind of understand like and and interact with the crypto space and uh you know it appeared that the most kind of like uh the the only way we were seeing that people from the academic background where getting into blockchain and crypto was actually going and starting hey you know here's a new consensus protocol I'm going to build a layer one blockchain around it and we thought okay there's no other way this is the only way to like take an idea at the level of a consensus protocol because you can't build it as a smart contract on ethereum because the whole point is to change the way that the the nodes come into consensus and so I said okay maybe we should build our own like you know uh layer one and we were actually you know we took a very purist approach and we said like let's first actually build the whole system before we even do a fund raise so we had a project called Trifecta where we built the block built a blockchain you know we were running it on 100 nodes at that time there's an MIT crypto economic summit video where we demonstrated running at like you know hundreds of thousands of transactions per second but that project never launched and that's why you know if if you're a listener and wondering where is this trifecto blockchain it doesn't exist it was uh it it it it only has had a uh short Lifeline you know lifetime as a kind of proof of concept so I because you know we the questions that we got asked from investors and other people was hey okay you you got these 100,000 transactions per second where is the user base where is the ecosystem how are you going to build all of this out and I said I don't know you know if blockchain's a thing you know fast blockchain is a thing and here is the thing that actually does it and from from that point actually you know one of the things that I realized is kind of like a fundamental problem for starting a new like you know decent L protocol is the source of trust right I couldn't just take it and throw it on top of ethereum if it was like that that would have been so amazing right like I just just like you write a smart contract and throw it on ethereum Let the blockchain like supplies the trust so this is when I came up with this model that you know it's very simple once you state it basically like a general purpose smart contract blockchain is like selling trust and earning fees in return for selling trust and selling trust to whom to Applications right so applications are choosing to live on top of a blockchain because it's consuming the trust okay so once I understood this economy the question was like how can we do this more generally like you know more than just smart contracts can we kind of like take the same set of nodes same economics and some more transfer it and we Tred all kinds of things can you build it on bitcoin you know can you build it on ethereum uh and the the original set of ideas we had were not really related to reaking it was some kind of like a complex cryptoeconomic game but the the core idea was hey now I know I can run a different virtual machine on top of a different different blockchain what would I do with that 
Is restaking's concept similar to Bitcoin or Ethereum? Based on the conversation between Jason and Sreeram, restaking is related to but distinct from what Bitcoin and Ethereum themselves do. Bitcoin and Ethereum each use their consensus mechanism to secure their own chain, whereas EigenLayer's restaking idea repurposes Ethereum's staked capital so that the same economic security can also back arbitrary other services. The closest Bitcoin analogue discussed is merge mining, which reuses mining power across chains but, unlike restaking, has no slashing: a merge miner can attack the secondary chain with impunity. Restaking keeps the stake on-chain, where it can be burned for provable misbehavior, and that is what allows cryptoeconomic trust to be transferred to other services.

Cardano had a barely functional smart contract programming environment at that time, and what that meant was that it was very difficult for developers to build on top of it. So if we had the EVM easily virtualized on top of Cardano, that would be a cool thing; and you could do this on any blockchain, it wasn't specific to Cardano, but it was a starting point for what we could do. But as we were thinking about it, one of the things that clicked for me was: if the fundamental thing is that you want a common source of decentralized trust, then building on distinct sources of decentralized trust is replicated effort rather than aggregated effort. So the obvious place, powerful and central, for us to build was Ethereum. We then wanted to think about what we would build on top of Ethereum, and at that time I actually had no good idea for what to build. The story is, I had an interaction with Kyle Samani of Multicoin, pitching this idea that you can do virtualization of EVM or non-EVM blockchains, and Kyle, in his characteristic style, said, you know, the EVM is a piece of [expletive], you should not be working on that. And I'm like, okay. Then I said, oh, but you could run arbitrary virtual machines on top of Ethereum too, if you wanted, and he said, no, this is some kind of optimistic-type rollup, it will never work, because optimistic rollups are going to be very expensive. I went back, sat down with the team, and we tried to understand this peculiar statement. I went through all the optimistic rollups (this is 2021) and found that the fees were much higher. I had thought ZK rollups should be more expensive, because you have to produce a ZK proof, so why was an optimistic rollup more expensive? It was because of data availability: all the costs were going into publishing data on Ethereum. So then we said, oh yeah, now it all fits together. We had been doing research on data availability for many, many years before that, so we said: I know how to scale data, let's just build that. Okay, we're building on Ethereum, we'll build data availability on top of it, and now we have a killer application. That was the beginning of how we decided to build on Ethereum. We knew we wanted to build on Ethereum; we just didn't have a good use case at the time, and we didn't know what programming environment would be interesting. I had all these high-level pictures where I'd say, oh, you can build AI, you can build databases, you can build gaming environments, all these things, but you have to start somewhere where there is market traction, and we found that with data availability. So that's the origin of EigenDA and EigenLayer on Ethereum. Add to this what happened next: we said, let's go to ETH Denver (you know, we're going to ETH Denver next week, but this was 2022), go to Denver and hang out with all the people there. I knew nobody; I actually knew zero people in Ethereum at that time. And I connected with the vibe of Ethereum, which was decentralization, permissionless innovation, censorship resistance. It felt like this community had a set of principles they were anchoring on, rather than individuals, or expediency toward profit, or even a product, which felt very different to me. I came back and told a friend who was working at Google at the time: imagine somebody just goes to a Google conference; they're not a Google stock owner, they're not a Google programmer, they're not even really a Google application developer, and yet they feel like a Google person at the end of the conference. That's how I felt at the end of ETH Denver. And so we decided to build on Ethereum after that.

Yeah, I think the three biggest lessons I've learned over the past years of investing in the space are that Ethereum's greatest exports are three things: the liquidity of Ethereum, the programmability of the EVM, and the ideology of Ethereum. And when I first saw EigenLayer, I thought, okay, this is actually a project that helps expand this vastly beyond just the Ethereum chain itself. I know you describe EigenLayer as a generalized mechanism for anybody to build arbitrary distributed systems on top of the Ethereum trust network, and when my normie friends ask me, okay, what the hell does that mean, I basically explain to them: well, you can stake your ETH and validate not just for the Ethereum network but for any project that chooses to borrow the security of this stake. I was actually quite fascinated, because I feel like I've seen similar ideas before, in Polkadot or Cosmos, this idea of shared security. So how much of the inspiration came from observing what was being done on other chains, what worked, what didn't?

I would say that our own journey started from looking at Bitcoin, where we were asking: hey, can you borrow trust from Bitcoin? The set of ideas we got exposed to was mostly merge mining: a lot of energy is expended on mining, so can you reuse that common mining power to secure other blockchains? This idea was called merge mining back in the day, and I think even Satoshi wrote about it in one of the Bitcointalk posts or whatever. The thing is, we looked at merge mining, and one of its big incentive problems is that if you merge-mine Bitcoin and some other altcoin, you can attack the altcoin with impunity: an attack on the other chain does nothing to my mining equipment, and my Bitcoin price is not affected by an attack that happens on this other chain or coin. The cryptoeconomics of Bitcoin is uniquely tied to Bitcoin itself: I have invested a bunch of money in buying mining hardware, so if I do some big harm, if a majority tries to mount an attack, my mining hardware may become useless, because the BTC is not valuable anymore. That was not the case for merge mining. And so the realization came that when you do staking, the dynamics are quite different, because staking comes with slashing. The equivalent of slashing in mining would be going and finding out which miners behaved badly and then burning their mining equipment, which is not even thinkable as a possible strategy. But with staking, the stake is intrinsically on the chain, and therefore you can burn the stake for misbehaviors on the chain. This was such a powerful concept. And then we realized that if you had a general-purpose staking mechanism, you could transfer cryptoeconomic trust to arbitrary other services much more easily. We didn't know much about the other stuff, like Polkadot, or even layer twos, or what was going on in Cosmos or Avalanche at that time, but of course later, as you start exploring these ideas, you find, oh yeah...

Here is a comprehensive summary of the text: The speaker discusses how EigenLayer came to build on Ethereum after initially exploring EVM virtualization on top of Cardano, whose smart contract environment was barely functional at the time. They realized that building on multiple distinct sources of decentralized trust is "replicated effort" rather than "aggregated effort", and therefore chose to focus on Ethereum, the most powerful and central source of decentralized trust. Initially the speaker didn't know what to build on Ethereum, but an interaction with Kyle Samani of Multicoin led the team to investigate why optimistic rollups were so expensive. Contrary to the expectation that ZK rollups would cost more because of proof generation, optimistic rollups turned out to be expensive because of data availability: the cost of publishing data on Ethereum. Having researched data availability for years, the team decided to build a data availability service on Ethereum as their killer application. The speaker then attended ETH Denver in 2022, where they were struck by the community's emphasis on decentralization, permissionless innovation, and censorship resistance, and felt a sense of belonging they likened to feeling like "a Google person" after a Google conference despite owning no Google stock and writing no Google code. The host frames Ethereum's greatest exports as liquidity, programmability, and ideology, and describes EigenLayer as a generalized mechanism for building arbitrary distributed systems on top of the Ethereum trust network. The speaker also touches on related shared-security ideas in projects such as Polkadot, Cosmos, and Avalanche, noting that the team arrived at restaking via Bitcoin merge mining and staking and only later explored those other designs. Overall, the speaker shares the journey of building EigenLayer on Ethereum, including the dead ends along the way and the incentive reasoning (merge mining versus slashing) that shaped the design.
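Since the merge-mining incentive problem and slashing are the pivot of this passage, a back-of-envelope sketch may help contrast the two attack costs. Everything below (class names, stake sizes, the slash_fraction parameter) is a hypothetical toy model of the incentives described in the transcript, not EigenLayer's actual mechanism.

```python
# Toy contrast of attacker costs under merge mining vs. slashing-based staking.
# All values are hypothetical; this models the transcript's argument only.

from dataclasses import dataclass

@dataclass
class MergeMiner:
    hardware_value: float  # sunk cost of Bitcoin mining rigs

    def attack_cost_on_altcoin(self) -> float:
        # Attacking the merge-mined altcoin leaves the rigs and the BTC
        # position untouched, so the marginal cost is roughly zero:
        # the attacker acts "with impunity".
        return 0.0

@dataclass
class RestakedValidator:
    stake: float           # capital locked on-chain
    slash_fraction: float  # share of stake burned on provable misbehavior

    def attack_cost_on_service(self) -> float:
        # Misbehavior is attributable on-chain, so the protocol can burn
        # stake; the attack now carries a direct, enforceable price.
        return self.stake * self.slash_fraction

if __name__ == "__main__":
    miner = MergeMiner(hardware_value=1_000_000)
    validator = RestakedValidator(stake=1_000_000, slash_fraction=1.0)
    print(miner.attack_cost_on_altcoin())      # 0.0
    print(validator.attack_cost_on_service())  # 1000000.0
```

The asymmetry is the whole point of the passage: burning a misbehaving miner's equipment is not implementable, while burning on-chain stake is a routine protocol action.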
1. What was the original idea behind building a virtualization layer for Ethereum? The original idea was to create a common source of decentralized trust that could be shared across applications and services, rather than standing up a separate source of decentralized trust for every new blockchain. It was sparked by the realization that if the fundamental goal is a common source of decentralized trust, then building on distinct sources of decentralized trust is replicated effort rather than aggregated effort.
2. Why did Kyle Samani advise against working on EVM virtualization? Kyle Samani of Multicoin dismissed the EVM itself in characteristically blunt terms and argued that the proposal amounted to an optimistic-type rollup that would never work, because optimistic rollups were going to be very expensive. That objection is what prompted the team to dig into rollup fees and discover that data availability was the real cost bottleneck.
3. What were some of the limitations and inefficiencies of optimistic rollups in 2021? According to the text, optimistic rollups in 2021 were more expensive than expected because nearly all of their costs went into publishing transaction data on Ethereum for data availability. This ran against the intuition that ZK rollups should be the more expensive option, given that they must generate zero-knowledge proofs.
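To make the data-availability cost point concrete: in 2021, before dedicated DA layers or blob transactions, rollups published batch data as Ethereum calldata, which EIP-2028 prices at 16 gas per nonzero byte and 4 gas per zero byte. The sketch below is a back-of-envelope estimate; the batch size, gas price, and ETH price are assumptions for illustration, not figures from the interview.

```python
# Back-of-envelope: why publishing rollup data as L1 calldata was expensive.
# Calldata pricing per EIP-2028; batch size and prices are assumed values.

GAS_PER_NONZERO_BYTE = 16
GAS_PER_ZERO_BYTE = 4

def calldata_gas(nonzero_bytes: int, zero_bytes: int) -> int:
    """Gas consumed just to post the bytes on-chain."""
    return nonzero_bytes * GAS_PER_NONZERO_BYTE + zero_bytes * GAS_PER_ZERO_BYTE

if __name__ == "__main__":
    # Hypothetical 100 kB batch of mostly nonzero compressed transactions.
    gas = calldata_gas(nonzero_bytes=90_000, zero_bytes=10_000)  # 1,480,000 gas
    gas_price_gwei = 100   # assumed 2021-era gas price
    eth_usd = 3_000        # assumed 2021-era ETH price
    cost_eth = gas * gas_price_gwei * 1e-9
    print(f"{gas:,} gas ~= {cost_eth:.3f} ETH ~= ${cost_eth * eth_usd:,.0f} per batch")
```

At those assumed prices, a single batch pays hundreds of dollars for data alone, dwarfing execution costs; that gap is what a dedicated data availability layer like EigenDA sets out to close.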
"contract programming environment at that time and what that meant was and and it was very difficult for developers to build on top of it so if we had evm easily virtualized on top of Gano that would be like a cool thing and you could do this on any blockchain you know it was not specific to cardano but it was a starting point for what we could do but you know as we were thinking about it one of the things that clicked to me was if the fundamental thing is your you want to have a common source of decentralized trust then building on distinct source of decentralized trust is replicated effort rather than aggregated effort and so the obvious place which was powerful and Central for us to build was ethereum and so we wanted to then like think about what we would build on top of ethereum and at that time I actually had no good idea for what to build on ethereum and the the story is you know I had a an interaction with uh uh Kyle samani of multicoin and you know pitching this this idea that you can actually do virtualization of evm or non evm blockchains and Kyle in his characteristic style said you know evm is a piece of you know star I don't want you know you should not be working on that and I'm like okay and then uh you know I said oh no but you could run like arbitary virtual machines on top of ethereum too like if you wanted and then he said no this is some kind of an optimistic type rollup it it will never work I said like why would it work because optimistic rollups are going to be very expensive and I went back and we sat down with the team and we were trying to understand this peculiar statement I didn't know at that time went through all the optimistic rups this is 2021 and found that the fees was much higher and I thought ZK rollups should be more expensive because I have to give a ZK proof so why is an optimistic rollup more expensive it was because of data availability and all the costs were going into publishing data on ethereum and so we then said oh yeah you know now it all fit together we had been doing research on data availability for many many years before that so we said I know how to scale data let's just build that so we said okay we're building on ethereum we'll build data availability on top of it so now we have a killer application so that that was the beginning of how we decided to build on ethereum but also know we wanted to build on etherum we just had didn't have a good use case you know at that time you know we didn't know what programming environment would be interesting I had all these like high level pictures where I'd say oh you can build AI you can build like databases you can build like gaming you know environments all these things but you know you have to start somewhere where you know there is Market traction and we found that with data availability so that's the origin of Ian da and EigenLayer on on ethereum add to this what happened was we said like let's go to eat Denver you know we're going to eat Denver now next week but this was you know 2022 and go to Denver and hang out with all the people that I didn't know anybody there I actually know zero people in in ethereum at that time and I connected with the vi of ethereum which was you know decentralization permissionless Innovation censorship resistance it felt like this community had a set of principles that they're anchoring on rather than you know either individuals or expediency towards profit or even a product which felt very different to me I came back and told a friend who was working in Google 
at that time like imagine somebody just goes to a Google conference they're not a Google stock owner they're not a Google like you know programmer they're not even a Google really like application developer they just feel like they're a Google person at the end of a Google conf that's how I felt at the end ofen and so we decided to build on ethereum after that I yeah I think the the three biggest lessons that I've learned over the past years of investing in the space is that ethereum's greatest exports are three things right it's the liquidity of ethereum is the programmability of if evm and the ideology of ethereum and it sounds like to me that when I first saw I L I thought okay this is actually a project that helps expand this you know vastly beyond just the ethereum chain itself and I know you describe I as a generalized mechanism for anybody to build arbitrary distributed systems on top of the ethereum trust Network and when my Normie Friends Ask me okay what the hell does that mean I basically explained to them well you can stake your eth and not just validate for the eth network but for you know any project that chooses to borrow the security of this uh you know of this stake so I was actually quite fascinated because uh I feel like I've seen the similar ideas before in like polka do or Cosmos this idea of like shared security so how much of the inspiration came from observing what is being done on other chains what work what didn't work I would say that uh at least our own Journey was uh from we started from looking at Bitcoin uh where we we were looking at hey you know can you borrow trust from Bitcoin and the set of ideas we got exposed to was mostly merge mining the idea that you have a mining common mining uh Power and a lot of energy is expended on mining can you reuse that mining for other like securing other blockchains this idea was called Mudge mining back in in the day and I think even Satoshi wrote about it in one of their no Bitcoin talk or whatever and the thing is um we looked at merge Mining and one of the big incentive problems in merge mining is if you merge mine Bitcoin and some other altcoin the problem is you can attack the altcoin with impunity because I you know if you if there is an attack on the other chain it doesn't do anything to my mining equipment or like my Bitcoin price is not affected by the attack that happens on this other chain or coin so the crypto economics of Bitcoin is very uniquely tied to bitcoin itself which is that I have invested this bunch of money in like buying the mining hardware and therefore if I do some big harm and a majority tries to do an attack then my mining Hardware may become useless you know because you know the the the BTC is not valuable anymore so this was not the case for you know merge Mining and so the realization came from when you do staking the Dynamics are quite different because staking comes with slashing the equivalent of slashing is like going and finding out which miners behave badly and then burning their mining equipment right like that's not even thinkable as a as as a possible strategy right uh but the thing with staking is stake is intrinsically like you know on the chain and therefore you can burn the stake for misbehaviors on on the chain and this was such a powerful concept and then we realized that if you had a general purpose staking mechanism you can transfer the cryptoeconomic trust to arbitary other services much more easily we didn't know about much of the other other stuff like poka do or even 
layer tws or you know what was going on in Cosmos or avalanch at that time but of course later as you start exploring these ideas you find oh yeah you know","4. How does data availability contribute to the costs of optimistic rollups? According to the text, data availability contributes to the costs of optimistic rollups because ""all the costs were going into publishing data on Ethereum"". This implies that in order for optimistic rollups to work, a large amount of data needs to be published on the Ethereum network, which incurs significant costs." "contract programming environment at that time and what that meant was and and it was very difficult for developers to build on top of it so if we had evm easily virtualized on top of Gano that would be like a cool thing and you could do this on any blockchain you know it was not specific to cardano but it was a starting point for what we could do but you know as we were thinking about it one of the things that clicked to me was if the fundamental thing is your you want to have a common source of decentralized trust then building on distinct source of decentralized trust is replicated effort rather than aggregated effort and so the obvious place which was powerful and Central for us to build was ethereum and so we wanted to then like think about what we would build on top of ethereum and at that time I actually had no good idea for what to build on ethereum and the the story is you know I had a an interaction with uh uh Kyle samani of multicoin and you know pitching this this idea that you can actually do virtualization of evm or non evm blockchains and Kyle in his characteristic style said you know evm is a piece of you know star I don't want you know you should not be working on that and I'm like okay and then uh you know I said oh no but you could run like arbitary virtual machines on top of ethereum too like if you wanted and then he said no this is some kind of an optimistic type rollup it it will never work I said like why would it work because optimistic rollups are going to be very expensive and I went back and we sat down with the team and we were trying to understand this peculiar statement I didn't know at that time went through all the optimistic rups this is 2021 and found that the fees was much higher and I thought ZK rollups should be more expensive because I have to give a ZK proof so why is an optimistic rollup more expensive it was because of data availability and all the costs were going into publishing data on ethereum and so we then said oh yeah you know now it all fit together we had been doing research on data availability for many many years before that so we said I know how to scale data let's just build that so we said okay we're building on ethereum we'll build data availability on top of it so now we have a killer application so that that was the beginning of how we decided to build on ethereum but also know we wanted to build on etherum we just had didn't have a good use case you know at that time you know we didn't know what programming environment would be interesting I had all these like high level pictures where I'd say oh you can build AI you can build like databases you can build like gaming you know environments all these things but you know you have to start somewhere where you know there is Market traction and we found that with data availability so that's the origin of Ian da and EigenLayer on on ethereum add to this what happened was we said like let's go to eat Denver you know we're going to eat Denver 
now next week but this was you know 2022 and go to Denver and hang out with all the people that I didn't know anybody there I actually know zero people in in ethereum at that time and I connected with the vi of ethereum which was you know decentralization permissionless Innovation censorship resistance it felt like this community had a set of principles that they're anchoring on rather than you know either individuals or expediency towards profit or even a product which felt very different to me I came back and told a friend who was working in Google at that time like imagine somebody just goes to a Google conference they're not a Google stock owner they're not a Google like you know programmer they're not even a Google really like application developer they just feel like they're a Google person at the end of a Google conf that's how I felt at the end ofen and so we decided to build on ethereum after that I yeah I think the the three biggest lessons that I've learned over the past years of investing in the space is that ethereum's greatest exports are three things right it's the liquidity of ethereum is the programmability of if evm and the ideology of ethereum and it sounds like to me that when I first saw I L I thought okay this is actually a project that helps expand this you know vastly beyond just the ethereum chain itself and I know you describe I as a generalized mechanism for anybody to build arbitrary distributed systems on top of the ethereum trust Network and when my Normie Friends Ask me okay what the hell does that mean I basically explained to them well you can stake your eth and not just validate for the eth network but for you know any project that chooses to borrow the security of this uh you know of this stake so I was actually quite fascinated because uh I feel like I've seen the similar ideas before in like polka do or Cosmos this idea of like shared security so how much of the inspiration came from observing what is being done on other chains what work what didn't work I would say that uh at least our own Journey was uh from we started from looking at Bitcoin uh where we we were looking at hey you know can you borrow trust from Bitcoin and the set of ideas we got exposed to was mostly merge mining the idea that you have a mining common mining uh Power and a lot of energy is expended on mining can you reuse that mining for other like securing other blockchains this idea was called Mudge mining back in in the day and I think even Satoshi wrote about it in one of their no Bitcoin talk or whatever and the thing is um we looked at merge Mining and one of the big incentive problems in merge mining is if you merge mine Bitcoin and some other altcoin the problem is you can attack the altcoin with impunity because I you know if you if there is an attack on the other chain it doesn't do anything to my mining equipment or like my Bitcoin price is not affected by the attack that happens on this other chain or coin so the crypto economics of Bitcoin is very uniquely tied to bitcoin itself which is that I have invested this bunch of money in like buying the mining hardware and therefore if I do some big harm and a majority tries to do an attack then my mining Hardware may become useless you know because you know the the the BTC is not valuable anymore so this was not the case for you know merge Mining and so the realization came from when you do staking the Dynamics are quite different because staking comes with slashing the equivalent of slashing is like going and finding out 
which miners behave badly and then burning their mining equipment right like that's not even thinkable as a as as a possible strategy right uh but the thing with staking is stake is intrinsically like you know on the chain and therefore you can burn the stake for misbehaviors on on the chain and this was such a powerful concept and then we realized that if you had a general purpose staking mechanism you can transfer the cryptoeconomic trust to arbitary other services much more easily we didn't know about much of the other other stuff like poka do or even layer tws or you know what was going on in Cosmos or avalanch at that time but of course later as you start exploring these ideas you find oh yeah you know","5. What was the key insight that led to the decision to build a data availability layer on Ethereum? The key insight that led to the decision to build a data availability layer on Ethereum was when the speaker realized why optimistic rollups are more expensive than ZK rollups. Specifically, it was because of data availability and all the costs associated with publishing data on the Ethereum network. This realization was sparked by an interaction with Kyle Samani of Multicoin, who said that optimistic rollups would never work due to their high costs. The speaker went back and researched optimistic rollups, only to find that they were indeed more expensive than expected, primarily due to data availability issues. With this newfound understanding, the team decided to build a data availability layer on Ethereum, leveraging their existing research on data availability to create a scalable solution. This decision was driven by the recognition of the importance of data availability in achieving efficient and cost-effective decentralized applications." 
"contract programming environment at that time and what that meant was and and it was very difficult for developers to build on top of it so if we had evm easily virtualized on top of Gano that would be like a cool thing and you could do this on any blockchain you know it was not specific to cardano but it was a starting point for what we could do but you know as we were thinking about it one of the things that clicked to me was if the fundamental thing is your you want to have a common source of decentralized trust then building on distinct source of decentralized trust is replicated effort rather than aggregated effort and so the obvious place which was powerful and Central for us to build was ethereum and so we wanted to then like think about what we would build on top of ethereum and at that time I actually had no good idea for what to build on ethereum and the the story is you know I had a an interaction with uh uh Kyle samani of multicoin and you know pitching this this idea that you can actually do virtualization of evm or non evm blockchains and Kyle in his characteristic style said you know evm is a piece of you know star I don't want you know you should not be working on that and I'm like okay and then uh you know I said oh no but you could run like arbitary virtual machines on top of ethereum too like if you wanted and then he said no this is some kind of an optimistic type rollup it it will never work I said like why would it work because optimistic rollups are going to be very expensive and I went back and we sat down with the team and we were trying to understand this peculiar statement I didn't know at that time went through all the optimistic rups this is 2021 and found that the fees was much higher and I thought ZK rollups should be more expensive because I have to give a ZK proof so why is an optimistic rollup more expensive it was because of data availability and all the costs were going into publishing data on ethereum and so we then said oh yeah you know now it all fit together we had been doing research on data availability for many many years before that so we said I know how to scale data let's just build that so we said okay we're building on ethereum we'll build data availability on top of it so now we have a killer application so that that was the beginning of how we decided to build on ethereum but also know we wanted to build on etherum we just had didn't have a good use case you know at that time you know we didn't know what programming environment would be interesting I had all these like high level pictures where I'd say oh you can build AI you can build like databases you can build like gaming you know environments all these things but you know you have to start somewhere where you know there is Market traction and we found that with data availability so that's the origin of Ian da and EigenLayer on on ethereum add to this what happened was we said like let's go to eat Denver you know we're going to eat Denver now next week but this was you know 2022 and go to Denver and hang out with all the people that I didn't know anybody there I actually know zero people in in ethereum at that time and I connected with the vi of ethereum which was you know decentralization permissionless Innovation censorship resistance it felt like this community had a set of principles that they're anchoring on rather than you know either individuals or expediency towards profit or even a product which felt very different to me I came back and told a friend who was working in Google 
at that time like imagine somebody just goes to a Google conference they're not a Google stock owner they're not a Google like you know programmer they're not even a Google really like application developer they just feel like they're a Google person at the end of a Google conf that's how I felt at the end ofen and so we decided to build on ethereum after that I yeah I think the the three biggest lessons that I've learned over the past years of investing in the space is that ethereum's greatest exports are three things right it's the liquidity of ethereum is the programmability of if evm and the ideology of ethereum and it sounds like to me that when I first saw I L I thought okay this is actually a project that helps expand this you know vastly beyond just the ethereum chain itself and I know you describe I as a generalized mechanism for anybody to build arbitrary distributed systems on top of the ethereum trust Network and when my Normie Friends Ask me okay what the hell does that mean I basically explained to them well you can stake your eth and not just validate for the eth network but for you know any project that chooses to borrow the security of this uh you know of this stake so I was actually quite fascinated because uh I feel like I've seen the similar ideas before in like polka do or Cosmos this idea of like shared security so how much of the inspiration came from observing what is being done on other chains what work what didn't work I would say that uh at least our own Journey was uh from we started from looking at Bitcoin uh where we we were looking at hey you know can you borrow trust from Bitcoin and the set of ideas we got exposed to was mostly merge mining the idea that you have a mining common mining uh Power and a lot of energy is expended on mining can you reuse that mining for other like securing other blockchains this idea was called Mudge mining back in in the day and I think even Satoshi wrote about it in one of their no Bitcoin talk or whatever and the thing is um we looked at merge Mining and one of the big incentive problems in merge mining is if you merge mine Bitcoin and some other altcoin the problem is you can attack the altcoin with impunity because I you know if you if there is an attack on the other chain it doesn't do anything to my mining equipment or like my Bitcoin price is not affected by the attack that happens on this other chain or coin so the crypto economics of Bitcoin is very uniquely tied to bitcoin itself which is that I have invested this bunch of money in like buying the mining hardware and therefore if I do some big harm and a majority tries to do an attack then my mining Hardware may become useless you know because you know the the the BTC is not valuable anymore so this was not the case for you know merge Mining and so the realization came from when you do staking the Dynamics are quite different because staking comes with slashing the equivalent of slashing is like going and finding out which miners behave badly and then burning their mining equipment right like that's not even thinkable as a as as a possible strategy right uh but the thing with staking is stake is intrinsically like you know on the chain and therefore you can burn the stake for misbehaviors on on the chain and this was such a powerful concept and then we realized that if you had a general purpose staking mechanism you can transfer the cryptoeconomic trust to arbitary other services much more easily we didn't know about much of the other other stuff like poka do or even 
layer tws or you know what was going on in Cosmos or avalanch at that time but of course later as you start exploring these ideas you find oh yeah you know","6. In what ways did building on Ethereum align with the principles of decentralization, permissionlessness, and innovation censorship resistance? Based on the provided text, it appears that building on Ethereum aligned with the principles of decentralization, permissionlessness, and innovation censorship resistance in the following ways: 1. **Decentralization**: The speaker notes that the Ethereum community had a set of principles that they were anchoring on, which included decentralization. By building on Ethereum, they were contributing to this decentralized vision. 2. **Permissionlessness**: The text mentions that when the speaker went to an Ethereum conference, they didn't know anyone there, but felt like they belonged because of the community's emphasis on permissionlessness. This suggests that Ethereum's principles of permissionlessness allowed anyone to participate and contribute, regardless of their background or connection. 3. **Innovation Censorship Resistance**: The text notes that the speaker was drawn to Ethereum because it had a set of principles that felt different from other communities. In particular, they mention the idea of censorship resistance, which implies that Ethereum is committed to allowing all sorts of innovation and progress, even if it's uncomfortable or challenging. These principles are reflected in the fact that the speaker was able to come up with their own ideas for projects like Interplanetary Name System (IPNS) and Interplanetary File System (IPFS), which were then implemented on top of Ethereum. This kind of innovation is enabled by a platform that values decentralization, permissionlessness, and censorship resistance. Overall, building on Ethereum seemed to align with the principles of decentralization, permissionlessness, and innovation censorship resistance because it allowed for diverse contributions, empowered individuals to take part in the process, and fostered an environment where new ideas could flourish." 
"contract programming environment at that time and what that meant was and and it was very difficult for developers to build on top of it so if we had evm easily virtualized on top of Gano that would be like a cool thing and you could do this on any blockchain you know it was not specific to cardano but it was a starting point for what we could do but you know as we were thinking about it one of the things that clicked to me was if the fundamental thing is your you want to have a common source of decentralized trust then building on distinct source of decentralized trust is replicated effort rather than aggregated effort and so the obvious place which was powerful and Central for us to build was ethereum and so we wanted to then like think about what we would build on top of ethereum and at that time I actually had no good idea for what to build on ethereum and the the story is you know I had a an interaction with uh uh Kyle samani of multicoin and you know pitching this this idea that you can actually do virtualization of evm or non evm blockchains and Kyle in his characteristic style said you know evm is a piece of you know star I don't want you know you should not be working on that and I'm like okay and then uh you know I said oh no but you could run like arbitary virtual machines on top of ethereum too like if you wanted and then he said no this is some kind of an optimistic type rollup it it will never work I said like why would it work because optimistic rollups are going to be very expensive and I went back and we sat down with the team and we were trying to understand this peculiar statement I didn't know at that time went through all the optimistic rups this is 2021 and found that the fees was much higher and I thought ZK rollups should be more expensive because I have to give a ZK proof so why is an optimistic rollup more expensive it was because of data availability and all the costs were going into publishing data on ethereum and so we then said oh yeah you know now it all fit together we had been doing research on data availability for many many years before that so we said I know how to scale data let's just build that so we said okay we're building on ethereum we'll build data availability on top of it so now we have a killer application so that that was the beginning of how we decided to build on ethereum but also know we wanted to build on etherum we just had didn't have a good use case you know at that time you know we didn't know what programming environment would be interesting I had all these like high level pictures where I'd say oh you can build AI you can build like databases you can build like gaming you know environments all these things but you know you have to start somewhere where you know there is Market traction and we found that with data availability so that's the origin of Ian da and EigenLayer on on ethereum add to this what happened was we said like let's go to eat Denver you know we're going to eat Denver now next week but this was you know 2022 and go to Denver and hang out with all the people that I didn't know anybody there I actually know zero people in in ethereum at that time and I connected with the vi of ethereum which was you know decentralization permissionless Innovation censorship resistance it felt like this community had a set of principles that they're anchoring on rather than you know either individuals or expediency towards profit or even a product which felt very different to me I came back and told a friend who was working in Google 
at that time like imagine somebody just goes to a Google conference they're not a Google stock owner they're not a Google like you know programmer they're not even a Google really like application developer they just feel like they're a Google person at the end of a Google conf that's how I felt at the end ofen and so we decided to build on ethereum after that I yeah I think the the three biggest lessons that I've learned over the past years of investing in the space is that ethereum's greatest exports are three things right it's the liquidity of ethereum is the programmability of if evm and the ideology of ethereum and it sounds like to me that when I first saw I L I thought okay this is actually a project that helps expand this you know vastly beyond just the ethereum chain itself and I know you describe I as a generalized mechanism for anybody to build arbitrary distributed systems on top of the ethereum trust Network and when my Normie Friends Ask me okay what the hell does that mean I basically explained to them well you can stake your eth and not just validate for the eth network but for you know any project that chooses to borrow the security of this uh you know of this stake so I was actually quite fascinated because uh I feel like I've seen the similar ideas before in like polka do or Cosmos this idea of like shared security so how much of the inspiration came from observing what is being done on other chains what work what didn't work I would say that uh at least our own Journey was uh from we started from looking at Bitcoin uh where we we were looking at hey you know can you borrow trust from Bitcoin and the set of ideas we got exposed to was mostly merge mining the idea that you have a mining common mining uh Power and a lot of energy is expended on mining can you reuse that mining for other like securing other blockchains this idea was called Mudge mining back in in the day and I think even Satoshi wrote about it in one of their no Bitcoin talk or whatever and the thing is um we looked at merge Mining and one of the big incentive problems in merge mining is if you merge mine Bitcoin and some other altcoin the problem is you can attack the altcoin with impunity because I you know if you if there is an attack on the other chain it doesn't do anything to my mining equipment or like my Bitcoin price is not affected by the attack that happens on this other chain or coin so the crypto economics of Bitcoin is very uniquely tied to bitcoin itself which is that I have invested this bunch of money in like buying the mining hardware and therefore if I do some big harm and a majority tries to do an attack then my mining Hardware may become useless you know because you know the the the BTC is not valuable anymore so this was not the case for you know merge Mining and so the realization came from when you do staking the Dynamics are quite different because staking comes with slashing the equivalent of slashing is like going and finding out which miners behave badly and then burning their mining equipment right like that's not even thinkable as a as as a possible strategy right uh but the thing with staking is stake is intrinsically like you know on the chain and therefore you can burn the stake for misbehaviors on on the chain and this was such a powerful concept and then we realized that if you had a general purpose staking mechanism you can transfer the cryptoeconomic trust to arbitary other services much more easily we didn't know about much of the other other stuff like poka do or even 
layer tws or you know what was going on in Cosmos or avalanch at that time but of course later as you start exploring these ideas you find oh yeah you know","7. What is the significance of Ethereum's liquidity, programmability, and ideology in relation to decentralized systems? Ethereum's liquidity, programmability, and ideology are significant in relation to decentralized systems for the following reasons: 1. **Liquidity**: Ethereum's liquidity refers to its ability to provide a large and liquid market for trading digital assets. This is crucial for decentralized systems because it enables users to easily buy, sell, and trade cryptocurrencies, thereby facilitating financial transactions and interactions within the network. 2. **Programmability (EVM)**: Ethereum's programmability, or more specifically, its virtual machine (EVM), allows developers to create and deploy smart contracts that can perform complex operations automatically. This enables decentralized applications (dApps) to be built on top of Ethereum, which can execute a wide range of functions, from simple transactions to sophisticated business logic. 3. **Ideology**: Ethereum's ideology is centered around the principles of decentralization, permissionlessness, innovation, censorship resistance, and community governance. These values are essential for creating a decentralized system where users have control over their assets and data. The emphasis on community governance allows for collective decision-making, ensuring that the network remains true to its original purpose. In summary, Ethereum's liquidity enables seamless financial transactions within the network, programmability facilitates the creation of complex smart contracts, and ideology provides the foundation for a truly decentralized system where users can rely on a community-driven approach. These factors combined enable the creation of robust, secure, and adaptable decentralized systems that support various use cases. 
This significance is further emphasized by the statement: ""Ethereum's greatest exports are three things right it's the liquidity of ethereum, the programmability of if evm, and the ideology of ethereum.""" "contract programming environment at that time and what that meant was and and it was very difficult for developers to build on top of it so if we had evm easily virtualized on top of Gano that would be like a cool thing and you could do this on any blockchain you know it was not specific to cardano but it was a starting point for what we could do but you know as we were thinking about it one of the things that clicked to me was if the fundamental thing is your you want to have a common source of decentralized trust then building on distinct source of decentralized trust is replicated effort rather than aggregated effort and so the obvious place which was powerful and Central for us to build was ethereum and so we wanted to then like think about what we would build on top of ethereum and at that time I actually had no good idea for what to build on ethereum and the the story is you know I had a an interaction with uh uh Kyle samani of multicoin and you know pitching this this idea that you can actually do virtualization of evm or non evm blockchains and Kyle in his characteristic style said you know evm is a piece of you know star I don't want you know you should not be working on that and I'm like okay and then uh you know I said oh no but you could run like arbitary virtual machines on top of ethereum too like if you wanted and then he said no this is some kind of an optimistic type rollup it it will never work I said like why would it work because optimistic rollups are going to be very expensive and I went back and we sat down with the team and we were trying to understand this peculiar statement I didn't know at that time went through all the optimistic rups this is 2021 and found that the fees was much higher and I thought ZK rollups should be more expensive because I have to give a ZK proof so why is an optimistic rollup more expensive it was because of data availability and all the costs were going into publishing data on ethereum and so we then said oh yeah you know now it all fit together we had been doing research on data availability for many many years before that so we said I know how to scale data let's just build that so we said okay we're building on ethereum we'll build data availability on top of it so now we have a killer application so that that was the beginning of how we decided to build on ethereum but also know we wanted to build on etherum we just had didn't have a good use case you know at that time you know we didn't know what programming environment would be interesting I had all these like high level pictures where I'd say oh you can build AI you can build like databases you can build like gaming you know environments all these things but you know you have to start somewhere where you know there is Market traction and we found that with data availability so that's the origin of Ian da and EigenLayer on on ethereum add to this what happened was we said like let's go to eat Denver you know we're going to eat Denver now next week but this was you know 2022 and go to Denver and hang out with all the people that I didn't know anybody there I actually know zero people in in ethereum at that time and I connected with the vi of ethereum which was you know decentralization permissionless Innovation censorship resistance it felt like this community had a set of 
principles that they're anchoring on rather than you know either individuals or expediency towards profit or even a product which felt very different to me I came back and told a friend who was working in Google at that time like imagine somebody just goes to a Google conference they're not a Google stock owner they're not a Google like you know programmer they're not even a Google really like application developer they just feel like they're a Google person at the end of a Google conf that's how I felt at the end ofen and so we decided to build on ethereum after that I yeah I think the the three biggest lessons that I've learned over the past years of investing in the space is that ethereum's greatest exports are three things right it's the liquidity of ethereum is the programmability of if evm and the ideology of ethereum and it sounds like to me that when I first saw I L I thought okay this is actually a project that helps expand this you know vastly beyond just the ethereum chain itself and I know you describe I as a generalized mechanism for anybody to build arbitrary distributed systems on top of the ethereum trust Network and when my Normie Friends Ask me okay what the hell does that mean I basically explained to them well you can stake your eth and not just validate for the eth network but for you know any project that chooses to borrow the security of this uh you know of this stake so I was actually quite fascinated because uh I feel like I've seen the similar ideas before in like polka do or Cosmos this idea of like shared security so how much of the inspiration came from observing what is being done on other chains what work what didn't work I would say that uh at least our own Journey was uh from we started from looking at Bitcoin uh where we we were looking at hey you know can you borrow trust from Bitcoin and the set of ideas we got exposed to was mostly merge mining the idea that you have a mining common mining uh Power and a lot of energy is expended on mining can you reuse that mining for other like securing other blockchains this idea was called Mudge mining back in in the day and I think even Satoshi wrote about it in one of their no Bitcoin talk or whatever and the thing is um we looked at merge Mining and one of the big incentive problems in merge mining is if you merge mine Bitcoin and some other altcoin the problem is you can attack the altcoin with impunity because I you know if you if there is an attack on the other chain it doesn't do anything to my mining equipment or like my Bitcoin price is not affected by the attack that happens on this other chain or coin so the crypto economics of Bitcoin is very uniquely tied to bitcoin itself which is that I have invested this bunch of money in like buying the mining hardware and therefore if I do some big harm and a majority tries to do an attack then my mining Hardware may become useless you know because you know the the the BTC is not valuable anymore so this was not the case for you know merge Mining and so the realization came from when you do staking the Dynamics are quite different because staking comes with slashing the equivalent of slashing is like going and finding out which miners behave badly and then burning their mining equipment right like that's not even thinkable as a as as a possible strategy right uh but the thing with staking is stake is intrinsically like you know on the chain and therefore you can burn the stake for misbehaviors on on the chain and this was such a powerful concept and then we 
realized that if you had a general purpose staking mechanism you can transfer the cryptoeconomic trust to arbitary other services much more easily we didn't know about much of the other other stuff like poka do or even layer tws or you know what was going on in Cosmos or avalanch at that time but of course later as you start exploring these ideas you find oh yeah you know","8. How can staking be used as a generalized mechanism for building arbitrary distributed systems on top of the Ethereum trust Network? Based on the provided text, staking can be used as a generalized mechanism for building arbitrary distributed systems on top of the Ethereum trust Network through the concept of ""staking as a shared security"" or ""shared security"" model. This idea allows multiple projects to borrow the security of a stake from the Ethereum network, rather than relying solely on their own resources. In this model, stakers can commit their stake (e.g., Ether) to validate not just the Ethereum network but also any other project that chooses to borrow the security of this stake. This shared security mechanism enables arbitrary distributed systems to be built on top of the Ethereum trust Network, allowing projects to leverage the security and trust provided by the Ethereum community. In essence, staking as a generalized mechanism can facilitate: 1. **Shared Security**: Multiple projects can share the same staking pool, reducing the individual risk and increasing the overall security. 2. **Decentralized Validation**: Stakers can validate multiple chains or systems, rather than being limited to a single chain. 3. **Flexible Reward Distribution**: Stakeholders can earn rewards for their contributions across multiple projects. This concept is inspired by merge mining, where miners combine their hash power to secure multiple chains. Similarly, the staking-as-shared-security model leverages the collective power of stakeholders to secure multiple distributed systems, fostering a more robust and decentralized ecosystem." 
"contract programming environment at that time and what that meant was and and it was very difficult for developers to build on top of it so if we had evm easily virtualized on top of Gano that would be like a cool thing and you could do this on any blockchain you know it was not specific to cardano but it was a starting point for what we could do but you know as we were thinking about it one of the things that clicked to me was if the fundamental thing is your you want to have a common source of decentralized trust then building on distinct source of decentralized trust is replicated effort rather than aggregated effort and so the obvious place which was powerful and Central for us to build was ethereum and so we wanted to then like think about what we would build on top of ethereum and at that time I actually had no good idea for what to build on ethereum and the the story is you know I had a an interaction with uh uh Kyle samani of multicoin and you know pitching this this idea that you can actually do virtualization of evm or non evm blockchains and Kyle in his characteristic style said you know evm is a piece of you know star I don't want you know you should not be working on that and I'm like okay and then uh you know I said oh no but you could run like arbitary virtual machines on top of ethereum too like if you wanted and then he said no this is some kind of an optimistic type rollup it it will never work I said like why would it work because optimistic rollups are going to be very expensive and I went back and we sat down with the team and we were trying to understand this peculiar statement I didn't know at that time went through all the optimistic rups this is 2021 and found that the fees was much higher and I thought ZK rollups should be more expensive because I have to give a ZK proof so why is an optimistic rollup more expensive it was because of data availability and all the costs were going into publishing data on ethereum and so we then said oh yeah you know now it all fit together we had been doing research on data availability for many many years before that so we said I know how to scale data let's just build that so we said okay we're building on ethereum we'll build data availability on top of it so now we have a killer application so that that was the beginning of how we decided to build on ethereum but also know we wanted to build on etherum we just had didn't have a good use case you know at that time you know we didn't know what programming environment would be interesting I had all these like high level pictures where I'd say oh you can build AI you can build like databases you can build like gaming you know environments all these things but you know you have to start somewhere where you know there is Market traction and we found that with data availability so that's the origin of Ian da and EigenLayer on on ethereum add to this what happened was we said like let's go to eat Denver you know we're going to eat Denver now next week but this was you know 2022 and go to Denver and hang out with all the people that I didn't know anybody there I actually know zero people in in ethereum at that time and I connected with the vi of ethereum which was you know decentralization permissionless Innovation censorship resistance it felt like this community had a set of principles that they're anchoring on rather than you know either individuals or expediency towards profit or even a product which felt very different to me I came back and told a friend who was working in Google 
at that time like imagine somebody just goes to a Google conference they're not a Google stock owner they're not a Google like you know programmer they're not even a Google really like application developer they just feel like they're a Google person at the end of a Google conf that's how I felt at the end ofen and so we decided to build on ethereum after that I yeah I think the the three biggest lessons that I've learned over the past years of investing in the space is that ethereum's greatest exports are three things right it's the liquidity of ethereum is the programmability of if evm and the ideology of ethereum and it sounds like to me that when I first saw I L I thought okay this is actually a project that helps expand this you know vastly beyond just the ethereum chain itself and I know you describe I as a generalized mechanism for anybody to build arbitrary distributed systems on top of the ethereum trust Network and when my Normie Friends Ask me okay what the hell does that mean I basically explained to them well you can stake your eth and not just validate for the eth network but for you know any project that chooses to borrow the security of this uh you know of this stake so I was actually quite fascinated because uh I feel like I've seen the similar ideas before in like polka do or Cosmos this idea of like shared security so how much of the inspiration came from observing what is being done on other chains what work what didn't work I would say that uh at least our own Journey was uh from we started from looking at Bitcoin uh where we we were looking at hey you know can you borrow trust from Bitcoin and the set of ideas we got exposed to was mostly merge mining the idea that you have a mining common mining uh Power and a lot of energy is expended on mining can you reuse that mining for other like securing other blockchains this idea was called Mudge mining back in in the day and I think even Satoshi wrote about it in one of their no Bitcoin talk or whatever and the thing is um we looked at merge Mining and one of the big incentive problems in merge mining is if you merge mine Bitcoin and some other altcoin the problem is you can attack the altcoin with impunity because I you know if you if there is an attack on the other chain it doesn't do anything to my mining equipment or like my Bitcoin price is not affected by the attack that happens on this other chain or coin so the crypto economics of Bitcoin is very uniquely tied to bitcoin itself which is that I have invested this bunch of money in like buying the mining hardware and therefore if I do some big harm and a majority tries to do an attack then my mining Hardware may become useless you know because you know the the the BTC is not valuable anymore so this was not the case for you know merge Mining and so the realization came from when you do staking the Dynamics are quite different because staking comes with slashing the equivalent of slashing is like going and finding out which miners behave badly and then burning their mining equipment right like that's not even thinkable as a as as a possible strategy right uh but the thing with staking is stake is intrinsically like you know on the chain and therefore you can burn the stake for misbehaviors on on the chain and this was such a powerful concept and then we realized that if you had a general purpose staking mechanism you can transfer the cryptoeconomic trust to arbitary other services much more easily we didn't know about much of the other other stuff like poka do or even 
layer tws or you know what was going on in Cosmos or avalanch at that time but of course later as you start exploring these ideas you find oh yeah you know","9. In what ways did observing other chains' approaches to shared security influence the development of I (Layer 1)? According to the text, observing other chains' approaches to shared security influenced the development of I (Layer 1) in several ways: * The idea of borrowing trust from another chain's staking power was first explored on Bitcoin through merge mining. However, this approach had a major incentive problem, where an attacker could attack one chain without affecting their own. * The concept of slashing, which is burning the stake for misbehaviors on the chain, was seen as a powerful mechanism that allowed for more effective management of staking and trust. * The realization came that if you have a general-purpose staking mechanism, you can transfer the cryptoeconomic trust to arbitrary other services much more easily. This idea was influenced by observations from other chains such as Polkadot, Cosmos, and Avalanche. However, it is also mentioned in the text that the team behind I (Layer 1) did not know about these ideas at the time of development, but rather discovered them later on through exploration and research." "contract programming environment at that time and what that meant was and and it was very difficult for developers to build on top of it so if we had evm easily virtualized on top of Gano that would be like a cool thing and you could do this on any blockchain you know it was not specific to cardano but it was a starting point for what we could do but you know as we were thinking about it one of the things that clicked to me was if the fundamental thing is your you want to have a common source of decentralized trust then building on distinct source of decentralized trust is replicated effort rather than aggregated effort and so the obvious place which was powerful and Central for us to build was ethereum and so we wanted to then like think about what we would build on top of ethereum and at that time I actually had no good idea for what to build on ethereum and the the story is you know I had a an interaction with uh uh Kyle samani of multicoin and you know pitching this this idea that you can actually do virtualization of evm or non evm blockchains and Kyle in his characteristic style said you know evm is a piece of you know star I don't want you know you should not be working on that and I'm like okay and then uh you know I said oh no but you could run like arbitary virtual machines on top of ethereum too like if you wanted and then he said no this is some kind of an optimistic type rollup it it will never work I said like why would it work because optimistic rollups are going to be very expensive and I went back and we sat down with the team and we were trying to understand this peculiar statement I didn't know at that time went through all the optimistic rups this is 2021 and found that the fees was much higher and I thought ZK rollups should be more expensive because I have to give a ZK proof so why is an optimistic rollup more expensive it was because of data availability and all the costs were going into publishing data on ethereum and so we then said oh yeah you know now it all fit together we had been doing research on data availability for many many years before that so we said I know how to scale data let's just build that so we said okay we're building on ethereum we'll build data availability on 
top of it so now we have a killer application so that that was the beginning of how we decided to build on ethereum but also know we wanted to build on etherum we just had didn't have a good use case you know at that time you know we didn't know what programming environment would be interesting I had all these like high level pictures where I'd say oh you can build AI you can build like databases you can build like gaming you know environments all these things but you know you have to start somewhere where you know there is Market traction and we found that with data availability so that's the origin of Ian da and EigenLayer on on ethereum add to this what happened was we said like let's go to eat Denver you know we're going to eat Denver now next week but this was you know 2022 and go to Denver and hang out with all the people that I didn't know anybody there I actually know zero people in in ethereum at that time and I connected with the vi of ethereum which was you know decentralization permissionless Innovation censorship resistance it felt like this community had a set of principles that they're anchoring on rather than you know either individuals or expediency towards profit or even a product which felt very different to me I came back and told a friend who was working in Google at that time like imagine somebody just goes to a Google conference they're not a Google stock owner they're not a Google like you know programmer they're not even a Google really like application developer they just feel like they're a Google person at the end of a Google conf that's how I felt at the end ofen and so we decided to build on ethereum after that I yeah I think the the three biggest lessons that I've learned over the past years of investing in the space is that ethereum's greatest exports are three things right it's the liquidity of ethereum is the programmability of if evm and the ideology of ethereum and it sounds like to me that when I first saw I L I thought okay this is actually a project that helps expand this you know vastly beyond just the ethereum chain itself and I know you describe I as a generalized mechanism for anybody to build arbitrary distributed systems on top of the ethereum trust Network and when my Normie Friends Ask me okay what the hell does that mean I basically explained to them well you can stake your eth and not just validate for the eth network but for you know any project that chooses to borrow the security of this uh you know of this stake so I was actually quite fascinated because uh I feel like I've seen the similar ideas before in like polka do or Cosmos this idea of like shared security so how much of the inspiration came from observing what is being done on other chains what work what didn't work I would say that uh at least our own Journey was uh from we started from looking at Bitcoin uh where we we were looking at hey you know can you borrow trust from Bitcoin and the set of ideas we got exposed to was mostly merge mining the idea that you have a mining common mining uh Power and a lot of energy is expended on mining can you reuse that mining for other like securing other blockchains this idea was called Mudge mining back in in the day and I think even Satoshi wrote about it in one of their no Bitcoin talk or whatever and the thing is um we looked at merge Mining and one of the big incentive problems in merge mining is if you merge mine Bitcoin and some other altcoin the problem is you can attack the altcoin with impunity because I you know if you if there 
Because if there is an attack on the other chain, it doesn't do anything to my mining equipment, and my Bitcoin price is not affected by an attack that happens on this other chain or coin. The cryptoeconomics of Bitcoin is uniquely tied to Bitcoin itself: I have invested a bunch of money in buying the mining hardware, and therefore if I do some big harm, if a majority tries to do an attack, my mining hardware may become useless, because the BTC is not valuable anymore. That was not the case for merge mining. So the realization came that when you do staking, the dynamics are quite different, because staking comes with slashing. The mining equivalent of slashing would be finding out which miners behaved badly and then burning their mining equipment, which is not even thinkable as a possible strategy. But with staking, the stake is intrinsically on the chain, and therefore you can burn the stake for misbehavior on the chain. This was such a powerful concept, and then we realized that if you had a general-purpose staking mechanism, you could transfer the cryptoeconomic trust to arbitrary other services much more easily. We didn't know about much of the other stuff, like Polkadot or even layer twos, or what was going on in Cosmos or Avalanche at that time, but of course later, as you start exploring these ideas, you find, oh yeah, you know.

**Q10. What is the key difference between merge mining and staking in terms of incentives and attack dynamics?** The key differences are as follows:
1. **Merge mining incentives:** Merge mining allows multiple chains to be mined on the same hardware, but each chain has its own separate incentive mechanism. An attack on one merged chain does not affect the attacker's mining equipment or their rewards from the other chains.
2. **Staking dynamics:** Staking involves validators locking up coins in exchange for the right to validate transactions and create new blocks, so stakers have an intrinsic stake in the security of the network. A validator caught misbehaving (for example, attempting to double-spend) risks having their stake slashed.
3. **Attack dynamics:** Merge mining provides no strong deterrent against attacks on other chains, because such attacks do not directly harm the attacker's hardware or rewards. Staking deters attacks through slashing: misbehaving validators risk losing some or all of their locked-up coins.
4. **Cryptoeconomic trust:** Cryptoeconomic trust is distinctive to staking because validators' incentives and punishments are tied directly to their actions on the specific network, making them committed to its security; merge mining lacks this property, since each merged chain is incentivized separately.
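A toy model may help pin down the asymmetry described above and in Q10. Everything here is hypothetical and illustrative; the only point is the sign of the attacker's downside under each scheme.

```python
# Toy model of the incentive asymmetry between merge mining and staking.
# All numbers are hypothetical; the point is the sign of the attacker's loss.

def merge_mining_attack_cost() -> float:
    # An attack on the merge-mined altcoin leaves the attacker's Bitcoin
    # mining hardware and BTC rewards untouched: marginal downside ~ zero.
    return 0.0

def staking_attack_cost(stake_at_risk: float, slash_fraction: float) -> float:
    # With slashing, a provably attributable attack burns part of the stake.
    return stake_at_risk * slash_fraction

print(merge_mining_attack_cost())                                        # 0.0
print(staking_attack_cost(stake_at_risk=1_000_000, slash_fraction=1.0))  # 1000000.0
```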
"these these other things are on this vicinity and what we I think have built is the most generalized system for shat security which is more General than all the other systems in in the flexibility with which trust can be offered and shared so that that's how I would phrase it in relationship with some of the other systems like any networks or protocols there's many different stakeholders so obviously there's rakers who are staking their eth and then also choosing to provide security with that stake to other applications there's The Operators that run the network but I think the concept of the AVS is quite interesting and not something that you know many people in crypto have kind of really come across before EigenLayer so can you help us explain you know what exactly are these abs maybe that's a good SEC way to talk about the the origins of IG da as well yeah um ABS is actively validated services it's a term that we co to explain what types of things can be built on EigenLayer because a lot of people look at EigenLayer you know at least right now in in the context of some kind of a D5 protocol it was never intended to be a D5 protocol it was intended to connect stakers and operators to innovators people inventing new distributed systems and decentralized protocols to borrow and share the the trust and security and we had to come up with an umbrella term you know people usually think of these as chains but we think chain is a very restrictive and a narrow way of thinking uh so we think of these as services and why service right like you know in in the cloud there is an analogous thing called software as a service right software as a service is you write a piece of software throw it on AWS or Google cloud and let it run there and you know as people are using the software you know you pay for the cloud but you also make money and if the unit Economics work then you're actually making money as a SAS but the most important thing for me in the structure of SAS and Cloud was how much open Innovation that actually enabled because you have the cloud and you don't have to think about all the hardware and like how to run it and how to scale it and how to Prov the enough amount of like compute to actually run your services what happens is you have uh a very powerful system where anybody can come and innovate and build new SAS services and put it on top of the internet this led to Super specialization very very narrow specialized SAS services that were built and lots of lots of people all around the world like you know in places which could not compete on the hardware basis could compete purely on a software basis and this is a very interesting important thing that happened with the internet and so if you think of what is the analog analogous thing that can be done on on top of the the crypto blockchain infrastructure that's what an AVS an actively validated service an actively validated service is a service that you uh you know write and it is anything that requires decentralized validation right and these decentralized validated services are then like managed by EigenLayer to make sure that all the node operators opt in the enough stake is backing it like how much particular attributable economic commitment has been backing that service all of this accounting is managed by Aon layer but as a Creator as an innovator you can just write the service and put it on top of like uh EigenLayer so that's the category of uh actively validated Services it includes something like running a new layer one chain 
but not exclusively running a chain; you may be running very specific services. And what might that service be?

**Summary.** The speaker discusses actively validated services (AVSs) on EigenLayer, comparing them to software as a service (SaaS) on cloud platforms like AWS or Google Cloud, where developers write software and run it without worrying about the underlying infrastructure. He argues that EigenLayer is the most generalized system for shared security, offering more flexibility than other systems in how trust can be offered and shared, and he identifies the main stakeholders: restakers, who back services with their staked ETH, and operators, who run the networks. "Actively validated service" is the umbrella term coined for anything that requires decentralized validation; such services are managed by EigenLayer, which ensures that node operators opt in with sufficient stake backing them. Just as SaaS enabled super-specialized, purely software-based businesses worldwide, AVSs are meant to enable the analogous open innovation on top of blockchain infrastructure.
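As a rough illustration of the accounting role described above, here is a minimal sketch. The class and method names (`EigenLayerRegistry`, `opt_in`, and so on) are hypothetical illustrations, not EigenLayer's actual contract interface.

```python
# Minimal sketch of the opt-in and stake-accounting role described above.
# All names are hypothetical, not EigenLayer's actual interface.

from dataclasses import dataclass, field

@dataclass
class AVS:
    name: str
    operators: dict[str, float] = field(default_factory=dict)  # operator -> stake

    @property
    def total_stake(self) -> float:
        """Attributable economic commitment backing this service."""
        return sum(self.operators.values())

@dataclass
class EigenLayerRegistry:
    avss: dict[str, AVS] = field(default_factory=dict)

    def register_avs(self, name: str) -> AVS:
        """An innovator registers a new actively validated service."""
        self.avss[name] = AVS(name)
        return self.avss[name]

    def opt_in(self, operator: str, stake: float, avs_name: str) -> None:
        """An operator opts their (re)stake into a specific AVS."""
        self.avss[avs_name].operators[operator] = stake

registry = EigenLayerRegistry()
oracle = registry.register_avs("price-oracle")
registry.opt_in("operator-1", stake=32.0, avs_name="price-oracle")
registry.opt_in("operator-2", stake=64.0, avs_name="price-oracle")
print(oracle.total_stake)  # 96.0 (ETH backing the service)
```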
"these these other things are on this vicinity and what we I think have built is the most generalized system for shat security which is more General than all the other systems in in the flexibility with which trust can be offered and shared so that that's how I would phrase it in relationship with some of the other systems like any networks or protocols there's many different stakeholders so obviously there's rakers who are staking their eth and then also choosing to provide security with that stake to other applications there's The Operators that run the network but I think the concept of the AVS is quite interesting and not something that you know many people in crypto have kind of really come across before EigenLayer so can you help us explain you know what exactly are these abs maybe that's a good SEC way to talk about the the origins of IG da as well yeah um ABS is actively validated services it's a term that we co to explain what types of things can be built on EigenLayer because a lot of people look at EigenLayer you know at least right now in in the context of some kind of a D5 protocol it was never intended to be a D5 protocol it was intended to connect stakers and operators to innovators people inventing new distributed systems and decentralized protocols to borrow and share the the trust and security and we had to come up with an umbrella term you know people usually think of these as chains but we think chain is a very restrictive and a narrow way of thinking uh so we think of these as services and why service right like you know in in the cloud there is an analogous thing called software as a service right software as a service is you write a piece of software throw it on AWS or Google cloud and let it run there and you know as people are using the software you know you pay for the cloud but you also make money and if the unit Economics work then you're actually making money as a SAS but the most important thing for me in the structure of SAS and Cloud was how much open Innovation that actually enabled because you have the cloud and you don't have to think about all the hardware and like how to run it and how to scale it and how to Prov the enough amount of like compute to actually run your services what happens is you have uh a very powerful system where anybody can come and innovate and build new SAS services and put it on top of the internet this led to Super specialization very very narrow specialized SAS services that were built and lots of lots of people all around the world like you know in places which could not compete on the hardware basis could compete purely on a software basis and this is a very interesting important thing that happened with the internet and so if you think of what is the analog analogous thing that can be done on on top of the the crypto blockchain infrastructure that's what an AVS an actively validated service an actively validated service is a service that you uh you know write and it is anything that requires decentralized validation right and these decentralized validated services are then like managed by EigenLayer to make sure that all the node operators opt in the enough stake is backing it like how much particular attributable economic commitment has been backing that service all of this accounting is managed by Aon layer but as a Creator as an innovator you can just write the service and put it on top of like uh EigenLayer so that's the category of uh actively validated Services it includes something like running a new layer one chain 
**Q. What is the name of the system being described in the text?** The system is called EigenLayer; the services built on top of it are called actively validated services (AVSs).

**Q. Is it an intended use of EigenLayer to connect stakers and operators with innovators building decentralized protocols?** Yes. According to the text, EigenLayer was not intended to be a DeFi protocol, but rather to connect stakers and operators with innovators building new distributed systems and decentralized protocols, allowing them to borrow and share trust and security.

**Q. Does the concept of AVS (actively validated services) describe a new idea in the crypto space?** AVS is an umbrella term introduced by EigenLayer for the distributed systems and decentralized protocols that can be built on its infrastructure, with trust and security shared among stakers, operators, and innovators. The concept extends the SaaS model into the crypto space: just as SaaS services are built on common cloud infrastructure, AVSs are built on a common source of decentralized trust, with decentralized validation ensuring security. Decentralized validation and blockchain-based distributed systems are well-established ideas, so AVS is less a wholly new invention than a fresh, systematic way of categorizing and structuring such services so innovators can build and deploy them more easily.
"these these other things are on this vicinity and what we I think have built is the most generalized system for shat security which is more General than all the other systems in in the flexibility with which trust can be offered and shared so that that's how I would phrase it in relationship with some of the other systems like any networks or protocols there's many different stakeholders so obviously there's rakers who are staking their eth and then also choosing to provide security with that stake to other applications there's The Operators that run the network but I think the concept of the AVS is quite interesting and not something that you know many people in crypto have kind of really come across before EigenLayer so can you help us explain you know what exactly are these abs maybe that's a good SEC way to talk about the the origins of IG da as well yeah um ABS is actively validated services it's a term that we co to explain what types of things can be built on EigenLayer because a lot of people look at EigenLayer you know at least right now in in the context of some kind of a D5 protocol it was never intended to be a D5 protocol it was intended to connect stakers and operators to innovators people inventing new distributed systems and decentralized protocols to borrow and share the the trust and security and we had to come up with an umbrella term you know people usually think of these as chains but we think chain is a very restrictive and a narrow way of thinking uh so we think of these as services and why service right like you know in in the cloud there is an analogous thing called software as a service right software as a service is you write a piece of software throw it on AWS or Google cloud and let it run there and you know as people are using the software you know you pay for the cloud but you also make money and if the unit Economics work then you're actually making money as a SAS but the most important thing for me in the structure of SAS and Cloud was how much open Innovation that actually enabled because you have the cloud and you don't have to think about all the hardware and like how to run it and how to scale it and how to Prov the enough amount of like compute to actually run your services what happens is you have uh a very powerful system where anybody can come and innovate and build new SAS services and put it on top of the internet this led to Super specialization very very narrow specialized SAS services that were built and lots of lots of people all around the world like you know in places which could not compete on the hardware basis could compete purely on a software basis and this is a very interesting important thing that happened with the internet and so if you think of what is the analog analogous thing that can be done on on top of the the crypto blockchain infrastructure that's what an AVS an actively validated service an actively validated service is a service that you uh you know write and it is anything that requires decentralized validation right and these decentralized validated services are then like managed by EigenLayer to make sure that all the node operators opt in the enough stake is backing it like how much particular attributable economic commitment has been backing that service all of this accounting is managed by Aon layer but as a Creator as an innovator you can just write the service and put it on top of like uh EigenLayer so that's the category of uh actively validated Services it includes something like running a new layer one chain 
**Q. Was EigenLayer intended to be a DeFi protocol?** No. According to the speaker, EigenLayer was never intended to be a DeFi protocol; it was intended to connect stakers and operators with innovators building new distributed systems and decentralized protocols.

**Q. Do people typically think of distributed systems as chains or services?** Based on the conversation, many people in the crypto space typically think of distributed systems as chains, since blockchain protocols provide a linear sequence of blocks storing data in a tamper-evident, decentralized manner and are often seen as the foundation for all decentralized applications. The speaker disagrees with this framing, calling it restrictive and narrow: many other things can be built on top of blockchain infrastructure, such as specialized protocols or entirely new kinds of decentralized systems. He proposes categorizing them as "actively validated services," which encompasses a broader range of possibilities than chains alone.
"these these other things are on this vicinity and what we I think have built is the most generalized system for shat security which is more General than all the other systems in in the flexibility with which trust can be offered and shared so that that's how I would phrase it in relationship with some of the other systems like any networks or protocols there's many different stakeholders so obviously there's rakers who are staking their eth and then also choosing to provide security with that stake to other applications there's The Operators that run the network but I think the concept of the AVS is quite interesting and not something that you know many people in crypto have kind of really come across before EigenLayer so can you help us explain you know what exactly are these abs maybe that's a good SEC way to talk about the the origins of IG da as well yeah um ABS is actively validated services it's a term that we co to explain what types of things can be built on EigenLayer because a lot of people look at EigenLayer you know at least right now in in the context of some kind of a D5 protocol it was never intended to be a D5 protocol it was intended to connect stakers and operators to innovators people inventing new distributed systems and decentralized protocols to borrow and share the the trust and security and we had to come up with an umbrella term you know people usually think of these as chains but we think chain is a very restrictive and a narrow way of thinking uh so we think of these as services and why service right like you know in in the cloud there is an analogous thing called software as a service right software as a service is you write a piece of software throw it on AWS or Google cloud and let it run there and you know as people are using the software you know you pay for the cloud but you also make money and if the unit Economics work then you're actually making money as a SAS but the most important thing for me in the structure of SAS and Cloud was how much open Innovation that actually enabled because you have the cloud and you don't have to think about all the hardware and like how to run it and how to scale it and how to Prov the enough amount of like compute to actually run your services what happens is you have uh a very powerful system where anybody can come and innovate and build new SAS services and put it on top of the internet this led to Super specialization very very narrow specialized SAS services that were built and lots of lots of people all around the world like you know in places which could not compete on the hardware basis could compete purely on a software basis and this is a very interesting important thing that happened with the internet and so if you think of what is the analog analogous thing that can be done on on top of the the crypto blockchain infrastructure that's what an AVS an actively validated service an actively validated service is a service that you uh you know write and it is anything that requires decentralized validation right and these decentralized validated services are then like managed by EigenLayer to make sure that all the node operators opt in the enough stake is backing it like how much particular attributable economic commitment has been backing that service all of this accounting is managed by Aon layer but as a Creator as an innovator you can just write the service and put it on top of like uh EigenLayer so that's the category of uh actively validated Services it includes something like running a new layer one chain 
**Q. What is an analogy for AVS in the context of cloud computing?** Software as a service (SaaS). Just as SaaS lets developers write software and run it on platforms like AWS or Google Cloud without worrying about the underlying infrastructure, an AVS lets innovators build decentralized systems and protocols on top of EigenLayer without having to bootstrap validation and security themselves. In both cases an abstraction layer sits on top of the infrastructure: in SaaS you write the software and the cloud provider handles the machines; in an AVS you write the service and EigenLayer handles the validation and stake accounting.

**Q. Was the concept of software as a service made possible by cloud infrastructure?** Cloud infrastructure popularized SaaS but did not originate the concept. The idea of selling access to software over a network, rather than having users install it on their own machines, goes back to the time-sharing systems of the 1960s and 1970s, where customers paid for access to shared computing resources and applications. What changed from the late 2000s onward is that providers like Amazon Web Services, Google Cloud Platform, and Microsoft Azure made scalable, on-demand infrastructure cheap and easy to deploy, which in turn enabled SaaS to grow into the dominant model.
"these these other things are on this vicinity and what we I think have built is the most generalized system for shat security which is more General than all the other systems in in the flexibility with which trust can be offered and shared so that that's how I would phrase it in relationship with some of the other systems like any networks or protocols there's many different stakeholders so obviously there's rakers who are staking their eth and then also choosing to provide security with that stake to other applications there's The Operators that run the network but I think the concept of the AVS is quite interesting and not something that you know many people in crypto have kind of really come across before EigenLayer so can you help us explain you know what exactly are these abs maybe that's a good SEC way to talk about the the origins of IG da as well yeah um ABS is actively validated services it's a term that we co to explain what types of things can be built on EigenLayer because a lot of people look at EigenLayer you know at least right now in in the context of some kind of a D5 protocol it was never intended to be a D5 protocol it was intended to connect stakers and operators to innovators people inventing new distributed systems and decentralized protocols to borrow and share the the trust and security and we had to come up with an umbrella term you know people usually think of these as chains but we think chain is a very restrictive and a narrow way of thinking uh so we think of these as services and why service right like you know in in the cloud there is an analogous thing called software as a service right software as a service is you write a piece of software throw it on AWS or Google cloud and let it run there and you know as people are using the software you know you pay for the cloud but you also make money and if the unit Economics work then you're actually making money as a SAS but the most important thing for me in the structure of SAS and Cloud was how much open Innovation that actually enabled because you have the cloud and you don't have to think about all the hardware and like how to run it and how to scale it and how to Prov the enough amount of like compute to actually run your services what happens is you have uh a very powerful system where anybody can come and innovate and build new SAS services and put it on top of the internet this led to Super specialization very very narrow specialized SAS services that were built and lots of lots of people all around the world like you know in places which could not compete on the hardware basis could compete purely on a software basis and this is a very interesting important thing that happened with the internet and so if you think of what is the analog analogous thing that can be done on on top of the the crypto blockchain infrastructure that's what an AVS an actively validated service an actively validated service is a service that you uh you know write and it is anything that requires decentralized validation right and these decentralized validated services are then like managed by EigenLayer to make sure that all the node operators opt in the enough stake is backing it like how much particular attributable economic commitment has been backing that service all of this accounting is managed by Aon layer but as a Creator as an innovator you can just write the service and put it on top of like uh EigenLayer so that's the category of uh actively validated Services it includes something like running a new layer one chain 
**Q. Did cloud infrastructure enable open innovation and specialization of SaaS services?** Yes. The analogy drawn between cloud computing and AVSs rests on the underlying infrastructure providing:
1. **Reduced barriers to entry:** hardware provisioning, scaling, and maintenance are abstracted away, so innovators can focus on creating value.
2. **Increased accessibility:** a shared resource pool reachable over the internet democratizes who can innovate.
3. **Decoupled development and deployment:** rapid prototyping, testing, and iteration without infrastructure concerns.
4. **Scalability:** services can grow quickly to meet demand.
5. **Economic efficiency:** shared resources avoid the capital cost of on-premises data centers.
For SaaS providers this meant specialization in narrow niches, faster innovation cycles, and reduced hardware risk, producing a wide range of niche applications and new business models built on data analytics, artificial intelligence, and other emerging technologies.
"these these other things are on this vicinity and what we I think have built is the most generalized system for shat security which is more General than all the other systems in in the flexibility with which trust can be offered and shared so that that's how I would phrase it in relationship with some of the other systems like any networks or protocols there's many different stakeholders so obviously there's rakers who are staking their eth and then also choosing to provide security with that stake to other applications there's The Operators that run the network but I think the concept of the AVS is quite interesting and not something that you know many people in crypto have kind of really come across before EigenLayer so can you help us explain you know what exactly are these abs maybe that's a good SEC way to talk about the the origins of IG da as well yeah um ABS is actively validated services it's a term that we co to explain what types of things can be built on EigenLayer because a lot of people look at EigenLayer you know at least right now in in the context of some kind of a D5 protocol it was never intended to be a D5 protocol it was intended to connect stakers and operators to innovators people inventing new distributed systems and decentralized protocols to borrow and share the the trust and security and we had to come up with an umbrella term you know people usually think of these as chains but we think chain is a very restrictive and a narrow way of thinking uh so we think of these as services and why service right like you know in in the cloud there is an analogous thing called software as a service right software as a service is you write a piece of software throw it on AWS or Google cloud and let it run there and you know as people are using the software you know you pay for the cloud but you also make money and if the unit Economics work then you're actually making money as a SAS but the most important thing for me in the structure of SAS and Cloud was how much open Innovation that actually enabled because you have the cloud and you don't have to think about all the hardware and like how to run it and how to scale it and how to Prov the enough amount of like compute to actually run your services what happens is you have uh a very powerful system where anybody can come and innovate and build new SAS services and put it on top of the internet this led to Super specialization very very narrow specialized SAS services that were built and lots of lots of people all around the world like you know in places which could not compete on the hardware basis could compete purely on a software basis and this is a very interesting important thing that happened with the internet and so if you think of what is the analog analogous thing that can be done on on top of the the crypto blockchain infrastructure that's what an AVS an actively validated service an actively validated service is a service that you uh you know write and it is anything that requires decentralized validation right and these decentralized validated services are then like managed by EigenLayer to make sure that all the node operators opt in the enough stake is backing it like how much particular attributable economic commitment has been backing that service all of this accounting is managed by Aon layer but as a Creator as an innovator you can just write the service and put it on top of like uh EigenLayer so that's the category of uh actively validated Services it includes something like running a new layer one chain 
**Q. Is the goal of EigenLayer to support decentralized validation of services, making them "actively validated services"?** Yes. EigenLayer connects stakers and operators with innovators building new decentralized systems and protocols, letting them borrow and share trust and security. AVSs are services requiring decentralized validation, not limited to layer one chains, and EigenLayer acts as the platform managing the validation and stake accounting so creators can focus on building.

**Q. Does EigenLayer manage the accounting for staking and economic commitment in AVSs?** Yes. The creator of an AVS does not have to track how much stake is backing their service or which node operators are participating; EigenLayer handles that accounting, much as a cloud provider handles infrastructure billing for a SaaS deployment. This lets creators focus on building specialized services that benefit from the trust and security EigenLayer provides.
"you know usual examples it might be to run an oracle which fetches data from the internet has a group of nodes agree that that's the correct data and then put it on top of a blockchain it may be a a bridge which reads data from another chain and then moves it on top of another chain it may be a um you know an AI service you may be sitting on ethereum and you want to request some AI inputs and you know you need to run an AI inference to actually adjust the prices or something on top of your Unis swap pool so these are all examples of services there may be other like much more nuanced and specific services that show up on I which we already seeing for example uh we see things like prove of location proof of location is I want to know where these nodes are placed can I run a decentralized service to know the location of these other either operators or users in a decentralized manner and how might the decentralized nodes know it by sending Network latency information like you know I send a packet when do I receive it back if I can receive it back within like you know 30 milliseconds it must be close to one of my locations and if many many nodes can do it and parallel and triangulate where the nodes are so that's a really interesting kind of piece of information so prove of location is that that's an example of a service but we've seen all kinds of different Services I would say like 20 categories of services that are kind of building on top of EigenLayer so that's the ABS actively validated service is a category I can go into some of these examples but that's a high level uh overview yeah and given how diverse these use cases can be I'm curious uh and this is something that a lot of people are discussing as well in terms of the security assumptions because you mentioned the concept of slashing so let's say if someone is reaking for like 20 different applications one of those applications happen to do something bad and you know the validator has to be slashed uh what does that process look like and you know how do we how do we think about the implications for the entire kind of uh you know EigenLayer stack the uh the way to think about it is slashing is encoded into smart contracts that talk to the EigenLayer contract whenever a Staker is opting into a new AVS they're basically opting specifically into an AVS contract and the AVS contract specifies the conditions of registration the conditions of payment and the conditions of slashing so basically who can come in what's the POs to incentive what's the negative incentive to do this stuff and those are encoded in the AVS contracts and so now when um when a Staker opts into a bunch of avss what they're actually saying is I'm opting into these bunch of conditions of like positive and negative incentives and I need to do this bunch of work to actually keep up my uh positive incentives and if the contracts if the slashing conditions are return in code on ethereum Smart contracts and you know for a moment let's assume there are no code programming errors which we have to deal with but you know then essentially what you're saying is you're opting into rigid objectively verifiable slashing conditions which are return in code on ethereum and what that means is if I know that I'm running the software correctly I won't get slashed by this you know by this contract in fact I may even run a piece of code we call an anti-s slasher what an anti slasher does is whenever I issue a signature it checks that this signature will not trigger slashing you know before 
But we've seen all kinds of different services, I would say around 20 categories of services building on top of EigenLayer. So that's the AVS, the actively validated service, as a category. I can go into some of these examples, but that's the high-level overview.

Yeah, and given how diverse these use cases can be, I'm curious about the security assumptions, which is something a lot of people are discussing as well, because you mentioned the concept of slashing. Let's say someone is restaking for 20 different applications, and one of those applications happens to do something bad, so the validator has to be slashed. What does that process look like, and how do we think about the implications for the entire EigenLayer stack?

The way to think about it is that slashing is encoded into smart contracts that talk to the EigenLayer contract. Whenever a staker opts into a new AVS, they're opting specifically into an AVS contract, and the AVS contract specifies the conditions of registration, the conditions of payment, and the conditions of slashing: basically who can come in, what the positive incentive is, and what the negative incentive is. Those are encoded in the AVS contracts. So when a staker opts into a bunch of AVSs, what they're actually saying is: I'm opting into this bundle of positive and negative incentives, and I need to do this work to keep earning my positive incentives. If the slashing conditions are written in code in Ethereum smart contracts, and for a moment let's assume there are no programming errors (which we do have to deal with), then essentially you're opting into rigid, objectively verifiable slashing conditions written in code on Ethereum. What that means is: if I know that I'm running the software correctly, I won't get slashed by this contract. In fact, I may even run a piece of code we call an anti-slasher. What an anti-slasher does is, whenever I issue a signature, check that this signature will not trigger slashing before it issues the signature. This kind of anti-slasher can be run locally, so you know that you will not get slashed, provided the contracts are correctly written.
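The anti-slasher pattern described here can be sketched as a local pre-signing guard. This is a hypothetical illustration, not EigenLayer's actual implementation: the message shape, the injected predicate, and the equivocation rule are all assumed for the example.

```python
# Sketch of an anti-slasher: before a signature leaves the node, replay
# the same objective slashing predicate the on-chain contract would apply,
# and refuse to sign anything that would trigger it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    task_id: int
    payload: bytes

class AntiSlasher:
    def __init__(self, would_be_slashed):
        # `would_be_slashed(message, history) -> bool` mirrors the on-chain
        # slashing condition; it is injected so any AVS rule can be used.
        self.would_be_slashed = would_be_slashed
        self.history: list[Message] = []

    def sign(self, message: Message, signer):
        # Refuse to produce a signature that the contract could slash.
        if self.would_be_slashed(message, self.history):
            return None
        self.history.append(message)
        return signer(message)

# Example rule: signing two different payloads for the same task id
# (equivocation) is an objectively attributable, slashable fault.
def equivocation_rule(msg: Message, history: list) -> bool:
    return any(m.task_id == msg.task_id and m.payload != msg.payload
               for m in history)

guard = AntiSlasher(equivocation_rule)
fake_signer = lambda m: f"sig({m.task_id})".encode()
print(guard.sign(Message(1, b"price=100"), fake_signer))  # signs
print(guard.sign(Message(1, b"price=999"), fake_signer))  # refused: None
```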
So that's the first level: as a staker, you're opting into objectively verifiable, rigid conditions written as smart contracts on Ethereum, so the trust model is very similar to the trust model when you opt into Uniswap or any of these kinds of DeFi protocols. But to protect users even more, we have another layer of protection, because we know that code can sometimes be buggy. We see this all the time in crypto: a protocol has buggy code, and suddenly people lose their funds. This is something we take a cautious approach to, and the way we take that cautious approach is by having what we call a slashing veto committee. This is a committee of external parties, experts in protocol design, who can vet whether the contract triggered the slashing because of an actual protocol violation or whether a bug led to the slashing. If it is adjudicated as a bug, slashing does not happen. So slashing requires two distinct things: the objective contract to trigger it and the human committee to approve it; otherwise slashing is not fulfilled.

In the balance of powers between stakers and AVSs, we lean on protecting stakers, because stakers are basically underwriting the system with their own risk, and the guarantee we want to give a staker is: if you are not malicious, you will not be slashed. Slashing is there only for absolutely attributable actions that the staker or operator took which are malicious, not for regular operations where they made a configuration mistake, or the program had a bug, or anything like that. When you're building a permissionless system where anybody can come in and participate, you need to protect the system against malicious actors infiltrating it, so you need a system of karma, of positive and negative incentives, to keep the system going. That's what EigenLayer does: it makes sure that as a staker you have no incentive to attack the system, while also making sure that AVSs have no agency to attack the system even if they deploy buggy code. Firstly, they have to put in smart contract code, not arbitrary adjudication conditions, and even then there is a backstop in the form of a common slashing veto committee.

My guess is that a lot of people who are concerned about systemic risks that EigenLayer could introduce are almost confusing the concept of restaking with the concept of rehypothecation, because they see that concept play out a lot in DeFi: somebody has a bunch of collateral, they use it as margin to do some lending or borrowing, and then they use what they borrowed as margin again, and borrow more and more. But this is something fundamentally different, right? If you do get slashed, it's not like every single app tied to that stake suddenly collapses and stops functioning, right? Just to make that very clear to our listeners.

Yeah, absolutely, there are lots and lots of differences. To take the comparison between the two things you just laid out: people think that restaking into 100 protocols is the same as taking a 100x leveraged position. Actually these two concepts are not related at all, and the easiest way to see it is this: if you take a 100x margin lending position and the market price of that asset moves 1%, you will get liquidated and lose your entire position; whereas if I opt into 100 protocols and I don't act maliciously on any of them, I will never get slashed. It's a completely different thing."

"The text is a discussion about EigenLayer, a decentralized network that enables multiple applications to run on top of it. The speaker highlights various use cases, such as an oracle that fetches data from the internet and has a group of nodes agree on the correct data, or a bridge that reads data from one chain and moves it to another. The speaker also discusses slashing, the mechanism EigenLayer uses to punish malicious actors. Slashing is encoded in smart contracts that talk to the EigenLayer contract; when stakers opt into an AVS contract, they accept its conditions for registration, payment, and slashing. A slash requires both an objective contract trigger and approval from a human committee, the slashing veto committee, a group of protocol-design experts who adjudicate whether a trigger reflected a real fault or a bug. The speaker emphasizes protecting stakers: the mechanism is meant to penalize only attributable malicious actions, so honest participants keep the network secure without fearing unfair loss. Finally, the speaker clarifies that restaking in EigenLayer is not rehypothecation, where collateral is pledged repeatedly to take on more debt; restaking exposes a staker to slashing only for malicious actions, a risk entirely distinct from taking on leverage. Overall, the discussion highlights the role of EigenLayer's security mechanisms, such as slashing, in protecting stakers and maintaining the integrity of the network."
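The 100x-leverage versus 100-protocol contrast from the transcript can be made concrete with a toy calculation. Everything below is illustrative: it is not a financial model, and slashing the entire stake on a single fault is an assumption made only to keep the sketch short.

```python
# Toy contrast of the two risk models: a 100x leveraged position is wiped
# out by a 1% adverse price move, while restaking the same collateral into
# 100 services loses nothing unless the staker commits a slashable fault.

def leveraged_equity(collateral: float, leverage: float, price_move: float) -> float:
    """Equity after a price move on a leveraged position.

    Position size = collateral * leverage, so P&L scales with leverage;
    equity is floored at zero, representing liquidation."""
    return max(0.0, collateral * (1.0 + leverage * price_move))

def restaked_stake(collateral: float, num_services: int, malicious_faults: int) -> float:
    """Stake after participating in `num_services` services.

    Slashing (assumed total here, for simplicity) fires only on a
    malicious, attributable fault, no matter how many services share
    the same stake."""
    return 0.0 if malicious_faults > 0 else collateral

print(leveraged_equity(100.0, 100.0, -0.01))            # 0.0   -> a 1% move liquidates
print(restaked_stake(100.0, 100, malicious_faults=0))   # 100.0 -> honest, untouched
```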
"you know usual examples it might be to run an oracle which fetches data from the internet has a group of nodes agree that that's the correct data and then put it on top of a blockchain it may be a a bridge which reads data from another chain and then moves it on top of another chain it may be a um you know an AI service you may be sitting on ethereum and you want to request some AI inputs and you know you need to run an AI inference to actually adjust the prices or something on top of your Unis swap pool so these are all examples of services there may be other like much more nuanced and specific services that show up on I which we already seeing for example uh we see things like prove of location proof of location is I want to know where these nodes are placed can I run a decentralized service to know the location of these other either operators or users in a decentralized manner and how might the decentralized nodes know it by sending Network latency information like you know I send a packet when do I receive it back if I can receive it back within like you know 30 milliseconds it must be close to one of my locations and if many many nodes can do it and parallel and triangulate where the nodes are so that's a really interesting kind of piece of information so prove of location is that that's an example of a service but we've seen all kinds of different Services I would say like 20 categories of services that are kind of building on top of EigenLayer so that's the ABS actively validated service is a category I can go into some of these examples but that's a high level uh overview yeah and given how diverse these use cases can be I'm curious uh and this is something that a lot of people are discussing as well in terms of the security assumptions because you mentioned the concept of slashing so let's say if someone is reaking for like 20 different applications one of those applications happen to do something bad and you know the validator has to be slashed uh what does that process look like and you know how do we how do we think about the implications for the entire kind of uh you know EigenLayer stack the uh the way to think about it is slashing is encoded into smart contracts that talk to the EigenLayer contract whenever a Staker is opting into a new AVS they're basically opting specifically into an AVS contract and the AVS contract specifies the conditions of registration the conditions of payment and the conditions of slashing so basically who can come in what's the POs to incentive what's the negative incentive to do this stuff and those are encoded in the AVS contracts and so now when um when a Staker opts into a bunch of avss what they're actually saying is I'm opting into these bunch of conditions of like positive and negative incentives and I need to do this bunch of work to actually keep up my uh positive incentives and if the contracts if the slashing conditions are return in code on ethereum Smart contracts and you know for a moment let's assume there are no code programming errors which we have to deal with but you know then essentially what you're saying is you're opting into rigid objectively verifiable slashing conditions which are return in code on ethereum and what that means is if I know that I'm running the software correctly I won't get slashed by this you know by this contract in fact I may even run a piece of code we call an anti-s slasher what an anti slasher does is whenever I issue a signature it checks that this signature will not trigger slashing you know before 
it issues the signature and so this kind of an anti- slasher can be run locally and so you know that you will not get slashed if you actually if the contracts are correctly return so that's the first level which is as a Staker you're opting into objectively verifiable rigid conditions return as smart contracts on ethereum so the trust model is very similar to the other kind of trust model when you're going and opting into a Unis swap or a or any of these kinds of D5 protocols but to protect users even more we have another layer of protection because we know that code can sometimes be buggy and we see this all the time in crypto like sometimes you know a protocol has a buggy code and then like suddenly people lose their funds and this is something we are taking a cautious approach to and the way we take the cous approach is by actually having what we call a slashing veto committee this is a committee of external parties you know experts in kind of protocol design who can actually vet whether the contract triggered the slashing for a you know on on on the actual protocol or it was a bug that LE led to the slashing if it is adjudicated as a bug slashing does not happen so therefore slashing requires two distinct things the objective contract to trigger the slashing and the kind of human committee to approve it otherwise slashing is not fulfilled so we you know in the balance of powers between stakers and avss we lean on protecting stakers because you know know stakers are basically like underwriting the system with their own like Risk and the the guarantee we want to give to a Staker is if you are not malicious you will not be slashed so slashing is there only for absolutely attributable attributable actions that the Staker or operator took which are malicious not for regular operations where they made a configuration mistake or the the program had a bug or anything like that so because you know when you're building a pon system where anybody can come in and participate you need to protect the system against like malicious actors INF infiltrating the system so you need a system of kerma like positive or negative incentives to keep the system going and that's what Igan does is make sure that as a Staker you don't have any incentive to try to like attack the system whereas also make sure that avss you know have no agency to attack the system even if they put in a buggy C firstly they to put in smart contract code not like you know have arbitrary adjudication conditions but smart contract code and then even then there is a back stop in terms of a common slashing V committee my guess is that a lot of people who are kind of you know concerned about systemic risks that Ian ler could introduce are almost confusing the concept of reaking with the concept of rehypothecation because they see this concept play out a lot in defi where somebody has a bunch of collateral and they use that to margin to do some sort of lending or borrowing and then they use the stuff they borrow to margin again and borrow more and more and more but this is something fundamentally different than that right so if if you do get slash it's not like everything just like every single app that's tied to that St suddenly just like collaps and and stop functioning right just to just to make that very clear to our listeners yeah absolutely I think there are lots and lots of differences I think to take the kind of comparison between the two two things you just laid out imagine you take uh you know people are thinking by reaking into 
"What is the main reason that slashing can be triggered in EigenLayer? According to the context, slashing is triggered only when a staker performs an attributable action that is deemed malicious: it occurs only when there is clear evidence of intentional wrongdoing by the staker. "Attributable" means the action can be directly linked to the staker, rather than to an external factor such as a bug in the code. EigenLayer thus penalizes intentional malicious behavior while protecting honest stakers from being unfairly slashed."
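One classic example of an objectively attributable fault is equivocation: two valid signatures from the same key over conflicting answers to the same task. The sketch below is hypothetical and uses an HMAC as a stand-in for a real signature scheme, so verification here needs the secret, unlike the ECDSA or BLS signatures a real AVS would verify publicly.

```python
# Sketch of what "attributable" means: slashing evidence must prove, from
# signed data alone, that a specific key misbehaved. Two valid signatures
# over different payloads for the same task can only exist if the key
# holder signed both -- a fault no bug or network hiccup can produce.

import hmac, hashlib

def toy_sign(secret: bytes, task_id: int, payload: bytes) -> bytes:
    return hmac.new(secret, f"{task_id}:".encode() + payload, hashlib.sha256).digest()

def is_attributable_equivocation(secret: bytes, task_id: int,
                                 payload_a: bytes, sig_a: bytes,
                                 payload_b: bytes, sig_b: bytes) -> bool:
    if payload_a == payload_b:
        return False  # signing the same answer twice is not a fault
    valid_a = hmac.compare_digest(sig_a, toy_sign(secret, task_id, payload_a))
    valid_b = hmac.compare_digest(sig_b, toy_sign(secret, task_id, payload_b))
    return valid_a and valid_b  # both signatures check out: provable fault

secret = b"operator-key"
s1 = toy_sign(secret, 7, b"answer=A")
s2 = toy_sign(secret, 7, b"answer=B")
print(is_attributable_equivocation(secret, 7, b"answer=A", s1, b"answer=B", s2))  # True
```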
"What are the three types of incentives for validators in EigenLayer? The text doesn't enumerate three incentive types explicitly, but when a staker opts into an actively validated service (AVS), they accept a bundle of "positive and negative incentives" defined by the AVS contract. From that context we can infer three: 1. **Positive incentives**: rewards for validators who act honestly and perform the service correctly. 2. **Negative incentives (slashing)**: penalties for validators who act maliciously or provably violate the contract's conditions, which discourage bad behavior. 3. **Payment incentives**: compensation for the work itself, specified by the contract's conditions of payment, likely in tokens or other rewards. Note that this reading is an inference from the context rather than an explicit list in the source; the sketch after this answer illustrates how an AVS contract bundles these conditions."
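The three condition sets named above (registration, payment, slashing) can be pictured as a single bundle a staker accepts when opting in. The field names and types below are illustrative assumptions, not EigenLayer's actual contract interface.

```python
# Sketch of the condition bundle an AVS contract specifies: who may
# register, what the work pays, and what gets a participant slashed.

from dataclasses import dataclass
from typing import Callable

@dataclass
class AVSContract:
    name: str
    # Registration: who can come in (here, a minimum stake requirement).
    min_stake: float
    # Payment: the positive incentive for doing the work.
    reward_per_task: float
    # Slashing: the negative incentive, an objective predicate over evidence.
    is_slashable: Callable[[bytes], bool]

    def can_register(self, stake: float) -> bool:
        return stake >= self.min_stake

def opt_in(staker_stake: float, avss: list) -> list:
    """A staker opting into several AVSs accepts each one's full bundle
    of positive and negative incentives at once."""
    return [a.name for a in avss if a.can_register(staker_stake)]

oracle = AVSContract("price-oracle", min_stake=32.0, reward_per_task=0.01,
                     is_slashable=lambda ev: ev.startswith(b"equivocation"))
bridge = AVSContract("fast-bridge", min_stake=64.0, reward_per_task=0.05,
                     is_slashable=lambda ev: ev.startswith(b"invalid-transfer"))
print(opt_in(40.0, [oracle, bridge]))  # ['price-oracle'] -- too little stake for the bridge
```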
"you know usual examples it might be to run an oracle which fetches data from the internet has a group of nodes agree that that's the correct data and then put it on top of a blockchain it may be a a bridge which reads data from another chain and then moves it on top of another chain it may be a um you know an AI service you may be sitting on ethereum and you want to request some AI inputs and you know you need to run an AI inference to actually adjust the prices or something on top of your Unis swap pool so these are all examples of services there may be other like much more nuanced and specific services that show up on I which we already seeing for example uh we see things like prove of location proof of location is I want to know where these nodes are placed can I run a decentralized service to know the location of these other either operators or users in a decentralized manner and how might the decentralized nodes know it by sending Network latency information like you know I send a packet when do I receive it back if I can receive it back within like you know 30 milliseconds it must be close to one of my locations and if many many nodes can do it and parallel and triangulate where the nodes are so that's a really interesting kind of piece of information so prove of location is that that's an example of a service but we've seen all kinds of different Services I would say like 20 categories of services that are kind of building on top of EigenLayer so that's the ABS actively validated service is a category I can go into some of these examples but that's a high level uh overview yeah and given how diverse these use cases can be I'm curious uh and this is something that a lot of people are discussing as well in terms of the security assumptions because you mentioned the concept of slashing so let's say if someone is reaking for like 20 different applications one of those applications happen to do something bad and you know the validator has to be slashed uh what does that process look like and you know how do we how do we think about the implications for the entire kind of uh you know EigenLayer stack the uh the way to think about it is slashing is encoded into smart contracts that talk to the EigenLayer contract whenever a Staker is opting into a new AVS they're basically opting specifically into an AVS contract and the AVS contract specifies the conditions of registration the conditions of payment and the conditions of slashing so basically who can come in what's the POs to incentive what's the negative incentive to do this stuff and those are encoded in the AVS contracts and so now when um when a Staker opts into a bunch of avss what they're actually saying is I'm opting into these bunch of conditions of like positive and negative incentives and I need to do this bunch of work to actually keep up my uh positive incentives and if the contracts if the slashing conditions are return in code on ethereum Smart contracts and you know for a moment let's assume there are no code programming errors which we have to deal with but you know then essentially what you're saying is you're opting into rigid objectively verifiable slashing conditions which are return in code on ethereum and what that means is if I know that I'm running the software correctly I won't get slashed by this you know by this contract in fact I may even run a piece of code we call an anti-s slasher what an anti slasher does is whenever I issue a signature it checks that this signature will not trigger slashing you know before 
it issues the signature and so this kind of an anti- slasher can be run locally and so you know that you will not get slashed if you actually if the contracts are correctly return so that's the first level which is as a Staker you're opting into objectively verifiable rigid conditions return as smart contracts on ethereum so the trust model is very similar to the other kind of trust model when you're going and opting into a Unis swap or a or any of these kinds of D5 protocols but to protect users even more we have another layer of protection because we know that code can sometimes be buggy and we see this all the time in crypto like sometimes you know a protocol has a buggy code and then like suddenly people lose their funds and this is something we are taking a cautious approach to and the way we take the cous approach is by actually having what we call a slashing veto committee this is a committee of external parties you know experts in kind of protocol design who can actually vet whether the contract triggered the slashing for a you know on on on the actual protocol or it was a bug that LE led to the slashing if it is adjudicated as a bug slashing does not happen so therefore slashing requires two distinct things the objective contract to trigger the slashing and the kind of human committee to approve it otherwise slashing is not fulfilled so we you know in the balance of powers between stakers and avss we lean on protecting stakers because you know know stakers are basically like underwriting the system with their own like Risk and the the guarantee we want to give to a Staker is if you are not malicious you will not be slashed so slashing is there only for absolutely attributable attributable actions that the Staker or operator took which are malicious not for regular operations where they made a configuration mistake or the the program had a bug or anything like that so because you know when you're building a pon system where anybody can come in and participate you need to protect the system against like malicious actors INF infiltrating the system so you need a system of kerma like positive or negative incentives to keep the system going and that's what Igan does is make sure that as a Staker you don't have any incentive to try to like attack the system whereas also make sure that avss you know have no agency to attack the system even if they put in a buggy C firstly they to put in smart contract code not like you know have arbitrary adjudication conditions but smart contract code and then even then there is a back stop in terms of a common slashing V committee my guess is that a lot of people who are kind of you know concerned about systemic risks that Ian ler could introduce are almost confusing the concept of reaking with the concept of rehypothecation because they see this concept play out a lot in defi where somebody has a bunch of collateral and they use that to margin to do some sort of lending or borrowing and then they use the stuff they borrow to margin again and borrow more and more and more but this is something fundamentally different than that right so if if you do get slash it's not like everything just like every single app that's tied to that St suddenly just like collaps and and stop functioning right just to just to make that very clear to our listeners yeah absolutely I think there are lots and lots of differences I think to take the kind of comparison between the two two things you just laid out imagine you take uh you know people are thinking by reaking into 
"How does the slashing veto committee work to prevent false positives? The slashing veto committee is a group of external experts who review a triggered slashing and approve or reject it, providing a layer of protection against false positives in which a legitimate staker would be unfairly penalized. Here's how it works (a sketch of the flow follows this list):
1. **Triggering slashing**: an objective, verifiable slashing condition encoded in the AVS smart contract on Ethereum fires. If the staker behaved correctly and the contract is correct, this should never happen.
2. **Human review**: the committee examines whether the trigger reflects genuinely malicious behavior or a bug in the protocol or contract code.
3. **Committee decision**: if the committee adjudicates the trigger as a bug, it vetoes the slashing and nothing happens; if it confirms malicious behavior, the slashing proceeds.
4. **Protection for stakers**: legitimate stakers are therefore not penalized for protocol errors or configuration mistakes that are not their fault; only truly malicious, attributable actions result in slashing.
In summary, the veto committee is a human backstop on top of the objective contract trigger: slashing requires both, which maintains fairness and trust within the system."
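A minimal sketch of this two-step adjudication, with an assumed majority rule for the committee (the quorum size and vote mechanics are illustrative, not specified in the source):

```python
# Slashing executes only when (1) the objective on-chain predicate fired
# AND (2) the veto committee does not attribute the trigger to a bug.

def slashing_executes(contract_triggered: bool, committee_votes: list,
                      quorum: int) -> bool:
    """committee_votes[i] is True if member i judges the trigger
    legitimate (a real fault, not a contract bug)."""
    if not contract_triggered:
        return False               # no objective trigger, nothing to approve
    approvals = sum(committee_votes)
    return approvals >= quorum     # the committee can only veto, never initiate

# A buggy contract fires, the committee adjudicates it as a bug: no slash.
print(slashing_executes(True,  [False, False, True], quorum=2))  # False
# A genuine fault fires and is confirmed: the slash goes through.
print(slashing_executes(True,  [True, True, True],  quorum=2))   # True
# The committee alone can never slash an operator the contract didn't flag.
print(slashing_executes(False, [True, True, True],  quorum=2))   # False
```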
"you know usual examples it might be to run an oracle which fetches data from the internet has a group of nodes agree that that's the correct data and then put it on top of a blockchain it may be a a bridge which reads data from another chain and then moves it on top of another chain it may be a um you know an AI service you may be sitting on ethereum and you want to request some AI inputs and you know you need to run an AI inference to actually adjust the prices or something on top of your Unis swap pool so these are all examples of services there may be other like much more nuanced and specific services that show up on I which we already seeing for example uh we see things like prove of location proof of location is I want to know where these nodes are placed can I run a decentralized service to know the location of these other either operators or users in a decentralized manner and how might the decentralized nodes know it by sending Network latency information like you know I send a packet when do I receive it back if I can receive it back within like you know 30 milliseconds it must be close to one of my locations and if many many nodes can do it and parallel and triangulate where the nodes are so that's a really interesting kind of piece of information so prove of location is that that's an example of a service but we've seen all kinds of different Services I would say like 20 categories of services that are kind of building on top of EigenLayer so that's the ABS actively validated service is a category I can go into some of these examples but that's a high level uh overview yeah and given how diverse these use cases can be I'm curious uh and this is something that a lot of people are discussing as well in terms of the security assumptions because you mentioned the concept of slashing so let's say if someone is reaking for like 20 different applications one of those applications happen to do something bad and you know the validator has to be slashed uh what does that process look like and you know how do we how do we think about the implications for the entire kind of uh you know EigenLayer stack the uh the way to think about it is slashing is encoded into smart contracts that talk to the EigenLayer contract whenever a Staker is opting into a new AVS they're basically opting specifically into an AVS contract and the AVS contract specifies the conditions of registration the conditions of payment and the conditions of slashing so basically who can come in what's the POs to incentive what's the negative incentive to do this stuff and those are encoded in the AVS contracts and so now when um when a Staker opts into a bunch of avss what they're actually saying is I'm opting into these bunch of conditions of like positive and negative incentives and I need to do this bunch of work to actually keep up my uh positive incentives and if the contracts if the slashing conditions are return in code on ethereum Smart contracts and you know for a moment let's assume there are no code programming errors which we have to deal with but you know then essentially what you're saying is you're opting into rigid objectively verifiable slashing conditions which are return in code on ethereum and what that means is if I know that I'm running the software correctly I won't get slashed by this you know by this contract in fact I may even run a piece of code we call an anti-s slasher what an anti slasher does is whenever I issue a signature it checks that this signature will not trigger slashing you know before 
it issues the signature and so this kind of an anti- slasher can be run locally and so you know that you will not get slashed if you actually if the contracts are correctly return so that's the first level which is as a Staker you're opting into objectively verifiable rigid conditions return as smart contracts on ethereum so the trust model is very similar to the other kind of trust model when you're going and opting into a Unis swap or a or any of these kinds of D5 protocols but to protect users even more we have another layer of protection because we know that code can sometimes be buggy and we see this all the time in crypto like sometimes you know a protocol has a buggy code and then like suddenly people lose their funds and this is something we are taking a cautious approach to and the way we take the cous approach is by actually having what we call a slashing veto committee this is a committee of external parties you know experts in kind of protocol design who can actually vet whether the contract triggered the slashing for a you know on on on the actual protocol or it was a bug that LE led to the slashing if it is adjudicated as a bug slashing does not happen so therefore slashing requires two distinct things the objective contract to trigger the slashing and the kind of human committee to approve it otherwise slashing is not fulfilled so we you know in the balance of powers between stakers and avss we lean on protecting stakers because you know know stakers are basically like underwriting the system with their own like Risk and the the guarantee we want to give to a Staker is if you are not malicious you will not be slashed so slashing is there only for absolutely attributable attributable actions that the Staker or operator took which are malicious not for regular operations where they made a configuration mistake or the the program had a bug or anything like that so because you know when you're building a pon system where anybody can come in and participate you need to protect the system against like malicious actors INF infiltrating the system so you need a system of kerma like positive or negative incentives to keep the system going and that's what Igan does is make sure that as a Staker you don't have any incentive to try to like attack the system whereas also make sure that avss you know have no agency to attack the system even if they put in a buggy C firstly they to put in smart contract code not like you know have arbitrary adjudication conditions but smart contract code and then even then there is a back stop in terms of a common slashing V committee my guess is that a lot of people who are kind of you know concerned about systemic risks that Ian ler could introduce are almost confusing the concept of reaking with the concept of rehypothecation because they see this concept play out a lot in defi where somebody has a bunch of collateral and they use that to margin to do some sort of lending or borrowing and then they use the stuff they borrow to margin again and borrow more and more and more but this is something fundamentally different than that right so if if you do get slash it's not like everything just like every single app that's tied to that St suddenly just like collaps and and stop functioning right just to just to make that very clear to our listeners yeah absolutely I think there are lots and lots of differences I think to take the kind of comparison between the two two things you just laid out imagine you take uh you know people are thinking by reaking into 
"Can you explain the difference between slashing and rehypothecation in the context of EigenLayer? **Slashing**, in the EigenLayer context, is the mechanism by which a staker is penalized for malicious actions within the system, such as attempting to attack or compromise a service they validate. The slashing conditions are objective and verifiable, encoded in smart contracts on Ethereum, so a validator who follows the rules correctly cannot be slashed, and the veto committee backstops the contracts against bugs. **Rehypothecation**, by contrast, is a DeFi practice in which collateral is used to take out loans or margin positions and the borrowed assets are then pledged again as collateral, creating a chain of debt; poorly managed, it can lead to liquidation cascades and counterparty risk. The key difference lies in purpose and application: slashing protects the system by penalizing provable wrongdoing, while rehypothecation is about leveraging up investments through reuse of collateral. Restaking into many services therefore does not compound market risk the way leverage does, because an honest staker is never slashed regardless of how many services share the stake."
"you know usual examples it might be to run an oracle which fetches data from the internet has a group of nodes agree that that's the correct data and then put it on top of a blockchain it may be a a bridge which reads data from another chain and then moves it on top of another chain it may be a um you know an AI service you may be sitting on ethereum and you want to request some AI inputs and you know you need to run an AI inference to actually adjust the prices or something on top of your Unis swap pool so these are all examples of services there may be other like much more nuanced and specific services that show up on I which we already seeing for example uh we see things like prove of location proof of location is I want to know where these nodes are placed can I run a decentralized service to know the location of these other either operators or users in a decentralized manner and how might the decentralized nodes know it by sending Network latency information like you know I send a packet when do I receive it back if I can receive it back within like you know 30 milliseconds it must be close to one of my locations and if many many nodes can do it and parallel and triangulate where the nodes are so that's a really interesting kind of piece of information so prove of location is that that's an example of a service but we've seen all kinds of different Services I would say like 20 categories of services that are kind of building on top of EigenLayer so that's the ABS actively validated service is a category I can go into some of these examples but that's a high level uh overview yeah and given how diverse these use cases can be I'm curious uh and this is something that a lot of people are discussing as well in terms of the security assumptions because you mentioned the concept of slashing so let's say if someone is reaking for like 20 different applications one of those applications happen to do something bad and you know the validator has to be slashed uh what does that process look like and you know how do we how do we think about the implications for the entire kind of uh you know EigenLayer stack the uh the way to think about it is slashing is encoded into smart contracts that talk to the EigenLayer contract whenever a Staker is opting into a new AVS they're basically opting specifically into an AVS contract and the AVS contract specifies the conditions of registration the conditions of payment and the conditions of slashing so basically who can come in what's the POs to incentive what's the negative incentive to do this stuff and those are encoded in the AVS contracts and so now when um when a Staker opts into a bunch of avss what they're actually saying is I'm opting into these bunch of conditions of like positive and negative incentives and I need to do this bunch of work to actually keep up my uh positive incentives and if the contracts if the slashing conditions are return in code on ethereum Smart contracts and you know for a moment let's assume there are no code programming errors which we have to deal with but you know then essentially what you're saying is you're opting into rigid objectively verifiable slashing conditions which are return in code on ethereum and what that means is if I know that I'm running the software correctly I won't get slashed by this you know by this contract in fact I may even run a piece of code we call an anti-s slasher what an anti slasher does is whenever I issue a signature it checks that this signature will not trigger slashing you know before 
it issues the signature and so this kind of an anti- slasher can be run locally and so you know that you will not get slashed if you actually if the contracts are correctly return so that's the first level which is as a Staker you're opting into objectively verifiable rigid conditions return as smart contracts on ethereum so the trust model is very similar to the other kind of trust model when you're going and opting into a Unis swap or a or any of these kinds of D5 protocols but to protect users even more we have another layer of protection because we know that code can sometimes be buggy and we see this all the time in crypto like sometimes you know a protocol has a buggy code and then like suddenly people lose their funds and this is something we are taking a cautious approach to and the way we take the cous approach is by actually having what we call a slashing veto committee this is a committee of external parties you know experts in kind of protocol design who can actually vet whether the contract triggered the slashing for a you know on on on the actual protocol or it was a bug that LE led to the slashing if it is adjudicated as a bug slashing does not happen so therefore slashing requires two distinct things the objective contract to trigger the slashing and the kind of human committee to approve it otherwise slashing is not fulfilled so we you know in the balance of powers between stakers and avss we lean on protecting stakers because you know know stakers are basically like underwriting the system with their own like Risk and the the guarantee we want to give to a Staker is if you are not malicious you will not be slashed so slashing is there only for absolutely attributable attributable actions that the Staker or operator took which are malicious not for regular operations where they made a configuration mistake or the the program had a bug or anything like that so because you know when you're building a pon system where anybody can come in and participate you need to protect the system against like malicious actors INF infiltrating the system so you need a system of kerma like positive or negative incentives to keep the system going and that's what Igan does is make sure that as a Staker you don't have any incentive to try to like attack the system whereas also make sure that avss you know have no agency to attack the system even if they put in a buggy C firstly they to put in smart contract code not like you know have arbitrary adjudication conditions but smart contract code and then even then there is a back stop in terms of a common slashing V committee my guess is that a lot of people who are kind of you know concerned about systemic risks that Ian ler could introduce are almost confusing the concept of reaking with the concept of rehypothecation because they see this concept play out a lot in defi where somebody has a bunch of collateral and they use that to margin to do some sort of lending or borrowing and then they use the stuff they borrow to margin again and borrow more and more and more but this is something fundamentally different than that right so if if you do get slash it's not like everything just like every single app that's tied to that St suddenly just like collaps and and stop functioning right just to just to make that very clear to our listeners yeah absolutely I think there are lots and lots of differences I think to take the kind of comparison between the two two things you just laid out imagine you take uh you know people are thinking by reaking into 
"What happens if a validator is slashed, and how does it affect other protocols they have opted into? When a validator is slashed, their staked tokens are penalized for malicious behavior: the relevant AVS contract's objective slashing condition fired, and the slashing veto committee confirmed the trigger was a real fault rather than a bug. Each AVS has its own contract specifying its own conditions of registration, payment, and slashing, so being slashed by one AVS does not automatically trigger the slashing logic of the other AVSs the staker has opted into. The shared stake backing those services is reduced, but each service evaluates its own conditions independently against the staker's behavior on that service. To illustrate, a staker running an oracle, a bridge, and a data availability service who equivocates on the oracle can be slashed under the oracle's contract, while the bridge and DA contracts, seeing no fault on their own tasks, do not slash. A sketch of this per-service accounting follows."
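A sketch of that per-service accounting, under the assumption (for illustration only, not from the source) that each confirmed fault slashes a fixed fraction of the shared stake:

```python
# Each AVS evaluates only its own slashing predicate against its own
# evidence, so a fault on one service does not, by itself, fire the
# slashing logic of the others.

def settle(stake: float, avs_faults: dict, slash_fraction: float):
    """Evaluate every opted-in AVS independently; return the remaining
    stake and the list of AVSs that actually slashed."""
    slashed_by = [name for name, faulted in avs_faults.items() if faulted]
    remaining = stake * (1.0 - slash_fraction) ** len(slashed_by)
    return remaining, slashed_by

# Opted into five AVSs, malicious on exactly one of them:
stake, slashed = settle(100.0,
                        {"oracle": False, "bridge": True, "da": False,
                         "location": False, "ai": False},
                        slash_fraction=0.5)
print(stake, slashed)  # 50.0 ['bridge'] -- the other four keep running
```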
"you know usual examples it might be to run an oracle which fetches data from the internet has a group of nodes agree that that's the correct data and then put it on top of a blockchain it may be a a bridge which reads data from another chain and then moves it on top of another chain it may be a um you know an AI service you may be sitting on ethereum and you want to request some AI inputs and you know you need to run an AI inference to actually adjust the prices or something on top of your Unis swap pool so these are all examples of services there may be other like much more nuanced and specific services that show up on I which we already seeing for example uh we see things like prove of location proof of location is I want to know where these nodes are placed can I run a decentralized service to know the location of these other either operators or users in a decentralized manner and how might the decentralized nodes know it by sending Network latency information like you know I send a packet when do I receive it back if I can receive it back within like you know 30 milliseconds it must be close to one of my locations and if many many nodes can do it and parallel and triangulate where the nodes are so that's a really interesting kind of piece of information so prove of location is that that's an example of a service but we've seen all kinds of different Services I would say like 20 categories of services that are kind of building on top of EigenLayer so that's the ABS actively validated service is a category I can go into some of these examples but that's a high level uh overview yeah and given how diverse these use cases can be I'm curious uh and this is something that a lot of people are discussing as well in terms of the security assumptions because you mentioned the concept of slashing so let's say if someone is reaking for like 20 different applications one of those applications happen to do something bad and you know the validator has to be slashed uh what does that process look like and you know how do we how do we think about the implications for the entire kind of uh you know EigenLayer stack the uh the way to think about it is slashing is encoded into smart contracts that talk to the EigenLayer contract whenever a Staker is opting into a new AVS they're basically opting specifically into an AVS contract and the AVS contract specifies the conditions of registration the conditions of payment and the conditions of slashing so basically who can come in what's the POs to incentive what's the negative incentive to do this stuff and those are encoded in the AVS contracts and so now when um when a Staker opts into a bunch of avss what they're actually saying is I'm opting into these bunch of conditions of like positive and negative incentives and I need to do this bunch of work to actually keep up my uh positive incentives and if the contracts if the slashing conditions are return in code on ethereum Smart contracts and you know for a moment let's assume there are no code programming errors which we have to deal with but you know then essentially what you're saying is you're opting into rigid objectively verifiable slashing conditions which are return in code on ethereum and what that means is if I know that I'm running the software correctly I won't get slashed by this you know by this contract in fact I may even run a piece of code we call an anti-s slasher what an anti slasher does is whenever I issue a signature it checks that this signature will not trigger slashing you know before 
it issues the signature and so this kind of an anti- slasher can be run locally and so you know that you will not get slashed if you actually if the contracts are correctly return so that's the first level which is as a Staker you're opting into objectively verifiable rigid conditions return as smart contracts on ethereum so the trust model is very similar to the other kind of trust model when you're going and opting into a Unis swap or a or any of these kinds of D5 protocols but to protect users even more we have another layer of protection because we know that code can sometimes be buggy and we see this all the time in crypto like sometimes you know a protocol has a buggy code and then like suddenly people lose their funds and this is something we are taking a cautious approach to and the way we take the cous approach is by actually having what we call a slashing veto committee this is a committee of external parties you know experts in kind of protocol design who can actually vet whether the contract triggered the slashing for a you know on on on the actual protocol or it was a bug that LE led to the slashing if it is adjudicated as a bug slashing does not happen so therefore slashing requires two distinct things the objective contract to trigger the slashing and the kind of human committee to approve it otherwise slashing is not fulfilled so we you know in the balance of powers between stakers and avss we lean on protecting stakers because you know know stakers are basically like underwriting the system with their own like Risk and the the guarantee we want to give to a Staker is if you are not malicious you will not be slashed so slashing is there only for absolutely attributable attributable actions that the Staker or operator took which are malicious not for regular operations where they made a configuration mistake or the the program had a bug or anything like that so because you know when you're building a pon system where anybody can come in and participate you need to protect the system against like malicious actors INF infiltrating the system so you need a system of kerma like positive or negative incentives to keep the system going and that's what Igan does is make sure that as a Staker you don't have any incentive to try to like attack the system whereas also make sure that avss you know have no agency to attack the system even if they put in a buggy C firstly they to put in smart contract code not like you know have arbitrary adjudication conditions but smart contract code and then even then there is a back stop in terms of a common slashing V committee my guess is that a lot of people who are kind of you know concerned about systemic risks that Ian ler could introduce are almost confusing the concept of reaking with the concept of rehypothecation because they see this concept play out a lot in defi where somebody has a bunch of collateral and they use that to margin to do some sort of lending or borrowing and then they use the stuff they borrow to margin again and borrow more and more and more but this is something fundamentally different than that right so if if you do get slash it's not like everything just like every single app that's tied to that St suddenly just like collaps and and stop functioning right just to just to make that very clear to our listeners yeah absolutely I think there are lots and lots of differences I think to take the kind of comparison between the two two things you just laid out imagine you take uh you know people are thinking by reaking into 
100 protocols is the same as like you know taking 100x leverage position actually these two concepts are not at all related and the easiest way to see it is if you take a 100x you know margin landing position if the market price of that asset moves 1% you will get liquidated you will lose your entire position whereas if I opt into 100 protocols and I don't act maliciously on any of them I will never get slashed it's a completely different thing","Why does EigenLayer lean towards protecting stakers over validators when it comes to slashing? According to the EigenLayer team, they lean towards protecting stakers over validators when it comes to slashing because: 1. **Stakers are underwriting the system with their own risk**: Stakers are essentially guaranteeing the integrity of the EigenLayer protocol with their own resources (i.e., their stake). By protecting them from malicious actions, EigenLayer aims to provide a secure environment for stakers. 2. **Slashing is only for absolutely attributable and malicious actions**: EigenLayer's slashing mechanism is designed to prevent stakers from engaging in intentional malicious behavior, such as trying to manipulate the system or exploit vulnerabilities. 3. **Staking is a form of risk-taking**: By allowing stakers to participate in the protocol and earn rewards, EigenLayer incentivizes them to take calculated risks. Protecting their stakes acknowledges that they are taking on this risk and allows them to focus on making informed decisions rather than worrying about malicious slashing. In contrast, validators (or AVS) are external actors that interact with the EigenLayer protocol, and their actions can be more easily monitored and audited. By having a slashing veto committee, EigenLayer can ensure that any potential issues with validator behavior are addressed before they become a problem. Overall, EigenLayer's design prioritizes protecting the stakers' incentives and ensuring the security of the protocol by providing an extra layer of protection against malicious actions." 
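The contrast drawn at the end of the excerpt, restaking into 100 protocols versus holding a 100x leveraged position, boils down to two very different loss conditions, which the short sketch below encodes. The liquidation rule is the usual textbook simplification for leveraged positions, and reducing maliciousness to a boolean flag is an abstraction of mine for illustration, not EigenLayer's model.

```python
# Two very different loss conditions, per the comparison in the transcript.
# The liquidation rule is the usual simplification for leveraged positions;
# reducing maliciousness to a boolean is an abstraction for illustration.

def leveraged_position_liquidated(price_move_pct: float, leverage: int = 100) -> bool:
    """A 100x position is wiped out by roughly a 1% adverse move."""
    return abs(price_move_pct) >= 100.0 / leverage

def restaker_slashed(acted_maliciously_on_any_avs: bool) -> bool:
    """Slashing requires an attributable malicious act; the number of AVSs
    opted into and the market price play no role."""
    return acted_maliciously_on_any_avs

assert leveraged_position_liquidated(price_move_pct=1.0)         # volatility alone kills it
assert not restaker_slashed(acted_maliciously_on_any_avs=False)  # honest across 100 AVSs: safe
```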
"you know usual examples it might be to run an oracle which fetches data from the internet has a group of nodes agree that that's the correct data and then put it on top of a blockchain it may be a a bridge which reads data from another chain and then moves it on top of another chain it may be a um you know an AI service you may be sitting on ethereum and you want to request some AI inputs and you know you need to run an AI inference to actually adjust the prices or something on top of your Unis swap pool so these are all examples of services there may be other like much more nuanced and specific services that show up on I which we already seeing for example uh we see things like prove of location proof of location is I want to know where these nodes are placed can I run a decentralized service to know the location of these other either operators or users in a decentralized manner and how might the decentralized nodes know it by sending Network latency information like you know I send a packet when do I receive it back if I can receive it back within like you know 30 milliseconds it must be close to one of my locations and if many many nodes can do it and parallel and triangulate where the nodes are so that's a really interesting kind of piece of information so prove of location is that that's an example of a service but we've seen all kinds of different Services I would say like 20 categories of services that are kind of building on top of EigenLayer so that's the ABS actively validated service is a category I can go into some of these examples but that's a high level uh overview yeah and given how diverse these use cases can be I'm curious uh and this is something that a lot of people are discussing as well in terms of the security assumptions because you mentioned the concept of slashing so let's say if someone is reaking for like 20 different applications one of those applications happen to do something bad and you know the validator has to be slashed uh what does that process look like and you know how do we how do we think about the implications for the entire kind of uh you know EigenLayer stack the uh the way to think about it is slashing is encoded into smart contracts that talk to the EigenLayer contract whenever a Staker is opting into a new AVS they're basically opting specifically into an AVS contract and the AVS contract specifies the conditions of registration the conditions of payment and the conditions of slashing so basically who can come in what's the POs to incentive what's the negative incentive to do this stuff and those are encoded in the AVS contracts and so now when um when a Staker opts into a bunch of avss what they're actually saying is I'm opting into these bunch of conditions of like positive and negative incentives and I need to do this bunch of work to actually keep up my uh positive incentives and if the contracts if the slashing conditions are return in code on ethereum Smart contracts and you know for a moment let's assume there are no code programming errors which we have to deal with but you know then essentially what you're saying is you're opting into rigid objectively verifiable slashing conditions which are return in code on ethereum and what that means is if I know that I'm running the software correctly I won't get slashed by this you know by this contract in fact I may even run a piece of code we call an anti-s slasher what an anti slasher does is whenever I issue a signature it checks that this signature will not trigger slashing you know before 
it issues the signature and so this kind of an anti- slasher can be run locally and so you know that you will not get slashed if you actually if the contracts are correctly return so that's the first level which is as a Staker you're opting into objectively verifiable rigid conditions return as smart contracts on ethereum so the trust model is very similar to the other kind of trust model when you're going and opting into a Unis swap or a or any of these kinds of D5 protocols but to protect users even more we have another layer of protection because we know that code can sometimes be buggy and we see this all the time in crypto like sometimes you know a protocol has a buggy code and then like suddenly people lose their funds and this is something we are taking a cautious approach to and the way we take the cous approach is by actually having what we call a slashing veto committee this is a committee of external parties you know experts in kind of protocol design who can actually vet whether the contract triggered the slashing for a you know on on on the actual protocol or it was a bug that LE led to the slashing if it is adjudicated as a bug slashing does not happen so therefore slashing requires two distinct things the objective contract to trigger the slashing and the kind of human committee to approve it otherwise slashing is not fulfilled so we you know in the balance of powers between stakers and avss we lean on protecting stakers because you know know stakers are basically like underwriting the system with their own like Risk and the the guarantee we want to give to a Staker is if you are not malicious you will not be slashed so slashing is there only for absolutely attributable attributable actions that the Staker or operator took which are malicious not for regular operations where they made a configuration mistake or the the program had a bug or anything like that so because you know when you're building a pon system where anybody can come in and participate you need to protect the system against like malicious actors INF infiltrating the system so you need a system of kerma like positive or negative incentives to keep the system going and that's what Igan does is make sure that as a Staker you don't have any incentive to try to like attack the system whereas also make sure that avss you know have no agency to attack the system even if they put in a buggy C firstly they to put in smart contract code not like you know have arbitrary adjudication conditions but smart contract code and then even then there is a back stop in terms of a common slashing V committee my guess is that a lot of people who are kind of you know concerned about systemic risks that Ian ler could introduce are almost confusing the concept of reaking with the concept of rehypothecation because they see this concept play out a lot in defi where somebody has a bunch of collateral and they use that to margin to do some sort of lending or borrowing and then they use the stuff they borrow to margin again and borrow more and more and more but this is something fundamentally different than that right so if if you do get slash it's not like everything just like every single app that's tied to that St suddenly just like collaps and and stop functioning right just to just to make that very clear to our listeners yeah absolutely I think there are lots and lots of differences I think to take the kind of comparison between the two two things you just laid out imagine you take uh you know people are thinking by reaking into 
**How can an anti-slasher be used to prevent slashing from occurring in the first place?**

According to the discussion, an anti-slasher is a piece of code a staker can run locally to avoid ever committing a slashable action. Before any signature is issued, the anti-slasher checks whether that signature would trigger one of the slashing conditions the staker has opted into; if it would, the anti-slasher refuses to issue the signature, so the slashable action never happens.

This works because the slashing conditions are objectively verifiable rules specified in smart contracts on Ethereum. As long as the contracts are correctly written and the staker runs their software (including the anti-slasher) correctly, the staker cannot be slashed. In this way, the anti-slasher serves as a preventative layer of protection for stakers, complementing the after-the-fact protection of the slashing veto committee.
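To make the anti-slasher concrete, here is a minimal sketch of the control flow only. It is not EigenLayer's implementation: the predicate and message shapes are hypothetical stand-ins for the objective conditions an AVS contract would encode. The one idea it demonstrates is that every opted-in slashing condition is evaluated locally, before the operator's key produces a signature.

```python
# Hedged sketch of an "anti-slasher": refuse to sign anything that would
# trigger one of the slashing conditions the operator has opted into.
# The predicate/message types are hypothetical, not EigenLayer's API.
from dataclasses import dataclass
from typing import Callable

SlashingPredicate = Callable[[bytes], bool]  # True => signing this would be slashable

@dataclass
class AntiSlasher:
    # One predicate per AVS the operator has opted into, mirroring the
    # objective conditions written in that AVS's contract.
    predicates: dict[str, SlashingPredicate]

    def check(self, message: bytes) -> None:
        for avs_name, would_slash in self.predicates.items():
            if would_slash(message):
                raise PermissionError(f"refusing to sign: would be slashed by {avs_name}")

def sign(message: bytes, guard: AntiSlasher) -> bytes:
    guard.check(message)                  # runs locally, before any signature exists
    return b"signature-over:" + message   # placeholder for a real BLS/ECDSA signature

# Example: a hypothetical AVS that slashes double-signing for the same slot.
seen_slots: set[int] = set()
def double_sign_guard(message: bytes) -> bool:
    slot = int.from_bytes(message[:8], "big")
    return slot in seen_slots

guard = AntiSlasher(predicates={"example-avs": double_sign_guard})
msg = (42).to_bytes(8, "big") + b"payload"
sig = sign(msg, guard)   # first signature for slot 42: allowed
seen_slots.add(42)
# sign(msg, guard)       # would raise: a second signature for slot 42 is slashable
```

Keeping the check on the signing path means an honest operator cannot emit a slashable message in the first place, which is what makes the "if you are not malicious you will not be slashed" guarantee enforceable locally.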
**Can you describe the trust model for opting into an AVS (actively validated service) contract on EigenLayer?**

On EigenLayer, when a staker opts into an AVS contract, they agree to be bound by the conditions specified in that contract. The trust model rests on several components:

1. **Objective contract code**: the conditions of registration, payment, and slashing for the AVS are encoded in smart contracts on Ethereum, so the rules governing stakers who opt in are explicit, transparent, and verifiable.
2. **Rigid, objectively verifiable conditions**: by opting in, a staker agrees to conditions designed so that honest operation is never punished; only attributable malicious actions are slashable.
3. **Anti-slasher code**: stakers can run an anti-slasher locally to check whether a signature would trigger slashing before issuing it, preventing slashable actions from ever being taken.
4. **Slashing veto committee**: a committee of external experts in protocol design reviews every triggered slash and adjudicates whether it reflects genuinely malicious behavior or a contract bug. If it is adjudicated as a bug, the slash does not happen.

The trust model is therefore a combination of objective contract code, rigid verifiable rules governing staker behavior, local anti-slashing checks to prevent unfair penalties, and a human veto committee as a final safeguard against errors.
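The two-key structure just described, where a slash needs both an objective on-chain trigger and the veto committee's assent, can be summarized in a few lines. This is a hedged sketch of the decision logic only; the names are illustrative, and the real mechanism lives in contracts and committee process rather than in a function.

```python
# Sketch of the two conditions a slash must satisfy per the discussion above:
# (1) the AVS's objective contract logic triggered it, AND
# (2) the slashing veto committee did not adjudicate it as a bug.
# Names are illustrative; this is not EigenLayer's actual interface.
from enum import Enum

class Verdict(Enum):
    VALID_SLASH = "contract conditions genuinely violated"
    BUG = "slash was triggered by a contract bug"

def slash_is_executed(contract_triggered: bool, committee_verdict: Verdict) -> bool:
    # Both the rigid code path and the human backstop must agree.
    return contract_triggered and committee_verdict is Verdict.VALID_SLASH

assert not slash_is_executed(False, Verdict.VALID_SLASH)  # no objective trigger
assert not slash_is_executed(True, Verdict.BUG)           # committee vetoes a buggy slash
assert slash_is_executed(True, Verdict.VALID_SLASH)       # attributable malice: slashed
```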
That's the basic principle. And I think when Vitalik was discussing this recently, the two areas he mentioned to look out for were the security aspect and the centralization aspect. We've talked a lot about the security aspect; what are some considerations we should have when we think about the centralization that EigenLayer might or might not introduce?

Yeah, I think this is a much more nuanced topic, and I'm glad you brought it up. There are various layers of decentralization that protocols like Ethereum may want to have, and the most direct is operator decentralization: does EigenLayer contribute to more pressure for operators to centralize, such that there are only a few operators when there are lots and lots of services to opt into? The answer is that in the structure we are building at EigenLayer, we want to minimize the pressure to centralize. This is an operating principle we are taking in building EigenLayer; if a different team were building it, they might operate on different principles, but like I said, we came into the space, particularly building on Ethereum, because of the shared values. One of the particular things we do is try to encourage services which do not require a lot of computational effort, and this is how EigenDA, the first service built on EigenLayer, is designed. EigenDA is built to be horizontally scalable, which means that as you increase the number of nodes, the system's performance keeps increasing, rather than requiring a lot from each node to satisfy a certain amount of bandwidth. For example, systems like Solana scale by vertical scaling: each node needs more and more capacity in order to do well. EigenDA scales horizontally, which means the total performance of the system is the product of the amount of bandwidth available on a node times the number of nodes, so you can increase it either by increasing the performance of a node or by increasing the number of nodes. Because the system scales horizontally, decentralization itself becomes scalability: the more nodes you have, the more bandwidth you have, and therefore you can scale. That's a principle we used to build EigenDA.

Beyond that, what can we do to encourage decentralization? Over time there will be services that require more centralized operations and services that require more decentralized operations. I'll give the example of secret sharing: imagine I have a secret and I want to store it in a decentralized network, so each node has a little bit of the secret. If all the nodes were just the same party, say Coinbase running hundreds of nodes, then I don't get any secret-sharing benefit; it's the same party running 100 nodes and storing portions of the secret. If I'm running a secret-sharing network, I actually want decentralization. So one of the really powerful things we're building with EigenLayer is expressivity and flexibility for a service to specify that it only wants, let's say, more decentralized operators. How does it know which operators are more decentralized? It can use the oracles of its choosing to decide which are more decentralized and which are more centralized; maybe something as simple as "I want to exclude all the exchange nodes and all the major LST nodes" is a thing somebody wants to do. So there is a lot of expressivity in the EigenLayer platform, and if decentralized trust actually has utility, which is what we all believe, EigenLayer creates a marketplace where decentralized nodes could potentially earn even more than centralized nodes. You can't go to Ethereum today and say, hey, I'm going to pay a transaction fee but this fee only goes to home stakers; that's not a thing, your transaction fee goes to whoever picks up the transaction and mines it. But on EigenLayer you can actually do that: you can say, I only want to build an oracle which uses the home stakers. So we will find out the market value of decentralized trust by allowing EigenLayer to exist, and our thesis is that there are enough interesting things to be built that decentralized trust has real value. For the very first time on Ethereum, decentralized nodes could earn more than centralized nodes; until now, the centralized nodes have always done at least as well.

Yeah, that's actually a huge reason why I was very excited to invest in EigenLayer: one of my big theses is that crypto is the best way to create a market for anything, and I think this is the first market for actual decentralized trust. You can put a value, a dollar-amount fee, on how much people want to pay different types of stakers and what sort of decentralization they actually want to see, beyond the posturing you see on Twitter, because now people can put their money where their mouth is. So I'm very excited for that. And I'd love to talk about the commercial aspect as well, because one trend I saw in the past few years is projects verticalizing into their own infrastructure: you see dApps like dYdX becoming their own chain, you see guilds or games like Merit Circle verticalizing into their own chain. It seems like the market is almost rewarding projects for becoming infrastructure, whereas EigenLayer is basically telling everybody: you don't need to do that, you can simply use existing security from Ethereum. So from a commercial angle, what drives founders of AVSs or apps to use EigenLayer versus becoming an L1 or L2 themselves?

The way to think about it is: if you are an AVS founder, what are the choices on the table? Option one, build with your own token and your own trust network; option two, build on top of an existing trust network like Ethereum and EigenLayer and deploy your service there. In the simplest world, one would say a lot of AVSs would want to get started with the second option: use Ethereum, find product-market fit, and then maybe go and do their own thing.

Here is a comprehensive summary of the text: The discussion revolves around two main areas to consider when evaluating EigenLayer: security and centralization. The speaker notes that while security has been extensively discussed, centralization is the more nuanced topic. They describe various layers of decentralization, most directly operator decentralization, and ask whether EigenLayer adds pressure for operators to centralize. To minimize that pressure, EigenLayer encourages services that do not require heavy computational effort, as seen in EigenDA, the first service built on EigenLayer. EigenDA is designed to be horizontally scalable: performance increases by adding more nodes rather than by increasing each node's capability, so decentralization itself becomes scalability; more nodes means more bandwidth and more performance. To further encourage decentralization, the platform gives services expressivity and flexibility to specify their requirements for decentralized operators, using oracles of their choice to decide which operators count as decentralized. The discussion also touches on the commercial aspect: projects have tended to verticalize into their own infrastructure, but EigenLayer offers a different path, letting services use existing security from Ethereum. This could be attractive for AVS founders, who can find product-market fit without building and maintaining their own trust networks. Overall, the discussion emphasizes how decentralization in EigenLayer's architecture can provide scalability and give services the flexibility to choose their own level of decentralization.
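The horizontal-scaling claim in the summary, that total throughput is per-node bandwidth times node count, is easy to sanity-check numerically. Only the formula comes from the discussion; the bandwidth figures below are invented for illustration and are not EigenDA's actual parameters.

```python
# Horizontal scaling as described for EigenDA: system throughput is
# per-node bandwidth times node count, so adding nodes *is* scaling.
# All numbers below are illustrative, not EigenDA's actual parameters.

def system_throughput_mbps(per_node_mbps: float, num_nodes: int) -> float:
    return per_node_mbps * num_nodes

base = system_throughput_mbps(per_node_mbps=10, num_nodes=100)        # 1,000 Mb/s

# Two ways to double throughput:
vertical = system_throughput_mbps(per_node_mbps=20, num_nodes=100)    # beefier nodes
horizontal = system_throughput_mbps(per_node_mbps=10, num_nodes=200)  # more nodes

assert vertical == horizontal == 2 * base
# In the horizontal case, decentralization (more independent nodes) and
# scalability move together instead of trading off against each other.
```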
**What are some considerations we should have when thinking about the centralization that EigenLayer might or might not introduce?**

According to the discussion, several considerations apply:

1. **Layers of decentralization**: there are various levels at which decentralization matters, most directly operator decentralization. EigenLayer aims to minimize pressure on operators to centralize and to encourage decentralized operation.
2. **Horizontal vs. vertical scaling**: services that demand heavy computation per node push toward vertical scaling (more capable nodes), while services like EigenDA are built for horizontal scalability and gain performance by adding nodes.
3. **Decentralized trust and its value**: EigenLayer lets services specify their decentralization requirements, creating a marketplace in which the value of decentralized trust is set by actual demand and usage.
4. **Expressivity and flexibility**: services can express a preference for more decentralized operators and choose which nodes to use based on their specific needs.

These considerations highlight the need to balance decentralization with practical concerns like scalability and commercial viability when building robust, flexible infrastructure for deploying services; the secret-sharing scenario from the discussion, made concrete below, shows why that balance can matter.
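The secret-sharing example can be made concrete with a few lines of n-of-n XOR secret sharing. This is a standard textbook construction, used here only to illustrate why the independence of share-holders is the whole point; it is not EigenLayer's or any particular AVS's actual scheme.

```python
# Illustrative n-of-n XOR secret sharing: a textbook construction, used only
# to show why the *independence* of share-holders matters.
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n shares; all n are required to reconstruct."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))  # last share closes the XOR
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

secret = b"quorum signing key"
shares = split(secret, n=100)
print(reconstruct(shares) == secret)       # True: all 100 shares recover the secret
print(reconstruct(shares[:99]) == secret)  # False (overwhelmingly): 99 shares reveal nothing
# But if one entity runs all 100 "nodes", it holds all 100 shares, and the
# decentralized storage provides no secrecy benefit whatsoever.
```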
"the basic principle and I think when vitalic was discussing a recently I think the two kind of uh areas to look out for that he mentioned were the security and the centralization aspect so I think we talked a lot about the security aspect what are some considerations that we should have when we think about the centralization that Ian L might or might not introduce yeah I think this is a much much more nuanced topic you know I'm glad you brought this up um there are various layers of decentralization that you know protocols uh you know like ethereum may want to have and you know the most I think direct is operator decentralization does EigenLayer contribute to more pressures for operators to centralize maybe there are only like a few operators when there are lots and lots of services that need to be oper opted into and you know the answer to this is you know in in the structure that we are building in igon we want to minimize the pressure to centralize so this is a kind of like an operating principle that we are taking in building EigenLayer and you know if a different team was building EigenLayer they would operate on maybe different principles but like I said we came into the space because of like you know particular ularly building on ethereum because of the shared values and so one of the particular things we do is can we try and encourage Services which do not require a lot of computational effort and this is how Ian da the first service is built on IG on I layer IG da is built to be horizontally scalable which means as you increase the number of nodes the system's performance keeps increasing rather than the I need to have a lot of node requirements on each node to satisfy a certain amount of bandwidth so for example systems like salana scale by vertical scaling each node needs to have more and more in order to actually do well and IG da scales horizontally which means the total performance of the system is the product of the amount of bandwidth available in a node times the number of nodes so you can increase it by like increasing the performance of a node or you can increase it by increasing the number of nodes and because the system is horizontally scaling decentralization itself becomes scalability the more nodes you have the more bandwidth you have and therefore you can scale so this is a principle that we used to build igda okay beyond that what can we do to encourage decentralization and I think over time what will happen is there will be services that require more centralized you know operations that'll be Services which will require more decentralized operations and you know I give us this example this secret sharing imagine I have a secret and I want to store it in a decentralized network so each node has a little bit of the secret if all the nodes were just the same party it was all just coinbase like running hundreds of nodes then I don't get any secret sharing benefit is the same guy just like running 100 nodes and storing portions of the secret so if I'm running a secret sharing Network I actually want decentralization and so one of the really powerful things we're building with EigenLayer is expressivity and flexibility for a service to specify that they only want let's say more decentralized operators how do they know which operators are more decentralized they can choose to use the Oracles of their choosing to decide which are more decentralized and which are more centralized maybe something as simple as I want to exclude all the exchange nodes and I want to exclude 
**Are there various layers of decentralization that protocols like Ethereum may want to have?**

According to the conversation, yes. The speaker singles out **operator decentralization**: the degree to which the nodes operating a network are run by genuinely independent entities. Operator decentralization could be measured by factors such as:

* How many independent entities operate nodes on the network?
* What is the distribution of ownership and control among those entities?
* Are there single points of failure or centralizing forces that could undermine decentralization?

Having multiple layers of decentralization makes a protocol like Ethereum more resilient and less vulnerable to attack or manipulation by a single entity, and gives users greater assurance that their transactions are executed fairly and transparently. Achieving true decentralization is a complex design challenge, but by prioritizing it, Ethereum and other blockchain protocols can offer more trust-minimized environments for users; one way to quantify it is sketched below.
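One way to make "how many independent entities" measurable is a Nakamoto-coefficient-style metric: the smallest number of entities that together control some threshold of stake. The sketch below is one illustration of such a metric; the entities, stakes, and one-third threshold are all invented for the example, not something the discussion itself prescribes.

```python
# Illustrative operator-decentralization metric: the minimum number of distinct
# entities that together control at least one third of stake (a Nakamoto-style
# coefficient). Entities, stakes, and the 1/3 threshold are all invented here.

def nakamoto_coefficient(stake_by_entity: dict[str, float], threshold: float = 1 / 3) -> int:
    total = sum(stake_by_entity.values())
    running, count = 0.0, 0
    for stake in sorted(stake_by_entity.values(), reverse=True):
        running += stake
        count += 1
        if running >= threshold * total:
            break
    return count

concentrated = {"exchange-a": 60.0, "exchange-b": 25.0, "home-1": 10.0, "home-2": 5.0}
dispersed = {f"home-{i}": 1.0 for i in range(100)}

print(nakamoto_coefficient(concentrated))  # 1: a single entity clears the threshold
print(nakamoto_coefficient(dispersed))     # 34: collusion would need many independent parties
```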
**Does EigenLayer contribute to more pressure for operators to centralize?**

According to the discussion, it could in principle: if many services must be opted into and each adds computational load, operators might need ever more capacity, which favors a few large operators. However, EigenLayer is explicitly designed to minimize this pressure:

1. **Horizontal scalability**: EigenDA, the first service on EigenLayer, scales by adding more nodes rather than by requiring more capacity per node, so decentralization and performance grow together.
2. **Expressivity and flexibility**: services can specify their preferred level of decentralization and choose which operators to work with, for example excluding exchange or major LST operators.

By combining these design principles with a marketplace in which decentralized operators can potentially earn more than centralized ones, EigenLayer aims to make decentralized operation economically attractive and so counteract the pressure to centralize. A sketch of such operator filtering follows.
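The expressivity point, that a service can decide which operators it is willing to be secured by, amounts to a filter over the operator set. The sketch below shows only that idea; the tag labels, addresses, and the notion of a labeling oracle are hypothetical stand-ins for whatever oracle a service chooses to trust.

```python
# Sketch of AVS-side operator selection: exclude exchange and major-LST
# operators, as in the example from the discussion. The tags, addresses, and
# the idea of a labeling oracle are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class Operator:
    address: str
    tags: set[str] = field(default_factory=set)  # labels supplied by an oracle the AVS trusts

def eligible(op: Operator, excluded_tags: set[str]) -> bool:
    return not (op.tags & excluded_tags)

operators = [
    Operator("0xaaa...", {"exchange"}),
    Operator("0xbbb...", {"lst"}),
    Operator("0xccc...", {"home-staker"}),
]

# A service that wants only home stakers might specify:
quorum = [op for op in operators if eligible(op, excluded_tags={"exchange", "lst"})]
print([op.address for op in quorum])  # ['0xccc...']
```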
verticalized to ig layer since you know IG layer is basically telling everybody that hey you don't need to do that you can just simply use existing security from ethereum so from a commercial angle you know what drives founders of you know abss of apps to use EigenLayer versus becoming an L1 or L2 themselves the way to think about it is if you are an AVS founder what are the choices on the table option one build with your own like token and your own trust Network and option to build on top of an existing trust Network like ethereum I and you know deploy your service and you know in the simplest world one would say maybe a lot of avss would want to get started off as the second one which is you know um use uh ethereum and and you know find product Market fit and then maybe like go and do their own thing","Is operator decentralization a direct and most significant form of decentralization in a system like EigenLayer? Based on the conversation, it seems that operator decentralization is considered a significant form of decentralization in EigenLayer, but whether it's the most direct and most significant one might be debatable. The speaker mentions that there are various layers of decentralization, with operator decentralization being a key aspect. They also emphasize that EigenLayer aims to minimize the pressure for operators to centralize, which suggests that operator decentralization is an important consideration in the system. However, when discussing specific examples and implications of decentralization, the conversation seems to focus more on other forms of decentralization, such as secret sharing, scalability through horizontal scaling, and expressivity in choosing decentralized nodes. In this context, it's possible that operator decentralization is considered a foundational aspect of EigenLayer, but not necessarily the most significant or direct form of decentralization." 
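To make the secret-sharing example above concrete, here is a toy n-of-n XOR splitting scheme in Python. It is an illustrative sketch only, not anything EigenLayer ships; the point is that the shares only protect the secret if they are held by genuinely distinct parties.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """n-of-n XOR sharing: every share is required to recover the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    # The final share is chosen so that XOR-ing all n shares yields the secret.
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def recover(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

shares = split_secret(b"launch code", 5)
assert recover(shares) == b"launch code"
# Any 4 of the 5 shares reveal nothing: each looks like random bytes.
# But if one party (one operator running all 5 "nodes") holds every
# share, the secret-sharing benefit is gone, which is exactly why a
# secret-sharing service wants genuinely decentralized operators.
```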
"the basic principle and I think when vitalic was discussing a recently I think the two kind of uh areas to look out for that he mentioned were the security and the centralization aspect so I think we talked a lot about the security aspect what are some considerations that we should have when we think about the centralization that Ian L might or might not introduce yeah I think this is a much much more nuanced topic you know I'm glad you brought this up um there are various layers of decentralization that you know protocols uh you know like ethereum may want to have and you know the most I think direct is operator decentralization does EigenLayer contribute to more pressures for operators to centralize maybe there are only like a few operators when there are lots and lots of services that need to be oper opted into and you know the answer to this is you know in in the structure that we are building in igon we want to minimize the pressure to centralize so this is a kind of like an operating principle that we are taking in building EigenLayer and you know if a different team was building EigenLayer they would operate on maybe different principles but like I said we came into the space because of like you know particular ularly building on ethereum because of the shared values and so one of the particular things we do is can we try and encourage Services which do not require a lot of computational effort and this is how Ian da the first service is built on IG on I layer IG da is built to be horizontally scalable which means as you increase the number of nodes the system's performance keeps increasing rather than the I need to have a lot of node requirements on each node to satisfy a certain amount of bandwidth so for example systems like salana scale by vertical scaling each node needs to have more and more in order to actually do well and IG da scales horizontally which means the total performance of the system is the product of the amount of bandwidth available in a node times the number of nodes so you can increase it by like increasing the performance of a node or you can increase it by increasing the number of nodes and because the system is horizontally scaling decentralization itself becomes scalability the more nodes you have the more bandwidth you have and therefore you can scale so this is a principle that we used to build igda okay beyond that what can we do to encourage decentralization and I think over time what will happen is there will be services that require more centralized you know operations that'll be Services which will require more decentralized operations and you know I give us this example this secret sharing imagine I have a secret and I want to store it in a decentralized network so each node has a little bit of the secret if all the nodes were just the same party it was all just coinbase like running hundreds of nodes then I don't get any secret sharing benefit is the same guy just like running 100 nodes and storing portions of the secret so if I'm running a secret sharing Network I actually want decentralization and so one of the really powerful things we're building with EigenLayer is expressivity and flexibility for a service to specify that they only want let's say more decentralized operators how do they know which operators are more decentralized they can choose to use the Oracles of their choosing to decide which are more decentralized and which are more centralized maybe something as simple as I want to exclude all the exchange nodes and I want to exclude 
all the major lstd nodes maybe a thing that somebody wants to do so there are lots of expressivity in the lay platform and if decentralized trust actually has utility which is what we all believe you know EigenLayer creates a market place for the decentralized nodes to potentially even more earn more than centralized nodes because you know you can't go to ethereum today and say like hey I'm going to pay a transaction fee but this transaction fee only goes to home stakers that's not a thing like your transaction fee goes to whoever picks up the transaction and like Minds it but on IG layer you can actually do it you can actually say like hey I only want to build an oracle which uses the home stakers and you know so we will find out the market value of decentralized trust by actually allowing EigenLayer to exist and our thesis is you know there is enough interesting things to be built that decentralized trust has a real value and actually for the first time very first time in ethereum decentralized nodes could earn something more than centralized notes till now all is the centralized nodes are better yeah that's actually a huge reason why I was very excited to invest in N layer is because one of my big thesis is I believe crypto is the best way to create a market for anything um and I think this is the first market for actual decentralized trust you can actually put a value like a dollar amount fee value on how much people want to pay different type of stakers and what sort of centralization they actually want to see beyond the posturing you see on Twitter because now people can put the money with their mouth this so I'm very excited for that um and I love to kind of talk about the commercial aspect as well because one Trend that I saw in the past few years is you know projects verticalized into their own infrastructure so you see Taps like dydx becoming their own chain you see some uh you know guilds or games like Merit Circle you know verticalized into their own chain so it seems like the market is almost rewarding projects for becoming infrastructure for verticalized to ig layer since you know IG layer is basically telling everybody that hey you don't need to do that you can just simply use existing security from ethereum so from a commercial angle you know what drives founders of you know abss of apps to use EigenLayer versus becoming an L1 or L2 themselves the way to think about it is if you are an AVS founder what are the choices on the table option one build with your own like token and your own trust Network and option to build on top of an existing trust Network like ethereum I and you know deploy your service and you know in the simplest world one would say maybe a lot of avss would want to get started off as the second one which is you know um use uh ethereum and and you know find product Market fit and then maybe like go and do their own thing","Can we try and encourage Services which do not require a lot of computational effort? In the context of decentralization, this is an important consideration. As the speaker mentioned, there are various layers of decentralization, and one key aspect is operator decentralization. Operator decentralization refers to the number of nodes or operators that are involved in a decentralized system. When a system requires a lot of computational effort, it may be more likely to lead to centralization, as only a few powerful nodes can handle the workload. This is known as ""vertical scaling"". 
However, there are systems that don't require a lot of computational effort and can scale horizontally, meaning they can add more nodes to handle increased demand without needing more powerful individual nodes. The speaker mentions IGda (EigenLayer Data) as an example of such a system, which is designed to be horizontally scalable. This means that the system's performance increases as more nodes are added, rather than requiring more powerful individual nodes to handle the workload. To encourage services that don't require a lot of computational effort, EigenLayer aims to minimize pressure towards centralization and provide expressivity and flexibility for services to specify their needs. For instance, a service can choose which operators it wants to use and how decentralized they want them to be, by using oracles or other mechanisms to decide on decentralization levels. This is a key aspect of creating a market place for decentralized nodes to earn more than centralized nodes, as it allows services to explicitly value and pay for decentralized trust." "the basic principle and I think when vitalic was discussing a recently I think the two kind of uh areas to look out for that he mentioned were the security and the centralization aspect so I think we talked a lot about the security aspect what are some considerations that we should have when we think about the centralization that Ian L might or might not introduce yeah I think this is a much much more nuanced topic you know I'm glad you brought this up um there are various layers of decentralization that you know protocols uh you know like ethereum may want to have and you know the most I think direct is operator decentralization does EigenLayer contribute to more pressures for operators to centralize maybe there are only like a few operators when there are lots and lots of services that need to be oper opted into and you know the answer to this is you know in in the structure that we are building in igon we want to minimize the pressure to centralize so this is a kind of like an operating principle that we are taking in building EigenLayer and you know if a different team was building EigenLayer they would operate on maybe different principles but like I said we came into the space because of like you know particular ularly building on ethereum because of the shared values and so one of the particular things we do is can we try and encourage Services which do not require a lot of computational effort and this is how Ian da the first service is built on IG on I layer IG da is built to be horizontally scalable which means as you increase the number of nodes the system's performance keeps increasing rather than the I need to have a lot of node requirements on each node to satisfy a certain amount of bandwidth so for example systems like salana scale by vertical scaling each node needs to have more and more in order to actually do well and IG da scales horizontally which means the total performance of the system is the product of the amount of bandwidth available in a node times the number of nodes so you can increase it by like increasing the performance of a node or you can increase it by increasing the number of nodes and because the system is horizontally scaling decentralization itself becomes scalability the more nodes you have the more bandwidth you have and therefore you can scale so this is a principle that we used to build igda okay beyond that what can we do to encourage decentralization and I think over time what will happen is 
there will be services that require more centralized you know operations that'll be Services which will require more decentralized operations and you know I give us this example this secret sharing imagine I have a secret and I want to store it in a decentralized network so each node has a little bit of the secret if all the nodes were just the same party it was all just coinbase like running hundreds of nodes then I don't get any secret sharing benefit is the same guy just like running 100 nodes and storing portions of the secret so if I'm running a secret sharing Network I actually want decentralization and so one of the really powerful things we're building with EigenLayer is expressivity and flexibility for a service to specify that they only want let's say more decentralized operators how do they know which operators are more decentralized they can choose to use the Oracles of their choosing to decide which are more decentralized and which are more centralized maybe something as simple as I want to exclude all the exchange nodes and I want to exclude all the major lstd nodes maybe a thing that somebody wants to do so there are lots of expressivity in the lay platform and if decentralized trust actually has utility which is what we all believe you know EigenLayer creates a market place for the decentralized nodes to potentially even more earn more than centralized nodes because you know you can't go to ethereum today and say like hey I'm going to pay a transaction fee but this transaction fee only goes to home stakers that's not a thing like your transaction fee goes to whoever picks up the transaction and like Minds it but on IG layer you can actually do it you can actually say like hey I only want to build an oracle which uses the home stakers and you know so we will find out the market value of decentralized trust by actually allowing EigenLayer to exist and our thesis is you know there is enough interesting things to be built that decentralized trust has a real value and actually for the first time very first time in ethereum decentralized nodes could earn something more than centralized notes till now all is the centralized nodes are better yeah that's actually a huge reason why I was very excited to invest in N layer is because one of my big thesis is I believe crypto is the best way to create a market for anything um and I think this is the first market for actual decentralized trust you can actually put a value like a dollar amount fee value on how much people want to pay different type of stakers and what sort of centralization they actually want to see beyond the posturing you see on Twitter because now people can put the money with their mouth this so I'm very excited for that um and I love to kind of talk about the commercial aspect as well because one Trend that I saw in the past few years is you know projects verticalized into their own infrastructure so you see Taps like dydx becoming their own chain you see some uh you know guilds or games like Merit Circle you know verticalized into their own chain so it seems like the market is almost rewarding projects for becoming infrastructure for verticalized to ig layer since you know IG layer is basically telling everybody that hey you don't need to do that you can just simply use existing security from ethereum so from a commercial angle you know what drives founders of you know abss of apps to use EigenLayer versus becoming an L1 or L2 themselves the way to think about it is if you are an AVS founder what are the choices on 
the table option one build with your own like token and your own trust Network and option to build on top of an existing trust Network like ethereum I and you know deploy your service and you know in the simplest world one would say maybe a lot of avss would want to get started off as the second one which is you know um use uh ethereum and and you know find product Market fit and then maybe like go and do their own thing","Does the system scale horizontally, meaning that as you increase the number of nodes, the system's performance keeps increasing rather than requiring more nodes to have more bandwidth or computational power? Yes, according to the conversation, EigenLayer's service, igda, does scale horizontally. This means that as more nodes are added to the system, its performance continues to improve and increases in line with the number of nodes. In particular, it was mentioned that igda scales by horizontal scaling, which is different from vertical scaling used by other systems like Salana. With horizontal scaling, each new node contributes to the overall bandwidth of the system, allowing the system's performance to increase as more nodes are added. This design allows EigenLayer to minimize the pressure on operators to centralize their operations and makes it easier for services that do not require a lot of computational effort to scale horizontally and maintain decentralization." "the basic principle and I think when vitalic was discussing a recently I think the two kind of uh areas to look out for that he mentioned were the security and the centralization aspect so I think we talked a lot about the security aspect what are some considerations that we should have when we think about the centralization that Ian L might or might not introduce yeah I think this is a much much more nuanced topic you know I'm glad you brought this up um there are various layers of decentralization that you know protocols uh you know like ethereum may want to have and you know the most I think direct is operator decentralization does EigenLayer contribute to more pressures for operators to centralize maybe there are only like a few operators when there are lots and lots of services that need to be oper opted into and you know the answer to this is you know in in the structure that we are building in igon we want to minimize the pressure to centralize so this is a kind of like an operating principle that we are taking in building EigenLayer and you know if a different team was building EigenLayer they would operate on maybe different principles but like I said we came into the space because of like you know particular ularly building on ethereum because of the shared values and so one of the particular things we do is can we try and encourage Services which do not require a lot of computational effort and this is how Ian da the first service is built on IG on I layer IG da is built to be horizontally scalable which means as you increase the number of nodes the system's performance keeps increasing rather than the I need to have a lot of node requirements on each node to satisfy a certain amount of bandwidth so for example systems like salana scale by vertical scaling each node needs to have more and more in order to actually do well and IG da scales horizontally which means the total performance of the system is the product of the amount of bandwidth available in a node times the number of nodes so you can increase it by like increasing the performance of a node or you can increase it by increasing the 
How does scaling horizontally make decentralization itself become scalability in the system?

In EigenDA's design, horizontal scaling means increasing the number of nodes while holding per-node requirements fixed, in contrast to vertical scaling, where each node must keep gaining computing power or bandwidth. The key relation is that total system performance equals the per-node bandwidth multiplied by the number of nodes, so every node added is capacity added. Because of this, decentralization (many independent nodes working together) becomes equivalent to scalability: growing the node set is how the system scales. Horizontal scaling delivers this through:

1. **Redundancy**: the workload is spread over many nodes, so there is no single point of failure.
2. **Parallel processing**: added nodes work on their own tasks in parallel, increasing overall throughput.
3. **Distributed load management**: load is balanced across nodes so that no single node is overwhelmed.

By distributing the workload this way, each node contributes only a small portion of total capacity, the system stays resilient to failures at individual nodes, and the architecture scales naturally as nodes are added. In this way decentralization itself becomes scalability: the system handles large volumes of work precisely because it is made of many nodes, something a vertically scaled design achieves only by making each node heavier.
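A minimal sketch of the horizontal-scaling relation described in the last two answers. The function name and bandwidth figures are illustrative assumptions, not EigenDA's actual parameters.

```python
def total_throughput(per_node_bandwidth_mbps: float, num_nodes: int) -> float:
    """Horizontal-scaling model: system throughput is per-node bandwidth
    times the number of nodes, so adding nodes adds capacity directly."""
    return per_node_bandwidth_mbps * num_nodes

# Horizontal scaling: grow the node count at fixed per-node requirements.
print(total_throughput(10.0, 100))    # 1000.0 Mbps from 100 modest nodes
print(total_throughput(10.0, 1000))   # 10000.0 Mbps: 10x nodes, 10x throughput

# Vertical scaling reaches the same total by making each node 10x heavier
# instead, which prices out smaller, home-staker operators.
print(total_throughput(100.0, 100))   # 10000.0 Mbps with fewer, bigger nodes
```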
"the basic principle and I think when vitalic was discussing a recently I think the two kind of uh areas to look out for that he mentioned were the security and the centralization aspect so I think we talked a lot about the security aspect what are some considerations that we should have when we think about the centralization that Ian L might or might not introduce yeah I think this is a much much more nuanced topic you know I'm glad you brought this up um there are various layers of decentralization that you know protocols uh you know like ethereum may want to have and you know the most I think direct is operator decentralization does EigenLayer contribute to more pressures for operators to centralize maybe there are only like a few operators when there are lots and lots of services that need to be oper opted into and you know the answer to this is you know in in the structure that we are building in igon we want to minimize the pressure to centralize so this is a kind of like an operating principle that we are taking in building EigenLayer and you know if a different team was building EigenLayer they would operate on maybe different principles but like I said we came into the space because of like you know particular ularly building on ethereum because of the shared values and so one of the particular things we do is can we try and encourage Services which do not require a lot of computational effort and this is how Ian da the first service is built on IG on I layer IG da is built to be horizontally scalable which means as you increase the number of nodes the system's performance keeps increasing rather than the I need to have a lot of node requirements on each node to satisfy a certain amount of bandwidth so for example systems like salana scale by vertical scaling each node needs to have more and more in order to actually do well and IG da scales horizontally which means the total performance of the system is the product of the amount of bandwidth available in a node times the number of nodes so you can increase it by like increasing the performance of a node or you can increase it by increasing the number of nodes and because the system is horizontally scaling decentralization itself becomes scalability the more nodes you have the more bandwidth you have and therefore you can scale so this is a principle that we used to build igda okay beyond that what can we do to encourage decentralization and I think over time what will happen is there will be services that require more centralized you know operations that'll be Services which will require more decentralized operations and you know I give us this example this secret sharing imagine I have a secret and I want to store it in a decentralized network so each node has a little bit of the secret if all the nodes were just the same party it was all just coinbase like running hundreds of nodes then I don't get any secret sharing benefit is the same guy just like running 100 nodes and storing portions of the secret so if I'm running a secret sharing Network I actually want decentralization and so one of the really powerful things we're building with EigenLayer is expressivity and flexibility for a service to specify that they only want let's say more decentralized operators how do they know which operators are more decentralized they can choose to use the Oracles of their choosing to decide which are more decentralized and which are more centralized maybe something as simple as I want to exclude all the exchange nodes and I want to exclude 
all the major lstd nodes maybe a thing that somebody wants to do so there are lots of expressivity in the lay platform and if decentralized trust actually has utility which is what we all believe you know EigenLayer creates a market place for the decentralized nodes to potentially even more earn more than centralized nodes because you know you can't go to ethereum today and say like hey I'm going to pay a transaction fee but this transaction fee only goes to home stakers that's not a thing like your transaction fee goes to whoever picks up the transaction and like Minds it but on IG layer you can actually do it you can actually say like hey I only want to build an oracle which uses the home stakers and you know so we will find out the market value of decentralized trust by actually allowing EigenLayer to exist and our thesis is you know there is enough interesting things to be built that decentralized trust has a real value and actually for the first time very first time in ethereum decentralized nodes could earn something more than centralized notes till now all is the centralized nodes are better yeah that's actually a huge reason why I was very excited to invest in N layer is because one of my big thesis is I believe crypto is the best way to create a market for anything um and I think this is the first market for actual decentralized trust you can actually put a value like a dollar amount fee value on how much people want to pay different type of stakers and what sort of centralization they actually want to see beyond the posturing you see on Twitter because now people can put the money with their mouth this so I'm very excited for that um and I love to kind of talk about the commercial aspect as well because one Trend that I saw in the past few years is you know projects verticalized into their own infrastructure so you see Taps like dydx becoming their own chain you see some uh you know guilds or games like Merit Circle you know verticalized into their own chain so it seems like the market is almost rewarding projects for becoming infrastructure for verticalized to ig layer since you know IG layer is basically telling everybody that hey you don't need to do that you can just simply use existing security from ethereum so from a commercial angle you know what drives founders of you know abss of apps to use EigenLayer versus becoming an L1 or L2 themselves the way to think about it is if you are an AVS founder what are the choices on the table option one build with your own like token and your own trust Network and option to build on top of an existing trust Network like ethereum I and you know deploy your service and you know in the simplest world one would say maybe a lot of avss would want to get started off as the second one which is you know um use uh ethereum and and you know find product Market fit and then maybe like go and do their own thing","What can we do to encourage decentralization and allow services to specify that they only want let's say more decentralized operators? According to the context, EigenLayer has several features and principles that can help encourage decentralization: 1. **Horizontal scaling**: The platform allows services to scale horizontally, meaning they can increase the number of nodes without requiring a proportional increase in computational power on each node. This makes it easier for decentralized networks to handle increased demand. 2. 
**Expressivity and flexibility**: EigenLayer provides expressivity and flexibility for services to specify their requirements for decentralization. For example, services can choose to use oracles that select only certain types of nodes (e.g., home stakers) for a particular task. 3. **Decentralized node market**: By creating a marketplace for decentralized nodes, EigenLayer allows service providers to pay a premium for nodes with higher levels of decentralization. This creates an economic incentive for operators to maintain high levels of decentralization. To allow services to specify that they only want more decentralized operators, EigenLayer can provide the following features: 1. **Node selection mechanisms**: Services can use oracles or other mechanisms to select nodes based on their level of decentralization. 2. **Decentralization metrics**: EigenLayer can provide metrics that quantify a node's level of decentralization (e.g., number of connections, geographical distribution). 3. **Service-specific requirements**: Services can specify their own requirements for decentralization, such as ""I only want nodes with at least 10 connections"" or ""I only want nodes from countries outside the EU"". By providing these features and principles, EigenLayer aims to create an environment that encourages decentralization and allows services to select nodes based on their level of decentralization." "the basic principle and I think when vitalic was discussing a recently I think the two kind of uh areas to look out for that he mentioned were the security and the centralization aspect so I think we talked a lot about the security aspect what are some considerations that we should have when we think about the centralization that Ian L might or might not introduce yeah I think this is a much much more nuanced topic you know I'm glad you brought this up um there are various layers of decentralization that you know protocols uh you know like ethereum may want to have and you know the most I think direct is operator decentralization does EigenLayer contribute to more pressures for operators to centralize maybe there are only like a few operators when there are lots and lots of services that need to be oper opted into and you know the answer to this is you know in in the structure that we are building in igon we want to minimize the pressure to centralize so this is a kind of like an operating principle that we are taking in building EigenLayer and you know if a different team was building EigenLayer they would operate on maybe different principles but like I said we came into the space because of like you know particular ularly building on ethereum because of the shared values and so one of the particular things we do is can we try and encourage Services which do not require a lot of computational effort and this is how Ian da the first service is built on IG on I layer IG da is built to be horizontally scalable which means as you increase the number of nodes the system's performance keeps increasing rather than the I need to have a lot of node requirements on each node to satisfy a certain amount of bandwidth so for example systems like salana scale by vertical scaling each node needs to have more and more in order to actually do well and IG da scales horizontally which means the total performance of the system is the product of the amount of bandwidth available in a node times the number of nodes so you can increase it by like increasing the performance of a node or you can increase it by increasing the 
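As a hypothetical sketch of the node-selection idea above: a service-side filter that drops operators its chosen oracle flags as exchange or major-LST nodes. The `Operator` fields and flags are invented for illustration and are not EigenLayer's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Operator:
    address: str
    is_exchange: bool   # as reported by whatever oracle the service trusts
    is_major_lst: bool  # runs a major liquid-staking-token (LST) operation

def select_operators(operators: list[Operator]) -> list[Operator]:
    """Keep only operators the service's chosen oracle does not flag as
    exchange or major-LST nodes; any other criteria could be added."""
    return [op for op in operators if not (op.is_exchange or op.is_major_lst)]

ops = [
    Operator("0xexchange...", is_exchange=True,  is_major_lst=False),
    Operator("0xlst...",      is_exchange=False, is_major_lst=True),
    Operator("0xhome...",     is_exchange=False, is_major_lst=False),
]
print([op.address for op in select_operators(ops)])  # ['0xhome...']
```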
Can a service use oracles of its choosing to decide which operators are more decentralized and which are more centralized?

Yes. According to the passage, a service can use oracles of its choosing to decide which operators count as more decentralized; this is one of EigenLayer's expressivity features. A service that wants operators meeting particular decentralization criteria, such as excluding nodes run by large exchanges or major LST providers, can use those oracles to assess operators' characteristics and make an informed selection. This lets services tailor their use of EigenLayer to their own needs and values, promoting both decentralization and flexibility.
Is expressivity and flexibility built into the EigenLayer platform so that a service can choose to exclude certain types of nodes or operators?

Yes. The EigenLayer platform is designed to be expressive and flexible: a service can specify which types of nodes or operators it will work with, and in particular can exclude nodes based on their characteristics. For example, a service might exclude all exchange-run nodes and all major LST nodes, or restrict itself to operators it judges sufficiently independent. This expressivity lets each service control how decentralized its operator set is and strike whatever balance between decentralization, security, and performance fits its use case.
Can a user pay a transaction fee on Ethereum that goes only to home stakers, or is that not currently available on the network?

No, this is not currently possible on Ethereum. A transaction fee goes to whoever includes the transaction in a block; the protocol offers no way to direct fees only to home stakers rather than, say, exchange-run validators. This is exactly the gap EigenLayer's architecture addresses. As the passage puts it, on EigenLayer you can say, "I only want to build an oracle which uses the home stakers," creating a marketplace in which decentralized nodes can potentially earn more than centralized ones. In other words, EigenLayer introduces a mechanism that lets services target and reward operators by characteristics such as decentralization level or node type, in a way base-layer Ethereum fees cannot express.
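A toy sketch of that fee-targeting idea: a service restricts its reward distribution to an allow-listed set of home stakers, something base-layer Ethereum fees cannot express. The `distribute_fees` helper and the addresses are illustrative assumptions, not EigenLayer code.

```python
def distribute_fees(fee_eth: float, operators: dict[str, bool]) -> dict[str, float]:
    """Split a service's fee equally among operators flagged as home
    stakers; non-eligible operators (e.g. exchange nodes) earn nothing."""
    eligible = [addr for addr, is_home in operators.items() if is_home]
    if not eligible:
        raise ValueError("no eligible operators to pay")
    share = fee_eth / len(eligible)
    return {addr: share for addr in eligible}

fees = distribute_fees(9.0, {"0xhome1": True, "0xhome2": True, "0xcex": False})
print(fees)  # {'0xhome1': 4.5, '0xhome2': 4.5} -- the exchange node gets nothing
```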
"and you know maybe this analogy that I recently came up with may be useful imagine you go into a store you know you go into a mall and then there is this main store that says that hey you have to put up a deposit you know if you come and steal anything here you will lose your deposit and then you know I come in and say hey anyway the main store you're putting up a $100 deposit to enter why don't you make a promise that you know with this $100 you will not steal anything even on the other smaller stores in the mall they say yeah you know now it's in your control to not steal at the mall right like it's very different from taking a margin lending or any other kind of like financial position so the risk is endogeneous to the Staker except smart contract risk and smart contract risk is just pervasive in all of blockchains and that's just what it is right and even that we are trying to build a very cautious govern system in the beginning over time these governance features can be removed but that's the trade-off that we're taking is be cautious in protecting the stake so you know to take another like mental model that people have which is I think very erroneous when 100 Protocols are sharing common stake the model is oh you know maybe there's no risk from the Staker side maybe there's a risk from the protocol side or from the AVS side and I think this is also erroneous and the reason is if there are 100 protocols Each of which can sustain $1 billion staking on their own let's say that's the 100 protocols and so which means they're paying some amount of fee which is sustaining that amount of State in that platform now if you aggregate all of this and create a $100 billion pool this can be restak across 100 protocols the fee is identical to the previous world because you know you're paying the same fee and you're able to sustain $100 billion now to attack any one protocol you need 100 billion doar rather than requiring $1 billion security has this nonlinearity where the the more stake you need to do an attack the more stake you need to actually profit out of and Escape in real world with all this kind of crazy stuff becomes impossible there is no liquidity on an exchange there is no you know tornado or whatever to go and hial transactions it's simply not possible to pull off an attack Beyond a certain scale so there's hardening of security that actually happens at scale so that's the the other I think model which is missing because when people are thinking about 100 protocols sharing the same stake they're thinking the amount of Stak is going to remain the same as the number of protocols increases but that's not the case as more and more protocol bring more and more fees the amount of stake will keep increasing so this is a market equilibrium and and another feature that we're building with IG layer is what we call attributable security when 100 Protocols are sharing a common pool of let's say 100 billion dollars of stake there may be one protocol which says hey you know I just not only want to have this PO olded security but I also want to have unique attributable security just to myself which means even if all the protocols get attack simultaneously I should be able to slash and redistribute let's say $10 billion because I'm coinbase I want to you know be very sure I want to be able to re you know slash and redistribute $10 billion on my own you can do this on EigenLayer EigenLayer gives you an ability to express both unique attributable security as well as pool Security in a common system and 
Another feature we're building into EigenLayer is what we call attributable security. When 100 protocols are sharing a common pool of, let's say, $100 billion of stake, there may be one protocol that says, hey, I not only want this pooled security, I also want unique attributable security just for myself, which means that even if all the protocols get attacked simultaneously, I should be able to slash and redistribute, let's say, $10 billion. Because I'm Coinbase, I want to be very sure; I want to be able to slash and redistribute $10 billion on my own. You can do this on EigenLayer: EigenLayer gives you the ability to express both unique attributable security and pooled security in a common system. And the power of pooled security is very similar to why nation-states have security functions and cities don't: there is a hardening of security at scale.

So one of the things we did, as we thought through the incentives of the protocols (we know that crypto runs on incentives; if the incentives are not aligned, people are not going to come and build on top of us) was to break the binary choice between "I stake my own token" and "I get security from Ethereum." We natively support what we call dual staking. Dual staking means that I, as an AVS, can borrow trust from two distinct parties: stakers of my own token, whose interests are directly aligned with the protocol's well-being because they have exposure to the token, and a neutral, high-value quorum coming from Ethereum. So we have this dual-quorum model, which is a very popular model among the many AVSs; even if they're launching on a single ETH quorum to begin with, over time they have the idea to build out their own quorum as well. And instead of forcing a binary choice, where we say either you choose your own token quorum or you choose the ETH quorum, you can say, oh, I'm sending 80% of the fees to the ETH quorum and 20% to my own quorum today, and over time I'm going to send more to my quorum and less to Ethereum; maybe at some point I may even send zero to the Ethereum quorum and send all the value to myself.

What this means, specifically, is that if you try to use some kind of discounted-cash-flow model to value an AVS token, the decision of how much value goes to the AVS token versus how much goes to ETH is made by the AVS's governance, which will be in their own native token. So at the end of the day, EigenLayer is continuous, pure optionality: you have the option to use ETH if it is beneficial to you, and you have the option to opt out if that is beneficial to you. And what this does is break the binary choice, and in this world the value of the AVS token with EigenLayer is actually only greater than its value in the absence of EigenLayer, because adding an option to consume additional security, one you can opt in and out of as needed, doesn't increase your cost basis. So that's the first thing: the dual staking model basically breaks this artificial trade-off completely and makes it very smooth for people to borrow as much security as they need to keep their platform in continuous utility.

Okay, so one of the downstream questions I get is: hey, does that mean that over time services will launch on the ETH quorum and eventually just migrate onto their own? And this is really a question of whether EigenLayer is viable not only as a bootstrapping platform for AVSs but also as a continuous service platform for AVSs. So it's incumbent on us to find ways to create synergies across these AVSs such that they actually want to stay, rather than being stuck with us because we have some entrenched monopoly keeping the platform going. And there are many interesting ways we can actually do it.
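As a rough illustration of the adjustable dual-quorum fee split described above (a sketch under assumed names; this is not EigenLayer's actual API), an AVS's governance might encode the split like this:

```python
# Hypothetical sketch of a dual-quorum fee split; all names are illustrative.
from dataclasses import dataclass

@dataclass
class DualQuorumConfig:
    eth_quorum_share: float     # fraction of AVS fees paid to the restaked ETH quorum
    native_quorum_share: float  # fraction paid to stakers of the AVS's own token

    def __post_init__(self) -> None:
        # The two shares must account for all fees.
        assert abs(self.eth_quorum_share + self.native_quorum_share - 1.0) < 1e-9

def split_fees(total_fees: float, cfg: DualQuorumConfig) -> tuple[float, float]:
    """Return (fees to the ETH quorum, fees to the native quorum)."""
    return total_fees * cfg.eth_quorum_share, total_fees * cfg.native_quorum_share

# At launch: lean heavily on Ethereum's neutral, high-value quorum.
launch = DualQuorumConfig(eth_quorum_share=0.80, native_quorum_share=0.20)
# Later, the AVS's governance (held in its native token) can shift value inward...
mature = DualQuorumConfig(eth_quorum_share=0.20, native_quorum_share=0.80)
# ...or opt out of the ETH quorum entirely.
solo = DualQuorumConfig(eth_quorum_share=0.00, native_quorum_share=1.00)

print(split_fees(1_000_000, launch))  # (800000.0, 200000.0)
```

The design point the sketch captures is that the split is a continuous dial controlled by the AVS, not a one-time binary choice.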
The speaker proposes an analogy to explain shared security and the benefits of using EigenLayer as a platform for actively validated services (AVSs). They compare EigenLayer to a mall where multiple stores are protected by a single security deposit, making it more difficult for attackers to target individual stores. The speaker then introduces "dual staking," which allows an AVS to borrow trust from two distinct parties: stakers of its own token and Ethereum's neutral, high-value quorum. This approach breaks the binary choice between using one's own token and relying on another platform. EigenLayer is designed to facilitate dual staking, allowing an AVS to send a portion of its fees to the Ethereum quorum and another portion to its native quorum. This creates flexibility for AVSs to adapt their security models as needed, while also making it more difficult for attackers to target individual platforms. The speaker emphasizes that EigenLayer is continuous, pure optionality, providing value to AVSs by letting them access additional security when it is beneficial. By breaking the artificial trade-off between using one's own token and relying on another platform, EigenLayer makes it easier for AVSs to launch and maintain their services. Finally, the speaker addresses the concern that AVSs may eventually migrate off EigenLayer onto their own quorums, arguing that EigenLayer must create synergies across its AVSs so that they want to remain on the platform rather than being stuck with it out of necessity.
Q: What is the name of the store in the analogy where you have to put up a deposit?
A: The main store is not specifically named in the analogy. It is simply referred to as "the main store," a generic placeholder not associated with any particular brand or name.
Q: How many protocols are sharing common stake in the described model?
A: According to the text, 100 protocols are sharing a common stake.
Q: Where does the risk reside in the described staker model?
A: According to the text, the risk is endogenous (internal) to the staker: whether the stake is slashed is within the staker's own control, rather than being imposed by external market factors. The speaker contrasts this with margin lending or other financial positions, where the risk depends on other parties. The one exception named is smart contract risk, which the speaker notes is pervasive across all of blockchains.
"and you know maybe this analogy that I recently came up with may be useful imagine you go into a store you know you go into a mall and then there is this main store that says that hey you have to put up a deposit you know if you come and steal anything here you will lose your deposit and then you know I come in and say hey anyway the main store you're putting up a $100 deposit to enter why don't you make a promise that you know with this $100 you will not steal anything even on the other smaller stores in the mall they say yeah you know now it's in your control to not steal at the mall right like it's very different from taking a margin lending or any other kind of like financial position so the risk is endogeneous to the Staker except smart contract risk and smart contract risk is just pervasive in all of blockchains and that's just what it is right and even that we are trying to build a very cautious govern system in the beginning over time these governance features can be removed but that's the trade-off that we're taking is be cautious in protecting the stake so you know to take another like mental model that people have which is I think very erroneous when 100 Protocols are sharing common stake the model is oh you know maybe there's no risk from the Staker side maybe there's a risk from the protocol side or from the AVS side and I think this is also erroneous and the reason is if there are 100 protocols Each of which can sustain $1 billion staking on their own let's say that's the 100 protocols and so which means they're paying some amount of fee which is sustaining that amount of State in that platform now if you aggregate all of this and create a $100 billion pool this can be restak across 100 protocols the fee is identical to the previous world because you know you're paying the same fee and you're able to sustain $100 billion now to attack any one protocol you need 100 billion doar rather than requiring $1 billion security has this nonlinearity where the the more stake you need to do an attack the more stake you need to actually profit out of and Escape in real world with all this kind of crazy stuff becomes impossible there is no liquidity on an exchange there is no you know tornado or whatever to go and hial transactions it's simply not possible to pull off an attack Beyond a certain scale so there's hardening of security that actually happens at scale so that's the the other I think model which is missing because when people are thinking about 100 protocols sharing the same stake they're thinking the amount of Stak is going to remain the same as the number of protocols increases but that's not the case as more and more protocol bring more and more fees the amount of stake will keep increasing so this is a market equilibrium and and another feature that we're building with IG layer is what we call attributable security when 100 Protocols are sharing a common pool of let's say 100 billion dollars of stake there may be one protocol which says hey you know I just not only want to have this PO olded security but I also want to have unique attributable security just to myself which means even if all the protocols get attack simultaneously I should be able to slash and redistribute let's say $10 billion because I'm coinbase I want to you know be very sure I want to be able to re you know slash and redistribute $10 billion on my own you can do this on EigenLayer EigenLayer gives you an ability to express both unique attributable security as well as pool Security in a common system and 
Q: What happens if 100 protocols can each sustain $1 billion of staking on their own, as compared to an aggregated pool across the 100 protocols?
A: According to the explanation, aggregating those stakes into a single $100 billion pool restaked across all 100 protocols changes the dynamics. The key points are: 1. **Nonlinearity in security**: To attack any one protocol in the aggregated pool, an attacker must overcome the entire $100 billion, not just $1 billion. The more stake an attack requires, the more an attacker must extract and escape with to profit, which quickly becomes impossible. 2. **Unchanged fees**: The fees are identical to the isolated world; each protocol keeps paying the fee that previously sustained its own $1 billion, but those same fees now back a $100 billion shared pool. 3. **Hardening of security at scale**: The larger the aggregated pool, the harder it becomes for an attacker to profitably breach any one protocol within it. 4. **Synergies across protocols**: EigenLayer aims to create synergies among these protocols, making it more attractive for them to remain on the platform rather than migrating elsewhere. In summary, pooling 100 individual $1 billion stakes into a $100 billion shared stake leaves the fee burden unchanged while raising attack costs a hundredfold, which is what makes pooled security attractive.
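One way to state this nonlinearity as a worked inequality (our gloss, not notation from the talk): with isolated stakes, an attack on protocol $i$ is rational only if the value extractable from it, $v_i$, exceeds the stake put at risk, $s_i$. With a shared pool, corrupting any one protocol means overpowering the whole pool, so the break-even condition becomes

$$ v_i > \sum_{j=1}^{N} s_j = S. $$

With $N = 100$ and $s_j = \$1\text{B}$ each, an attacker must expect to extract and escape with more than $S = \$100\text{B}$ to profit from attacking even a single protocol, which is the "hardening at scale" the answer refers to.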
Q: At what scale does the nonlinearity of security hardening make attacks impossible?
A: The text does not give a specific number or threshold. It says that as more protocols bring in more fees, the amount of stake keeps increasing, and that this hardening of security "happens at scale": in the real world there is no exchange with enough liquidity, and no mixer such as Tornado Cash through which to launder the proceeds, so an attack simply cannot be pulled off beyond a certain scale. While no exact figure is given, the discussion implies that at pool sizes in the tens or hundreds of billions of dollars, attacks become economically and practically infeasible.
"and you know maybe this analogy that I recently came up with may be useful imagine you go into a store you know you go into a mall and then there is this main store that says that hey you have to put up a deposit you know if you come and steal anything here you will lose your deposit and then you know I come in and say hey anyway the main store you're putting up a $100 deposit to enter why don't you make a promise that you know with this $100 you will not steal anything even on the other smaller stores in the mall they say yeah you know now it's in your control to not steal at the mall right like it's very different from taking a margin lending or any other kind of like financial position so the risk is endogeneous to the Staker except smart contract risk and smart contract risk is just pervasive in all of blockchains and that's just what it is right and even that we are trying to build a very cautious govern system in the beginning over time these governance features can be removed but that's the trade-off that we're taking is be cautious in protecting the stake so you know to take another like mental model that people have which is I think very erroneous when 100 Protocols are sharing common stake the model is oh you know maybe there's no risk from the Staker side maybe there's a risk from the protocol side or from the AVS side and I think this is also erroneous and the reason is if there are 100 protocols Each of which can sustain $1 billion staking on their own let's say that's the 100 protocols and so which means they're paying some amount of fee which is sustaining that amount of State in that platform now if you aggregate all of this and create a $100 billion pool this can be restak across 100 protocols the fee is identical to the previous world because you know you're paying the same fee and you're able to sustain $100 billion now to attack any one protocol you need 100 billion doar rather than requiring $1 billion security has this nonlinearity where the the more stake you need to do an attack the more stake you need to actually profit out of and Escape in real world with all this kind of crazy stuff becomes impossible there is no liquidity on an exchange there is no you know tornado or whatever to go and hial transactions it's simply not possible to pull off an attack Beyond a certain scale so there's hardening of security that actually happens at scale so that's the the other I think model which is missing because when people are thinking about 100 protocols sharing the same stake they're thinking the amount of Stak is going to remain the same as the number of protocols increases but that's not the case as more and more protocol bring more and more fees the amount of stake will keep increasing so this is a market equilibrium and and another feature that we're building with IG layer is what we call attributable security when 100 Protocols are sharing a common pool of let's say 100 billion dollars of stake there may be one protocol which says hey you know I just not only want to have this PO olded security but I also want to have unique attributable security just to myself which means even if all the protocols get attack simultaneously I should be able to slash and redistribute let's say $10 billion because I'm coinbase I want to you know be very sure I want to be able to re you know slash and redistribute $10 billion on my own you can do this on EigenLayer EigenLayer gives you an ability to express both unique attributable security as well as pool Security in a common system and 
Q: Why does the amount of stake increase as more and more protocol fees are added, according to the described market equilibrium?
A: Because it is fees that sustain stake. Each protocol pays fees sufficient to sustain its own staking requirement (in the example, $1 billion per protocol). When 100 such protocols share a common pool, their combined fees sustain a combined stake of $100 billion, restaked across all of them. An attacker would then need $100 billion to attack any one protocol instead of $1 billion, creating the nonlinearity that makes attacks harder to execute as the pool grows. So the amount of stake does not stay constant as protocols are added; it keeps rising as each new protocol contributes additional fees, and the pool settles at a market equilibrium determined by total fees.
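A minimal way to model this equilibrium (an illustrative stylization, not a formula from the talk): if stakers require an annual return $r$ on capital and the protocols collectively pay fees $F$ per year, the sustainable pool size $S^{*}$ is the level at which the yield matches the required return,

$$ \frac{F}{S^{*}} = r \quad\Longrightarrow\quad S^{*} = \frac{F}{r}, $$

so aggregate stake scales linearly with aggregate fees: each protocol that joins adds its fees to $F$, raising the equilibrium pool $S^{*}$, which is why the shared stake does not stay fixed as the number of protocols grows.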
and out as needed doesn't increase your like uh you know uh cost basis so that's the first first thing is the dual token model basically like completely breaks this artificial tradeoff and makes it very very smooth for people to like borrow as much security as they need to keep their platform in in continuous utility okay so in in this case one of the downstream questions I get is hey does it mean that over time uh you know services will launch on eorum and eventually just migrate on their own and you know this is really a question of whether igen layer is viable not only as a bootstrapping platform for avss but also as a continuous service platform for avss and so it's incumbent on us to find ways to create synergies across these avss in a way that they actually want to stay rather than they are stuck with us and you know we have this you know entrenched Monopoly to like keep this platform going and there are many interesting ways we can actually do it","What feature is being built into EigenLayer that allows for unique attributable security within a shared pool of stake? According to the text, the feature being built into EigenLayer is called ""Attributable Security"". This allows a protocol within a shared pool of stake (e.g. $100 billion) to have unique attributable security, meaning that even if all protocols are attacked simultaneously, the individual protocol can still slash and redistribute its own funds (e.g. $10 billion for Coinbase)." "and you know maybe this analogy that I recently came up with may be useful imagine you go into a store you know you go into a mall and then there is this main store that says that hey you have to put up a deposit you know if you come and steal anything here you will lose your deposit and then you know I come in and say hey anyway the main store you're putting up a $100 deposit to enter why don't you make a promise that you know with this $100 you will not steal anything even on the other smaller stores in the mall they say yeah you know now it's in your control to not steal at the mall right like it's very different from taking a margin lending or any other kind of like financial position so the risk is endogeneous to the Staker except smart contract risk and smart contract risk is just pervasive in all of blockchains and that's just what it is right and even that we are trying to build a very cautious govern system in the beginning over time these governance features can be removed but that's the trade-off that we're taking is be cautious in protecting the stake so you know to take another like mental model that people have which is I think very erroneous when 100 Protocols are sharing common stake the model is oh you know maybe there's no risk from the Staker side maybe there's a risk from the protocol side or from the AVS side and I think this is also erroneous and the reason is if there are 100 protocols Each of which can sustain $1 billion staking on their own let's say that's the 100 protocols and so which means they're paying some amount of fee which is sustaining that amount of State in that platform now if you aggregate all of this and create a $100 billion pool this can be restak across 100 protocols the fee is identical to the previous world because you know you're paying the same fee and you're able to sustain $100 billion now to attack any one protocol you need 100 billion doar rather than requiring $1 billion security has this nonlinearity where the the more stake you need to do an attack the more stake you need to actually profit 
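To make the market-equilibrium arithmetic discussed above concrete, here is a minimal sketch in Python. The dollar figures are the talk's illustrative numbers; the variable names are ours, not anything from EigenLayer.

```python
# A minimal sketch of the pooled-security arithmetic described above.
# All numbers are illustrative, taken from the talk's example.

SEPARATE_STAKE_PER_PROTOCOL = 1_000_000_000   # $1B each protocol sustains alone
NUM_PROTOCOLS = 100

# Separate worlds: attacking any one protocol costs only its own stake.
attack_cost_separate = SEPARATE_STAKE_PER_PROTOCOL

# Pooled world: the same fees sustain one shared pool, restaked everywhere,
# so corrupting any single protocol means corrupting the whole pool.
pooled_stake = SEPARATE_STAKE_PER_PROTOCOL * NUM_PROTOCOLS
attack_cost_pooled = pooled_stake

print(f"attack cost, separate stakes: ${attack_cost_separate:,}")
print(f"attack cost, shared pool:     ${attack_cost_pooled:,}")
# The fee paid per protocol is unchanged; only the cost of attack scales 100x.
```

The point of the sketch is that the fee side of the ledger is identical in both worlds; only the attack cost changes, which is the nonlinearity the talk describes.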
Can an AVS (actively validated service) have both unique attributable security and pooled security in a common system on EigenLayer?

According to the text, yes: EigenLayer gives an AVS the ability to express both unique attributable security and pooled security in a common system. The AVS can draw on the shared pool (e.g. $100 billion) while also reserving a uniquely slashable amount just for itself. The example given is Coinbase wanting $10 billion of its own attributable security while still participating in the larger $100 billion pool. This lets the AVS manage its risk and security at a more granular level.
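As a rough illustration of combining the two kinds of security, here is a toy model in Python. The `SecurityPolicy` class and its fields are hypothetical names invented for this sketch, not EigenLayer's actual interface.

```python
# A toy model (not EigenLayer's actual API) of combining pooled security
# with a uniquely attributable slice, as described above.

from dataclasses import dataclass

@dataclass
class SecurityPolicy:
    pooled_stake: float        # shared across all protocols
    attributable_stake: float  # reserved: slashable for this protocol alone

    def worst_case_recovery(self) -> float:
        # Even if every protocol is attacked simultaneously and the pooled
        # stake is exhausted by other claims, the attributable slice can
        # still be slashed and redistributed to this protocol's users.
        return self.attributable_stake

# The talk's example: a large exchange buying $10B of attributable cover
# on top of a $100B shared pool.
policy = SecurityPolicy(pooled_stake=100e9, attributable_stake=10e9)
print(f"guaranteed recovery under simultaneous attack: ${policy.worst_case_recovery():,.0f}")
```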
"and you know maybe this analogy that I recently came up with may be useful imagine you go into a store you know you go into a mall and then there is this main store that says that hey you have to put up a deposit you know if you come and steal anything here you will lose your deposit and then you know I come in and say hey anyway the main store you're putting up a $100 deposit to enter why don't you make a promise that you know with this $100 you will not steal anything even on the other smaller stores in the mall they say yeah you know now it's in your control to not steal at the mall right like it's very different from taking a margin lending or any other kind of like financial position so the risk is endogeneous to the Staker except smart contract risk and smart contract risk is just pervasive in all of blockchains and that's just what it is right and even that we are trying to build a very cautious govern system in the beginning over time these governance features can be removed but that's the trade-off that we're taking is be cautious in protecting the stake so you know to take another like mental model that people have which is I think very erroneous when 100 Protocols are sharing common stake the model is oh you know maybe there's no risk from the Staker side maybe there's a risk from the protocol side or from the AVS side and I think this is also erroneous and the reason is if there are 100 protocols Each of which can sustain $1 billion staking on their own let's say that's the 100 protocols and so which means they're paying some amount of fee which is sustaining that amount of State in that platform now if you aggregate all of this and create a $100 billion pool this can be restak across 100 protocols the fee is identical to the previous world because you know you're paying the same fee and you're able to sustain $100 billion now to attack any one protocol you need 100 billion doar rather than requiring $1 billion security has this nonlinearity where the the more stake you need to do an attack the more stake you need to actually profit out of and Escape in real world with all this kind of crazy stuff becomes impossible there is no liquidity on an exchange there is no you know tornado or whatever to go and hial transactions it's simply not possible to pull off an attack Beyond a certain scale so there's hardening of security that actually happens at scale so that's the the other I think model which is missing because when people are thinking about 100 protocols sharing the same stake they're thinking the amount of Stak is going to remain the same as the number of protocols increases but that's not the case as more and more protocol bring more and more fees the amount of stake will keep increasing so this is a market equilibrium and and another feature that we're building with IG layer is what we call attributable security when 100 Protocols are sharing a common pool of let's say 100 billion dollars of stake there may be one protocol which says hey you know I just not only want to have this PO olded security but I also want to have unique attributable security just to myself which means even if all the protocols get attack simultaneously I should be able to slash and redistribute let's say $10 billion because I'm coinbase I want to you know be very sure I want to be able to re you know slash and redistribute $10 billion on my own you can do this on EigenLayer EigenLayer gives you an ability to express both unique attributable security as well as pool Security in a common system and 
the power of pool security is very similar to why like nation states have like security functions cities don't have security functions is because there is a hardening of security at scale so that's um so one of the things we did as you know we thought through like what are the incentives of the F you know of the protocols we know that crypto runs on incentives if the incentive is not aligned people are not going to come and build on top of us one of the things we did is to break the binary choice between hey I stake my own token or I get security from ethereum we support natively what we call dual staking dual staking means I as an ABS can borrow trust from two distinct parties one is stakers of my own token whose whose interests are directly aligned with like the protocols well-being because you know they have exposure to the token and a neutral high value Quorum which is coming from ethereum so you have we have this dual Quorum model which is a very popular model among the many AVS even if they're launching on a single eorum to begin with over time they have the idea to actually build their own other Corum and instead of forcing a binary choice where we say Oh either you choose your your own token Corum or you choose you know the Eid Korum you can say oh I'm sending 80% of the fees to the E Korum and 20% to my own Quorum today and over time I'm going to maybe spend send more to my Corum and less to ethereum maybe at some point I may even send zero to the ethereum cor and send all the value to myself so what this means is specific specifically if you try to use some kind of a discounted cash flow model to try to Value like an AVS you know uh token you might say that the total value that can be accured by the AVS because you have and the thing is this decision between how much value goes to the AVS token versus how much value goes to eth is decided by the avss governance which which will be in their own native token so the at the end of the day EigenLayer is continuous and pure optionality like you have the option to use e if it is beneficial to you you have the option to opt out if it is beneficial to you and what this does is it makes it breaks this binary choice and in in this world the value of the AVS token with igon layer is actually only greater than the value in the absence of wagon layer because adding an option to consume additional Security in a way that you can opt in and out as needed doesn't increase your like uh you know uh cost basis so that's the first first thing is the dual token model basically like completely breaks this artificial tradeoff and makes it very very smooth for people to like borrow as much security as they need to keep their platform in in continuous utility okay so in in this case one of the downstream questions I get is hey does it mean that over time uh you know services will launch on eorum and eventually just migrate on their own and you know this is really a question of whether igen layer is viable not only as a bootstrapping platform for avss but also as a continuous service platform for avss and so it's incumbent on us to find ways to create synergies across these avss in a way that they actually want to stay rather than they are stuck with us and you know we have this you know entrenched Monopoly to like keep this platform going and there are many interesting ways we can actually do it","Why do nation-states typically have security functions rather than cities, according to the described analogy? 
According to the analogy, nation-states typically have security functions rather than cities because there is a ""hardening of security at scale."" This means that as more and more stakeholders (nation-states) contribute to a common pool of security, the overall level of security increases, making it more difficult for attackers to breach." "and you know maybe this analogy that I recently came up with may be useful imagine you go into a store you know you go into a mall and then there is this main store that says that hey you have to put up a deposit you know if you come and steal anything here you will lose your deposit and then you know I come in and say hey anyway the main store you're putting up a $100 deposit to enter why don't you make a promise that you know with this $100 you will not steal anything even on the other smaller stores in the mall they say yeah you know now it's in your control to not steal at the mall right like it's very different from taking a margin lending or any other kind of like financial position so the risk is endogeneous to the Staker except smart contract risk and smart contract risk is just pervasive in all of blockchains and that's just what it is right and even that we are trying to build a very cautious govern system in the beginning over time these governance features can be removed but that's the trade-off that we're taking is be cautious in protecting the stake so you know to take another like mental model that people have which is I think very erroneous when 100 Protocols are sharing common stake the model is oh you know maybe there's no risk from the Staker side maybe there's a risk from the protocol side or from the AVS side and I think this is also erroneous and the reason is if there are 100 protocols Each of which can sustain $1 billion staking on their own let's say that's the 100 protocols and so which means they're paying some amount of fee which is sustaining that amount of State in that platform now if you aggregate all of this and create a $100 billion pool this can be restak across 100 protocols the fee is identical to the previous world because you know you're paying the same fee and you're able to sustain $100 billion now to attack any one protocol you need 100 billion doar rather than requiring $1 billion security has this nonlinearity where the the more stake you need to do an attack the more stake you need to actually profit out of and Escape in real world with all this kind of crazy stuff becomes impossible there is no liquidity on an exchange there is no you know tornado or whatever to go and hial transactions it's simply not possible to pull off an attack Beyond a certain scale so there's hardening of security that actually happens at scale so that's the the other I think model which is missing because when people are thinking about 100 protocols sharing the same stake they're thinking the amount of Stak is going to remain the same as the number of protocols increases but that's not the case as more and more protocol bring more and more fees the amount of stake will keep increasing so this is a market equilibrium and and another feature that we're building with IG layer is what we call attributable security when 100 Protocols are sharing a common pool of let's say 100 billion dollars of stake there may be one protocol which says hey you know I just not only want to have this PO olded security but I also want to have unique attributable security just to myself which means even if all the protocols get attack simultaneously I 
should be able to slash and redistribute let's say $10 billion because I'm coinbase I want to you know be very sure I want to be able to re you know slash and redistribute $10 billion on my own you can do this on EigenLayer EigenLayer gives you an ability to express both unique attributable security as well as pool Security in a common system and the power of pool security is very similar to why like nation states have like security functions cities don't have security functions is because there is a hardening of security at scale so that's um so one of the things we did as you know we thought through like what are the incentives of the F you know of the protocols we know that crypto runs on incentives if the incentive is not aligned people are not going to come and build on top of us one of the things we did is to break the binary choice between hey I stake my own token or I get security from ethereum we support natively what we call dual staking dual staking means I as an ABS can borrow trust from two distinct parties one is stakers of my own token whose whose interests are directly aligned with like the protocols well-being because you know they have exposure to the token and a neutral high value Quorum which is coming from ethereum so you have we have this dual Quorum model which is a very popular model among the many AVS even if they're launching on a single eorum to begin with over time they have the idea to actually build their own other Corum and instead of forcing a binary choice where we say Oh either you choose your your own token Corum or you choose you know the Eid Korum you can say oh I'm sending 80% of the fees to the E Korum and 20% to my own Quorum today and over time I'm going to maybe spend send more to my Corum and less to ethereum maybe at some point I may even send zero to the ethereum cor and send all the value to myself so what this means is specific specifically if you try to use some kind of a discounted cash flow model to try to Value like an AVS you know uh token you might say that the total value that can be accured by the AVS because you have and the thing is this decision between how much value goes to the AVS token versus how much value goes to eth is decided by the avss governance which which will be in their own native token so the at the end of the day EigenLayer is continuous and pure optionality like you have the option to use e if it is beneficial to you you have the option to opt out if it is beneficial to you and what this does is it makes it breaks this binary choice and in in this world the value of the AVS token with igon layer is actually only greater than the value in the absence of wagon layer because adding an option to consume additional Security in a way that you can opt in and out as needed doesn't increase your like uh you know uh cost basis so that's the first first thing is the dual token model basically like completely breaks this artificial tradeoff and makes it very very smooth for people to like borrow as much security as they need to keep their platform in in continuous utility okay so in in this case one of the downstream questions I get is hey does it mean that over time uh you know services will launch on eorum and eventually just migrate on their own and you know this is really a question of whether igen layer is viable not only as a bootstrapping platform for avss but also as a continuous service platform for avss and so it's incumbent on us to find ways to create synergies across these avss in a way that they actually want to 
stay rather than they are stuck with us and you know we have this you know entrenched Monopoly to like keep this platform going and there are many interesting ways we can actually do it","What is the name of the dual staking model where ABS can borrow trust from two distinct parties, one being stakers of their own token and another being a high-value Quorum? The dual staking model mentioned is called ""Dual Staking"" with a high-value Quorum being referred to as the Ethereum-based Corum, but more specifically, it's referred to as ""dual Quorum model""." "and you know maybe this analogy that I recently came up with may be useful imagine you go into a store you know you go into a mall and then there is this main store that says that hey you have to put up a deposit you know if you come and steal anything here you will lose your deposit and then you know I come in and say hey anyway the main store you're putting up a $100 deposit to enter why don't you make a promise that you know with this $100 you will not steal anything even on the other smaller stores in the mall they say yeah you know now it's in your control to not steal at the mall right like it's very different from taking a margin lending or any other kind of like financial position so the risk is endogeneous to the Staker except smart contract risk and smart contract risk is just pervasive in all of blockchains and that's just what it is right and even that we are trying to build a very cautious govern system in the beginning over time these governance features can be removed but that's the trade-off that we're taking is be cautious in protecting the stake so you know to take another like mental model that people have which is I think very erroneous when 100 Protocols are sharing common stake the model is oh you know maybe there's no risk from the Staker side maybe there's a risk from the protocol side or from the AVS side and I think this is also erroneous and the reason is if there are 100 protocols Each of which can sustain $1 billion staking on their own let's say that's the 100 protocols and so which means they're paying some amount of fee which is sustaining that amount of State in that platform now if you aggregate all of this and create a $100 billion pool this can be restak across 100 protocols the fee is identical to the previous world because you know you're paying the same fee and you're able to sustain $100 billion now to attack any one protocol you need 100 billion doar rather than requiring $1 billion security has this nonlinearity where the the more stake you need to do an attack the more stake you need to actually profit out of and Escape in real world with all this kind of crazy stuff becomes impossible there is no liquidity on an exchange there is no you know tornado or whatever to go and hial transactions it's simply not possible to pull off an attack Beyond a certain scale so there's hardening of security that actually happens at scale so that's the the other I think model which is missing because when people are thinking about 100 protocols sharing the same stake they're thinking the amount of Stak is going to remain the same as the number of protocols increases but that's not the case as more and more protocol bring more and more fees the amount of stake will keep increasing so this is a market equilibrium and and another feature that we're building with IG layer is what we call attributable security when 100 Protocols are sharing a common pool of let's say 100 billion dollars of stake there may be one 
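A minimal sketch of a dual-quorum acceptance rule might look like the following, assuming, purely as an illustration, that a result counts only when each quorum independently clears a signing threshold. The 2/3 threshold and the function names are assumptions for this sketch, not EigenLayer's specification.

```python
# A hedged sketch of the dual-quorum idea: a result is accepted only if
# enough stake signs off in *each* quorum independently.

def accepted(signed_eth: float, total_eth: float,
             signed_native: float, total_native: float,
             threshold: float = 2 / 3) -> bool:
    """True iff both the ETH quorum and the native-token quorum
    independently reach the signing threshold."""
    return (signed_eth / total_eth >= threshold and
            signed_native / total_native >= threshold)

# Borrowing trust from two distinct parties: neither quorum alone suffices.
print(accepted(signed_eth=70, total_eth=100, signed_native=50, total_native=100))  # False
print(accepted(signed_eth=70, total_eth=100, signed_native=80, total_native=100))  # True
```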
How does an AVS decide between sending fees to its own token quorum versus a neutral high-value quorum like Ethereum?

According to the text, the AVS decides through governance conducted in its own native token. Governance determines how much value is allocated to the AVS's own token quorum versus the ETH quorum, and the split can change over time. The text's example: an AVS might start by sending 80% of fees to the ETH quorum and 20% to its own quorum, then later shift more value to its own quorum, and perhaps eventually send zero to Ethereum. This flexibility is made possible by EigenLayer's dual staking mechanism; in other words, the AVS has continuous optionality over how its fees are allocated, adjustable through governance as needed.
In the described optionality model, what happens if an AVS sends more or less value to its own token quorum versus Ethereum over time?

According to the text, this allocation is decided by the AVS's governance, which operates in its native token, and it can be rebalanced over time:

* If the AVS sends **more** value to its own token quorum, more of the fees it generates accrue to the native token quorum, and the ETH quorum's role for that AVS shrinks.
* If the AVS sends **less** value to its own token quorum and allocates more to the ETH quorum, the native token quorum's role shrinks correspondingly.
* At the limit, the AVS may send **zero** value to the ETH quorum and allocate everything to its own token quorum, effectively opting out of the ETH quorum.

In every case these decisions are made by AVS governance through the native token, and the balance between the two quorums can be adjusted as needed.
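The fee-split optionality can be pictured with a small sketch. The 80/20 figures follow the talk's example; the function name, the $1 million fee, and everything else here are hypothetical illustrations.

```python
# A sketch of the fee-split optionality: the AVS's governance chooses how
# fees divide between the ETH quorum and its own token quorum, and can
# shift that split over time, down to zero for ETH. Figures illustrative.

def split_fees(total_fee: float, eth_share: float) -> tuple[float, float]:
    assert 0.0 <= eth_share <= 1.0, "governance may set any split, including 0"
    return total_fee * eth_share, total_fee * (1 - eth_share)

# Launch: lean on the neutral high-value ETH quorum while bootstrapping.
print(split_fees(1_000_000, eth_share=0.8))   # (800000.0, 200000.0)
# Later, governance migrates value toward the native quorum.
print(split_fees(1_000_000, eth_share=0.2))   # (200000.0, 800000.0)
# Pure optionality: opting out entirely remains on the table.
print(split_fees(1_000_000, eth_share=0.0))   # (0.0, 1000000.0)
```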
"and one way you know I pointed to examples of the cloud early on and one of the really powerful things Amazon's cloud is called ec2 which is elastic Cloud compute right an elastic compute is the idea that I can borrow as much computer as I want and EigenLayer is elastic scaling of security you know if Amazon's ec2 like ion is es2 and es2 is basically elastic scaling of security which is you can borrow how much security you want and so why is this meaningful imagine a Bridge you know who's doing a weekly transaction volume of like you know anywhere between 10 million and 200 million okay so now how much security do I need if I have to provision security separately for the bridge I need to provision worst case 200 million so I need to have 200 million of security just for my bridge but in the EigenLayer worldview there is this common huge pool of security I can randomly pull the amount of security that I want exactly like the cloud which am across all the services creating a big compute platform from which you can pull the amount of compute that you want randomly and that's exactly what happens with I so this reduces dramatically the cost of security because you're not over provisioning for the worst case you're you're consuming security just in time how much you need this is one benefit of EigenLayer there's also all kinds of other benefits where what happens is if a Dap consumes multiple EigenLayer Services I want an AAL I want a DA I want some other thing instead of paying for security separately for each of these Services they can just pay one time because the same pole of security is backing all of these services so there is an economy of scale in Igan lay that actually incentivizes services that kind of are mutually synergistic to stay together so these are you can create almost what what I call like a uh X Market bundling like instead of BU you know somebody could have thought and said oh let let me create a new middleware which is an oracle plus da plus AI together but like who knows how to build an oracle plus da plus AI together in today's market condition like figuring out how to build one is already a huge lift so the EigenLayer allows postmarket bundling of these Services into like useful things that Services can consume reducing the cost basis as well as offering a consumer segment across these different Services just like you go to AWS and you have a bunch of SAS services and you just hook into like five you know a statistic is a typical web2 app has 15 SAS Services integrated in the back end something like that could happen on IG and I think another kind of under uh explored incentive for developers to stick around with i l long term is the fact that it enables kind of New Primitives to be built and one such new primitive that I came across on your Twitter is this idea of like co-processors for instance um and you kind of talked about this idea of like intelligent defi as well which is you know not something that at least I don't think you can build without ion layer so can you kind of break us down you know what exactly is a C- processor and what did you mean by intelligent defi yeah um a a Cod processor is kind of like a layer two system but Layer Two normally you think of as like a chain so this is one of the reasons I don't like the chain terminology so imagine I'm sitting on ethereum and I'm writing a smart contract program and you know maybe I'm on Unis Swap and and you know one of the things I want to do is instead of doing passive liquidity provisioning which is I 
just put it into a pool and you know fix a certain price level at which I'm provisioning liquidity instead I want to dynamically move around the liquidity right but then the question is who is doing it are you an active participant are you a passive participant what is going on and what might happen is somebody may come up with a machine learning or AI protocol which takes the history of all the transactions and moves around the liquidity like provisioning Spectrum right based on you know these hard inputs on the blockchain imagine that I can actually get high integrity provision of this service which means when the service says that this is the right like you know if I ran this AI on this history and this is the output I get and it's absolutely correct if you had this access and what you could do is you could write you know in uh in your D5 program that hey I'm I'm a passive provider but I'm provisioning liquidity to this AI protocol and you know I'm just hands off after that point and the AI like sits and keeps adjusting your like you know liquidity range and this is a really like interesting service if it could be built but it's not possible to write this in evm or you know you know fit it into the small gas limit of ethereum but if I had an EigenLayer service that I can sit and call on on top of like ethereum the these nodes run the AI service offchain and you know sign off on the in the output of this AI service and then put it on top of ethereum back it with a certain amount of Economic Security now you have a rigid input that the protocol can take and move around this liquidity based on that this becomes really really powerful because now you know we called you know this thing smart contracts right and you know smart contracts you know as they exist today are rigid but not that smart not that intelligent right you know smart means like oh is it an agent is it like doing complex like you know adaptations you have to write it in simple easy code like you know in Unis swap has this curve called XY equals K which is like the first kind of simple programmatic thing that one might think of and what if instead like you had a complex expressivity while not giving up rigidity or correctness how can I get smart and you know accurate execution simultaneously and that's what Ian layer promises there are other Technologies like ZK proofs which could give you this but they're very very expensive today you know running a ZK proof may be as expensive as 100,000 times just running the software you know this is this is possible for simple applications but when I'm when you're talking about running complex AI this cost just like blows up out of control whereas on IG layer you don't incur such a cost so that's that's the idea of cryptoeconomic co-processors I'm going to link your Tweet in the show notes below as well for anyone who wants to check out this idea because I think this is one of the more interesting directions that dii can take that can really Revitalize um now S I think my last question for you just to wrap this up is uh let's assume that there are many parallel realities and we're able to zoom out and see all these realities and in one reality we have IG layer at its most successful and then one where everything's gone wrong something has you horrible happen something horrible has happened so what would EigenLayer look like in these two Universe like what is the most kind of successful outcome you can see and the least successful outcome for EigenLayer the most successful outcome for IG 
layer would be that you know it it accelerates this whole crypto Vision which is that we can actually enable anybody to come and build new and interesting services on top of a common coordinatEigenLayer","Here is a comprehensive summary of the text: The speaker discusses EigenLayer, a concept that enables developers to build new services on top of a common infrastructure without having to provision and manage separate security for each service. This is made possible by a shared pool of security resources, called ""EisenLayer"", which can be borrowed as needed. The speaker highlights several benefits of EigenLayer, including: 1. Reduced cost: By sharing security resources, developers don't have to over-provision for the worst-case scenario. 2. Economies of scale: Multiple services can use the same pool of security resources, reducing costs and increasing efficiency. 3. Post-market bundling: Services that are mutually synergistic can be bundled together, making it easier for developers to create complex applications. The speaker also discusses new primitives that can be built on top of EigenLayer, such as co-processors and intelligent DeFi (Decentralized Finance) systems. A co-processor is a Layer 2 system that uses machine learning or AI to dynamically adjust liquidity provisioning on a blockchain. The speaker believes that EigenLayer has the potential to revitalize the crypto ecosystem by enabling new services and applications that were previously impossible due to limitations in existing infrastructure. However, they also acknowledge the risks associated with EigenLayer's reliance on shared security resources and the need for careful management of these resources to prevent vulnerabilities. In conclusion, EigenLayer is a concept that has the potential to accelerate the development of new services and applications in the crypto ecosystem, but its success will depend on careful management and execution." 
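A back-of-the-envelope sketch of the elastic-security point, using the talk's $10 million to $200 million bridge example; the weekly volumes themselves are hypothetical.

```python
# "Elastic scaling of security": a bridge whose weekly volume swings
# between $10M and $200M must provision for the worst case if it stands
# alone, but can draw security just in time from a shared pool.

weekly_volumes = [10e6, 40e6, 25e6, 200e6, 60e6, 15e6]  # hypothetical weeks

standalone_provision = max(weekly_volumes) * len(weekly_volumes)  # hold $200M every week
elastic_provision = sum(weekly_volumes)                           # draw only what is used

print(f"standalone, worst-case provisioning: ${standalone_provision:,.0f}")
print(f"elastic, just-in-time draw:          ${elastic_provision:,.0f}")
print(f"savings: {1 - elastic_provision / standalone_provision:.0%}")
```

On these made-up numbers the elastic model cuts cumulative provisioning by roughly 70%, which is the over-provisioning saving the talk is pointing at.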
"and one way you know I pointed to examples of the cloud early on and one of the really powerful things Amazon's cloud is called ec2 which is elastic Cloud compute right an elastic compute is the idea that I can borrow as much computer as I want and EigenLayer is elastic scaling of security you know if Amazon's ec2 like ion is es2 and es2 is basically elastic scaling of security which is you can borrow how much security you want and so why is this meaningful imagine a Bridge you know who's doing a weekly transaction volume of like you know anywhere between 10 million and 200 million okay so now how much security do I need if I have to provision security separately for the bridge I need to provision worst case 200 million so I need to have 200 million of security just for my bridge but in the EigenLayer worldview there is this common huge pool of security I can randomly pull the amount of security that I want exactly like the cloud which am across all the services creating a big compute platform from which you can pull the amount of compute that you want randomly and that's exactly what happens with I so this reduces dramatically the cost of security because you're not over provisioning for the worst case you're you're consuming security just in time how much you need this is one benefit of EigenLayer there's also all kinds of other benefits where what happens is if a Dap consumes multiple EigenLayer Services I want an AAL I want a DA I want some other thing instead of paying for security separately for each of these Services they can just pay one time because the same pole of security is backing all of these services so there is an economy of scale in Igan lay that actually incentivizes services that kind of are mutually synergistic to stay together so these are you can create almost what what I call like a uh X Market bundling like instead of BU you know somebody could have thought and said oh let let me create a new middleware which is an oracle plus da plus AI together but like who knows how to build an oracle plus da plus AI together in today's market condition like figuring out how to build one is already a huge lift so the EigenLayer allows postmarket bundling of these Services into like useful things that Services can consume reducing the cost basis as well as offering a consumer segment across these different Services just like you go to AWS and you have a bunch of SAS services and you just hook into like five you know a statistic is a typical web2 app has 15 SAS Services integrated in the back end something like that could happen on IG and I think another kind of under uh explored incentive for developers to stick around with i l long term is the fact that it enables kind of New Primitives to be built and one such new primitive that I came across on your Twitter is this idea of like co-processors for instance um and you kind of talked about this idea of like intelligent defi as well which is you know not something that at least I don't think you can build without ion layer so can you kind of break us down you know what exactly is a C- processor and what did you mean by intelligent defi yeah um a a Cod processor is kind of like a layer two system but Layer Two normally you think of as like a chain so this is one of the reasons I don't like the chain terminology so imagine I'm sitting on ethereum and I'm writing a smart contract program and you know maybe I'm on Unis Swap and and you know one of the things I want to do is instead of doing passive liquidity provisioning which is I 
just put it into a pool and you know fix a certain price level at which I'm provisioning liquidity instead I want to dynamically move around the liquidity right but then the question is who is doing it are you an active participant are you a passive participant what is going on and what might happen is somebody may come up with a machine learning or AI protocol which takes the history of all the transactions and moves around the liquidity like provisioning Spectrum right based on you know these hard inputs on the blockchain imagine that I can actually get high integrity provision of this service which means when the service says that this is the right like you know if I ran this AI on this history and this is the output I get and it's absolutely correct if you had this access and what you could do is you could write you know in uh in your D5 program that hey I'm I'm a passive provider but I'm provisioning liquidity to this AI protocol and you know I'm just hands off after that point and the AI like sits and keeps adjusting your like you know liquidity range and this is a really like interesting service if it could be built but it's not possible to write this in evm or you know you know fit it into the small gas limit of ethereum but if I had an EigenLayer service that I can sit and call on on top of like ethereum the these nodes run the AI service offchain and you know sign off on the in the output of this AI service and then put it on top of ethereum back it with a certain amount of Economic Security now you have a rigid input that the protocol can take and move around this liquidity based on that this becomes really really powerful because now you know we called you know this thing smart contracts right and you know smart contracts you know as they exist today are rigid but not that smart not that intelligent right you know smart means like oh is it an agent is it like doing complex like you know adaptations you have to write it in simple easy code like you know in Unis swap has this curve called XY equals K which is like the first kind of simple programmatic thing that one might think of and what if instead like you had a complex expressivity while not giving up rigidity or correctness how can I get smart and you know accurate execution simultaneously and that's what Ian layer promises there are other Technologies like ZK proofs which could give you this but they're very very expensive today you know running a ZK proof may be as expensive as 100,000 times just running the software you know this is this is possible for simple applications but when I'm when you're talking about running complex AI this cost just like blows up out of control whereas on IG layer you don't incur such a cost so that's that's the idea of cryptoeconomic co-processors I'm going to link your Tweet in the show notes below as well for anyone who wants to check out this idea because I think this is one of the more interesting directions that dii can take that can really Revitalize um now S I think my last question for you just to wrap this up is uh let's assume that there are many parallel realities and we're able to zoom out and see all these realities and in one reality we have IG layer at its most successful and then one where everything's gone wrong something has you horrible happen something horrible has happened so what would EigenLayer look like in these two Universe like what is the most kind of successful outcome you can see and the least successful outcome for EigenLayer the most successful outcome for IG 
layer would be that you know it it accelerates this whole crypto Vision which is that we can actually enable anybody to come and build new and interesting services on top of a common coordinatEigenLayer","Here are the 10 questions extracted from the text: Here are the 10 questions that can be extracted from the text: 1. How does Amazon's cloud computing service, EC2 (Elastic Cloud Compute), allow for elastic scaling of computer resources? 2. Why is it meaningful to have an elastic scaling of security, like Amazon's ES2 (Elastic Scaling of Security)? 3. What are the benefits of having a common pool of security on EigenLayer? 4. How does EigenLayer reduce the cost of security compared to traditional provisioning methods? 5. Can services on EigenLayer pay for security only once, even if they consume multiple services that rely on the same pool of security? 6. How does EigenLayer enable post-market bundling of services into useful and synergistic combinations? 7. What are co-processors in the context of EigenLayer, and how can they be used to improve performance? 8. Can you explain the concept of intelligent DeFi (Decentralized Finance) on EigenLayer? 9. How does the use of AI protocols on EigenLayer allow for more efficient liquidity provisioning on decentralized finance platforms? 10. What are the implications of having a successful or failed EigenLayer, and how would that play out in different parallel realities?" "and one way you know I pointed to examples of the cloud early on and one of the really powerful things Amazon's cloud is called ec2 which is elastic Cloud compute right an elastic compute is the idea that I can borrow as much computer as I want and EigenLayer is elastic scaling of security you know if Amazon's ec2 like ion is es2 and es2 is basically elastic scaling of security which is you can borrow how much security you want and so why is this meaningful imagine a Bridge you know who's doing a weekly transaction volume of like you know anywhere between 10 million and 200 million okay so now how much security do I need if I have to provision security separately for the bridge I need to provision worst case 200 million so I need to have 200 million of security just for my bridge but in the EigenLayer worldview there is this common huge pool of security I can randomly pull the amount of security that I want exactly like the cloud which am across all the services creating a big compute platform from which you can pull the amount of compute that you want randomly and that's exactly what happens with I so this reduces dramatically the cost of security because you're not over provisioning for the worst case you're you're consuming security just in time how much you need this is one benefit of EigenLayer there's also all kinds of other benefits where what happens is if a Dap consumes multiple EigenLayer Services I want an AAL I want a DA I want some other thing instead of paying for security separately for each of these Services they can just pay one time because the same pole of security is backing all of these services so there is an economy of scale in Igan lay that actually incentivizes services that kind of are mutually synergistic to stay together so these are you can create almost what what I call like a uh X Market bundling like instead of BU you know somebody could have thought and said oh let let me create a new middleware which is an oracle plus da plus AI together but like who knows how to build an oracle plus da plus AI together in today's market condition like figuring 
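Finally, the cryptoeconomic co-processor flow described above can be sketched end to end. This is a highly simplified toy: a hash stands in for real signatures, and the operator set, quorum threshold, and function names are all assumptions, not an EigenLayer implementation.

```python
# Toy cryptoeconomic co-processor: heavy computation (e.g. an AI
# repositioning liquidity) runs off-chain, staked operators sign the
# result, and the chain only verifies attestations, not the computation.

import hashlib

OPERATORS = {"op1", "op2", "op3"}   # registered, staked operators (assumed)
QUORUM = 2                          # attestations required (assumed threshold)

def run_offchain_model(tx_history: list[float]) -> float:
    # Stand-in for the expensive model: pick a new liquidity midpoint.
    return sum(tx_history) / len(tx_history)

def sign(operator: str, output: float) -> str:
    # Toy "signature": a hash binding the operator to the output.
    return hashlib.sha256(f"{operator}:{output}".encode()).hexdigest()

def onchain_accept(output: float, sigs: dict[str, str]) -> bool:
    # Cheap on-chain check: enough staked operators attested to this output.
    valid = [op for op, s in sigs.items()
             if op in OPERATORS and s == sign(op, output)]
    return len(valid) >= QUORUM

history = [1800.0, 1950.0, 2100.0]
out = run_offchain_model(history)
sigs = {op: sign(op, out) for op in ["op1", "op2"]}
print(onchain_accept(out, sigs))  # True: the contract can now act on `out`
```

The design point mirrored here is the one from the transcript: the chain never re-runs the model, so the gas cost is independent of how complex the off-chain computation is, with economic security (slashable stake behind the operators) substituting for on-chain re-execution.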
1. Can I borrow as much compute as I want on Amazon's cloud?

Yes. Amazon's cloud provides elastic computing capabilities through its EC2 service. This allows users to dynamically scale their compute resources up or down as needed, rather than having to provision a fixed amount of compute power in advance. The idea is that you can "borrow" as much compute power as you need for your application and pay only for what you use. This is especially useful for applications with variable workloads, such as web servers or data-processing tasks.
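A toy calculation of what that elasticity buys. The daily workload and unit price below are made up for illustration (they are not AWS figures); the point is that provisioning for the peak pays for idle capacity, while elastic scaling pays only for what is used.

```python
# Fixed provisioning vs. elastic "borrow what you need" -- illustrative numbers only.
workload = [10, 12, 80, 15, 11, 95, 14]   # instances needed per day (spiky demand)
price_per_instance_day = 1.0               # hypothetical unit price

fixed = max(workload) * len(workload) * price_per_instance_day   # provision for the peak
elastic = sum(workload) * price_per_instance_day                 # pay per use

print(f"fixed provisioning: {fixed:.0f}")    # 95 * 7 = 665
print(f"elastic scaling:    {elastic:.0f}")  # 237
```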
"and one way you know I pointed to examples of the cloud early on and one of the really powerful things Amazon's cloud is called ec2 which is elastic Cloud compute right an elastic compute is the idea that I can borrow as much computer as I want and EigenLayer is elastic scaling of security you know if Amazon's ec2 like ion is es2 and es2 is basically elastic scaling of security which is you can borrow how much security you want and so why is this meaningful imagine a Bridge you know who's doing a weekly transaction volume of like you know anywhere between 10 million and 200 million okay so now how much security do I need if I have to provision security separately for the bridge I need to provision worst case 200 million so I need to have 200 million of security just for my bridge but in the EigenLayer worldview there is this common huge pool of security I can randomly pull the amount of security that I want exactly like the cloud which am across all the services creating a big compute platform from which you can pull the amount of compute that you want randomly and that's exactly what happens with I so this reduces dramatically the cost of security because you're not over provisioning for the worst case you're you're consuming security just in time how much you need this is one benefit of EigenLayer there's also all kinds of other benefits where what happens is if a Dap consumes multiple EigenLayer Services I want an AAL I want a DA I want some other thing instead of paying for security separately for each of these Services they can just pay one time because the same pole of security is backing all of these services so there is an economy of scale in Igan lay that actually incentivizes services that kind of are mutually synergistic to stay together so these are you can create almost what what I call like a uh X Market bundling like instead of BU you know somebody could have thought and said oh let let me create a new middleware which is an oracle plus da plus AI together but like who knows how to build an oracle plus da plus AI together in today's market condition like figuring out how to build one is already a huge lift so the EigenLayer allows postmarket bundling of these Services into like useful things that Services can consume reducing the cost basis as well as offering a consumer segment across these different Services just like you go to AWS and you have a bunch of SAS services and you just hook into like five you know a statistic is a typical web2 app has 15 SAS Services integrated in the back end something like that could happen on IG and I think another kind of under uh explored incentive for developers to stick around with i l long term is the fact that it enables kind of New Primitives to be built and one such new primitive that I came across on your Twitter is this idea of like co-processors for instance um and you kind of talked about this idea of like intelligent defi as well which is you know not something that at least I don't think you can build without ion layer so can you kind of break us down you know what exactly is a C- processor and what did you mean by intelligent defi yeah um a a Cod processor is kind of like a layer two system but Layer Two normally you think of as like a chain so this is one of the reasons I don't like the chain terminology so imagine I'm sitting on ethereum and I'm writing a smart contract program and you know maybe I'm on Unis Swap and and you know one of the things I want to do is instead of doing passive liquidity provisioning which is I 
just put it into a pool and you know fix a certain price level at which I'm provisioning liquidity instead I want to dynamically move around the liquidity right but then the question is who is doing it are you an active participant are you a passive participant what is going on and what might happen is somebody may come up with a machine learning or AI protocol which takes the history of all the transactions and moves around the liquidity like provisioning Spectrum right based on you know these hard inputs on the blockchain imagine that I can actually get high integrity provision of this service which means when the service says that this is the right like you know if I ran this AI on this history and this is the output I get and it's absolutely correct if you had this access and what you could do is you could write you know in uh in your D5 program that hey I'm I'm a passive provider but I'm provisioning liquidity to this AI protocol and you know I'm just hands off after that point and the AI like sits and keeps adjusting your like you know liquidity range and this is a really like interesting service if it could be built but it's not possible to write this in evm or you know you know fit it into the small gas limit of ethereum but if I had an EigenLayer service that I can sit and call on on top of like ethereum the these nodes run the AI service offchain and you know sign off on the in the output of this AI service and then put it on top of ethereum back it with a certain amount of Economic Security now you have a rigid input that the protocol can take and move around this liquidity based on that this becomes really really powerful because now you know we called you know this thing smart contracts right and you know smart contracts you know as they exist today are rigid but not that smart not that intelligent right you know smart means like oh is it an agent is it like doing complex like you know adaptations you have to write it in simple easy code like you know in Unis swap has this curve called XY equals K which is like the first kind of simple programmatic thing that one might think of and what if instead like you had a complex expressivity while not giving up rigidity or correctness how can I get smart and you know accurate execution simultaneously and that's what Ian layer promises there are other Technologies like ZK proofs which could give you this but they're very very expensive today you know running a ZK proof may be as expensive as 100,000 times just running the software you know this is this is possible for simple applications but when I'm when you're talking about running complex AI this cost just like blows up out of control whereas on IG layer you don't incur such a cost so that's that's the idea of cryptoeconomic co-processors I'm going to link your Tweet in the show notes below as well for anyone who wants to check out this idea because I think this is one of the more interesting directions that dii can take that can really Revitalize um now S I think my last question for you just to wrap this up is uh let's assume that there are many parallel realities and we're able to zoom out and see all these realities and in one reality we have IG layer at its most successful and then one where everything's gone wrong something has you horrible happen something horrible has happened so what would EigenLayer look like in these two Universe like what is the most kind of successful outcome you can see and the least successful outcome for EigenLayer the most successful outcome for IG 
2. Does EC2, or elastic compute, allow for borrowing of security as needed?

EC2 allows borrowing of compute resources as needed through elastic scaling, but it has no equivalent for security. "ES2" (elastic scaling of security) is not a real Amazon service; it is the hypothetical analogy used in the conversation for EigenLayer. It illustrates the idea of dynamically allocating and deallocating security resources based on need, without having to provision excess capacity, in the same way that elastic compute works in EC2.
3. Is there a common huge pool of security in EigenLayer that can be drawn on on demand, like in the cloud?

Yes. According to the conversation, EigenLayer has a common, shared pool of security that can be drawn on on demand, like compute in the cloud. Instead of provisioning separate security for each service or application, developers tap into this shared pool for the resources they need. The concept parallels Amazon's EC2: just as a big compute platform lets you pull the amount of compute you want, EigenLayer's pool lets you pull the amount of security you want, reducing over-provisioning and saving costs. This lets developers focus on building innovative services rather than on security infrastructure.
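The bridge example from the conversation can be made concrete with a toy model. Only the $10M to $200M weekly range comes from the transcript; the specific weekly volumes below are invented, and "M$-weeks" is an informal unit for stake held over time.

```python
# Standalone provisioning must cover the worst case every week; a shared pool
# lets the bridge borrow just enough security for that week's volume at risk.
weekly_volume_musd = [10, 40, 200, 25, 60, 180, 15, 90]   # hypothetical weeks

standalone = 200 * len(weekly_volume_musd)   # hold $200M of security at all times
pooled = sum(weekly_volume_musd)             # borrow just-in-time from the pool

print(f"standalone: {standalone} M$-weeks of locked security")   # 1600
print(f"pooled:     {pooled} M$-weeks of locked security")       # 620
```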
"and one way you know I pointed to examples of the cloud early on and one of the really powerful things Amazon's cloud is called ec2 which is elastic Cloud compute right an elastic compute is the idea that I can borrow as much computer as I want and EigenLayer is elastic scaling of security you know if Amazon's ec2 like ion is es2 and es2 is basically elastic scaling of security which is you can borrow how much security you want and so why is this meaningful imagine a Bridge you know who's doing a weekly transaction volume of like you know anywhere between 10 million and 200 million okay so now how much security do I need if I have to provision security separately for the bridge I need to provision worst case 200 million so I need to have 200 million of security just for my bridge but in the EigenLayer worldview there is this common huge pool of security I can randomly pull the amount of security that I want exactly like the cloud which am across all the services creating a big compute platform from which you can pull the amount of compute that you want randomly and that's exactly what happens with I so this reduces dramatically the cost of security because you're not over provisioning for the worst case you're you're consuming security just in time how much you need this is one benefit of EigenLayer there's also all kinds of other benefits where what happens is if a Dap consumes multiple EigenLayer Services I want an AAL I want a DA I want some other thing instead of paying for security separately for each of these Services they can just pay one time because the same pole of security is backing all of these services so there is an economy of scale in Igan lay that actually incentivizes services that kind of are mutually synergistic to stay together so these are you can create almost what what I call like a uh X Market bundling like instead of BU you know somebody could have thought and said oh let let me create a new middleware which is an oracle plus da plus AI together but like who knows how to build an oracle plus da plus AI together in today's market condition like figuring out how to build one is already a huge lift so the EigenLayer allows postmarket bundling of these Services into like useful things that Services can consume reducing the cost basis as well as offering a consumer segment across these different Services just like you go to AWS and you have a bunch of SAS services and you just hook into like five you know a statistic is a typical web2 app has 15 SAS Services integrated in the back end something like that could happen on IG and I think another kind of under uh explored incentive for developers to stick around with i l long term is the fact that it enables kind of New Primitives to be built and one such new primitive that I came across on your Twitter is this idea of like co-processors for instance um and you kind of talked about this idea of like intelligent defi as well which is you know not something that at least I don't think you can build without ion layer so can you kind of break us down you know what exactly is a C- processor and what did you mean by intelligent defi yeah um a a Cod processor is kind of like a layer two system but Layer Two normally you think of as like a chain so this is one of the reasons I don't like the chain terminology so imagine I'm sitting on ethereum and I'm writing a smart contract program and you know maybe I'm on Unis Swap and and you know one of the things I want to do is instead of doing passive liquidity provisioning which is I 
just put it into a pool and you know fix a certain price level at which I'm provisioning liquidity instead I want to dynamically move around the liquidity right but then the question is who is doing it are you an active participant are you a passive participant what is going on and what might happen is somebody may come up with a machine learning or AI protocol which takes the history of all the transactions and moves around the liquidity like provisioning Spectrum right based on you know these hard inputs on the blockchain imagine that I can actually get high integrity provision of this service which means when the service says that this is the right like you know if I ran this AI on this history and this is the output I get and it's absolutely correct if you had this access and what you could do is you could write you know in uh in your D5 program that hey I'm I'm a passive provider but I'm provisioning liquidity to this AI protocol and you know I'm just hands off after that point and the AI like sits and keeps adjusting your like you know liquidity range and this is a really like interesting service if it could be built but it's not possible to write this in evm or you know you know fit it into the small gas limit of ethereum but if I had an EigenLayer service that I can sit and call on on top of like ethereum the these nodes run the AI service offchain and you know sign off on the in the output of this AI service and then put it on top of ethereum back it with a certain amount of Economic Security now you have a rigid input that the protocol can take and move around this liquidity based on that this becomes really really powerful because now you know we called you know this thing smart contracts right and you know smart contracts you know as they exist today are rigid but not that smart not that intelligent right you know smart means like oh is it an agent is it like doing complex like you know adaptations you have to write it in simple easy code like you know in Unis swap has this curve called XY equals K which is like the first kind of simple programmatic thing that one might think of and what if instead like you had a complex expressivity while not giving up rigidity or correctness how can I get smart and you know accurate execution simultaneously and that's what Ian layer promises there are other Technologies like ZK proofs which could give you this but they're very very expensive today you know running a ZK proof may be as expensive as 100,000 times just running the software you know this is this is possible for simple applications but when I'm when you're talking about running complex AI this cost just like blows up out of control whereas on IG layer you don't incur such a cost so that's that's the idea of cryptoeconomic co-processors I'm going to link your Tweet in the show notes below as well for anyone who wants to check out this idea because I think this is one of the more interesting directions that dii can take that can really Revitalize um now S I think my last question for you just to wrap this up is uh let's assume that there are many parallel realities and we're able to zoom out and see all these realities and in one reality we have IG layer at its most successful and then one where everything's gone wrong something has you horrible happen something horrible has happened so what would EigenLayer look like in these two Universe like what is the most kind of successful outcome you can see and the least successful outcome for EigenLayer the most successful outcome for IG 
4. How does EigenLayer reduce the cost of security compared to traditional provisioning methods?

EigenLayer reduces the cost of security in several ways:

1. **Consuming security just in time.** Unlike traditional methods, EigenLayer allows users to consume security only when needed, rather than over-provisioning for worst-case scenarios, so resources are not wasted on unused or underutilized security.
2. **A shared pool of security.** EigenLayer's architecture provides a shared pool of security that multiple services can access and utilize. This reduces the need for each service to provision its own security, leading to economies of scale and lower costs.
3. **No separate provisioning.** With EigenLayer, users don't need to provision separate security for each service or application; a single shared pool is accessible across all of them.

Through these approaches, EigenLayer makes security significantly cheaper than traditional per-service provisioning.
5. Can multiple services share the same pool of security on EigenLayer, reducing costs and incentivizing mutually synergistic services?

Yes. According to the text, EigenLayer enables multiple services to share a common pool of security, which lets mutually synergistic services "stay together" and reduces costs by avoiding separate provisioning for each service. In particular:

* A dApp doesn't need to pay for security separately for each service it consumes.
* Multiple services can be paid for once, because the same pool of security backs all of them, creating an "economy of scale".
* This incentivizes mutually synergistic services to bundle together into new, useful combinations.

In summary, a shared security pool reduces costs and promotes cooperation among services.
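One stylized way to model that economy of scale in Python. The service names and dollar figures are assumptions, and treating the shared requirement as the maximum of the individual needs is a simplification; real cost sharing depends on how stake and slashing are attributed across services.

```python
# Separate provisioning locks stake per service; a shared pool lets the same
# restaked capital back all of them at once. Illustrative numbers only.
security_needs_musd = {"oracle": 50, "data_availability": 120, "coprocessor": 80}

separate = sum(security_needs_musd.values())   # 250: each service secured alone
shared = max(security_needs_musd.values())     # 120: one pool covers the largest need

print(f"separate pools: ${separate}M of stake locked")
print(f"shared pool:    ${shared}M of stake locked")
```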
"and one way you know I pointed to examples of the cloud early on and one of the really powerful things Amazon's cloud is called ec2 which is elastic Cloud compute right an elastic compute is the idea that I can borrow as much computer as I want and EigenLayer is elastic scaling of security you know if Amazon's ec2 like ion is es2 and es2 is basically elastic scaling of security which is you can borrow how much security you want and so why is this meaningful imagine a Bridge you know who's doing a weekly transaction volume of like you know anywhere between 10 million and 200 million okay so now how much security do I need if I have to provision security separately for the bridge I need to provision worst case 200 million so I need to have 200 million of security just for my bridge but in the EigenLayer worldview there is this common huge pool of security I can randomly pull the amount of security that I want exactly like the cloud which am across all the services creating a big compute platform from which you can pull the amount of compute that you want randomly and that's exactly what happens with I so this reduces dramatically the cost of security because you're not over provisioning for the worst case you're you're consuming security just in time how much you need this is one benefit of EigenLayer there's also all kinds of other benefits where what happens is if a Dap consumes multiple EigenLayer Services I want an AAL I want a DA I want some other thing instead of paying for security separately for each of these Services they can just pay one time because the same pole of security is backing all of these services so there is an economy of scale in Igan lay that actually incentivizes services that kind of are mutually synergistic to stay together so these are you can create almost what what I call like a uh X Market bundling like instead of BU you know somebody could have thought and said oh let let me create a new middleware which is an oracle plus da plus AI together but like who knows how to build an oracle plus da plus AI together in today's market condition like figuring out how to build one is already a huge lift so the EigenLayer allows postmarket bundling of these Services into like useful things that Services can consume reducing the cost basis as well as offering a consumer segment across these different Services just like you go to AWS and you have a bunch of SAS services and you just hook into like five you know a statistic is a typical web2 app has 15 SAS Services integrated in the back end something like that could happen on IG and I think another kind of under uh explored incentive for developers to stick around with i l long term is the fact that it enables kind of New Primitives to be built and one such new primitive that I came across on your Twitter is this idea of like co-processors for instance um and you kind of talked about this idea of like intelligent defi as well which is you know not something that at least I don't think you can build without ion layer so can you kind of break us down you know what exactly is a C- processor and what did you mean by intelligent defi yeah um a a Cod processor is kind of like a layer two system but Layer Two normally you think of as like a chain so this is one of the reasons I don't like the chain terminology so imagine I'm sitting on ethereum and I'm writing a smart contract program and you know maybe I'm on Unis Swap and and you know one of the things I want to do is instead of doing passive liquidity provisioning which is I 
just put it into a pool and you know fix a certain price level at which I'm provisioning liquidity instead I want to dynamically move around the liquidity right but then the question is who is doing it are you an active participant are you a passive participant what is going on and what might happen is somebody may come up with a machine learning or AI protocol which takes the history of all the transactions and moves around the liquidity like provisioning Spectrum right based on you know these hard inputs on the blockchain imagine that I can actually get high integrity provision of this service which means when the service says that this is the right like you know if I ran this AI on this history and this is the output I get and it's absolutely correct if you had this access and what you could do is you could write you know in uh in your D5 program that hey I'm I'm a passive provider but I'm provisioning liquidity to this AI protocol and you know I'm just hands off after that point and the AI like sits and keeps adjusting your like you know liquidity range and this is a really like interesting service if it could be built but it's not possible to write this in evm or you know you know fit it into the small gas limit of ethereum but if I had an EigenLayer service that I can sit and call on on top of like ethereum the these nodes run the AI service offchain and you know sign off on the in the output of this AI service and then put it on top of ethereum back it with a certain amount of Economic Security now you have a rigid input that the protocol can take and move around this liquidity based on that this becomes really really powerful because now you know we called you know this thing smart contracts right and you know smart contracts you know as they exist today are rigid but not that smart not that intelligent right you know smart means like oh is it an agent is it like doing complex like you know adaptations you have to write it in simple easy code like you know in Unis swap has this curve called XY equals K which is like the first kind of simple programmatic thing that one might think of and what if instead like you had a complex expressivity while not giving up rigidity or correctness how can I get smart and you know accurate execution simultaneously and that's what Ian layer promises there are other Technologies like ZK proofs which could give you this but they're very very expensive today you know running a ZK proof may be as expensive as 100,000 times just running the software you know this is this is possible for simple applications but when I'm when you're talking about running complex AI this cost just like blows up out of control whereas on IG layer you don't incur such a cost so that's that's the idea of cryptoeconomic co-processors I'm going to link your Tweet in the show notes below as well for anyone who wants to check out this idea because I think this is one of the more interesting directions that dii can take that can really Revitalize um now S I think my last question for you just to wrap this up is uh let's assume that there are many parallel realities and we're able to zoom out and see all these realities and in one reality we have IG layer at its most successful and then one where everything's gone wrong something has you horrible happen something horrible has happened so what would EigenLayer look like in these two Universe like what is the most kind of successful outcome you can see and the least successful outcome for EigenLayer the most successful outcome for IG 
6. What is post-market bundling of services, and how can it be achieved with EigenLayer?

Post-market bundling of services refers to combining existing services into new ones by integrating them in ways that provide added value to users. In traditional markets, this would require significant development and integration effort from multiple parties.

EigenLayer enables post-market bundling by providing a platform where developers can build and deploy their own decentralized services on top of a common infrastructure, allowing new services to combine existing ones in innovative ways. With EigenLayer, users can create complex services by:

1. Selecting individual components or services from various developers.
2. Integrating these components using the EigenLayer framework, which enables communication and data sharing between them.
3. Deploying the integrated service on the EigenLayer platform, making it easily accessible to users.

This allows a new market dynamic where:

* Developers can create services that complement each other, rather than competing with existing ones.
* Users benefit from customized experiences tailored to their specific needs by combining services in innovative ways.
* The EigenLayer platform acts as an enabler of innovation, accelerating the development and deployment of new services.

Examples of post-market bundling on EigenLayer could include:

1. A decentralized finance (DeFi) service that combines a yield optimizer with a price-prediction engine to provide users with optimized investment strategies.
2. A social media platform that integrates a content-moderation service with a reputation system to promote safe and respectful online interactions.
3. A gaming platform that bundles a blockchain-based rewards engine with a dynamic difficulty-adjustment algorithm to create engaging, challenging gameplay.

By enabling post-market bundling, EigenLayer opens up new possibilities for developers to create innovative services that meet the evolving needs of users in the decentralized economy.
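A minimal sketch of that composition pattern, with entirely hypothetical interfaces (none of these classes correspond to a real AVS API). The point is only the shape of post-market bundling: services built by independent teams are combined after the fact behind one interface that a dApp consumes.

```python
# Post-market bundling: nobody had to build oracle + DA + AI as one product;
# independently built services are composed into a bundle a dApp can consume.
from typing import Protocol

class Service(Protocol):
    name: str
    def call(self, payload: str) -> str: ...

class Oracle:
    name = "oracle"
    def call(self, payload: str) -> str:
        return f"price({payload})=1800"                  # toy price feed

class DataAvailability:
    name = "da"
    def call(self, payload: str) -> str:
        return f"stored:{hash(payload) & 0xFFFF:04x}"    # toy storage receipt

class Coprocessor:
    name = "ai"
    def call(self, payload: str) -> str:
        return f"range({payload})=(1780,1820)"           # toy model output

class Bundle:
    """Combines independently built services behind a single entry point."""
    def __init__(self, services: list[Service]):
        self.services = services
    def call_all(self, payload: str) -> dict[str, str]:
        return {s.name: s.call(payload) for s in self.services}

bundle = Bundle([Oracle(), DataAvailability(), Coprocessor()])
print(bundle.call_all("ETH/USD"))
```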
"and one way you know I pointed to examples of the cloud early on and one of the really powerful things Amazon's cloud is called ec2 which is elastic Cloud compute right an elastic compute is the idea that I can borrow as much computer as I want and EigenLayer is elastic scaling of security you know if Amazon's ec2 like ion is es2 and es2 is basically elastic scaling of security which is you can borrow how much security you want and so why is this meaningful imagine a Bridge you know who's doing a weekly transaction volume of like you know anywhere between 10 million and 200 million okay so now how much security do I need if I have to provision security separately for the bridge I need to provision worst case 200 million so I need to have 200 million of security just for my bridge but in the EigenLayer worldview there is this common huge pool of security I can randomly pull the amount of security that I want exactly like the cloud which am across all the services creating a big compute platform from which you can pull the amount of compute that you want randomly and that's exactly what happens with I so this reduces dramatically the cost of security because you're not over provisioning for the worst case you're you're consuming security just in time how much you need this is one benefit of EigenLayer there's also all kinds of other benefits where what happens is if a Dap consumes multiple EigenLayer Services I want an AAL I want a DA I want some other thing instead of paying for security separately for each of these Services they can just pay one time because the same pole of security is backing all of these services so there is an economy of scale in Igan lay that actually incentivizes services that kind of are mutually synergistic to stay together so these are you can create almost what what I call like a uh X Market bundling like instead of BU you know somebody could have thought and said oh let let me create a new middleware which is an oracle plus da plus AI together but like who knows how to build an oracle plus da plus AI together in today's market condition like figuring out how to build one is already a huge lift so the EigenLayer allows postmarket bundling of these Services into like useful things that Services can consume reducing the cost basis as well as offering a consumer segment across these different Services just like you go to AWS and you have a bunch of SAS services and you just hook into like five you know a statistic is a typical web2 app has 15 SAS Services integrated in the back end something like that could happen on IG and I think another kind of under uh explored incentive for developers to stick around with i l long term is the fact that it enables kind of New Primitives to be built and one such new primitive that I came across on your Twitter is this idea of like co-processors for instance um and you kind of talked about this idea of like intelligent defi as well which is you know not something that at least I don't think you can build without ion layer so can you kind of break us down you know what exactly is a C- processor and what did you mean by intelligent defi yeah um a a Cod processor is kind of like a layer two system but Layer Two normally you think of as like a chain so this is one of the reasons I don't like the chain terminology so imagine I'm sitting on ethereum and I'm writing a smart contract program and you know maybe I'm on Unis Swap and and you know one of the things I want to do is instead of doing passive liquidity provisioning which is I 
just put it into a pool and you know fix a certain price level at which I'm provisioning liquidity instead I want to dynamically move around the liquidity right but then the question is who is doing it are you an active participant are you a passive participant what is going on and what might happen is somebody may come up with a machine learning or AI protocol which takes the history of all the transactions and moves around the liquidity like provisioning Spectrum right based on you know these hard inputs on the blockchain imagine that I can actually get high integrity provision of this service which means when the service says that this is the right like you know if I ran this AI on this history and this is the output I get and it's absolutely correct if you had this access and what you could do is you could write you know in uh in your D5 program that hey I'm I'm a passive provider but I'm provisioning liquidity to this AI protocol and you know I'm just hands off after that point and the AI like sits and keeps adjusting your like you know liquidity range and this is a really like interesting service if it could be built but it's not possible to write this in evm or you know you know fit it into the small gas limit of ethereum but if I had an EigenLayer service that I can sit and call on on top of like ethereum the these nodes run the AI service offchain and you know sign off on the in the output of this AI service and then put it on top of ethereum back it with a certain amount of Economic Security now you have a rigid input that the protocol can take and move around this liquidity based on that this becomes really really powerful because now you know we called you know this thing smart contracts right and you know smart contracts you know as they exist today are rigid but not that smart not that intelligent right you know smart means like oh is it an agent is it like doing complex like you know adaptations you have to write it in simple easy code like you know in Unis swap has this curve called XY equals K which is like the first kind of simple programmatic thing that one might think of and what if instead like you had a complex expressivity while not giving up rigidity or correctness how can I get smart and you know accurate execution simultaneously and that's what Ian layer promises there are other Technologies like ZK proofs which could give you this but they're very very expensive today you know running a ZK proof may be as expensive as 100,000 times just running the software you know this is this is possible for simple applications but when I'm when you're talking about running complex AI this cost just like blows up out of control whereas on IG layer you don't incur such a cost so that's that's the idea of cryptoeconomic co-processors I'm going to link your Tweet in the show notes below as well for anyone who wants to check out this idea because I think this is one of the more interesting directions that dii can take that can really Revitalize um now S I think my last question for you just to wrap this up is uh let's assume that there are many parallel realities and we're able to zoom out and see all these realities and in one reality we have IG layer at its most successful and then one where everything's gone wrong something has you horrible happen something horrible has happened so what would EigenLayer look like in these two Universe like what is the most kind of successful outcome you can see and the least successful outcome for EigenLayer the most successful outcome for IG 
7. What are co-processors, and how can they be used in a blockchain context?

In the context of blockchains and EigenLayer, co-processors are decentralized services that offload complex computations from on-chain transactions to off-chain nodes. Co-processors can be used for tasks such as:

1. **Complex AI-based computation.** Running AI models to analyze data, make predictions, or provide recommendations, without executing them on the blockchain itself.
2. **Machine learning.** Training machine learning models on off-chain datasets, allowing faster and more efficient training.
3. **Data processing.** Handling large volumes of data, such as image or video processing, without burdening the blockchain network.

In a blockchain context, co-processors can be used to:

1. **Offload computation.** Perform computationally intensive tasks that would otherwise slow down on-chain transactions.
2. **Enable new use cases.** Create new services and applications by processing complex data or running AI models off-chain.
3. **Improve scalability.** Let the network handle a higher volume of transactions without sacrificing performance.

To achieve this, EigenLayer proposes a decentralized architecture in which co-processors are integrated into the network as separate services that smart contracts or other nodes can call. This allows a more efficient and scalable use of blockchain technology while enabling new and innovative applications.
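Why not simply use ZK proofs for the same guarantee? The conversation puts proving overhead at roughly 100,000x native execution, while a cryptoeconomic co-processor pays about one native execution per attesting operator. A back-of-envelope comparison, with the operator count as a made-up assumption:

```python
# Cost of one verified run of the AI model, normalized to native execution = 1.
native_cost = 1.0
zk_overhead = 100_000    # ~10^5x proving overhead cited in the conversation
operators = 100          # hypothetical size of the attesting operator set

zk_total = native_cost * zk_overhead            # prove once: 100000x native
cryptoeconomic_total = native_cost * operators  # every operator re-runs: 100x native

print(f"ZK proving:            {zk_total:>8.0f}x native")
print(f"operator re-execution: {cryptoeconomic_total:>8.0f}x native")
```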
"and one way you know I pointed to examples of the cloud early on and one of the really powerful things Amazon's cloud is called ec2 which is elastic Cloud compute right an elastic compute is the idea that I can borrow as much computer as I want and EigenLayer is elastic scaling of security you know if Amazon's ec2 like ion is es2 and es2 is basically elastic scaling of security which is you can borrow how much security you want and so why is this meaningful imagine a Bridge you know who's doing a weekly transaction volume of like you know anywhere between 10 million and 200 million okay so now how much security do I need if I have to provision security separately for the bridge I need to provision worst case 200 million so I need to have 200 million of security just for my bridge but in the EigenLayer worldview there is this common huge pool of security I can randomly pull the amount of security that I want exactly like the cloud which am across all the services creating a big compute platform from which you can pull the amount of compute that you want randomly and that's exactly what happens with I so this reduces dramatically the cost of security because you're not over provisioning for the worst case you're you're consuming security just in time how much you need this is one benefit of EigenLayer there's also all kinds of other benefits where what happens is if a Dap consumes multiple EigenLayer Services I want an AAL I want a DA I want some other thing instead of paying for security separately for each of these Services they can just pay one time because the same pole of security is backing all of these services so there is an economy of scale in Igan lay that actually incentivizes services that kind of are mutually synergistic to stay together so these are you can create almost what what I call like a uh X Market bundling like instead of BU you know somebody could have thought and said oh let let me create a new middleware which is an oracle plus da plus AI together but like who knows how to build an oracle plus da plus AI together in today's market condition like figuring out how to build one is already a huge lift so the EigenLayer allows postmarket bundling of these Services into like useful things that Services can consume reducing the cost basis as well as offering a consumer segment across these different Services just like you go to AWS and you have a bunch of SAS services and you just hook into like five you know a statistic is a typical web2 app has 15 SAS Services integrated in the back end something like that could happen on IG and I think another kind of under uh explored incentive for developers to stick around with i l long term is the fact that it enables kind of New Primitives to be built and one such new primitive that I came across on your Twitter is this idea of like co-processors for instance um and you kind of talked about this idea of like intelligent defi as well which is you know not something that at least I don't think you can build without ion layer so can you kind of break us down you know what exactly is a C- processor and what did you mean by intelligent defi yeah um a a Cod processor is kind of like a layer two system but Layer Two normally you think of as like a chain so this is one of the reasons I don't like the chain terminology so imagine I'm sitting on ethereum and I'm writing a smart contract program and you know maybe I'm on Unis Swap and and you know one of the things I want to do is instead of doing passive liquidity provisioning which is I 
just put it into a pool and you know fix a certain price level at which I'm provisioning liquidity instead I want to dynamically move around the liquidity right but then the question is who is doing it are you an active participant are you a passive participant what is going on and what might happen is somebody may come up with a machine learning or AI protocol which takes the history of all the transactions and moves around the liquidity like provisioning Spectrum right based on you know these hard inputs on the blockchain imagine that I can actually get high integrity provision of this service which means when the service says that this is the right like you know if I ran this AI on this history and this is the output I get and it's absolutely correct if you had this access and what you could do is you could write you know in uh in your D5 program that hey I'm I'm a passive provider but I'm provisioning liquidity to this AI protocol and you know I'm just hands off after that point and the AI like sits and keeps adjusting your like you know liquidity range and this is a really like interesting service if it could be built but it's not possible to write this in evm or you know you know fit it into the small gas limit of ethereum but if I had an EigenLayer service that I can sit and call on on top of like ethereum the these nodes run the AI service offchain and you know sign off on the in the output of this AI service and then put it on top of ethereum back it with a certain amount of Economic Security now you have a rigid input that the protocol can take and move around this liquidity based on that this becomes really really powerful because now you know we called you know this thing smart contracts right and you know smart contracts you know as they exist today are rigid but not that smart not that intelligent right you know smart means like oh is it an agent is it like doing complex like you know adaptations you have to write it in simple easy code like you know in Unis swap has this curve called XY equals K which is like the first kind of simple programmatic thing that one might think of and what if instead like you had a complex expressivity while not giving up rigidity or correctness how can I get smart and you know accurate execution simultaneously and that's what Ian layer promises there are other Technologies like ZK proofs which could give you this but they're very very expensive today you know running a ZK proof may be as expensive as 100,000 times just running the software you know this is this is possible for simple applications but when I'm when you're talking about running complex AI this cost just like blows up out of control whereas on IG layer you don't incur such a cost so that's that's the idea of cryptoeconomic co-processors I'm going to link your Tweet in the show notes below as well for anyone who wants to check out this idea because I think this is one of the more interesting directions that dii can take that can really Revitalize um now S I think my last question for you just to wrap this up is uh let's assume that there are many parallel realities and we're able to zoom out and see all these realities and in one reality we have IG layer at its most successful and then one where everything's gone wrong something has you horrible happen something horrible has happened so what would EigenLayer look like in these two Universe like what is the most kind of successful outcome you can see and the least successful outcome for EigenLayer the most successful outcome for IG 
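As a rough illustration of the provisioning argument above, the sketch below compares worst-case standalone security against drawing from a shared pool just in time. The dollar figures and the peak-demand assumption (typical loads plus one service spiking at a time) are invented for the example, not EigenLayer parameters.

```python
# Back-of-envelope model of "elastic scaling of security": standalone
# provisioning must cover each service's worst case, while a shared pool
# only needs to cover what is actually at stake at the same time.

services = {                        # (typical, worst-case) value secured, $M
    "bridge":   (10, 200),
    "oracle":   (5, 50),
    "da_layer": (20, 120),
}

# Standalone: every service holds enough stake for its own worst case.
standalone = sum(worst for _, worst in services.values())

# Shared pool: assume (optimistically) concurrent demand peaks at the sum of
# typical loads plus one service hitting its worst case at a time.
typical_total = sum(typical for typical, _ in services.values())
worst_single_spike = max(worst - typical for typical, worst in services.values())
pooled = typical_total + worst_single_spike

print(f"standalone provisioning: ${standalone}M of stake")   # $370M
print(f"shared pool (just in time): ${pooled}M of stake")    # $225M
```

Under these assumptions the shared pool needs $225M of stake where standalone provisioning needs $370M, and the saving grows with the number of services whose worst cases do not coincide.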
8. How does EigenLayer enable the creation of complex AI protocols that can make decisions based on blockchain data?

EigenLayer enables the creation of complex AI protocols by providing a platform that lets developers build and deploy AI models on top of blockchain data without incurring the high costs associated with ZK-proof-based approaches. Here's how:

1. **Off-chain computation**: EigenLayer allows developers to run complex AI models off-chain, using the operator nodes' computing resources, which avoids the EVM's gas limits on this kind of workload.
2. **Signature verification**: The nodes in the EigenLayer network attest to the outputs of the AI models with cryptographic signatures, so consumers can check that an output was produced and endorsed by the service.
3. **Blockchain data integration**: The AI models can take on-chain history, such as the full record of a pool's transactions, as their input, which an on-chain program could never process within Ethereum's gas limit.
4. **Economic security**: Attested outputs are backed by staked capital, so operators have something at stake when they sign off on a result.

By enabling complex AI protocols, EigenLayer opens up new possibilities for building decentralized applications (dApps) that can make decisions based on blockchain data. This could include:

* **Smart contracts**: Contracts that use AI models to optimize decision-making, for example the dynamic liquidity provisioning described above.
* **Predictive maintenance**: dApps that use AI-powered predictive analytics to identify potential failures and schedule maintenance accordingly.
* **Risk management**: Systems that leverage AI-driven risk assessment to mitigate potential losses.

The ability to support complex AI protocols has far-reaching implications for the development of decentralized applications, enabling builders to create more sophisticated, efficient, and effective solutions.
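On the consuming side, a hedged sketch of what taking such an output as a "rigid input" could look like: a strategy object that applies an AI-chosen liquidity range only when the result arrives with enough attested stake behind it. `LiquidityRange`, `AttestedResult`, and the 64-ETH threshold are hypothetical names for illustration, not a real AVS API.

```python
from dataclasses import dataclass

@dataclass
class LiquidityRange:
    lower_tick: int
    upper_tick: int

@dataclass
class AttestedResult:
    range: LiquidityRange
    attested_stake_eth: float    # stake that signed off on this output

SECURITY_THRESHOLD_ETH = 64.0    # illustrative threshold, not a protocol value

class PassiveAIProvider:
    """Deposit once, then let attested co-processor outputs steer the range."""
    def __init__(self, initial: LiquidityRange):
        self.range = initial

    def on_coprocessor_update(self, result: AttestedResult) -> None:
        if result.attested_stake_eth < SECURITY_THRESHOLD_ETH:
            return                    # not enough economic security: ignore
        self.range = result.range     # rigid input: apply it mechanically

lp = PassiveAIProvider(LiquidityRange(-100, 100))
lp.on_coprocessor_update(AttestedResult(LiquidityRange(-40, 60), 96.0))
print(lp.range)   # range moved by the AI service, backed by stake
```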
9. Can ZK proofs be used to achieve smart and accurate execution on a blockchain without high costs?

Not today. ZK (zero-knowledge) proofs can verify properties of a computation without re-running it on-chain, but generating the proof itself incurs significant computational cost. Proving typically involves complex mathematical work requiring substantial resources (CPU, memory, and energy), and these costs are often prohibitive for real-world applications, especially for complex AI or machine learning protocols like the one described in the context of EigenLayer.

In the conversation, the speaker acknowledges that ZK proofs could in principle deliver the same combination of smart and rigid execution, but notes that they are "very, very expensive today," on the order of 100,000 times the cost of just running the software. This suggests that while ZK proofs hold promise for simple applications, their current proving overhead makes them less viable than EigenLayer's cryptoeconomic approach for compute-heavy workloads.
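A quick back-of-envelope on that overhead figure: if proving costs roughly 100,000 times native execution, even modest computations become expensive to prove. Only the overhead factor comes from the conversation; the native cost figures below are made-up placeholders.

```python
# Illustrative arithmetic: native execution cost vs. ZK-proved cost at a
# ~100,000x proving overhead. All dollar figures are assumptions.
ZK_OVERHEAD = 100_000

for workload, native_cost_usd in [("simple price check", 1e-6),
                                  ("complex AI inference", 1e-1)]:
    zk_cost = native_cost_usd * ZK_OVERHEAD
    print(f"{workload}: native ${native_cost_usd:.6f} -> ZK-proved ~${zk_cost:,.2f}")
    # simple price check:  $0.000001 -> ~$0.10      (tolerable)
    # complex AI inference: $0.100000 -> ~$10,000.00 (out of control)
```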
10. In what ways could EigenLayer fail or not live up to its potential, and what would those consequences look like?

Based on the conversation, here are some possible ways EigenLayer could fail or not live up to its potential:

1. **Scalability issues**: If EigenLayer is unable to scale to meet the demands of a large user base, it may struggle to provide reliable and efficient service, leading to frustration among users.
2. **Security vulnerabilities**: As with any complex system, EigenLayer could be vulnerable to security breaches or exploits, compromising the integrity of the network and undermining trust in the platform.
3. **Competition from established players**: If EigenLayer is unable to differentiate itself from existing blockchain platforms, it may struggle to gain traction and attract users away from more established alternatives.
4. **Regulatory challenges**: Governments and regulatory bodies may impose strict regulations or even bans on EigenLayer's technology, limiting its adoption and potential.
5. **Technical challenges**: EigenLayer relies on complex technologies like cryptographic protocols and restaking mechanics. If these prove difficult to implement or maintain, EigenLayer may struggle to deliver on its promises.
6. **Economic instability**: The value of the staked assets backing EigenLayer's services could fluctuate wildly, making it difficult for users to predict costs or access services consistently.
7. **Lack of developer adoption**: If EigenLayer fails to attract a sizable community of developers who can build and maintain services on top of its platform, the ecosystem may not flourish as expected.

If any of these challenges materialize, the consequences could be significant:

1. **User dissatisfaction**: Users may experience poor performance, security issues, or other problems that erode trust in EigenLayer.
2. **Loss of momentum**: Frustration and disappointment among users and developers could stall adoption and slow progress on the platform.
3. **Reduced investment**: Investors may reassess their bets on EigenLayer, leading to reduced funding and a decrease in its overall value proposition.
4. **Regulatory scrutiny**: If EigenLayer is seen as failing or underperforming, regulatory bodies may scrutinize it more closely, potentially imposing harsher regulations or even shutting down the platform entirely.

However, it's worth noting that these consequences would vary depending on the specific circumstances and how they unfold.

And so what that means is you have this rich world where not only do end users own assets, but developers own the platform, or work on immutable platforms on which they're building their projects, so their long-term source of sustenance is really rewarded. Many people come and build new, interesting services, and a rich and vibrant economy of consumption of these services emerges: to build things like the open metaverse, to build decentralized AGI, to build things like secure, private homomorphic encryption, all kinds of interesting things running on top of this common substrate. That's what the most successful world looks like.

What is the most catastrophic thing that can happen? We think about this a lot, because we want to maximize the likelihood of the successful outcome and minimize the likelihood of the catastrophic outcome. What would a catastrophic outcome be? Something got hacked and people lost money; I think that would be a catastrophic outcome. We take a lot of precautions to ensure that's very unlikely, but blockchains are new systems, and there's always some risk that nobody understands. So that's one aspect: there could be smart contract problems in the EigenLayer ecosystem that lead to some kind of loss of funds. The other catastrophic outcome I can envision is something happening not in EigenLayer but in the layers on top, and one kind of layer on top is the financialization layer. There is this whole ecosystem of liquid restaking tokens: they take people who have staked in the EigenLayer protocol and issue a receipt token representing their position in EigenLayer. This in itself is not at all harmful; having a liquid representation of your token actually buffers the system quite heavily. If you don't have a liquid restaking position, what might happen is that somebody with a staked position wants to collateralize it for lending or borrowing, and if they get liquidated, the only way to clear the loan is to withdraw their stake from EigenLayer and from Ethereum, which puts pressure on the EigenLayer protocol as well as on Ethereum itself. What a liquid restaking token does, by issuing a fungible representation, a token that can simply change hands instead of being withdrawn, is buffer that risk out of the deeper layers of the stack. But what we see happening, which might be a risk to the financialized infrastructure on top, is people taking leverage on top of their liquid restaking positions. When people worry about leverage in EigenLayer, this is the place to focus: are people lending and borrowing against their liquid restaking tokens in an unmeasured way? Ideally there is no leverage; in practice, leverage should be kept very minimal. This is where I would urge the lending protocols, for example, to exercise caution when estimating the value of a liquid restaking token: instead of pegging it as if it will always be one-to-one with ETH, they should take a cautious approach, so that people don't take excess risk and externalize it onto other parties who may have lent their ETH into these lending protocols. That's the place where I can see financialization go wrong. This is one of the reasons we don't build these liquid restaking tokens or lending platforms ourselves, but we do want to urge all users to exercise caution at these layers to minimize this kind of outcome.

Well, Sreeram, I'm really appreciative of how candid you are about this, and I look forward to seeing you make sure that the universe we live in is the first one and not the second one. Thank you so much for coming on the show. We're going to put your socials in the show notes below as well, so for listeners at home, or if you're watching this on YouTube, just click on the dropdown and you can follow Sreeram on Twitter. Thank you so much for coming on.

Absolutely. Really enjoyed this conversation, Jason.

Hey, thanks for supporting another episode of the Blockcrunch podcast. If you've enjoyed this episode, please give us a five-star rating on Spotify, Apple Podcasts, or whatever platform you're listening on; it really helps us a lot. Or, if you prefer YouTube, you can subscribe to our channel there so you don't miss an episode. I'd love to hear from you as well, and I personally make sure to read every single comment on YouTube and every tweet directed at me, so feel free to leave a comment and let us know what project you want us to bring on or what trends you want us to talk about, or tweet at me, @MrJasonChoi, or at @theBlockcrunch, on the platform previously known as Twitter, currently known as X. Thank you so much for supporting, and I'll see you in the next episode.

Here is a comprehensive summary of the text:

The conversation revolves around the vision of a decentralized, vibrant economy in which developers own the platforms they build on and are incentivized to create new projects. The speaker discusses potential catastrophic outcomes for this ecosystem, such as hacks or smart contract problems that could result in a loss of funds. One specific concern is the financialization layered on top of EigenLayer: liquid restaking tokens take users' staked positions and issue a liquid receipt token against them. This can buffer the system against withdrawal pressure, but there is also a risk that people take excessive leverage on top of these liquid restaking positions, which could lead to instability. The speaker emphasizes the importance of caution when lending or borrowing against liquid restaking tokens, and urges protocols to use conservative estimates of token values to avoid externalizing risk onto other parties. He notes that this is one reason his team does not build these kinds of platforms itself, instead advising users to exercise caution at those layers. The conversation concludes with a discussion of minimizing the likelihood of catastrophic outcomes in the decentralized economy and encourages listeners to support projects that prioritize caution and responsible development practices.
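One way a lending protocol could implement the caution urged above is to value LRT collateral at the worse of the protocol's redemption rate and the observed market price, minus a haircut, rather than hard-coding a 1:1 peg to ETH. A minimal sketch, with hypothetical numbers and function names:

```python
def lrt_collateral_value_eth(amount: float,
                             exchange_rate: float,   # ETH redeemable per LRT
                             market_price: float,    # ETH per LRT on DEXes
                             haircut: float = 0.05) -> float:
    """Conservative LRT valuation: never price above the weaker signal."""
    conservative_rate = min(exchange_rate, market_price)
    return amount * conservative_rate * (1.0 - haircut)

# During a 3% depeg, a hard 1:1 peg would still value 100 LRT at 100 ETH;
# the cautious version responds to the market discount instead.
print(lrt_collateral_value_eth(100.0, exchange_rate=1.00, market_price=0.97))
# -> roughly 92.15 ETH rather than 100 ETH
```

The design choice is simply to leave a buffer before liquidations can externalize losses onto the parties who lent their ETH into the protocol.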
What is the name of the person being interviewed on the Blockcrunch podcast?

The interviewee is Sreeram Kannan, the founder of EigenLayer. The transcription garbles his name in places ("follow instram on Twitter" and "now S" both refer to Sreeram), but the host addresses him directly when closing out the episode.
What project or ecosystem is being primarily discussed in the interview?

The primary project being discussed is EigenLayer, a restaking protocol through which users stake into a shared pool of economic security and can receive receipt tokens representing their positions. The conversation also covers the layers built on top of EigenLayer, such as liquid restaking tokens and the lending protocols that accept them as collateral, so the discussion encompasses the broader restaking ecosystem as well as Ethereum, on which EigenLayer is built.
What is the term for the asset that represents a person's position in the EigenLayer protocol?

The asset that represents a person's staked position in the EigenLayer protocol is called a liquid restaking token (LRT), also described in the conversation as a receipt token.
In EigenLayer, what happens when someone has a staked position and wants to collateralize it to do lending or borrowing?

When someone collateralizes a staked EigenLayer position for lending or borrowing and then gets liquidated, the only way to clear the loan is to withdraw the stake from EigenLayer and then from Ethereum. This puts pressure on both the EigenLayer protocol and Ethereum, causing instability. Issuing a fungible representation of the position in the form of a liquid restaking token buffers the system and reduces this risk, because the token can change hands instead of forcing a withdrawal.
may then have lent their ETH into these lending protocols and so on so that's the place where I can see some kind of financialization go wrong this is one of the reasons we don't necessarily either build these liquid restaking tokens or you know lending platforms but we do want to urge all the users to exercise caution at these layers to minimize this kind of an outcome yeah well Sreeram I'm really appreciative of how candid you are with this and I look forward to seeing you make sure that the universe we live in is the first one and not the second one um so thank you so much for coming on the show and we're going to put your socials in the show notes below as well so for listeners at home or if you're watching this on YouTube just click on the drop down and you can follow Sreeram on Twitter so thank you so much for coming on absolutely really enjoyed this conversation Jason hey thanks for supporting another episode of The Blockcrunch podcast if you've enjoyed this episode please give us a five-star rating on Spotify and Apple Podcasts or whatever platform you're listening to this on it really helps us a lot or if you prefer YouTube you can subscribe to our channel on YouTube as well to not miss an episode I'd love to hear from you guys as well and I personally make sure to read every single comment on YouTube or tweets that are directed at me so feel free to leave a comment there let us know what project you want us to bring on or what trends you want us to talk about or tweet at me at MrJasonChoi or at theBlockcrunch on the platform previously known as Twitter currently known as X and thank you so much for supporting and I'll see you in the next episode","What type of representation does the liquid restaking token provide compared to withdrawing from the EigenLayer protocol directly? The liquid restaking token provides a fungible representation, a token that can easily be transferred from one person to another, as opposed to withdrawing directly from the EigenLayer protocol. Withdrawing directly is not just a simple transfer of funds: it involves unwinding your stake from the EigenLayer protocol and then from Ethereum, which can put pressure on both the EigenLayer protocol and the Ethereum network itself. In contrast, the liquid restaking token is like a receipt or an IOU that represents your position in EigenLayer. This allows value to be traded and transferred without actually moving funds out of the protocol. It's like having a liquid representation of your assets that can change hands more efficiently, thereby reducing pressure on the underlying systems. So, in short, the liquid restaking token provides a fungible, easy-to-transfer representation, whereas withdrawing directly from EigenLayer involves moving actual assets and can put pressure on the network."
"and so what the means is you have this Rich world of not only like you know we talk about end users owning assets but also developers owning the platform or like working on immutable platforms on which they're building their projects so their long-term source of sustenance is actually like really rewarded so many people coming and building new interesting Services you know a rich and vibrant economy of consumption of these services to build things like the open metav ver to build our decentralized AGI to build things like you know secure uh private homomorphic encryption all kinds of interesting things running on top of this common substrate so that's what the most successful interesting world for looks like what is the most catastrophic thing that can happen uh we think about this a lot you know because we want to make sure that we maximize the likelihood of the successful outcome and minimize is the likelihood of the catastrophic outcome what would a catastrophic outcome be something got hacked right and you know people lost money I think that would be a catastrophic outcome uh we we take a lot of precaution to try to ensure that that's very unlikely but in blockchains you know you we don't know these are new and new systems so there's always some risk that you know nobody understands so that's that's one one aspect that could be some smart contract problems in the uh uh in the EigenLayer ecosystem that leads to you know some kind of loss of funds okay the other catastrophic outcome I can Envision is something happening not in EigenLayer but some like you know layers on top and one kind of a layer on top is this financializatEigenLayer where something like you know there is this whole ecosystem of liquid reaking tokens and what they do is they take people who have staged uh you know in the IG layer protocol and issue like a receipt token which is representing their position in EigenLayer this itself I think is not at all harmful like having a liquid representation of your token actually buffers the system quite heavily because when somebody has if you don't have a liquid reaking position what might happen is if somebody uh has a staged position and wants to go and collateralize it to like do Ling or borrowing or whatever and they get liquidated then the only way to like clear off their loan is to go and withdraw from Ian lay and from ethereum their stake and this leads to like you know uh pressures into the protocol into the IG protocol as well as into ethereum itself and what a liquid reaking token does is by issuing a fungible representation which is a token that can just be that change hands instead of you know actually going and withdrawing it actually buffers the risk out of the deeper layers of the stack but what we see happening which might be a risk to the financialized infrastructure on top is people taking leverage on top of their liquid reaking positions you know when people are worried about leverage in igen lay this is the place we should focus on is are people lending and borrowing against their liquid reaking tokens in a kind of unmeasured you know way you know ideally there is no leverage but at you know in practice leverage should be be kept very minimal and this is where I I would urge The Lending protocols for example to urge caution when estimating the value of a liquid reaking token instead of like pegging it at it will always be one as to one to eat they should take cautious approach so that people don't take excess risk and externalize it onto other parties who 
may then have lent their ETH into these lending protocols and so on so that's the place where I can see some kind of financialization go wrong this is one of the reasons we don't necessarily either build these liquid restaking tokens or you know lending platforms but we do want to urge all the users to exercise caution at these layers to minimize this kind of an outcome yeah well Sreeram I'm really appreciative of how candid you are with this and I look forward to seeing you make sure that the universe we live in is the first one and not the second one um so thank you so much for coming on the show and we're going to put your socials in the show notes below as well so for listeners at home or if you're watching this on YouTube just click on the drop down and you can follow Sreeram on Twitter so thank you so much for coming on absolutely really enjoyed this conversation Jason hey thanks for supporting another episode of The Blockcrunch podcast if you've enjoyed this episode please give us a five-star rating on Spotify and Apple Podcasts or whatever platform you're listening to this on it really helps us a lot or if you prefer YouTube you can subscribe to our channel on YouTube as well to not miss an episode I'd love to hear from you guys as well and I personally make sure to read every single comment on YouTube or tweets that are directed at me so feel free to leave a comment there let us know what project you want us to bring on or what trends you want us to talk about or tweet at me at MrJasonChoi or at theBlockcrunch on the platform previously known as Twitter currently known as X and thank you so much for supporting and I'll see you in the next episode","What risk is being taken by people who take leverage on top of their liquid restaking positions in financialized infrastructure? According to the text, people who take leverage on top of their liquid restaking positions are taking risk that is potentially unmeasured or excessive. The issue is not with leverage itself, but with how it is measured and managed. In particular, the speaker suggests that some lending protocols may estimate the value of a liquid restaking token too liberally, pegging it at 1:1 with the underlying staked assets in EigenLayer (the protocol from which these tokens originate). This could lead people to borrow against their liquid restaking positions without fully understanding the risks involved. The risk is that people who have lent into these lending protocols may not be aware of the volatility or other risks associated with liquid restaking tokens. If the value of the token were to drop significantly, it could lead to a cascade of losses and potentially destabilize the broader financialized infrastructure built on top of EigenLayer."
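As a rough illustration of the "unmeasured leverage" concern in the answer above, here is a minimal sketch of the loan-to-value (LTV) check a cautious lending protocol might apply before letting someone borrow against a liquid restaking token (LRT). The names and the 60% cap are assumptions made for illustration; this is not EigenLayer's or any real lending protocol's code.

```python
# Hypothetical sketch, not any real protocol's code: a conservative
# loan-to-value (LTV) check for borrowing against a liquid restaking token.
MAX_LTV = 0.60  # assumed cap; real protocols pick their own risk parameter

def can_borrow(collateral_lrt: float, lrt_price_eth: float, debt_eth: float) -> bool:
    """Allow the loan only if the debt stays within MAX_LTV of collateral value."""
    collateral_value_eth = collateral_lrt * lrt_price_eth
    return debt_eth <= MAX_LTV * collateral_value_eth

# 10 LRT trading at 0.97 ETH each supports at most 0.60 * 9.7 = 5.82 ETH of debt.
print(can_borrow(10.0, 0.97, 5.0))  # True: within the cap
print(can_borrow(10.0, 0.97, 9.7))  # False: this is the "pegged at 1:1" mistake
```

Keeping the cap well below 1.0 is what keeps leverage "very minimal": a drop in the LRT's price erodes the borrower's own margin before it touches the ETH that lenders have deposited.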
"and so what the means is you have this Rich world of not only like you know we talk about end users owning assets but also developers owning the platform or like working on immutable platforms on which they're building their projects so their long-term source of sustenance is actually like really rewarded so many people coming and building new interesting Services you know a rich and vibrant economy of consumption of these services to build things like the open metav ver to build our decentralized AGI to build things like you know secure uh private homomorphic encryption all kinds of interesting things running on top of this common substrate so that's what the most successful interesting world for looks like what is the most catastrophic thing that can happen uh we think about this a lot you know because we want to make sure that we maximize the likelihood of the successful outcome and minimize is the likelihood of the catastrophic outcome what would a catastrophic outcome be something got hacked right and you know people lost money I think that would be a catastrophic outcome uh we we take a lot of precaution to try to ensure that that's very unlikely but in blockchains you know you we don't know these are new and new systems so there's always some risk that you know nobody understands so that's that's one one aspect that could be some smart contract problems in the uh uh in the EigenLayer ecosystem that leads to you know some kind of loss of funds okay the other catastrophic outcome I can Envision is something happening not in EigenLayer but some like you know layers on top and one kind of a layer on top is this financializatEigenLayer where something like you know there is this whole ecosystem of liquid reaking tokens and what they do is they take people who have staged uh you know in the IG layer protocol and issue like a receipt token which is representing their position in EigenLayer this itself I think is not at all harmful like having a liquid representation of your token actually buffers the system quite heavily because when somebody has if you don't have a liquid reaking position what might happen is if somebody uh has a staged position and wants to go and collateralize it to like do Ling or borrowing or whatever and they get liquidated then the only way to like clear off their loan is to go and withdraw from Ian lay and from ethereum their stake and this leads to like you know uh pressures into the protocol into the IG protocol as well as into ethereum itself and what a liquid reaking token does is by issuing a fungible representation which is a token that can just be that change hands instead of you know actually going and withdrawing it actually buffers the risk out of the deeper layers of the stack but what we see happening which might be a risk to the financialized infrastructure on top is people taking leverage on top of their liquid reaking positions you know when people are worried about leverage in igen lay this is the place we should focus on is are people lending and borrowing against their liquid reaking tokens in a kind of unmeasured you know way you know ideally there is no leverage but at you know in practice leverage should be be kept very minimal and this is where I I would urge The Lending protocols for example to urge caution when estimating the value of a liquid reaking token instead of like pegging it at it will always be one as to one to eat they should take cautious approach so that people don't take excess risk and externalize it onto other parties who 
may then have lent their ETH into these lending protocols and so on so that's the place where I can see some kind of financialization go wrong this is one of the reasons we don't necessarily either build these liquid restaking tokens or you know lending platforms but we do want to urge all the users to exercise caution at these layers to minimize this kind of an outcome yeah well Sreeram I'm really appreciative of how candid you are with this and I look forward to seeing you make sure that the universe we live in is the first one and not the second one um so thank you so much for coming on the show and we're going to put your socials in the show notes below as well so for listeners at home or if you're watching this on YouTube just click on the drop down and you can follow Sreeram on Twitter so thank you so much for coming on absolutely really enjoyed this conversation Jason hey thanks for supporting another episode of The Blockcrunch podcast if you've enjoyed this episode please give us a five-star rating on Spotify and Apple Podcasts or whatever platform you're listening to this on it really helps us a lot or if you prefer YouTube you can subscribe to our channel on YouTube as well to not miss an episode I'd love to hear from you guys as well and I personally make sure to read every single comment on YouTube or tweets that are directed at me so feel free to leave a comment there let us know what project you want us to bring on or what trends you want us to talk about or tweet at me at MrJasonChoi or at theBlockcrunch on the platform previously known as Twitter currently known as X and thank you so much for supporting and I'll see you in the next episode","According to the speaker, why should lending protocols be cautious when estimating the value of a liquid restaking token? According to the speaker, lending protocols should be cautious when estimating the value of a liquid restaking token because people may otherwise take excess risk and externalize it onto other parties who have lent their assets into these lending protocols. In particular, the speaker is concerned that if people are allowed to leverage their liquid restaking positions without adequate caution, they may take more risk than intended, lose value in their liquid restaking position, be liquidated on loans collateralized by these tokens, and put pressure on the underlying protocol (in this case, EigenLayer) as well as on Ethereum itself. To mitigate this risk, lending protocols should take a cautious approach to estimating the value of liquid restaking tokens, rather than simply pegging them at 1:1 with the underlying asset."
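The "cautious approach" to valuation recommended above can be sketched as pricing LRT collateral at the lower of its observed market price and the 1:1 redemption assumption, then applying a haircut, rather than hardcoding a 1 LRT = 1 ETH peg. The function name and the 10% haircut below are assumptions for illustration only, not any protocol's actual pricing logic.

```python
# Hypothetical sketch of conservative LRT collateral valuation:
# never value the token above its 1:1 peg, and apply a safety haircut.
HAIRCUT = 0.10  # assumed 10% margin; a tunable risk parameter

def collateral_value_eth(lrt_amount: float, market_price_eth: float) -> float:
    """Conservative ETH value of LRT collateral for risk calculations."""
    price = min(market_price_eth, 1.0)  # cap at the 1:1 redemption assumption
    return lrt_amount * price * (1.0 - HAIRCUT)

print(collateral_value_eth(100.0, 1.00))  # 90.0: haircut applies even at the peg
print(collateral_value_eth(100.0, 0.90))  # 81.0: a depeg is reflected immediately
```

Valuing collateral this way means a depeg in the secondary market immediately shrinks borrowing capacity, so the excess risk stays with the leveraged position-holder instead of being externalized onto the lenders' ETH.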