
The Delphi Podcast host and Delphi Ventures GP Tom Shaughnessy hosts an all-star crew from the leading Ethereum Layer 2 rollup scaling projects for an in-depth debate.

Episode Highlights

We would like to thank Cosmos for making this podcast possible through their sponsorship!
Cosmos is on a mission: link one million blockchains. Name-brand projects like Terra, Band, Kava, and Secret use Cosmos and the Cosmos Hub to connect to every other chain in our network. The Cosmos Hub is the port city that delivers Inter-Blockchain Communication today. Learn more at cosmos.network.

Every Delphi Podcast is dropped first as an audio interview for Delphi Digital subscribers. Our members also have access to full interview transcripts. Join today to get our interviews first.

Show Notes:

(2:00) – (First Question) Eli’s Intro.

(2:26) – Alex’s Intro.

(2:43) – Mark’s Intro.

(2:57) – Liam’s Intro.

(3:17) – Ed’s Intro.

(3:40) – Ben’s Intro.

(4:30) – Optimistic roll-ups.

(5:42) – Key differences between Arbitrum and Optimism.

(8:13) – What is StarkWare doing compared to Optimistic roll-ups?

(11:20) – Optimistic vs. ZK roll-ups.

(20:19) – Differences between fraud proofs and validity proofs.

(21:18) – Mark’s thoughts on fraud proofs.

(28:34) – Thoughts on attacks on Ethereum.

(29:07) – What would happen if the network splits and two parties propose progress in different directions?

(40:02) – Optimism and Arbitrum’s plans to reach high scale / signatures / price feeds.

(57:55) – Optimism progressing into production on Ethereum.

(01:01:00) – Thoughts on bootstrapping usage and adoption – Starkware/zkSync/Optimism.

(01:11:57) – Is the roll-up ecosystem going to be sort of winner-take-all?

(01:20:10) – Where to find Starkware.

(01:20:55) – Where to find zkSync.

(01:21:57) – Where to find Optimism.

(01:22:43) – Where to find Arbitrum.

(01:23:25) – Where to find Ben Simon.



Disclosures: This podcast is strictly informational and educational and is not investment advice or a solicitation to buy or sell any tokens or securities or to make any financial decisions. Do not trade or invest in any project, tokens, or securities based upon this podcast episode. The host may personally own tokens that are mentioned on the podcast. Let’s Talk Bitcoin is a distribution partner for the Chain Reaction Podcast, and our current show features paid sponsorships which may be featured at the start, middle, and/or the end of the episode. These sponsorships are for informational purposes only and are not a solicitation to use any product, service or token. Delphi’s transparency page can be viewed here.

 Music Attribution:

  • Cosmos by From The Dust | https://soundcloud.com/ftdmusic
  • Music promoted by https://www.free-stock-music.com
  • Creative Commons Attribution 3.0 Unported License
  • https://creativecommons.org/licenses/by/3.0/deed.en_US


Interview Transcript 

Tom (00:00:00):

Hey, everyone. Welcome back to Delphi’s Clubhouse. Today, I’m thrilled to have the powerhouses of L2 roll-up scaling projects. I’m going to let everybody introduce themselves. We’ll start from the top left. But just a quick overview. We have Ed from Offchain Labs. We have Eli from StarkWare. Alex from Matter Labs, which is running zkSync. We have Liam and Mark from Optimism. We also have my friend, Ben Simon from the fund Mechanism. Eli, let’s start with yourself. Why don’t you give a quick 15 – 30 second overview on just who you are and then we’ll go to Alex.

Eli (Starkware) (00:00:36):

Yeah, first of all, this is my very first Clubhouse. I’m very excited. My name is Eli Ben-Sasson, I’m co-founder and president at StarkWare. StarkWare is located in Israel. We are a 45-person team doing L2s using a technology called ZK-STARK. Yeah, that’s about it.

Tom (00:01:01):

Awesome, Alex?

Alex (zkSync) (00:01:04):

Hi everyone. I am co-founder of Matter Labs and one of the co-creators of zkSync. zkSync is a protocol driven by zero-knowledge proofs to bring scalability to Ethereum in a trustless way.

Tom (00:01:18):

That’s awesome. Mark, what about yourself?

Mark (00:01:21):

Hey, my name is Mark (Optimism) and I’m a developer at Optimism. In the past, I’ve contributed to Handshake and now I’m working on trying to scale Ethereum.

Tom (00:01:32):

That’s awesome. Liam, let’s go with you next and then we’ll go back up to Ben. You’re in Optimism too?

Liam (Optimism) (00:01:38):

Cool. Yes, I’m Liam. I’ve been working on Ethereum for a handful of years now. I’m co-founder of ETHGlobal, and I’ve also been working on a protocol called state channels, which is an L2 scaling protocol. So yeah, generally I’m trying to get Ethereum to scale and building the community.

Tom (00:01:57):

Awesome. Ed, what about you?

Ed (Arbitrum) (00:02:00):

I’m Ed Felten. I’m co-founder and chief scientist at Offchain Labs. We make Arbitrum, which is an Optimistic roll-up-based scaling solution for Ethereum. I spent a big part of my career as an academic and did a stint as a White House staffer as well.

Tom (00:02:20):

Damn, that’s awesome. Ben, what about you?

Ben (Mechanism) (00:02:23):

Hi everyone. My name is Ben Simon. I work at Mechanism Capital, which is a DeFi-focused crypto investment fund. And I’ve been really interested in roll-ups specifically over the past few months. So, I’m looking forward to this conversation.

Tom (00:02:35):

Awesome. So guys, we have a lot of projects to cover in a short amount of time. So I’ll try and keep the questions targeted. I’m sure I’m going to mess a couple of things up given that you guys are the experts. Feel free to correct me wherever I’m wrong. Let’s start with project overviews. We could keep this somewhat quick. We have Optimistic roll-ups, ZK roll-ups and Validium. We should start with the Optimistic roll-up side with Optimism and Arbitrum. Why don’t we start with Liam and Mark from Optimism. Please give a quick overview of what Optimistic roll-ups are and your approach, and then we’ll go to Arbitrum.

Liam (Optimism) (00:03:16):

Cool. Sure. Thanks. So, basically the idea of Optimistic roll-ups is that it helps to, all roll-ups do this actually, but it’s kind of decoupling the execution from the consensus, right? So, in an Optimistic roll-up you make the transactions available and then you do all the execution off chain. You then make available the state roots, the execution, and the results of the execution. Anybody can pull in the transaction data that L1 had consensus over, execute those transactions, and basically compare the state roots they got by executing the transactions off-chain to the state roots that were posted on chain. So it’s just a way to not need to do the execution on Ethereum L1.
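
(For readers who want to see Liam’s description in code: below is a minimal, illustrative Python sketch of an off-chain verifier, with toy state and transaction formats invented for this example. It is not any project’s actual code.)

```python
import hashlib
import json

def state_root(state: dict) -> str:
    """Toy state commitment: a hash of the canonicalized state.
    Real roll-ups commit to a Merkle root instead."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def execute(state: dict, tx: dict) -> dict:
    """Toy transition function: a simple balance transfer."""
    new = dict(state)
    new[tx["from"]] -= tx["amount"]
    new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
    return new

def verify_batch(prev_state: dict, txs: list, posted_root: str) -> bool:
    """Re-execute the transactions L1 had consensus over and compare the
    locally computed state root to the root posted on chain."""
    state = prev_state
    for tx in txs:
        state = execute(state, tx)
    return state_root(state) == posted_root

genesis = {"alice": 100, "bob": 0}
txs = [{"from": "alice", "to": "bob", "amount": 30}]
honest_root = state_root(execute(genesis, txs[0]))
assert verify_batch(genesis, txs, honest_root)     # a correct posting checks out
assert not verify_batch(genesis, txs, "bad root")  # a bad root is detected
```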

Tom (00:04:27):

Got it. How does that differ from Arbitrum? Because you guys are both doing Optimistic roll-ups. What are the key differences between Arbitrum and Optimism? That’s a question for both sides here. But let’s throw it over to Ed to give his take.

Ed (Arbitrum) (00:04:47):

Sure. The main difference in the technology has to do with how fraud proofs work. That is, if somebody proposes a roll-up block and someone else believes it’s not correct, how do you resolve that disagreement? Optimism uses an approach where you re-execute one transaction that there’s a dispute about. Arbitrum uses a multi-round interactive protocol for resolving the dispute, in which it subdivides the dispute until it’s a very small dispute and then resolves it on chain. I think most of the differences between the two systems follow from that difference in basic approach. This difference has implications for costs and other things, which we can dig into later.
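
(The multi-round subdivision Ed describes can be sketched in a few lines. This is a toy model with an invented trace format, not Arbitrum’s actual protocol: each party commits to a sequence of intermediate state hashes, the disagreement window is halved each round, and only the single isolated step would be re-executed on chain.)

```python
def bisect_dispute(trace_a: list, trace_b: list) -> int:
    """Multi-round dispute over two claimed execution traces that agree at
    the start and disagree at the end: binary-search the first disputed step."""
    assert len(trace_a) == len(trace_b) and trace_a[0] == trace_b[0]
    lo, hi = 0, len(trace_a) - 1       # agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid                   # disagreement lies in the upper half
        else:
            hi = mid                   # disagreement lies in the lower half
    return lo                          # only step lo -> lo+1 needs on-chain replay

# Traces diverge at step 5; O(log n) rounds isolate the one step to re-execute.
a = [0, 1, 2, 3, 4, 5, 6, 7]
b = [0, 1, 2, 3, 4, 9, 9, 9]
assert bisect_dispute(a, b) == 4
```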

Tom (00:05:36):

That’s awesome. And I guess just throwing that back to Optimism, what do you guys see as the large differences? Do you agree with that?

Liam (Optimism) (00:05:44):

Yeah, I think Ed brought up a great point with the differences in fraud proofs. I also think that there are some differences in the VM. Optimism is really focused on staying as close to Ethereum as possible. We really want to be like an extension of Ethereum. We don’t want to develop an entirely different tech stack. I think we also have some different philosophies on MEV and fair sequencing.

Ed (Arbitrum) (00:06:25):

Can I jump in here?

Tom (00:06:27):

Yeah, jump in.

Ed (Arbitrum) (00:06:29):

In terms of Ethereum compatibility, we are super focused on compatibility and we have worked really hard to get as compatible as we can. Our architectural approach gives us a higher degree of compatibility with Ethereum. That’s been one of our north stars in our system. If you compare Ethereum and Arbitrum side by side, we come out pretty well, in terms of incompatibilities. So, I would just encourage people to look at the chapter and verse of that and judge for yourself.

Tom (00:07:06):

Yeah, for sure. Definitely going to get into compatibility a bit. I want to also open this up to Alex (zkSync) and the StarkWare side. Why don’t we switch over to Eli? Why don’t you just give a brief overview of what you guys are doing compared to Optimistic roll-ups and then we’ll go to Alex.

Eli (Starkware) (00:07:30):

Okay. So we represent the cryptographic proof side of things, or the validity proofs side, and that approach to scaling. So, I mean, both us and Optimistic roll-ups want to solve the problems of scale and for that, you need to move some things from L1 to L2. Now the question is, if you want to base your security on L1, how are you going to do that when most of the action is taking place on L2?

Eli (Starkware) (00:07:58):

And I guess the biggest difference is between Optimistic roll-ups, which are a form of fraud proofs, and the things called ZK roll-ups. Or rather, the things that I see called ZK roll-ups, because actually zero knowledge is a very mathematically well-defined object that most of the projects don’t really deliver. So let’s just call it the validity-proofs-based approach. The biggest difference is that in the validity-based approach, you may never put out a state update that isn’t valid. It’s impossible cryptographically to do so.

Eli (Starkware) (00:08:34):

And what really happens is that there’s more burden placed on the L2 nodes because they need to generate these proofs of validity, which are cryptographically and computationally expensive. But once they have done that, these proofs are extremely scalable and can be verified in logarithmic time. So you can have this exponential compression and verification of correctness. That’s basically the approach of validity proofs, which are also known as ZK roll-ups.
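
(A full STARK is far beyond a short example, but the “heavy prover, logarithmic verifier” asymmetry Eli points to already shows up in a plain Merkle inclusion proof: committing costs work linear in the data, while checking a claim against the commitment needs only log(n) hashes. A stdlib-only analogy, not StarkWare’s construction:)

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Commitment: linear work in the number of leaves (the 'prover' side)."""
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])                # pad odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the log(n) sibling hashes needed to check one leaf."""
    level, proof = [h(l) for l in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """Verification: logarithmic work, no matter how large the data set."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

leaves = [f"tx{i}".encode() for i in range(8)]
assert verify(merkle_root(leaves), b"tx5", merkle_proof(leaves, 5))
```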

Eli (Starkware) (00:09:09):

So that’s Optimistic versus ZK roll-ups. Now within the ZK roll-ups, I’ll describe what we’ve been doing, and Alex (zkSync) is going to share what they’re up to. We worked closely with a number of teams in offering them scalability solutions. Right now we have already deployed three production systems. The first is DeversiFi with spot trading, which we have been servicing since June. dYdX with margin trading for just over a week, and roughly at the same time we started servicing Immutable X with NFT minting and trading. It’s also been going pretty smoothly for over a week. These three different systems are all based on validity proofs, and two of them are in something called Validium. The other one is a roll-up. We’re putting a lot of our effort into opening up these capabilities for anyone to write their own smart contracts and basically get an Ethereum-like L2 based on validity proofs.

Tom (00:10:27):

Excellent overview. And I have a couple of questions, but I want to get Alex (zkSync) in first.

Alex (zkSync) (00:10:33):

Sure. So I’ll add something to the question of optimistic versus ZK roll-ups. I just want to give a user perspective. Using validity proofs, you get a couple of advantages. An obvious one is that your finality is reached much faster on Ethereum. The moment you provide the proof, it’s final and funds can be used. Let’s say minutes versus a period of one or two weeks for Optimistic roll-ups. And the second biggest interesting thing is that Validium offers a lot of advantages on its own. I think it gives subjectively better security. But there is a similar thing in an Optimistic world, called plasma, where you put the data completely off chain.

Alex (zkSync) (00:11:33):

But what you cannot implement with fraud proofs is a construction called volition, which StarkWare pioneered. Volition is a combination of off-chain and on-chain data availability. This is what we implement with an architecture called zkPorter. It’s very interesting because it gives you seamless interoperability between Validium and ZK roll-ups, with different properties the users can choose. Specifically, what we’re doing with zkSync is that we’re currently building a ZK EVM solution. This will be launched as zkSync 2.0. zkSync 1.0 has been live since last summer for payments. We want to extend it with generic programmability and launch 2.0 this summer.
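
(To make “volition” concrete: each account chooses whether its transaction data goes on chain, roll-up style, or stays off chain, Validium style, while the same validity proof covers both. A hypothetical Python sketch with invented names, not zkSync’s or StarkWare’s actual design:)

```python
from dataclasses import dataclass
from enum import Enum

class DataAvailability(Enum):
    ON_CHAIN = "rollup"      # tx data posted to L1 as calldata
    OFF_CHAIN = "validium"   # tx data held off chain by guardians

@dataclass
class Account:
    address: str
    mode: DataAvailability   # each account picks its own trade-off

def calldata_to_post(txs: list, accounts: dict) -> list:
    """Only transactions from ON_CHAIN accounts contribute L1 calldata;
    every transaction is still covered by the same validity proof."""
    return [tx for tx in txs
            if accounts[tx["from"]].mode is DataAvailability.ON_CHAIN]

accounts = {
    "alice": Account("alice", DataAvailability.ON_CHAIN),
    "bob": Account("bob", DataAvailability.OFF_CHAIN),
}
txs = [{"from": "alice", "amount": 1}, {"from": "bob", "amount": 2}]
assert len(calldata_to_post(txs, accounts)) == 1   # only alice's tx hits L1
```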

Tom (00:12:37):

That’s awesome. Thank you, Alex. And there’s a lot to go through. Ben, I want to shoot it over to you because I know you probably have a great way to summarize this for some of our non-technical listeners.

Ben (Mechanism) (00:12:47):

Yeah. First of all, I think getting into the weeds is really awesome. And it’s exciting to hear some of the differences and technical trade-offs between Optimistic and ZK roll-ups, and then between Optimistic roll-ups and types of ZK roll-ups. But I want to add a little bit of context and situate this discussion in the broader context of scaling Ethereum. I think ZK roll-ups and Optimistic roll-ups both share some important properties. I think we’re going to get into the nitty-gritty of how they’re different, but maybe it’s just as important to point out at the outset how they’re similar.

Ben (Mechanism) (00:13:25):

So both ZK and Optimistic roll-ups, and roll-ups in general, are different from other potential scaling solutions like side chains. This is because roll-ups separate the computation from the data themselves and offer some sort of finality.

Ben (Mechanism) (00:13:47):

So what that means in non-technical terms is that everything that happens on a roll-up chain itself isn’t final. It’s not final until it’s included back on the Ethereum base chain. What that means is the roll-ups are inherently limited. They don’t have the ability to actually force acceptance of any transactions that are processed on roll-ups. That limitation is actually a strength because it means that anything that happens on a roll-up has a review process. In the case of ZK roll-ups, they made this more instantaneous, with cryptographic validation that happens on Ethereum. And so there’s this ability for the Ethereum base chain to overrule, in a sort of very crass metaphor, any transactions that are processed on roll-ups.

Ben (Mechanism) (00:14:33):

And that’s really important because that’s not the case for other scaling solutions like side chains. This is, I think, why people are very excited about roll-ups: because they’re set to inherit the security properties of Ethereum consensus. They don’t require their own massive consensus network. They tap into Ethereum security. I’m curious if any technical experts would disagree with that framing, and, if so, that would help my understanding. But that’s how I think about roll-ups in general. I think a lot of the disagreements in the technical differences are important to get into. But that’s how I would look at things.

Ed (Arbitrum) (00:15:21):

I guess I would agree with most of that. I think there are some things to clarify, though, around the issues of finality on Optimistic roll-ups. Because it’s true of Arbitrum, and I’ll let the Optimism folks speak for themselves on this, but you get finality as quickly as in the other systems and as quickly as on Ethereum. The reason I say that is that what you care about as a user, in terms of finality, is at what point in time the transaction is fully determined, so that you know, and everyone who is paying attention knows, that your transaction will happen and exactly what the result will be. In Arbitrum that happens as quickly as on any of these other systems and as quickly as it does on Ethereum. So the only entity that doesn’t find out about your transaction at L1 finality speed or faster is the Ethereum chain itself, right? The whole point of a roll-up is to make it so that the Ethereum chain doesn’t have to follow every detail of every transaction. But as soon as your submission of your transaction for execution is recorded on the L1 chain, then your transaction has finality. Its result is guaranteed. And any one party in the world, including you, can force it to be correctly accepted by the system. So you do get fast finality on Optimistic roll-ups, and literally the only party that doesn’t know the result of your transaction is the Ethereum chain. It takes a little bit longer for the L1 chain to fully confirm the already-final result of your transaction.

Alex (zkSync) (00:17:17):

So I have a question for this, do you see it as realistic that the users will be able to learn about their finality? Do you think that users of Arbitrum will be able to run the full nodes of Arbitrum? Because that definitely will be a prerequisite for them to learn whether their transactions are final or not.

Ed (Arbitrum) (00:17:37):

So they either need to run a full node themselves, which is feasible because you can run a full node on an ordinary computer. Or they need to do what most Ethereum users do, which is go through a full node that they trust. We view this like Ethereum. We think a lot of users are going to use a full node that’s run by somebody else. Anyone can run a full node; it’s very reasonable to run it on any machine you have. So we’re no better or worse than Ethereum. I would note that if you’re using some other system, like a ZK system, that’s built on top of Ethereum, you would need to run an Ethereum full node yourself to have the same level of confidence. This is not a fundamentally different situation from what you have in ZK or other systems.

Alex (zkSync) (00:18:28):

Follow-up question: if you say that the full node will be runnable on average consumer hardware, what scalability boost do we get then? Because if the hardware requirements are the same as for an Ethereum node, then I would expect the same limitations.

Ed (Arbitrum) (00:18:50):

I would say a couple of things to that. Number one is that Ethereum is designed so that the nodes spend only a very small fraction of their time actually executing transactions. Whereas Arbitrum nodes can be executing transactions all the time. And so you get a very substantial speed up from that right away. Then there are a bunch of things that you can do in terms of performance, not all of which we are talking about yet in terms of our roadmap. But there’s a lot you can do to accelerate the execution beyond what Ethereum can do by taking advantage of the engineering freedom you have in designing a system yourself.

Mark (Optimism) (00:19:37):

So, on that comment that you made about Ethereum using less compute than it actually can: an Ethereum node will target about 1/10th of its max capacity to prevent a denial-of-service attack, so that a transaction with elevated gas fees won’t be able to crash the entire network. So having that buffer is an important feature in a node.

Tom (00:20:18):

Makes sense. And guys, just to zoom out a bit, can somebody give a good overview of just the differences between fraud proofs and validity proofs? I know we’re sort of backing up a bit, but there is quite a difference between the two setups.

Eli (Starkware) (00:20:34):

I can give it a try. The biggest difference is that in a fraud proof, or an Optimistic roll-up, the system may be temporarily in an invalid position. The way this system works is that anyone can publish any suggested update to the state of the system, even one that may be invalid. What happens if someone does that is that someone will raise a flag and there will be arbitration. And this is the way it will be resolved. That’s how fraud-proof systems like Arbitrum and Optimistic roll-ups work.

Eli (Starkware) (00:21:26):

In validity proofs, which is what zkSync and StarkWare are doing (there are also others, like Loopring and Hermez), the system can never be in a position that is invalid. In order to move the state of the system to some new state, a proof must be provided. The cryptography prevents anyone, even a malicious actor, from generating proofs for invalid transitions. In validity proofs, someone called the prover needs to run a very substantial computation to generate this proof. Then everyone else can sit back, just verify something very quickly, and rest assured that everything’s okay. In Optimistic roll-ups, the computational load is passed around among everyone. It requires a lot of participants to be watching and viewing the system, and they all provide pretty much the same amount of work. So there isn’t this skewedness between one node working much harder than the others. This is a trade-off. You could view it either as a benefit or as a disadvantage. It may actually be helpful that one party works a lot harder so that others can work less. Or you could say that, no, you want your system to be such that everyone is working equally hard.

Ben (Mechanism) (00:23:10):

So it comes down partly to the question of whether you prefer an approach where you have a bunch of people who are running full nodes. If people are running full nodes, they’re doing the computation themselves. And they know what is happening, and they’re following along themselves. And what Eli’s describing, I think, is an approach which is a little more centralized, where you have a single entity that is doing more heavy lifting and is actually executing all of the transactions while other people are relying on these validity proofs. It comes down to the question of whether you want to be dealing with proving every time someone proposes an update to the state, or whether you want to do the work of proving only in cases where there’s actually a dispute among the parties.

Ben (Mechanism) (00:24:15):

So ZK-type systems proactively do a proof every time, whereas Optimistic systems only bother with proving and dealing with disputes in cases where people actually disagree about what the result might be. Hence the name Optimistic. It means that in the normal case, what happens is someone proposes a correct update and other people look at it and say, “Yeah, that’s correct,” and on you go. There’s no on-chain proving or proof checking or any kind of dispute mechanism in the case where people are behaving according to their incentives.

Alex (zkSync) (00:24:57):

Just a short observation that, in a ZK system based on validity proofs, you have much more decentralization because every transaction is verified by you and every node, and not just by a small subset of nodes.

Ben (Mechanism) (00:25:19):

I wanted to briefly jump back to the fraud proofs that I was discussing. An easy way to think about it is that because there is the prover, there don’t need to be these referees that are monitoring the state of the roll-up, because the prover is doing all the heavy lifting. In the case of fraud proofs, and Mark, Liam, correct me if I’m wrong, it’s more like, at the outset at least, you have a sequencer.

Ben (Mechanism) (00:25:48):

And I know that, Mark, you mentioned Optimism and Arbitrum are probably going to be taking different philosophical routes down the line when it comes to what that sequencer’s going to look like. You potentially have a sequencer that’s in charge of ordering the transactions and processing them on the Optimistic roll-up. Then you have all these other validators, these users, that are refereeing to check that this batch of transactions that was just processed by the sequencer: are those valid? And those referees, these other validators, can potentially throw a flag, to stick with the metaphor, if they want to. This triggers the dispute resolution protocol. Mark, Liam, and Ed can maybe talk about the differences between the dispute resolution protocols. But what happens when this dispute resolution protocol is triggered is that either the entire batch of transactions is run through the EVM (Ethereum Virtual Machine), or some segment of it is run through. But that’s where you have this refereeing system. That’s how I think about these fraud proofs. I’d also add that the Optimistic name for Optimistic roll-ups is due to the idea that the roll-up assumes, optimistically, that the transactions were valid until proven otherwise. Innocent until proven guilty, versus ZK, which is guilty until proven innocent. Everything needs to meet this standard of cryptographic proof.

Ben (Mechanism) (00:27:09):

I’d say that the Optimistic name might have even a different connotation too. This isn’t necessarily intended, but it means that as long as there’s one validator on the Optimistic roll-up that has integrity and can correctly verify the transactions, as long as you have one active referee on the entire network, then your network is going to be secure, if that validator is paying attention and submitting fraud proofs when necessary. Even if all the other validators on the network are corrupted, even if there’s a 51% attack, that still doesn’t matter in the Optimistic world, as long as there’s just one that’s able to submit challenges to either the sequencer or these other validators. And I think that’s a very powerful idea. You don’t just have to corrupt 51% of the nodes like you do on a normal blockchain. You have to actually corrupt 100% of all the validators on an Optimistic roll-up, which is a different level of security.

Ben (Mechanism) (00:28:03):

And to add onto that, Ed and Mark, it’s also the case that because there are these transaction withdrawal delays, the finality comes on Ethereum later. Theoretically, isn’t it possible for validators to come online if it looks like there’s trouble with the Optimistic roll-up network? Vitalik has talked about the community being able to mobilize if necessary. Is that true? Maybe we can get into that a little more because this “one honest node” idea is a really important point. And I know, Alex, you’ve also been critical of that. So I’m just curious if you guys can dig into my assumptions.

Ed (Arbitrum) (00:28:45):

Yeah.

Alex (zkSync) (00:28:45):

So, Tom… Okay. Ed, do you want to go first?

Ed (Arbitrum) (00:28:48):

Yeah. Let me just go very quickly. So the one honest node assumption is really important, and it’s not only that one honest node guarantees safety. It’s also that any honest node can force progress. And this is true because you don’t need to have special high-end hardware or anything to function as a node in the system. Anyone can propose an update as well as disputing one that’s wrong. And that means that you can offer a very strong guarantee of progress. Any one party can force correct progress and not just prevent a bad thing from happening.

Alex (zkSync) (00:29:31):

And what would happen if the network splits and two parties propose progress in different directions?

Ed (Arbitrum) (00:29:37):

There’s a dispute between them. The dispute gets resolved, and the one who’s right will win.

Alex (zkSync) (00:29:42):

But what if both are valid?

Ed (Arbitrum) (00:29:44):

They can’t both be valid. If they’re both valid, then they’re both correct. If they’re both valid, then they’re going to be the same.

Alex (zkSync) (00:29:53):

Well, they might contain different sets of transactions.

Ed (Arbitrum) (00:29:57):

No, they don’t. And this is actually pretty important about the way these systems work. So let me clear that up. The person proposing these roll-up blocks doesn’t choose, at least in Arbitrum, the order of inclusion of transactions. Ordering is done by an L1 contract. So you submit your transaction to the chain by making a call to an L1 contract, or more likely a full node does that on your behalf to submit a batch of transactions. And so your transactions are put into order by an L1 contract that’s running on Ethereum and, therefore, trustless. All that happens on the chain, and all that these proposed blocks are doing is saying, “What are the correct results of the transactions that are already in order in the chain’s inbox?” So there’s not an ability to select different transactions to run. It’s not like a mempool. It’s more like processing of a queue. And that means that there is a single deterministic correct result. And that’s really important. That’s how you can have this finality as quickly as you can, because as soon as your transaction is recorded in the chain’s inbox, then the result is fully determined and final. Block proposers are only proposing what is the correct result of a set of transactions that they don’t get to choose.
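
(Ed’s “queue, not mempool” point is what makes the result deterministic: once the L1 inbox fixes the order, every replayer must compute the same outcome. An illustrative Python toy with invented formats, not Arbitrum’s code:)

```python
import hashlib
import json

def replay_inbox(genesis: dict, inbox: list) -> str:
    """Replay the L1-ordered inbox; no proposer can reorder or omit entries."""
    state = dict(genesis)
    for tx in inbox:                       # fixed, contract-enforced order
        state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

inbox = [{"to": "alice", "amount": 5}, {"to": "bob", "amount": 7}]
proposed = replay_inbox({}, inbox)   # an honest proposer's claimed result
# Any full node replays the same queue and must reach the same root, which
# is why the result is final the moment the transactions land in the inbox.
assert replay_inbox({}, inbox) == proposed
```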

Tom (00:31:33):

Appreciate that color, Ed and Mark, or sorry, Ed and Alex. And Mark, I think you wanted to jump in too.

Liam (Optimism) (00:31:39):

Yeah, sure. I’ll start off talking a little bit about the fraud proofs, because they need to be collateralized by a bond, because the execution, the fraud proof itself, is really expensive on L1. You really want to create a fraud proof that is as cheap as possible to execute, because then the bond that is put down can be as small as possible. The larger the bond, the more capital inefficiency there is. I do think that in the future, people that are really good at DeFi will figure out systems to automate moving around value, using the upcoming base fee opcode to know how much value needs to be collateralized at any given moment.

Liam (Optimism) (00:32:32):

And something like Yearn could just move money in and out to make sure that the bond is always greater than the cost of the execution of the fraud proof. Because the idea is that the user that submits the fraud proof will pay the gas for that fraud proof. In return, they get the bond. So unless it’s worth their time, they’re not going to be able to submit a fraud proof that secures the system. There’s a lot of concern around the idea of the fraud proofs being censored. Let’s say that 51% of the miners decide to never build on blocks that include calls to the fraud proof contract. Then, nobody will be able to submit a fraud proof.

Liam (Optimism) (00:33:34):

Now, I don’t think that this is a problem for two reasons. One, you only need one verifier to be online to be able to submit the fraud proof. And you have seven days to get that fraud proof in. Optimism has a seven-day fraud proof window, meaning that just one fraud proof needs to get in over seven days. That would be a lot of money, constantly reorging out any blocks that have transactions that call the fraud proof contract. The idea is that the community can coordinate during those seven days and figure out what to do. At the end of the day, the Ethereum community is about rough consensus outside of the code and fixing things that way. The other thing is that I believe that miners will censor users that send fraud proofs because they will want to send the fraud proofs themselves. They will just include a zero gas price transaction directly in the block as the fraud proof and then they will just claim the money themselves.
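
(Liam’s bond-sizing argument is simple arithmetic: the challenger pays the fraud proof’s gas and wins the bond, so the bond must stay above the execution cost at current gas prices. All numbers below are invented for illustration:)

```python
def challenge_is_profitable(bond_eth: float, proof_gas: int,
                            gas_price_gwei: float) -> bool:
    """The fraud-proof submitter nets (bond - gas cost); the bond must
    exceed the cost or nobody is paid to secure the system."""
    gas_cost_eth = proof_gas * gas_price_gwei * 1e-9
    return bond_eth > gas_cost_eth

# A hypothetical 3M-gas on-chain fraud proof at 100 gwei costs 0.3 ETH,
# so a 1 ETH bond leaves a clear incentive to challenge...
assert challenge_is_profitable(1.0, 3_000_000, 100)
# ...but a spike to 500 gwei (1.5 ETH of gas) breaks the incentive, which is
# why Liam imagines topping the bond up automatically against the base fee.
assert not challenge_is_profitable(1.0, 3_000_000, 500)
```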

Alex (zkSync) (00:34:57):

Well, this is very interesting. If I may ask a couple of questions. I think that there are a couple of things here.

Liam (Optimism) (00:35:08):

Yeah, sure.

Alex (zkSync) (00:35:08):

So number one, the attack, which is hypothetically possible, does not require an attacker to pay ongoing costs for the entire week. What they do is just make a plausible commitment. And they say, “We threaten to orphan all the blocks which include this transaction.” And they can show it. They can demonstrate that they have enough hash power. This way, they can force the miners into compliance, because any miner who includes the transaction would be the one losing money. So the actual cost is much lower.

Liam (Optimism) (00:35:58):

The only way that would work is if the… Because all it takes is one miner to defect and include the fraud proofs themselves. The reward-

Alex (zkSync) (00:36:12):

No, no, no, no, no, no.

Ed (Arbitrum) (00:36:13):

Wait a second. Let’s get real about this. If it were the case that it’s cheap and easy to launch a seven-day censorship attack on Ethereum, it would be happening all the time now. You could make a lot of money by attacking DeFi protocols and imposing seven-day delays on them. If this were so easy and so cheap and the community really wasn’t going to respond to it, it’d be happening all the time now, but it’s not. I think Alex (zkSync) vastly underestimates the resiliency of the Ethereum ecosystem. And it’s just not consistent with the way that people use the system today. People today don’t wait a week for finality on things.

Alex (zkSync) (00:37:01):

So I think there has been a valid argument that what prevents miners from doing this 51% attack is that the community might respond. The community response we saw with EIP-1559 shows that it’s actually very difficult to achieve consensus on controversial topics. It’s not as straightforward. We don’t have the uniform community –

Ed (Arbitrum) (00:37:32):

Then why doesn’t this attack happen all the time now?  Why doesn’t this attack happen all the time? Why don’t people –

Alex (zkSync) (00:37:39):

So yeah, there’s been a really interesting argument. There has been one strong argument that it’s actually not in the interests of miners. So first of all, it’s very difficult to exploit any existing vulnerabilities in a very deterministic way. Because what you’re going to do is speculate that the price of Maker will fall, and they have to short a lot of Maker, they need to provide a lot of capital and so on.

Ed (Arbitrum) (00:38:15):

You’re going to block Oracle updates. That’s the attack. If your argument was correct it would be happening all the time now.

Alex (zkSync) (00:38:23):

No, no, no. You need some way to exploit it to make more money than you’re going to lose as a miner on the declining price of Ether and-

Ed (Arbitrum) (00:38:32):

Aha.

Alex (zkSync) (00:38:33):

Yeah. So that’s very interesting. So this is an argument I can buy, that the miners have some vested interest in the ecosystem. And that’s why they’re not going to do it.

Ed (Arbitrum) (00:38:42):

That’s why it’s not happening.

Alex (zkSync) (00:38:43):

It’s going to harm them in the long term, unless there is no long term. So all the miners know that we are going to transition to proof of stake very soon. And it’s actually-

Alex (zkSync) (00:38:58):

Sorry?

Ed (Arbitrum) (00:38:58):

Why is this attack not happening now? There are easy ways to profit.

Alex (zkSync) (00:39:01):

Because it is not straightforward.

Ed (Arbitrum) (00:39:04):

Block Oracle updates. That generates very easy price differences between the DeFi systems that rely on the Oracle updates and the outside markets. And then just exploit that price spread like crazy by letting only your own transactions in. It’s an easy attack to do. It would be very profitable. And if you were correct about how easy this was to do, this would be happening all the time now, but it’s not.

Alex (zkSync) (00:39:36):

So that’s what I’m saying. Right now, the miners have a lot of vested interest. But closer to the point where we’re going to transition to proof of stake, this is not going to be the case.

Ed (Arbitrum) (00:39:45):

So all DeFi is going to be dead at that point?

Alex (zkSync) (00:39:51):

I think that it’s really hard to profit in this way. To get the exploit money through exchanges and everything, you’re going to lose a significant portion of that. And you lose the future-

Ed (Arbitrum) (00:40:12):

Yeah. Okay. So the point here, though, is that this is not a thing that’s about Optimistic roll-ups. This is an issue with any system that relies on a finality period of less than a week.

Alex (zkSync) (00:40:24):

I would disagree because there is a huge difference in how easy and straightforward it is to actually exploit the attack. If you do something which is very complicated and you have to do a lot of manipulation and you need a lot of capital to put there, that’s one thing. If you just run something that gives you $10 billion and it’s just yours and you can immediately use it, it’s a very different thing.

Ed (Arbitrum) (00:40:51):

But there are market makers now that have a ton of liquidity. This is their whole business. If anybody can cause a price spread between an Oracle-based DeFi market and the rest of the world, there’s tons of arbitrage available. It’s an easy attack. It’s not happening. This has nothing to do with Optimistic roll-ups. You’re postulating that Ethereum is fundamentally broken, and it’s just not.

Eli (Starkware) (00:41:20):

I’d like to raise a different… Well, I’ll pose it as a question. I think it’s an advantage of validity proofs over Optimistic proofs. But I’d like to hear how Optimism and Arbitrum are going to deal with such things. So here’s a story. We went live with dYdX about a week and a half ago. On April 17, there was this 20% drop in prices of cryptocurrencies, and dYdX does margin trading. The reason that everything was handled very smoothly, our system handled over 1,000 liquidations in less than an hour, was because there was this very high-resolution usage of Oracle price feeds that fed into the system. So what’s going on here is that there’s this very elaborate computation that you put on L2 because it would be too gas-costly to put it on L1. And in particular, it wants to consume Oracle price feeds at a very high rate. Now, with a validity proof system, you do not need to put any of those price feeds or their signatures or anything on chain, because you have the validity proof. This let us settle a liquidation with less than 6,000 gas. This happened two days ago.

Eli (Starkware) (00:43:01):

Now, if this had been on an Optimistic roll-up, you need to put on the main chain, at a very high rate, all of the price feeds, with their signatures, for the system to have high resolution and allow large leverage. So my question is, how do you plan to reach high scale with all of this auxiliary data that is needed: the signatures, the price feeds and all of that stuff? What are your plans, Optimism and Arbitrum, for allowing such things at high scale?

Ed (Arbitrum) (00:43:38):

I think there are a couple pieces to unpack here. One is the question of how any of these systems can support very fine-grained Oracle updates in a trustless way because the issue there is how do you interweave these high-frequency Oracle updates with the trades or requests that come in from users.

Ed (Arbitrum) (00:44:03):

That goes to the question of how to do fair ordering and how to do it at a rate that is, I think you’re postulating, much faster than the Ethereum block time. And that’s a whole other side conversation. There’s no inherent advantage or disadvantage for any one system, as far as I can tell.

Eli (Starkware) (00:44:28):

I disagree on that, but that’s a separate point. I think with proofs, you can have much better share of… but yeah.

Ed (Arbitrum) (00:44:37):

Right. So, what you’re saying here is basically a statement about the size of digital signatures, or what is it exactly?

Eli (Starkware) (00:44:46):

I’m giving a very practical situation that arose in our system two days ago. There was this huge drop in prices. You want to have correct liquidations for your customers. In order for the system to allow very large leverage, you would like to have a high rate of updates of price feeds and their signatures. So in a validity proof system, you can have one proof that asserts that these price updates were processed correctly.

Eli (Starkware) (00:45:50):

Practically, we’re talking about hundreds of price feeds within a very small amount of time, and these are signed, and you want to compute the averages and so on and so forth. So that’s a lot of information that needs to go into the process. I’m wondering, how are you planning to deal with that?

Ed (Arbitrum) (00:46:08):

Well, so I guess what I don’t understand is why you postulate that these Oracle feeds are going to be so large. Because it seems to me that they don’t need to be so large. They can be encoded very efficiently. And all that needs to be put on chain is basically a compressed version of them, information sufficient to reconstruct what they were.

Ed (Arbitrum) (00:46:31):

And so there are some very obvious schemes around data compression and digital signatures that will solve this problem.

Eli (Starkware) (00:46:40):

But how will you Optimistically show that the liquidations were done at the right time, at the right price, if you compress the information? If you took the average over the past… the average and the variation or something like that, how are you going to prove it? You need the full sequence. At least, that’s how we do it. We take the full sequence, and we actually process it and do all the liquidations when they happen, at the time. But it doesn’t appear on L1, because you can prove it.

Ed (Arbitrum) (00:47:11):

The answer is lossless compression. These things are very compressible. You don’t need to sign every single update. You don’t need to post a signature for every single update. You don’t need to keep the full contents of every update. These things are super compressible, and in a lossless way, if they’re very frequent. This is an engineering problem. It’s not a fundamental barrier for these sorts of systems.

Eli (Starkware) (00:47:42):

And the signatures? Suppose it always, over five minutes, every second, it just goes minus one. So I agree you can have something that just says, over that five minutes, it went down by minus one each second. But each minus one needs to be signed by the Oracle feeds. And these are different signatures, and they have pretty much full entropy. So you’re talking about a lot of signatures. What are your… That cannot be compressed, because these are [crosstalk 00:48:09].

Ed (Arbitrum) (00:48:13):

So, you chain the entries together and simply sign an accumulator. And that way, only the last accumulator needs to be posted. Again, this is an engineering problem. This is not a fundamental limitation of these systems.
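
(Ed’s accumulator suggestion, sketched: chain every update into a running hash and authenticate only the final value, so one authenticator covers the whole stream. A stdlib-only Python toy that uses HMAC as a stand-in for a real oracle signature scheme; all names are invented:)

```python
import hashlib
import hmac

ORACLE_KEY = b"toy shared key"   # stand-in for the oracle's signing key

def accumulate(prices) -> bytes:
    """Fold every price update into one running hash (the accumulator)."""
    acc = b"\x00" * 32
    for price in prices:
        acc = hashlib.sha256(acc + price.to_bytes(8, "big")).digest()
    return acc

prices = [2000 - i for i in range(1000)]          # 1000 ticks, each one lower
signed_tail = hmac.new(ORACLE_KEY, accumulate(prices), hashlib.sha256).digest()

# A verifier holding the raw updates recomputes the chain and checks a single
# authenticator instead of 1000 separate signatures.
recomputed = hmac.new(ORACLE_KEY, accumulate(prices), hashlib.sha256).digest()
assert hmac.compare_digest(signed_tail, recomputed)
```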

Eli (Starkware) (00:48:28):

No, one… Wait, wait. One reaches the information-theoretic lower bound of how much information is there. And that’s what you need in Optimistic roll-ups. And it’s a theoretical limit. With validity proofs, you can go below that. You don’t need the information-theoretic amount of information. You can go below that.

Ed (Arbitrum) (00:48:52):

You have a different limit. All of these systems have limits. There needs to be enough information posted on L1 so that people know what happened, and different systems have different engineering constraints. It’s not the case that one type is dramatically more efficient than another type inherently.

Eli (Starkware) (00:49:11):

No, it is the case. I’m sorry. It is the case. I’ll give another example. Suppose Alice and Bob, over an hour, just send back and forth, each with a signature, a single payment. Okay. A million of them. Okay. Alice pays Bob. Bob pays Alice. Alice pays Bob. Bob pays Alice. In an Optimistic roll-up, in order for the system to work, you need to put up a million signatures. It’s an inherent thing. Whereas in a ZK roll-up, you do not. You can give one succinct proof that there were a million signatures, a sequence of them. They are fundamentally different. I think one is fundamentally better than the other. They’re not the same.

Ed (Arbitrum) (00:49:53):

Again, this is engineering, right?

Eli (Starkware) (00:49:56):

No, no. It’s theoretical, it’s not engineering. It’s not engineering. Theoretically speaking, no-

Ed (Arbitrum) (00:50:03):

BLS signatures let you aggregate a large number of transactions together into a single signature.

Eli (Starkware) (00:50:10):

That is not engineering. No, no, it’s not an engineering thing. No.

Ed (Arbitrum) (00:50:16):

These limitations, these are not real limitations of these systems. It’s just a question of which methods you use. If you support BLS signatures, then you only need one signature per batch of transactions, and the batches can be large.

Eli (Starkware) (00:50:33):

No, Alice and Bob… No, no. With Alice and Bob, back and forth, I don’t know which BLS signature you’re using. No, I don’t think I… no, I’m not aware. Maybe there’s something… And it’s not just… I’m telling a story that happened two days ago on a system in production. There’s a fundamental issue, and it is not just theoretical. It is also extremely practical, as I demonstrated two days ago. Now I’m not saying… There are a lot of things that are very good about Optimistic roll-ups, but just saying that it’s all the same, I’m sorry. I don’t buy that.

Ed (Arbitrum) (00:51:15):

Let me try this again. All of these systems post a batch of transactions periodically, right? And-

Eli (Starkware) (00:51:25):

Not necessarily. No, no, no. We do not. In our systems in production now, we do not post a batch of transactions. No, we do not.

Ed (Arbitrum) (00:51:34):

Post every transaction separately?

Eli (Starkware) (00:51:37):

No, we don’t post the transactions, period. In Validium for instance, you do not post the transactions. And even in the roll-up, you do not need to post the transactions. You just need to post the information that allows you to reconstruct the current state without posting the transactions. In particular, you do not post the price feeds, you do not post the signatures. You don’t do that. It’s not needed.

Ed (Arbitrum) (00:52:02):

Right. Again, we’re now veering off into this other discussion about whether you have data availability services that are off chain and-

Eli (Starkware) (00:52:13):

No, no, you do not need data availability. No, no, no, no, no. You do not need data… It’s not related to data availability, even in roll-up mode. So for instance, dYdX is in roll-up mode. It does not require any data availability, and yet the individual transactions and the individual pricings are not posted on the main chain. And the reason is an inherent one. It’s not an engineering one. It is because in a validity proof, you do not need to provide all the nodes with all the information needed to check that the state transition was valid. You do not. It is inherently different. It’s not just engineering, and it’s not just a choice. Validity proofs and fraud proofs are two very different technologies.

Ed (Arbitrum) (00:53:00):

How often do you post a proof on chain?

Eli (Starkware) (00:53:06):

Today, with our systems?

Ed (Arbitrum) (00:53:08):

Yes.

Eli (Starkware) (00:53:10):

Every few hours, currently. Basically, I just want to say our customers are the ones who tell us at what rate they want to post the proof, but it is their choice. Roughly once every few hours currently is the rate.

Ed (Arbitrum) (00:53:23):

Okay. So, if an Optimistic roll-up were posting a block every few hours, or a batch of transactions every few hours, to make those transactions actually… the results of those transactions actually get finality, you would need one aggregated signature for each one of those several-hour periods, right? You can use BLS signatures to aggregate all of the digital signatures on all of the transactions within that mega block. And that is one transaction, which… one signature, which is 64 bytes, every several hours, if you were to do it that way. It’s just not a big overhead if you use signature aggregation.
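
(For the curious, here is what BLS aggregation looks like using the py_ecc library’s IETF-style BLS implementation. The API is py_ecc’s, the scenario is invented, and this is an illustration rather than how any of these roll-ups actually batch signatures:)

```python
# pip install py_ecc
from py_ecc.bls import G2Basic as bls

# Three users sign three distinct transactions.
secret_keys = [bls.KeyGen(i.to_bytes(32, "big")) for i in (1, 2, 3)]
public_keys = [bls.SkToPk(sk) for sk in secret_keys]
messages = [b"alice->bob: 30", b"bob->alice: 30", b"alice->bob: 1"]
signatures = [bls.Sign(sk, m) for sk, m in zip(secret_keys, messages)]

# All the signatures collapse into one aggregate (96 bytes in this scheme)...
aggregate = bls.Aggregate(signatures)
# ...which verifies against all signers and messages at once, so a posted
# batch needs a single signature no matter how many transactions it holds.
assert bls.AggregateVerify(public_keys, messages, aggregate)
```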

Eli (Starkware) (00:54:06):

Wait. What you are saying is that because in certain cases you do not want to post all of the witness as is needed, you are adding some external… Some construction, for instance, aggregated signatures, which you can, for the case of signatures. For other cases, you cannot. And in those cases, yes, you have it.

Eli (Starkware) (00:54:32):

I want to say that with validity proofs, you do not need to revert to that. You could have a validity proof also over a BLS signature, but I’m saying you do not need that. Basically, imagine tomorrow there was some other computation where you cannot use aggregated signatures, because, for instance, it’s not about signatures. It’s about something completely different. And it actually has a lot of entropy in it, for all kinds of reasons, so you cannot compress it. In the case of validity proofs-

Ed (Arbitrum) (00:55:03):

Right. You had this example about Alice and Bob generating transactions back and forth for two days. I’ve explained how the overhead of that is very low…

Alex (zkSync) (00:55:13):

May I try to point out what the disagreement is here? I think I understand why you’re talking past each other. So what Eli (Starkware) says is, if you have 1000 transactions that update a single variable, a single storage slot-

Ed (Arbitrum) (00:55:28):

No. No, that’s not what I’m saying. Any 1000 transactions can use aggregate signatures. That’s the point. That’s the point of BLS signatures. You can have many signatures made by different people, and you can aggregate them all into a single aggregate signature. So Alice-

Alex (zkSync) (00:55:49):

I understand that. I understand that. I get that. But then you still have to post 10… sorry. 1000 inputs for those transactions.

Ed (Arbitrum) (00:56:01):

Compressed. Yes. As do all of these systems. If you want-

Alex (zkSync) (00:56:05):

What do you mean, compressed? You have to post 1000 values of price feed update and one single BLS signature. Is that correct?

Ed (Arbitrum) (00:56:13):

Compressed. Yes. You can compress the-

Alex (zkSync) (00:56:16):

What would you mean by comp-

Ed (Arbitrum) (00:56:17):

Data compression is a standard engineering technique.

Alex (zkSync) (00:56:21):

Okay. But it’s still all of them, right? If you have 1000 data points, then you have to put 1000-

Eli (Starkware) (00:56:29):

Wait, wait, wait. First of all, if you compress… you have 1000 signatures, but you also want to post only the compressed information. I don’t know what exactly this compression is. You also need to somehow prove that the compressed information was signed by these 1000 signatures, which frankly, I don’t know how to do within a BLS scheme unless you add to it some validity proof. No, it’s not just engineering.

Eli (Starkware) (00:56:57):

Look, I come from cryptography. I build systems that prove such things. It’s not just engineering. There is no BLS signature that allows you to assert the computation. That thing is precisely called a proof. It’s a different thing. A signature signs on data. A signature is not sufficient to show that a computation or a compression was done correctly. That thing is called a proof. It’s a different creature. It’s a proof. So you could add… Right, if you want to compress, you can add a proof. That’s fine. Right? But that’s something extra. It’s called a validity proof, and it’s not just signatures.

Ed (Arbitrum) (00:57:35):

Well, we’ve gone pretty far into the weeds here. Let me tell you what is true about Arbitrum. Arbitrum can post transactions as frequently as you want. It posts them at whatever pace users need in order to get finality. The data that is posted on chain is compressed. The system can support BLS signatures, so that you have only one signature over a whole batch of transactions, no matter what those transactions are and no matter which parties put them in. This is all working today. And that’s the situation. The decompression of that data, the verification of the BLS signatures, all happens in L2 computation. And so it’s all covered by the Optimistic roll-up system.

Ed (Arbitrum) (00:58:25):

So this is a thing that works, and our code is open source. You can go look at it if you don’t believe it exists. And so, it’s a little bit surreal to be sitting here and arguing about whether this is possible when we’ve actually built it.

Speaker 1 (00:58:41):

I don’t think anyone is arguing whether or not it works. What I’d maybe recommend… Tom, Ben, if you want to, try to let us zoom out a little bit here, because I think we’re getting into some insane level of detail here that I don’t think matters that much.

Alex (zkSync) (00:58:53):

I think if I can just point… I think that we can really succinctly show this problem, because we’ve been discussing it and it’s really interesting to come to a conclusion. So, in this example, if we have 1000 feed updates, we would have to post 1000 pieces of data on an Optimistic roll-up, and just one piece of data on a Validium or a ZK roll-up. Now, 1000-

Tom (00:59:21):

Guys, I think we are veering a little off into the… I might need a PhD to understand it. The discussion and debate’s obviously incredible, but maybe let’s zoom out and redirect a little. Liam, I know you wanted to maybe jump in a bit.

Liam (Optimism) (00:59:38):

Oh yeah. I guess the one area that I don’t think we’ve covered much, it’s less about the extreme detail of how these systems work, but more about how we’re progressing them into production on Ethereum. I think it’s great, for example, like StarkWare was talking about, how dYdX or DeversiFi have been running on it, recently, in production. I think that’s phenomenal. I think what maybe folks are more interested in is, okay, how are we going to get to production specifically? And will Ethereum scale? What’s the pathway going forward for the rest of the year? I think that’d be more interesting to touch on.

Tom (01:00:13):

Yeah. Feel free to jump in. And I know we’re getting to the 1:30 mark. So if anyone has to leave and is busy, feel free. If not, I’d love to keep going, but if you have to leave, no worries.

Liam (Optimism) (01:00:26):

Yeah. I’m going to say a few things just about Optimism, at least just my perspective on that question. Like Mark (Optimism) mentioned at the onset of this whole discussion, what Optimism is attempting to be is not some new scaling business model or anything like that. We’re trying to basically extend Ethereum. We see ourselves as members of the Ethereum community.

Liam (Optimism) (01:00:47):

And so in particular, the way that we’re approaching this whole situation is, what’s the minimum possible difference from the existing way that Ethereum developers consider building on Ethereum, so that they can consider building on this roll-up system? So, the types of things we’ve done are to make our sequencer not a brand new piece of software. We’re just starting by, yeah, literally just a diff on Geth. And similarly, for the compiler, the Solidity compiler, we’re doing a fork of that to make it so that you can compile down to the OVM, which is equivalent to the AVM in Arbitrum.

Liam (Optimism) (01:01:25):

But the point I’m making is that the way we’re approaching this is making it so that developers who are already in this ecosystem don’t need to learn a brand new set of fundamental concepts. They can have a small diff from their existing knowledge of how Ethereum works, to begin actually thinking about developing on Optimistic roll-ups.

Liam (Optimism) (01:01:44):

That’s the philosophical approach that I think is really important to consider here. I think that’s the reason why, for example, a lot of people in the community right now are very excited to be building on Optimism, are very excited to be porting over their existing applications to it. Because if they can understand it, that’s easier to build on. Yeah. So to that end, some things that we’re doing are just a very tiered approach where we’re just finding great applications that have worked today, that have developers on their teams, like Synthetix or Uniswap, that already understand how applications on Ethereum could be developed with the existing Ethereum tech stack, and holding their hands to help them understand exactly how the OVM works, so that they’re feeling very comfortable deploying on it. Anyway, a lot of things there, but that’s kind of the philosophical approach. I’d love to hear how others approach it.

Tom (01:02:33):

No, Liam, that’s an awesome way to zoom out. And guys, I just want to go on a bit of a round robin, more of a rapid fire question route. So we’ll start with Eli, we’ll go Alex, Optimism, and then Ed. Ben, feel free to chime in or just ask the next question. But how are you guys thinking about just bootstrapping usage and adoption? Obviously, there are no tokens here, so it’s tough to incentivize users to come, and even tougher to incentivize developers to come. How do you do that? And how do you do it at scale?

Eli (Starkware)(01:03:01):

Yeah, that’s a terrific question. We grapple with it every day. And I want to say that on compatibility with Solidity, I think the other three projects are going to be compatible and have stated so, and actually, we’re taking a different route. So it’s even more challenging. We have this new, funky, Turing-complete language called Cairo. And so the way we’re going about it is that we want every piece of what we’re doing to be immediately usable in production by real teams that demand it.

Eli (Starkware)(01:03:37):

So the very first thing we did was to get those teams that were willing to work with us and try out these systems, and build systems for them. This allowed us to hone the system and the demands on the programming language, and integrate it with the STARKs and everything. And now we’re building on top of that and making it into something that will enable smart contracts and whatnot. But yeah, we’re taking a different route than the other three teams here, and if you want, we can discuss that. That’s my short answer.

Tom (01:04:16):

That’s really helpful.

Eli (Starkware)(01:04:16):

Be as close as possible to production all the time. Make everything as useful as possible to real teams and real needs. That’s our route.

Tom (01:04:23):

That’s helpful. Alex, what about you guys?

Alex (zkSync)(01:04:26):

Yeah, so I want to join Eli (Starkware) in saying we also build on the principles of Ethereum, and we want the ecosystem to be an extension of Ethereum. And the way we think about it is just to embrace the existing ecosystem, the knowledge of the ecosystem. So our programming model is exactly the same one as Ethereum. We will support Solidity, starting with zkSync 2.0, and we expect most contracts to just compile and most things to just work out of the box. So we’re very much following this approach.

Tom (01:05:01):

That’s helpful, Alex.

Alex (zkSync)(01:05:02):

And we’re just going to offer a… let’s say an order of magnitude better experience from different angles: on the cost side, on the finality to L1, so that users can withdraw immediately. Especially on the cost side with the zkPorter approach, which we are really, really super excited about, because it’s going to bring back users who left Ethereum due to very high gas prices. And I think the roll-ups will not be able to alleviate the gas pressure to a very high degree. So roll-up transactions are still going to be pretty expensive compared to what you have on the side chains and elsewhere. And we think that with this mixed approach, we can bring all those users back, and let them seamlessly participate in the full Ethereum economy, with ZK roll-up on the other side-

Alex (zkSync)(01:06:03):

And all of that with EVM and with the tooling which people are used to. The same programming model.

Tom (01:06:13):

Excellent. Thank you, Alex. Why don’t we go to the Optimism side, Mark (Optimism) and Liam. Or if they’re out, Ed, I know you’re ready to go.

Ed (Arbitrum) (01:06:24):

Sure.

Tom (01:06:24):

Oh, sorry, Mark.

Ed (Arbitrum) (01:06:25):

Yeah, so, I think, as everyone else has said, we’re really focused on being a part of the Ethereum community, extending what’s great about Ethereum. We’ve focused on being compatible, not only at the technical level, but also in terms of how we build a community. But let me start with the technical level. We’ve worked really hard to be compatible with the EVM, and that means that Arbitrum accepts EVM code directly and natively. You don’t need to run a special compiler. You don’t need to write in a special language. You don’t need to rewrite your application. You can literally just take the very same bits that you would push to Ethereum to deploy a contract there, send those very same bits to an Arbitrum node, and you’ve deployed on Arbitrum.

Ed (Arbitrum) (01:07:15):

So that’s the level of compatibility: running EVM code natively, you don’t need to compile anything, you don’t need to even download anything in order to work with Arbitrum. It’s that EVM compatibility, and really sweating the details of getting all of the weird corner cases of the EVM right. Being compatible with the EVM, and having our nodes use exactly the same RPC API that Ethereum nodes do, is what allows you to operate in a kind of drop-in compatible way.
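
As a concrete illustration of that drop-in compatibility, here is a minimal sketch using web3.py. The endpoints, key, ABI, and bytecode are placeholders, not real values, and gas details are left to the node’s estimation; the only point it shows is that the very same compiled bytecode and the very same standard JSON-RPC calls work against either endpoint.

```python
# Minimal sketch: deploy the *same* compiled EVM bytecode to Ethereum L1
# and to an Arbitrum node through the identical JSON-RPC interface.
# Endpoints, key, ABI, and bytecode below are placeholders.
from web3 import Web3

ABI = [...]          # ABI from your usual Solidity build, unchanged
BYTECODE = "0x..."   # the very same bytes you would deploy to L1

def deploy(rpc_url: str) -> str:
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    acct = w3.eth.account.from_key("0x<private-key>")  # placeholder key
    contract = w3.eth.contract(abi=ABI, bytecode=BYTECODE)
    tx = contract.constructor().build_transaction({
        "from": acct.address,
        "nonce": w3.eth.get_transaction_count(acct.address),
    })
    signed = acct.sign_transaction(tx)
    tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)
    receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
    return receipt.contractAddress

# Only the RPC endpoint changes; no new compiler, language, or rewrite.
l1_address = deploy("https://mainnet.example-rpc.io")   # hypothetical L1 node
l2_address = deploy("https://arbitrum.example-rpc.io")  # hypothetical Arbitrum node
```

Everything chain-specific happens on the node side, which is exactly the endpoint-swap property being described here.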

Ed (Arbitrum) (01:07:51):

But beyond that, we’ve also really focused on building an ecosystem. And that means having our testnet, when we started it about six months ago, and our mainnet, when it comes soon, be fully open and public: anyone can deploy to it, anyone can use it. We don’t pick a set of favorite projects to work with. We certainly work with people, we help people, but anyone who wants to deploy on Arbitrum, anyone who wants to use it, can do that. That’s been true from day one. We have literally thousands of contracts on our testnet. So that’s been a really important part of it: we want to be part of the community, and we want to make sure that we have the services and the ecosystem fully built out from day one. So when you see us launch, you’ll see us launch with a robust ecosystem of not only applications, but also the kinds of services and compatibility with wallets and so on.

Ed (Arbitrum) (01:08:50):

This has been a big push for us. And a lot of our design and engineering has gone into supporting that.

Tom (01:08:55):

Thanks so much, Ed.

Liam (Optimism)(01:08:57):

I’ll just add one thing actually, and then Mark, you can chime in after. The thing that I want to really emphasize, and this has come from multiple years of doing Ethereum development and trying to contribute back into the community, is that it’s not a matter of being compatible with the EVM. Anyone can be compatible with the EVM. I think we’re seeing lots of stuff that is, and I think that’s great. The philosophical thing that’s important is wanting to contribute upstream to Ethereum. The version of the future that I personally want to see, and the reason why I’m personally working on this stuff, is one where Ethereum is scalable. Not Ethereum scales via XYZ company. It’s the one where we’re merging in stuff that gets Ethereum itself to be scalable.

Liam (Optimism)(01:09:45):

So whether that’s adding in EIPs for stuff that all of our solutions benefit from, like reducing the cost of call data, or contributing to the governance of the Ethereum protocol itself, or trying to work really closely with the geth team and the Solidity team to get them to understand this stuff fundamentally. That’s kind of, at least for me personally, where I come from. And I think it’s worth really taking a thought about how we can give back to the technology that’s enabled all of our businesses to exist in the first place. That’s what I’m trying to get across when it comes to contributing back: making the minimal diff and the contribution back to Ethereum. But I’ll let Mark (Optimism) keep chiming in on further thoughts around building the community around Optimism and whatnot.

Mark (Optimism)(01:10:32):

Yeah, kind of like the longer term goal would be to try to get the actual sequencer code itself in as a flag in geth upstream, and then kind of just build this critical mass of Ethereum developers that, just by contributing to Ethereum, are also contributing to Optimistic Ethereum, because Optimistic Ethereum just is a newer iteration of Ethereum. Because, in the short term, in Eth2, there really is no execution. It’s kind of like the roll-up centric roadmap for the near future, where after the merge happens- So the merge is kind of planned, very optimistically, for the end of this year, or pessimistically within probably the first half of next year. So, after this merge happens, since there’s no execution in Eth2 yet, Eth2 is going to need roll-ups to be able to take advantage of all the extra data that is being agreed upon, all the extra consensus throughput. We’re trying to work really closely with Ethereum to make it as easy as possible to basically get Optimism running on Eth2. Because we’re trying to make the whole developer experience as close to Ethereum as possible, and Eth2 is promising that, from the point of view of a developer, there are not going to be many changes. You should be able to deploy your contracts in the exact same way; it just upgrades under the hood and you get way more throughput.

Liam (Optimism)(01:12:26):

And I think that Optimism and other roll-ups could be really key pieces in bridging this gap in the short term and making Eth2 a lot more usable quicker.

Ben (Mechanism)(01:12:46):

Those are really interesting answers to Tom’s question about bootstrapping your networks and thinking through your goals in the near term and long term. I guess I’m curious, as a user and investor just surveying the landscape of roll-ups and the coming Eth2. You’ve talked a lot about EVM compatibility, what fraud proofs look like and their trade-offs, withdrawal delay times, how often you post batches back to Ethereum. So a lot of what’s been spoken about has been the relationship between Ethereum and these individual roll-ups. I’m curious if you could go into a little bit more, and I guess we can stick with Tom’s rapid fire, round robin style question, what the cross-chain, I guess cross-L2, cross-L1 infrastructure is looking like, not just individual roll-up to Ethereum, and how you’re planning on building it.

Ben (Mechanism)(01:13:40):

And I guess, you know, maybe even at a further layer of abstraction, and this is maybe a spicy, controversial question, but do you think the roll-up ecosystem is eventually going to be sort of a winner take all? Is it going to look like winner take all? Is there a world in which zkSync, StarkWare, Arbitrum, and Optimism are all functioning as their own sort of shards or roll-ups, I guess in the Eth2 sense, or is it really going to be more of a winner take all race? I’m curious what each of you think from that perspective. Is there going to be a truly cross roll-up or cross chain world, or is it going to be maybe more concentrated because of liquidity?

Tom (01:14:18):

Hey Ben, excellent question. Just a quick time check for everyone: we’re going to close this out in 10 minutes, just to respect everyone’s time. But excellent question by Ben. I guess we’ll start with Eli (Starkware) and go from there.

Eli (Starkware)(01:14:30):

Yeah, I’ll be very brief. I hope there’ll be a thriving and diverse set of L2s. Since they are all compatible with L1, there are multiple ways in which they can converse, right? So that’s what I hope will happen. I’m guessing, and this is certainly true for us right now, we have so much work just focused on getting our StarkNet up and running that even connecting it to L1 is a lot of work. But definitely, I hope that the steady state will be many, many different L2s and roll-ups. There’s room for all of them. We need to take over all of the conventional rails. You need any scaling that you can get: Optimistic, Validity, anything.

Tom (01:15:35):

That’s awesome. Alex?

Alex (zkSync)(01:15:38):

So I actually think that there will be space for multiple different L2s to coexist, but what I think will happen is that there will be kind of a winner in each category. And by category, I don’t mean Optimistic versus ZK roll-ups. Rather, if you have a few L2 solutions which are almost identical, then one of them will get most of the effort. But if you have two different layer two solutions, of different types, different technologies, different properties, then you will probably have different users inclining towards one or the other.

Alex (zkSync)(01:16:22):

So I think, if we have something like an EVM-compatible L2, whichever that is, there will be a tendency towards eventually one becoming the most used. And if this one becomes so broadly used that it embraces a very large part of the entire Ethereum ecosystem, it will probably, at some point, be merged into Ethereum itself. So I agree with Liam (Optimism) on this. But if you have EVM and Cairo and some other approaches which are entirely different, then I think they will each live in their own niches.

Tom (01:17:05):

That’s really helpful. Let’s switch over to Optimism. Mark (Optimism) and Liam, who wants to take that one?

Liam (Optimism)(01:17:07):

Do you mind repeating the question real quick?

Tom (01:17:14):

Yeah, for sure.

Ben (Mechanism)(01:17:14):

Yeah, I’m happy to repeat it. I was generally asking about, you’re talking about Ethereum to individual roll-up compatibility. That’s sort of where a lot of the discussion and debate has been, but I was wondering more about cross roll-up, cross L2, even, maybe, also to other side chains, generally how you’re thinking about compatibility with those. And also just maybe the more controversial question about whether this roll-ups space in general is going to be winner take all, or whether there’s going to be room for flourishing of all your solutions or whether there’s some in-between answer there.

Liam (Optimism)(01:17:46):

Oh, totally. Yeah, I think there’s space for many roll-ups. I think the key differentiating factors will be governance, uptime, and the overall trust that the community holds in the people that are running the network. I think that, given any sufficiently useful blockchain, arb bots or MEV searchers will use up any extra gas that they can, just doing arbs or whatever other sorts of DeFi things they do. So I could imagine a world where there are multiple roll-ups that have different governance trade-offs, and I could imagine them all being generally at high capacity, just because they’ll have a bunch of different apps on them, and then all of their unused gas will just get used up by arb bots.

Tom (01:19:00):

That’s helpful. Ed, let’s switch over to you.

Ed (Arbitrum) (01:19:05):

Yeah, on the first part, the interaction between chains: I think there’s a bunch of interesting innovation happening there. People are trying out different methods, and I think over time we’ll have an increasingly robust set of methods for cross chain coordination. The main event right now seems to be L2 to L1 interaction. That’s the most immediate and pressing issue for most people, I think, and there’s a ton of great stuff happening there that can reduce the friction and so on in L1 to L2 interactions. And I think that will generalize over time to L2 to L2 interactions. So I’m pretty optimistic as far as that goes. As far as the question of a single chain future versus a multi chain future, in the shorter term I think each of these teams is probably focusing, I know we are, on building out a single chain that has a robust community on it. But then over time, I think a couple of things will happen.

Ed (Arbitrum) (01:20:08):

One thing that will happen is that you’ll see the more successful technologies making a transition to full community governance, because that, I think, is a prerequisite for having broad support over a longer period of time. And so the question of how things get governed, and how you can eliminate points of centralization in these systems, becomes more important over time. The other thing that happens, I think, is that over time you see probably multiple roll-up chains, at least, with a tension: on the one hand, there’s the desire for people to have synchronous composability and to take advantage of the network effect of being on the same roll-up chain as other people. That draws you toward wanting to be together.

Ed (Arbitrum) (01:21:03):

On the other hand, there’s also a desire to decouple things that aren’t really strongly connected to each other, so that you can deal with issues around things like state growth, for example, by having things that are unrelated actually execute on different chains. And so you’ll probably see a few big chains that are focused around different sectors or big applications, and then a longer tail of other chains. And as that cross chain communication and synchronization technology gets better, the cost of being on a different chain from someone else will gradually go down. Not to zero, but to the point where I think you’ll see a bunch of different chains operating over the longer term.

Tom (01:21:50):

Makes a lot of sense, Ed. So guys, we’re at about the 90 minute mark. This was incredible. It went way better than I expected. I mean, having four of the leading teams for L2 scaling on the roll-up side, I thought would be hard to do, but I really appreciate everyone’s time and everyone being respectful. I’m definitely going to have to re-listen to this debate a couple times this weekend to really understand it myself. And I really appreciate everyone’s time. Let’s just go in a circle. We’ll start with Eli (Starkware) again. Just tell everyone where they can follow you and the name of your project again, to remind people on the audio side of things and to link your project to your voice, and we’ll go from there.

Eli (Starkware)(01:22:27):

Sure. So the best way to follow us is to follow StarkWare on Twitter. You can also follow me, Eli (Starkware) Ben-Sasson, the same name as I have here, and I also retweet a lot of stuff from StarkWare. You can also go to our website, starkware.co. And if you’re a developer and want to learn the language in which our L2 is going to be operating, you can search for Cairo lang or Cairo language. That’s our programming language for building on StarkWare’s scaling systems, and that’s the language running our systems. And Tom, terrific episode. Thanks for inviting me.

Tom (01:23:11):

Thanks so much, Eli. Alex?

Alex (zkSync)(01:23:14):

Yeah, so you can easily find us on Twitter under zkSync, just as it’s spelled in the title of the room, and you will find everything from there. And I want to thank Tom, and I want to thank all the participants here for a great discussion. It was on fire, I definitely enjoyed it. I will share the link to the recording everywhere. And thanks as well for building these great solutions. No matter what happens, this ecosystem movement is good for Ethereum. Four amazing solutions being built by really, really smart, really diligent teams can only mean that Ethereum is going to scale very soon. And, yeah, the risks are very low here.

Tom (01:24:04):

Thanks so much, Alex. Yeah, I agree. It’s very hard to get bullish on competing L1 chains when they’re going to run into the same scaling issues as Eth, if they’re centralized, and they’ll be right back here, and you guys are the ones leading the way. Mark (Optimism) and Liam (Optimism) from Optimism, you guys are up.

Mark (Optimism)(01:24:21):

You can keep following Optimism if you just go to optimism.io. You can also follow us on Twitter @optimismPBC. It’s like public benefit corporation. Yeah, and I encourage folks that are interested in hacking: there’s a hackathon going on right now called the Scaling Ethereum hackathon. You can also see a bunch of hacks being worked on, and there are talks and workshops and whatnot for Optimism, and actually all of the other solutions on stage as well. So that’s a great place to dive into more of the details. And finally, I’d also just say, generally, there are ways that you can engage with Ethereum, whether that’s through EIPs or Eth research. We’re all over those things as well. And like you mentioned, trying to catch up to 3M, so you’ll find us there too.

Tom (01:25:04):

Thanks so much guys. Ed, why don’t we close out with you?

Ed (Arbitrum) (01:25:08):

Sure. If you’re a developer, you can go to developer.offchainlabs.com for documentation. General information about Arbitrum is at arbitrum.io. That’s A-R-B-I-T-R-U-M. On Twitter, we are @arbitrum. And let me add a plus one on the Scaling Ethereum hackathon. We were there along with all these other teams, and it’s a great way to see more of the community in operation. And I’d also say, stay tuned for an announcement from us soon.

Tom (01:25:45):

That’s awesome. And Ben, just share a little bit about yourself.

Ben (Mechanism)(01:25:50):

Yeah, thanks, Tom. It was great to be here with all of you. Clearly I’m a bit of the odd one out, because I’m not a founder of a roll-up protocol. But you can find me on Twitter @benjaminsimon97. It’s also the same as my Clubhouse handle. So yeah, it was really great talking with all of you. I learned a lot and I’m looking forward to re-listening to this a couple of times too.

Tom (01:26:08):

Me too, I’m really excited to re-listen, I have my weekend ahead of me. Thanks again, everyone. Hope to have everyone on again soon.

Apr 23, 2021 | 84 minutes | Chain Reaction
