
Original author: Zhixiong Pan
In the "Ethereum Shanghai Upgrade Summit" event, we invited four completely different expansion solution teams in the Ethereum ecosystem to talk about the cutting-edge technology of Ethereum.
Especially EOF and EIP-4844 which will probably be included in the next upgrade (Cancun). In addition, the narrative of decentralized Sequencer and modular blockchain is also a new direction that researchers are more concerned about.
These four teams have their own distinctive features, such as focusing on the field of zero-knowledge proof (especially zkEVM), focusing on WASM and dynamic expansion, focusing on Move language and modularization, and focusing on general-purpose storage. In addition, they also have a lot of differences in other technical details.
Participating in this discussion are:
Dorothy Liu from AltLayer
Jolestar from Rooch Network
Qi Zhou from EthStorage
Ye Zhang from Scroll
TLDR
The EOF upgrade has little impact on application developers, but it does affect Rollups and zkEVMs. There may still be controversy over whether EOF makes it into the next upgrade.
Several approaches to decentralizing the Sequencer were discussed: Byzantine Fault Tolerance (BFT), MEV auctions, shared Sequencers (Flashbots), VDF-style encryption, and more. In addition, fair ordering may not actually be fair, and different applications may need different ordering strategies.
The purpose of EIP-4844 is not so much to expand capacity as to introduce, ahead of time, the concepts required for future Danksharding, including the blob and its data hash, so that no contract upgrades are needed when Danksharding is implemented. EIP-4844 will not bring dramatic improvements over the current way of posting data to Ethereum; the bandwidth is roughly in the same order of magnitude.
Everyone has a very different perspective on modular blockchains: some believe that in the era of fat applications, developers need more choices; others still regard Ethereum as the most important data layer.
Hardcore Learning Materials
The following is the full text of the discussion, which was translated by OpenAI Whisper and processed by GPT-4, with some adjustments and deletions.
Zhixiong Pan:
Topic 1: EOF (Ethereum Object Format)
The Ethereum Object Format (EOF) was originally planned for the Shanghai upgrade but has now been delayed. The essence of EOF is to give EVM bytecode a structured, versioned container format, which makes it easier to upgrade EVM bytecode features and smart contracts in the future.
Dorothy Liu:
Will this upgrade have more impact on the Layer 2 or Rollup ecosystem? In particular, Rollups essentially deploy smart contracts on Ethereum Layer 1, so will this upgrade affect the technical choices and paths of Layer 2?
In the view of the AltLayer team, EOF is not a hugely important innovation. Although it is a meaningful improvement to how EVM bytecode is packaged, it has no direct impact on writing Solidity. From the perspective of application development and general development, the change therefore matters little; many developers may not even be aware of it, and it will not cause them problems.
For Rollups, however, the impact is large, because a Rollup is essentially an execution layer of Ethereum: when Ethereum changes, the Rollup needs to adjust accordingly. The EOF change may have a greater impact on zkEVMs, because zkEVMs are technically harder. They sit around Type 3 in the EVM-compatibility classification, so pursuing higher compatibility requires more effort; EOF may even require a rewrite or very drastic changes on their side.
Ye Zhang:
However, for those of us doing Optimistic Rollup projects, whether written in WASM or EVM, we are Type 1, that is, fully compatible with the EVM. So for us the difficulty is quite low; we need to make some modifications, but they are manageable. In addition, the proofs we generate are all in WASM, so the actual impact on us is not big.
First, I think EOF is indeed an important technical development. A previous view was that a zkEVM does not need to change often because the EVM core is upgraded infrequently. But with EOF, and possibly other upgrades that change the EVM's core logic, there will indeed be an impact on zkEVM. That said, because EOF remains compatible with previously deployed contracts, the direct impact on our existing contracts and developers is relatively small. There may be points that need attention in the future, but at least for our Layer 1 contracts, the impact will not be great. On Layer 2, the degree of impact depends on how much we support EOF. We will definitely lag a little, because we need to support EOF if we want to stay fully consistent with Layer 1's EVM.
In fact, when we built the zkEVM we were already paying attention to upgrades like EOF, and supporting it is easier than expected. Because we adopted a modular design for the whole zkEVM, opcodes can be updated easily. For EOF, the main changes are some version control and new opcodes: we only need to implement the new opcodes and handle versioning properly, for example by adding some tags and variables in the circuit. These changes are at the opcode level and require adding some circuitry. In addition, EOF performs some checks on bytecode at deployment, which affects one sub-circuit of the zkEVM, the bytecode circuit. In that circuit we check the input hash and output the corresponding bytecode, and we may need to add some constraints there to perform these checks.
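To make the deploy-time check concrete, here is a minimal sketch of the EOF container header validation defined in EIP-3540 (magic bytes 0xEF00 plus a version byte), which is the kind of rule a zkEVM bytecode sub-circuit would have to constrain. The function names are illustrative, and real clients also validate section headers and (with EIP-3670) the code itself.

```python
# Minimal sketch of an EOF deploy-time header check (EIP-3540), illustration only.
# Real clients additionally validate section headers and code; a zkEVM would
# express the same rules as constraints in its bytecode circuit.

EOF_MAGIC = b"\xef\x00"     # legacy deployments starting with 0xEF are rejected (EIP-3541)
SUPPORTED_VERSIONS = {1}

def is_eof_container(code: bytes) -> bool:
    """Return True if the bytecode claims to be an EOF container."""
    return code[:2] == EOF_MAGIC

def validate_eof_header(code: bytes) -> int:
    """Check magic and version at deploy time; return the container version."""
    if not is_eof_container(code):
        raise ValueError("not an EOF container (legacy bytecode)")
    if len(code) < 3:
        raise ValueError("truncated container")
    version = code[2]
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"unsupported EOF version {version}")
    # ... section-header and instruction validation would follow here ...
    return version

print(validate_eof_header(bytes.fromhex("ef0001") + b"\x00" * 8))  # -> 1
```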
Qi Zhou:
Overall, I think the changes are necessary, but not to the point of requiring a complete refactor. Ethereum has now put zkEVM in a very important position and is itself driving some of the key technical developments. I know Vitalik and some members of the EIP core teams are following the progress of zkEVM teams, including communicating with us to understand the impact of this upgrade on zkEVM. Every Layer 1 upgrade carries some risk, and upgrades are effectively irreversible, so we want Layer 1 to be as stable as possible: the more stable the settlement layer, the smaller the impact on applications and existing technology, and the more room there is to innovate on Layer 2. However, if Layer 1 really does need to change, I think Layer 2 can adjust accordingly, and the changes will not be particularly large. There will certainly be some delay, because we need to implement the changes and probably audit them, and the audit process can take extra time, which is why we generally want as few changes as possible. But I don't think these changes will have a fatal impact on the overall system or cause serious problems.
Regarding the EVM Object Format (EOF), I noticed that the Ethereum community has covered many related topics in previous discussions. Especially around the Shanghai upgrade, you can see differing attitudes within Ethereum toward the whole upgrade process. For example, Dankrad Feist, the researcher behind Danksharding, has doubts about EOF. His view is that although EOF does not change much for developers and mainly improves contract security, this is not what matters most at the current stage of Ethereum scaling. In fact, EOF is only a very small part of the overall scaling plan, so there has been a lot of controversy here.
So why include EOF in the upgrade plan now? Because EOF has been proposed for a very long time, perhaps four or five years, and possibly longer in earlier internal discussions. After analyzing EOF, we do agree with some of Dankrad's points. Although EOF brings certain security benefits to Ethereum contracts, such as removing the relatively dangerous dynamic jump, in practice, across a large amount of real-world contract development on Ethereum, the compiler already avoids the underlying error-prone patterns. We have therefore not encountered abnormal contract execution caused by such jumps in recent years of development.
Jolestar:
As for whether EOF will make it into the next upgrade, such as Cancun, given that it has some impact on Ethereum's Layer 2 execution layer, I personally think there may be some controversy. Compared with EIP-4844, which we will discuss next, opinions on EOF are still more divided.
Regarding the EVM Object Format (EOF), it mainly brings two optimizations. First, it moves validation of smart contract code forward to deployment time, which is very effective for performance; many projects already take this approach of validating code during the deployment phase. Second, EOF provides an extension mechanism. This particular update may not bring major changes, but once the extension mechanism exists, there may be many extension needs in the future, and you then face a choice between innovative extensions and compatibility. That is a dilemma virtually every software system faces.
In this way, EOF opens the door for future changes. For example, to implement a certain Layer 2 feature, a new version could be added that gives smart contract code new capabilities, at which point compatibility with Layer 1 might be affected. Such a change could indeed be highly controversial. From my point of view, though, innovation and evolution should still take priority at this stage, rather than freezing everything. Of course, for Layer 1 this is a different question, and the judgments for Layer 1 and Layer 2 may differ.
Zhixiong Pan:
Topic 2: Decentralized Sequencer (sequencer)
At present, the Sequencers of most Layer 2 networks are still at an early stage, and most of them are single Sequencers. Some projects plan to upgrade to a decentralized Sequencer in the future.
Is there currently a reasonable design for a decentralized Sequencer?
Could Layer 2 native tokens be a requirement for a decentralized Sequencer? Optimism and Arbitrum, for example, have their own tokens, but those are used only for governance.
Qi Zhou:
So what other cutting-edge or early decentralized Sequencer solutions are worth paying attention to?
Recently we have looked at some related topics, such as Arbitrum's Sequencer. The Arbitrum community faces a big problem: a large number of nodes (something like 10k connections) connect to the Sequencer to get the latest transaction information and arbitrage on it. We often joke internally that this Sequencer resembles the New York Stock Exchange, where many quantitative trading bots sit nearby to obtain market data over fiber as quickly as possible and trade on it. Does this Sequencer model cause us to degenerate into a very centralized system?
Dorothy Liu:
I think how to decentralize the Sequencer is a very important issue. Some solutions I can think of borrow from the Proof of Stake (PoS) mechanism, using the Layer 2 native token as the staking asset. That way we can rotate Sequencers, similar to the secret leader election work Ethereum has been pursuing recently. Combining these techniques, I think we can find some mature ways to solve this problem.
On this question, I can share some personal observations on the development history of Arbitrum and Optimism. At the beginning of last year, I went to Amsterdam for an event about Rollups held by Chainlink, where four teams were invited: Arbitrum, Optimism, zkSync and Metis. Their discussion focused mainly on scaling and performance. I asked them privately about the consensus layer and whether Rollups need to add one. They said they were only considering scaling at the moment. Optimism did not take this issue into account at the beginning, so it is hard to change now.
From our perspective, we designed the decentralized Sequencer from the very beginning of the project. We have run two Dark Forest games and two NFT minting campaigns, both directly against the Ethereum mainnet, using the decentralized Sequencer. In our view, this is not a technical problem but a legacy of historical development: swapping out the engine of a running system is much harder than planning it in from the start of a new project.
On the token question, Arbitrum and Optimism have shown that you can operate without your own token. If you do want a token, you may need designs such as slashing. We will release our decentralized Sequencer network next month, which will be the first decentralized Sequencer network on the market, and it will include designs such as staking and slashing.
Ye Zhang:
Finally, regarding MEV: although MEV and the Sequencer are two separate issues, they are related. A decentralized Sequencer network may solve the MEV problem to a certain extent, but it may never be fully resolved. The Arbitrum team has implemented some measures, such as adding randomness, but they cannot completely solve these problems. We may also propose some MEV solutions, in hardware or otherwise, which will be announced in the future.
We've done a lot of research on Sequencer, and while we haven't announced a specific solution yet, we're definitely working on a design. There are currently two main directions. The first is a scheme based on Byzantine Fault Tolerance (BFT), such as using Tendermint to select a leader to replace the previous centralized Sequencer. This approach requires staking and slashing, possibly issuing its own tokens for PoS, or combining with other restaking or similar mechanisms. The advantage of BFT is that it can provide very fast pre-confirmation and maintain a good user experience.
The second, more promising direction is the MEV auction. Optimism first proposed the concept of an MEV auction: whoever bids the highest gets the right to produce blocks. The disadvantage of this approach is that users may be squeezed by the MEV bots that win the auction.
A similar scheme is the Based Rollup proposed by Justin Drake, whose core idea is to reuse Layer 1 validators to produce Layer 2 blocks. A validator building a Layer 1 block can include the Layer 2 batch transactions in that same block, making the overall bid larger so the block is more likely to be included first. The advantage of this scheme is that it is highly aligned with Layer 1's incentive mechanism; the disadvantage is that it lacks the pre-confirmation that BFT offers, so the user experience may suffer.
Therefore, currently the most promising directions are schemes based on reusing Layer 1 validators, and schemes based on BFT. We are working on solving problems in these directions, such as providing pre-confirmation while reusing Layer 1 validators.
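As a rough illustration of the BFT-style direction described above, the sketch below rotates the sequencer role among a set of staked operators, choosing each slot's leader deterministically in proportion to stake so every node agrees without extra communication. The stake table, names, and parameters are hypothetical; a real design would add voting, pre-confirmations, and slashing.

```python
import hashlib

# Hypothetical stake table: sequencer operator -> staked amount (illustrative only).
stakes = {"seq_a": 400, "seq_b": 250, "seq_c": 350}

def leader_for_slot(slot: int, stakes: dict) -> str:
    """Pick the slot leader with probability proportional to stake.

    Every honest node derives the same seed, so all agree on the leader;
    a real BFT protocol (e.g. Tendermint) adds votes and slashing on equivocation.
    """
    seed = int.from_bytes(hashlib.sha256(f"slot:{slot}".encode()).digest(), "big")
    target = seed % sum(stakes.values())
    running = 0
    for operator, stake in sorted(stakes.items()):
        running += stake
        if target < running:
            return operator
    raise RuntimeError("unreachable")

for slot in range(5):
    print(slot, leader_for_slot(slot, stakes))
```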
There are other directions for decentralizing the Sequencer as well. For example, Flashbots and others have proposed a Shared Sequencer scheme, and Optimism's Superchain has a similar design. The core idea of a shared Sequencer is to use one fixed set of Sequencers to order transactions for multiple chains, enabling cross-chain MEV and improving the cross-chain user experience. However, this design is still at an early stage, and some key issues have not been resolved, such as the load on Sequencer nodes and the atomicity of cross-chain transactions.
Shared Sequencers may be valuable in the future for many specific application scenarios, such as applications holding small amounts of real assets, which may not have the capacity to run a decentralized network of their own. For applications holding large amounts of real assets, however, it can be challenging to convince them to use a shared Sequencer.
Flashbots is trying to implement a shared Sequencer by building a privacy-preserving decentralized network, to address the MEV problem that stems from information access and to return some of the value to users. However, since the scheme has not been fully disclosed, it is hard to assess whether it can overcome the problems mentioned above. Given Flashbots' standing in the MEV space, we will evaluate its proposal in detail once it is made public.
Regarding the need for a token, I think it mainly depends on the consensus algorithm used. If a BFT algorithm is adopted, a token is likely required; if the approach reuses existing Layer 1 validators, a token is not necessarily needed, because other people's resources are being reused. In that case the source of value becomes whoever is staking for you, and MEV flows directly to the network validators. So how a Layer 2 views MEV as a complement to its value capture is the deciding factor.
Tarun Chitra: Ordering So Fair It Ain't Fair Ordering
Regarding how MEV is handled, there is an argument that fair ordering can solve the MEV problem to some extent, but in reality it may only alleviate it, because various bots may still compete for priority, much as traditional trading firms do. Some randomness has been introduced, but the problem is not fully resolved. In fact, one study on MEV pointed out that fair ordering is not fair: by constructing an attack model, the researchers showed that under fair ordering the user experience can actually be worse. To achieve real fairness, it may be necessary to adopt different ordering strategies for different applications.
There are also many interesting experiments. For example, transactions can be encrypted (for instance with VDF- or time-lock-based schemes) and only decrypted after ordering is confirmed, so their contents cannot be known in advance. In short, we are following and improving on these solutions. Although we have some general directions, each direction has its own problems, so we want to conduct rigorous analysis before finalizing a design to ensure it is sound. The Sequencer touches many issues, such as how value flows, and we don't think a perfect solution exists yet. That is why we have placed Sequencer research later in our roadmap.
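To make the "encrypt first, order blind, reveal later" idea concrete, here is a toy commit-reveal sketch in which the ordering is fixed before transaction contents are visible. Production designs use threshold encryption, time-lock puzzles, or VDF-based delays rather than this simplified salted-hash commitment.

```python
import hashlib, os

def commit(tx: bytes) -> tuple:
    """User side: hide the transaction behind a salted hash commitment."""
    salt = os.urandom(16)
    return hashlib.sha256(salt + tx).digest(), salt

def reveal_ok(commitment: bytes, salt: bytes, tx: bytes) -> bool:
    """Anyone can check that the revealed tx matches the earlier commitment."""
    return hashlib.sha256(salt + tx).digest() == commitment

# The sequencer orders opaque commitments, so it cannot front-run based on contents.
c1, s1 = commit(b"swap 100 ETH -> USDC")
c2, s2 = commit(b"transfer 5 ETH to alice")
ordered = [c1, c2]                       # ordering decided while blind

# After the order is final, users reveal and everyone verifies.
assert reveal_ok(ordered[0], s1, b"swap 100 ETH -> USDC")
assert reveal_ok(ordered[1], s2, b"transfer 5 ETH to alice")
print("blind ordering verified")
```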
There is a general feeling now that a decentralized Sequencer is beneficial for censorship resistance. But as I mentioned in today's discussion, it is actually possible, through the bridge, to force a transaction to be included on Layer 2. The catch is that most bridge designs only stipulate consequences if a transaction has not been included after something like a day, whereas most DeFi protocols may liquidate you within the next minute and reject your transaction outright, which is a bad experience for users. So real-time censorship resistance is very important for users and for DeFi.
Another issue is miner extractable value (MEV). For example, Arbitrum currently implements a first-come, first-served policy with a centralized Sequencer. The earlier argument was that if users discovered MEV was being extracted from them, they would leave the network and stop believing in its legitimacy. But the real problem is that once the network effect around a single Sequencer has formed and the ecosystem reaches a certain scale, users have become very dependent on you by the time you start charging or extracting MEV, and it is hard for them to move to other platforms. So I think the threat could come a couple of years from now, when MEV extraction suddenly begins; that is a potential threat.
Additionally, there are compliance issues. If your token is tied to Proof of Stake (PoS), you may face certain compliance risks. And if you are centralized and a government asks you to shut down the Sequencer or intervene in certain matters, decentralization may offer some protection.
Jolestar:
Therefore, we have done a very thorough analysis and invested a lot of energy in this direction. If anyone in the audience is interested in protocol research, please join us; we are recruiting in this area and will release more detailed analysis later.
Regarding introducing BFT for the Sequencer, there is broad consensus that Layer 2 does need to introduce some BFT consensus. What we focus on is what that consensus actually decides. If it directly determined the execution result we could simply let it do so, but we want Layer 2 to provide scalability, so we look at the Sequencer from another angle. What are our main goals? One is safety. In a Layer 2 design, the Sequencer and roles such as the Proposer (or a Prover in a zk design) are different roles. The Sequencer and the Proposer operate separately, and to cheat they would have to collude, for example the Sequencer hiding transactions while the Proposer produces a fake state root. If these roles are separated and taken on by different parties, security is preserved.
Before a batch is submitted to Layer 1, the Sequencer can still adjust the order of transactions, which leaves room for cheating. To eliminate this, we have the Sequencer provide something similar to a fraud proof, which we call a sequence proof: the Sequencer commits to placing a transaction at a certain position. If the final published order is inconsistent with that commitment, the Sequencer can be challenged and punished.
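Below is a rough sketch of the sequence-proof idea described here: the Sequencer issues a binding commitment that a transaction will sit at a given position in a batch, and anyone can later challenge if the published order breaks that promise. The hashing scheme and names are our own illustration, not Rooch's actual protocol.

```python
import hashlib

def order_commitment(tx_hash: bytes, position: int, batch_id: int) -> bytes:
    """What the Sequencer hands the user: a binding promise of (batch, position)."""
    return hashlib.sha256(batch_id.to_bytes(8, "big")
                          + position.to_bytes(8, "big")
                          + tx_hash).digest()

def challenge(commitment: bytes, batch_id: int, published_order: list) -> bool:
    """Return True if no position in the published batch matches the commitment,
    i.e. the Sequencer broke its promise and can be punished."""
    for pos, tx_hash in enumerate(published_order):
        if order_commitment(tx_hash, pos, batch_id) == commitment:
            return False          # promise honored
    return True                   # committed position not found -> cheating

tx = hashlib.sha256(b"user tx").digest()
promise = order_commitment(tx, position=2, batch_id=7)
honest    = [hashlib.sha256(b"a").digest(), hashlib.sha256(b"b").digest(), tx]
reordered = [tx, hashlib.sha256(b"a").digest(), hashlib.sha256(b"b").digest()]
print(challenge(promise, 7, honest))     # False: order matches the promise
print(challenge(promise, 7, reordered))  # True: Sequencer can be punished
```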
Finally, if we really want to introduce BFT voting, it should decide the order of transactions, not their outcome. Existing consensus mechanisms vote on the final execution result rather than on the blockchain's transaction order. In this case we may need to introduce an order- or fairness-sensitive consensus that lets everyone vote on transaction ordering. This is a direction we are currently exploring and one possible path to decentralizing the Sequencer.
Zhixiong Pan:
Topic 3: EIP-4844 (Proto-Danksharding)
Besides EOF, the other important protocol-level upgrade for scaling is EIP-4844, which is especially relevant to Layer 2 teams. The KZG Ceremony was launched at the beginning of the year, and landing the change at the protocol layer may still take some time. For Rollups, ZK Rollups, or scaling in general, how long do you think it will take Layer 2 teams to integrate EIP-4844?
Jolestar:
Regarding the integration of EIP-4844, what difficulties do you think may be encountered? In addition, have you estimated the impact of using EIP-4844 on GAS or overall expansion?
In my understanding, integrating EIP-4844 does not change much relative to the original Rollup scheme. We have been looking for solutions with higher TPS and lower fees, but EIP-4844 alone cannot deliver that. It really just adds a new transaction type, the blob type, while the overall block size is still limited by the base layer. If we want the ideal Layer 2 with a TPS on the order of 100,000, transactions still cannot all be placed directly into Layer 1 blocks.
Dorothy Liu:
Therefore, our goal now is a model in which different protocols can be composed, so that we can choose among protocols that are cheaper, easier to use, and offer higher throughput.
First, the EIP-4844 upgrade is relatively easy for our optimistic (OP) stack to integrate, whereas it may be harder for zkEVMs; Ye Zhang can give a more professional answer on that. I also want to mention a protocol called Monad, whose founder is Keone; you can follow him on Twitter. Keone, formerly a developer at Jump, is very strong mathematically. He posted on Twitter his prediction for the launch of EIP-4844, estimating that Arbitrum's TPS could rise to about 160; whether that number is significant is up to the beholder, and whether his calculation method has room for improvement is debatable. But we believe EIP-4844 can only improve performance to a certain extent; ultimately it depends on combining EIP-4844 with Rollups, and it may take multiple Rollups to improve performance. EIP-4844 by itself does not solve much.
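For context, here is a back-of-envelope version of that kind of estimate using the EIP-4844 parameters (a target of three 128 KB blobs per 12-second block). The bytes-per-rollup-transaction figure is an assumption chosen for illustration, so the resulting TPS is indicative only.

```python
# Rough EIP-4844 throughput estimate (order-of-magnitude only).
BLOB_SIZE = 4096 * 32            # 4096 field elements * 32 bytes = 131,072 bytes
TARGET_BLOBS_PER_BLOCK = 3       # EIP-4844 target (the maximum is 6)
BLOCK_TIME = 12                  # seconds

blob_bytes_per_sec = BLOB_SIZE * TARGET_BLOBS_PER_BLOCK / BLOCK_TIME

AVG_TX_BYTES = 150               # assumed size of one compressed rollup transaction
tps_all_rollups = blob_bytes_per_sec / AVG_TX_BYTES

print(f"blob bandwidth: {blob_bytes_per_sec / 1024:.0f} KiB/s")
print(f"~{tps_all_rollups:.0f} TPS shared across all rollups at {AVG_TX_BYTES} B/tx")
# Full Danksharding targets far more blob space per block, which is where the
# roughly 20x figure mentioned later in the discussion comes from.
```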
Ye Zhang:
For us, the service we provide is called "Rollup as a Service". As Ye Zhang mentioned, many applications (such as games and DeFi protocols) need high throughput, and when their composability requirements are not high, they can run on their own Rollup. These Rollups will share a decentralized Sequencer network, and the Prover and Validator networks will also be decentralized. That is the premise of our current service offering, and we will share more details over the next month. So we believe that EIP-4844 or a single Rollup alone cannot solve every problem; the key is a large number of Rollups serving different projects and application scenarios.
Regarding EIP-4844, we are working on it, and it will definitely reduce some data costs. There was a recent discussion on Twitter about the data costs of Polygon and zkSync. Both we and Polygon currently post transactions' raw data directly on-chain; Optimism and Arbitrum compress it to some extent before posting, while zkSync and StarkWare use a more space-efficient state-diff mode. For high-frequency operations on the same account, state diffs can save a lot of space, and an analysis of zkSync's data after adopting state diffs found that it does save a certain amount of cost. However, even if data costs fall further after EIP-4844 or sharding, we still prefer to post the raw transaction data, because that lets others re-execute the transactions as soon as they see your data and obtain stronger guarantees.
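To illustrate why state diffs save space for high-frequency activity on the same accounts, the toy comparison below posts either every raw transaction or only the final value of each touched storage slot. The byte sizes are made-up constants for illustration; the qualitative trade-off is the point.

```python
# Toy comparison of "post raw transactions" vs "post a state diff".
# 1,000 trades by the same trader against the same pool.
txs = [{"to": "pool", "slot": "pool_balance", "value": i} for i in range(1_000)]

RAW_TX_BYTES = 120               # assumed size of one compressed raw transaction
raw_cost = len(txs) * RAW_TX_BYTES

# A state diff keeps only the final value of each storage slot that changed.
final_state = {}
for tx in txs:
    final_state[(tx["to"], tx["slot"])] = tx["value"]

DIFF_ENTRY_BYTES = 64            # assumed size of one (address, slot, value) entry
diff_cost = len(final_state) * DIFF_ENTRY_BYTES

print(f"raw tx data : {raw_cost:,} bytes")    # 120,000 bytes
print(f"state diff  : {diff_cost:,} bytes")   # 64 bytes
# The trade-off: raw data lets anyone re-execute and check the result immediately,
# while diffs trade that property for a much smaller on-chain footprint.
```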
We hope that once data costs fall, adding some compression may achieve an effect similar to state diffs, but for now we prefer raw transaction data. As for how EIP-4844 affects us, it touches two parts. One is our bridge: we have begun exploring a new bridge design under EIP-4844, there will be some impact, and a new specification needs to be written. The other is our circuit: under the new format we cannot access the blob data directly, only a small commitment, so we need to prove the opening of this commitment inside the circuit. There is definitely some overhead for the circuit, but we think it is doable.
Qi Zhou:
Implementing EIP-4844 also involves a field mismatch, since the blob data is committed on another curve and may not be in the same format as our native data. A long time ago Vitalik proposed the concept of a proof of equivalence, but it was later found that differing fields still cause some problems. Dankrad and Vitalik have proposed a clever way to incorporate the commitment opening into the circuit. We think this change is definitely coming and will take time, but it is not particularly complicated and is feasible. We need to coordinate with when Layer 1 ships the change and then adjust accordingly; until then we will keep focusing on the current system.
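A toy sketch of the proof-of-equivalence idea mentioned above: interpret the data as a polynomial, derive a random point from both commitments via Fiat-Shamir, and check that the blob-side opening and the circuit-side evaluation agree at that point. The prime field and the sha256 "commitments" are stand-ins purely for illustration; the real construction uses KZG over BLS12-381 and the point-evaluation precompile.

```python
import hashlib

P = 2**61 - 1  # toy prime standing in for the BLS12-381 scalar field

def eval_poly(coeffs: list, z: int) -> int:
    """Evaluate the data, read as polynomial coefficients, at point z (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * z + c) % P
    return acc

def fiat_shamir_point(*commitments: bytes) -> int:
    """Derive the random evaluation point from both commitments."""
    return int.from_bytes(hashlib.sha256(b"".join(commitments)).digest(), "big") % P

data = [7, 13, 42, 99]                                   # the blob's field elements
kzg_commitment   = hashlib.sha256(b"kzg:"   + bytes(data)).digest()   # stand-in
snark_commitment = hashlib.sha256(b"snark:" + bytes(data)).digest()   # stand-in

z = fiat_shamir_point(kzg_commitment, snark_commitment)

y_from_blob  = eval_poly(data, z)   # blob side: verified via a KZG opening on L1
y_in_circuit = eval_poly(data, z)   # circuit side: recomputed from the circuit's copy

assert y_from_blob == y_in_circuit
print("both commitments open to the same data at the random point")
```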
We did a lot of research on EIP-4844. In fact, the purpose of EIP-4844 is not to expand capacity, but to realize a set of concepts required for future Danksharding, including Binary Large Object (blob) and its Data Hash. Data Hash can be accessed in the contract, and these concepts are implemented in advance so that no contract upgrade is required when implementing Danksharding. EIP-4844 will not bring significant improvements over the current Ethereum data on-chain method. We have made preliminary estimates that the bandwidth they bring is basically in the same order of magnitude.
However, according to Danksharding's specification, the throughput level is about 20 times higher. Therefore, assuming we achieve a speed of 100 TPS on EIP-4844, using Danksharding can theoretically reach 2000 TPS, or even higher. The Ethereum community, including Vitalik Buterin and the Danksharding team, is very concerned about the EIP-4844 upgrade, because once it is upgraded, the next major upgrade of Ethereum will not need to upgrade the entire contract system.
Our storage contract is designed directly around EIP-4844, which actually makes the development and the storage proof simpler. With the data hash that EIP-4844 provides, the commitment we need has essentially been pre-computed by the system, which is a very friendly storage primitive for us.
There can be challenges for technologies like ZK and Optimistic Rollups, especially around how the data is passed. Ethereum already provides some tools, including random point evaluation, that can bring blob data back into calldata, and we need challenge games to prove that these transactions are correct.
We plan to provide some common libraries, similar to the OpenZeppelin library, to facilitate various operations on data blobs on EIP-4844. This will bring security and efficiency advantages in terms of auditing and gas consumption.
In summary, EIP-4844 is innovative and has great potential for the operation of the entire data layer of Ethereum. For those interested in EIP-4844, it is recommended to study Ethereum related code and parameter design to better understand how to use this technology.
Zhixiong Pan:
Topic 4: Modular Blockchain
Jolestar:
The last topic is modular blockchains. This is a hot topic closely tied to the base chains themselves, and it can be called a new trend. It has been discussed since early last year, and a prevailing view is that the blockchain may gradually be split into layers such as consensus, execution, DA, and settlement. Against this background, will Layer 2 protocols face greater challenges and competition, and will the competitive landscape of the whole industry change significantly?
I think the modular blockchain is an evolution of the Layer 2 idea. Since Layer 2 already moves execution from Layer 1 to Layer 2, why not split out other modules and let different systems take them on? This introduces another perspective, taking traditional distributed systems as an example. In traditional distributed systems we have coordination and consensus components such as ZooKeeper and Raft; such a consensus layer is analogous to Layer 1 in a blockchain. But blockchain Layer 1 differs from a traditional consensus layer in that it can run programs. It is as if we were running programs inside ZooKeeper and then found that executing everything under global consensus is too slow, so we need to move execution out in order to scale.
Another perspective is from the application's point of view. When we build applications, we choose different system components to implement the application's functions, such as ZooKeeper or MySQL. We are building applications; it is not that MySQL is scaling ZooKeeper. The modular blockchain idea introduces a similar perspective: how we can combine existing Layer 1 resources with storage and execution functions to build a basic DApp.
Dorothy Liu:
This new perspective does not conflict with existing public chains and their system components. We believe developers first need to choose a suitable language to build applications so as to ensure determinism and verifiability; we chose the Move language for better extensibility. Developers build the application in this language and then think about how to decentralize it. It may not be fully decentralized at the start, but it needs the ability to become decentralized. For example, transactions can be made public so that anyone can verify them: still centralized, but at least verifiable. A protocol can then be plugged in to provide security guarantees as a commitment. This progressive path offers a new way to develop applications that start out centralized.
I very much agree with Jolestar's point of view. Since last year, we have realized that the industry has evolved from fat protocols and thin applications to fat applications and thin protocols. The focus now shifts from the pursuit of decentralized utopia and technological progress to practical application scenarios. With the emergence of more and more new applications such as games and social networking, everyone pays more attention to what blockchain applications can achieve, rather than just expanding Ethereum or advancing a certain technical direction. Although Ethereum is an important L1, with the emergence of different L1s, everyone is willing to accept the trade-offs in the multi-chain era.
As a Rollup-oriented company or team, we do not serve only the Ethereum community. We hope to build a Lego-like technology stack that can be taken apart and recombined to meet the needs of different applications. Some applications may require a larger block limit, some game companies may want a Solana-VM-compatible L2, and someone may want to put DA on Celestia or be compatible with EigenLayer. We want to provide a composable solution and customize it for different applications according to their needs. The L1 can be Ethereum, BNB Chain, or even Solana, or our own Beacon Layer. From a security point of view it is very important to put the proofs on Ethereum, but the transaction data itself does not necessarily need to be placed on Ethereum; moving the transaction data off Ethereum greatly reduces cost.
Qi Zhou:
In this fat-application era, we focus on providing the most suitable scaling solution for each application, rather than a single optimization direction. In the debate between zkEVM and Optimistic Rollups, we can support both sides. We pay attention to the prover cost of zkEVM: for example, Scroll may need to spend $10,000 a month to run a prover, which may be too expensive for a game studio that needs a cheaper proving solution. We can offer a prover setup that runs on a single machine, and provide different technical solutions for different needs. That is our approach.
From a historical perspective, I agree with where modularity is headed; you see it in the OSI layers of networking as well as in computer architecture. If we treat Ethereum, or blockchains generally, as a world computer, we can learn from existing computer systems to make better use of it, and each component represents a huge market. For example, Ethereum Layer 1 can be compared to an early CPU with basic computing power and limited storage, while DA is more like memory.
Between memory and the CPU, we need the data hash and precompiles to move data around. Calldata is similar to registers, and we also need modular plug-ins, like high-performance computing devices such as GPUs. In a blockchain network, the communication of computed data is realized through DA and zero-knowledge proofs. In addition, we need a lot of storage to copy data from memory to the hard disk. Finally, we also need devices like the mouse, keyboard, and monitor, similar to MetaMask and the Web3 access protocols we use today.
Drawing on the experience of mature computer systems, we can imagine a future world-computer architecture. Of course, the hard disk can also be connected to other computers, that is, other Layer 1s. But if a Layer 1 does not have a good DA layer and memory, the speed and bandwidth of data transmission and storage will be severely limited.
Ye Zhang:
A computer cannot boot without a CPU and memory, which is a very critical part. With these basic components in place, we can combine devices such as GPUs, displays, and hard drives. When developing the EthStorage and Web3 protocols, we supported multiple chains and adopted a modular approach to access other networks. These networks can be other Layer 1 or Layer 2 as long as they have basic execution and DA layers. When accessing Layer 2, EthStorage actually becomes Layer 3 to some extent. This is our vision of the overall architecture of the world's computers or blockchains in the future.
I think modular blockchains are really a very important narrative right now, the separation of execution and data. However, at least for us, we currently still see Ethereum as the most important data layer. Regarding the definition of Rollup, there are various interpretations, because various Rollups are emerging. At present, a definition accepted by most of the community is that you need to publish data to Layer 1 at least, even if you do not publish proofs, you still need to publish data to Ethereum, so as to inherit the security of Ethereum. Because most people think that as long as the data is uploaded to the chain, whether through other nodes, full nodes or light nodes, at least one deterministic result can be obtained, and it must be finalized.
Across the various Rollups, the main idea is that whether you believe the result depends on your own choice of full node or light node. An Optimistic Rollup says "challenge me if I'm wrong", while a zkRollup says "here is a proof that I'm right". In the end, all these Rollup designs exist to establish one thing: whether I trust your data, and whether I trust bridging my funds to you.
Therefore, at least for us, we hope that our platform will always trust the data layer of Ethereum and insist on doing a Rollup. We will also work hard to advance the process of Data Sharding. I think the only thing to focus on is the timing, when exactly will it happen and how long it will take. Until then, do we need to take transitional measures? This mainly depends on our estimation of the timeline of data sharding. If we estimate that it will take five years to achieve, then we may have some intermediate steps. But if we think it will happen in a reasonable time frame, we will still stick to this direction.
Excessive modularity may not be a good thing. When we looked at the Layer 2 landscape before, there were perhaps only nine Layer 2s, each with its own security properties: whether the contracts are upgradable, whether the Sequencer is decentralized, whether it is open source, whether it has been audited. These security properties are tricky; for example, if the contracts are upgradable, a non-compliant Rollup could do something malicious in an instant. If the general public does not understand the properties of different Rollups and the trade-offs between them, and there are already a hundred Rollups to choose from, security problems may arise. So I think there may ultimately be a handful of major Rollups that provide strong security, whose properties everyone has some assurance about, and which compose well with each other. On that basis, the pressure on Layer 2 can be relieved, whether by developing application-specific Layer 2s in parallel or deploying Layer 3s on top.
At this time, they can choose to use other data layers or other methods. As long as the community reaches a consensus, they can innovate according to their own ideas.