
Author | Vitalik Buterin
One of the most valuable properties of many blockchain applications is trustlessness: the ability of the application to keep functioning in the intended way without relying on specific participants to behave in a specific way, even when their interests might change in the future and push them to behave unexpectedly. Blockchain applications are never completely trustless, but some applications are much closer to trustlessness than others. If we want to move toward the goal of trust minimization, we first need to be able to compare different degrees of trust.
First, my simple definition of "trust": trust is making assumptions about the behavior of other people. Before the pandemic, you would not deliberately keep two meters away from strangers when walking down the street to prevent someone from suddenly stabbing you. This is a kind of trust: trust, first, that people very rarely lose their minds, and second, that the people who maintain the legal system have strong incentives to deter that kind of behavior. When you run a piece of code written by someone else, you trust that they wrote it honestly (whether out of their own decency or out of a financial interest in maintaining their reputation), or at least that enough people exist who check the code and would find any vulnerabilities. Not growing your own food is another kind of trust: trust that enough people will realize it is in their interest to grow food and sell it to you. You can trust groups of different sizes, and there are different kinds of trust.
To analyze blockchain protocols, I try to decompose trust into the following dimensions (a small code sketch after the list makes them concrete):
How many people do you need to behave as you expect?
How many people are there in total?
What motivations do those people need? Do they need to be altruistic, or merely profit-seeking? Do they need to be uncoordinated?
And how badly does the system fail if those assumptions are violated?
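To make these dimensions concrete, here is a minimal sketch of them as a data structure in Python. The class and field names are my own illustrative choices, not anything defined in the original post:

```python
# A minimal sketch of the four trust dimensions as a data structure.
# All names here are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class TrustAssumption:
    required: int      # how many actors must behave as expected
    total: int         # how many actors there are in total
    motivation: str    # e.g. "altruistic", "slightly altruistic", "rational"
    failure_mode: str  # what breaks if the assumption is violated,
                       # e.g. "liveness" or "safety"

    def label(self) -> str:
        """Render the assumption in the post's 'k of N' notation."""
        return f"{self.required} of {self.total}"

# Example: a blockchain's honest-majority assumption with 10,000 validators.
honest_majority = TrustAssumption(
    required=5000, total=10000, motivation="rational", failure_mode="safety"
)
print(honest_majority.label())  # prints "5000 of 10000"
```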
For now, let's focus on the first two dimensions. Here is a chart:
The darker the green, the healthier the trust model. Let's analyze these categories in detail (a toy classifier in code follows the list):
1 of 1: There is only one participant in the entire system, and the system works properly if that one actor behaves as you expect. This is the traditional "centralized" model, and it is the model we want to improve on.
N of N: A "dystopian" world. All participants in the system have to behave as expected for the system to function properly, and we have no remedy if any of them fail.
N/2 of N: This is how blockchains work: if the majority of miners (or PoS validators) are honest, the blockchain functions properly. Note that "N/2 of N" becomes significantly more valuable the larger N is; a blockchain whose miners/validators are widely distributed is far more meaningful than one controlled by only a handful of them. Still, we want to improve on even this level of security, because of the possibility of 51% attacks.
1 of N: There are many actors, and the system functions properly as long as at least one of them behaves as expected. Any system based on fraud proofs falls into this category, as do trusted setups, although in that case the value of N is usually smaller. Note that we really do want N to be as large as possible!
Few of N: The system functions properly as long as at least some small, fixed number of actors behave as expected. Data availability checks fall into this category.
0 of N: The system does not need to rely on external actors to function properly. Validating blocks yourself falls into this category.
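As a toy illustration, the categories above can be read as a function of k (the number of actors required) and N (the number available). The thresholds below are my own rough reading of the categories, not a formal definition from the post:

```python
# A toy classifier mapping "k honest actors required out of N total"
# onto the categories above. Illustrative only.
def trust_category(required: int, total: int) -> str:
    if required == 0:
        return "0 of N"      # e.g. validating blocks yourself
    if total == 1:
        return "1 of 1"      # traditional centralization
    if required == total:
        return "N of N"      # everyone must behave; no fallback
    if required == 1:
        return "1 of N"      # e.g. fraud proofs, trusted setups
    if required >= total // 2:
        return "N/2 of N"    # e.g. honest-majority blockchains
    return "few of N"        # e.g. data availability checks

assert trust_category(0, 10000) == "0 of N"
assert trust_category(1, 100) == "1 of N"
assert trust_category(10, 1000) == "few of N"
assert trust_category(5000, 10000) == "N/2 of N"
```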
Although all models other than "0 of N" involve some degree of "trust", there is a huge difference between them! Trusting that one particular person (or organization) will behave as expected is a completely different situation from trusting that some single person, anywhere, will behave as expected. "1 of N" is arguably much closer to "0 of N" than it is to "N/2 of N" or "1 of 1". One might think the "1 of N" model resembles the "1 of 1" model because both seem to rely on a single actor, but the two are in fact very different: in a "1 of N" system, if the actor you are currently relying on disappears or turns malicious, you can simply switch to another one, whereas in a "1 of 1" system you have no alternative.
In particular, note that even the software you run often depends on a "few of N" trust model for its correctness: you are trusting that at least a few people have checked the code for bugs. Knowing this, trying to push the rest of your application from a "1 of N" model down to a "0 of N" model is like installing an armored security door on your home while leaving the windows open.
Another important distinction: if your trust assumptions are violated, how much damage is done to the system? On blockchains, the two most common types of failure are liveness failures and safety failures. A liveness failure means that you are temporarily unable to perform some operation (for example, withdraw coins, get a transaction included in a block, or read data from the chain). A safety failure is an event the system was explicitly designed to prevent (for example, an invalid block being added to the blockchain).
The following is a list of the trust models adopted by some blockchain layer 2 protocols (summarized as data in the sketch after the list). I use "small N" to refer to the set of participants in the layer 2 system itself, and "big N" to refer to the participants of the underlying blockchain; my assumption is that the layer 2 community will always be smaller than that of the underlying blockchain. Also, I use "liveness failure" specifically to refer to situations where funds cannot be withdrawn for an extended period. Being unable to use the system but able to withdraw funds almost instantly does not count as a liveness failure.
"Channels" scheme (Channels, including state channels, Lightning Network, etc.): use a "1 of 1" trust model to ensure liveness (your counterparty can temporarily freeze your funds, but you can distribute funds in multiple channels) risk reduction in ), using the "N/2 of big N" model for security (possibility of losing funds in a 51% attack).
Plasma (centralized operator): a "1 of 1" trust model for liveness (the operator can temporarily freeze your funds), and an "N/2 of big N" model for safety (you could lose funds in a 51% attack).
Plasma (semi-decentralized operators, e.g. DPOS): an "N/2 of small N" trust model for liveness, and an "N/2 of big N" model for safety.
Optimistic rollup: a "1 of 1" or "N/2 of small N" trust model for liveness (depending on the operator type), and an "N/2 of big N" model for safety.
ZK rollup: a "1 of small N" trust model for liveness (if the operator fails to include your transaction, you can withdraw, and if the operator does not include your withdrawal immediately, they cannot produce more batches, and you can withdraw with the help of any full node of the rollup system); there is no safety failure risk.
ZK rollup (with light-withdrawal enhancement): no liveness failure risk and no safety failure risk.
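The list above can also be restated as data. The sketch below is only a transcription of the models just listed, using "small N" for the layer 2's own participant set and "big N" for the underlying blockchain's; the variable name and layout are my own:

```python
# The layer 2 trust models listed above, restated as a lookup table.
LAYER2_TRUST_MODELS = {
    "channels (state channels, Lightning Network)": {
        "liveness": "1 of 1", "safety": "N/2 of big N"},
    "Plasma (centralized operator)": {
        "liveness": "1 of 1", "safety": "N/2 of big N"},
    "Plasma (semi-decentralized, e.g. DPOS)": {
        "liveness": "N/2 of small N", "safety": "N/2 of big N"},
    "optimistic rollup": {
        # liveness depends on the operator type
        "liveness": "1 of 1 or N/2 of small N", "safety": "N/2 of big N"},
    "ZK rollup": {
        "liveness": "1 of small N", "safety": "none"},
    "ZK rollup (light-withdrawal enhancement)": {
        "liveness": "none", "safety": "none"},
}

for scheme, models in LAYER2_TRUST_MODELS.items():
    print(f"{scheme}: liveness={models['liveness']}, safety={models['safety']}")
```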
Finally, there is the question of motivation. Do the participants you trust need to be very altruistic, only mildly altruistic, or simply rational, in order to behave as expected? Searching for fraud proofs is "slightly altruistic" by default, though how altruistic it needs to be depends on the complexity of the computation (see the "verifier's dilemma" for details), and there are many ways to modify the game to make it rational.
Helping others withdraw from a ZK rollup is rational if we add a mechanism to pay for the service, so there is little need to worry about being unable to exit a rollup that sees significant use. Meanwhile, the risks that other systems face from a 51% attack can be mitigated if the community agrees not to accept an attacking chain that rolls back too much history or censors blocks for too long.
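As a back-of-the-envelope sketch of why a payment mechanism makes helping rational (my own illustration, with hypothetical numbers): any one of the N full nodes will process an exit as long as the attached fee exceeds its cost of doing so, so no altruism is required:

```python
# Illustrative only: a rational (profit-seeking) node helps a user exit
# a rollup iff the attached fee exceeds its own cost of submitting the exit.
def will_process_exit(fee: float, processing_cost: float) -> bool:
    return fee > processing_cost

# With even one rational node among N, a sufficiently paid exit gets processed.
assert will_process_exit(fee=0.02, processing_cost=0.01)       # processed
assert not will_process_exit(fee=0.005, processing_cost=0.01)  # ignored
```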
Conclusion: when someone says that a system "relies on trust", we can dig into exactly what they mean! Do they mean a "1 of 1" model, a "1 of N" model, or an "N/2 of N" model? Does the system require participants to be altruistic, or merely rational? If altruism, how costly is it for the participant? If an assumption is violated, how long do you have to wait to get your funds back? A couple of hours? A couple of days? Or are they frozen forever? Depending on the answers to these questions, our own answer to whether we want to use the system may be very different.