
This article comes from a16z crypto. The original author is Guy Wuollet; compiled by Odaily translator Katie Gu.
In practice, many teams struggle to find the "right" token design for their project. The industry lacks a tested design framework, so each new wave of builders runs into the same challenges as its predecessors. Fortunately, there are also a few early examples of successful token designs. Most effective token models have unique elements specific to their goals, but most flawed token designs share common bugs. This article therefore discusses why we should think in terms of token research and design, not just "token economics," and lists seven common pitfalls to avoid.
#1 Clarify the goals of the token design
The biggest problem in token design is building a complex token model before the goals are clear. The first step should be to define the goal and make sure the whole team fully understands it: what it is, why it matters, and what you are really trying to accomplish.
Failing to rigorously define goals often results in redesigns and wasted time. Clarity of purpose also helps avoid the "designing a token economy for its own sake" problem, which is common in token economics work.
Also, the goal should not revolve around the token itself, a point that is often overlooked. Examples of clear goals include:
Design a game with a token model that allows for optimal scalability and supports modeling.
A DeFi protocol hopes to design a token model that reasonably allocates risks among participants.
Designing a reputation protocol that guarantees money cannot directly replace reputation (e.g., by decoupling liquidity from reputation signals).
Design a storage network that keeps files available with low latency.
Design a staking network that provides maximum economic security.
Design a governance mechanism that elicits true user preferences or maximum engagement.
The list goes on. Tokens should support whatever use case and goal you have, not the other way around. So how do you start defining a clear goal?
Well-defined goals are often derived from the "project mission". While "project missions" tend to be high-level and abstract, goals should be concrete and reduced to their most basic form.
Let's take EIP-1559 as an example. Tim Roughgarden stated a clear goal for EIP-1559: "EIP-1559 should improve the user experience through easy fee estimation, in the form of an 'obvious optimal bid,' outside of periods of rapidly increasing demand."
Good goal statements like this have a few things in common: they state a high-level goal, provide a relevant analogy to help others understand it, and then outline the design alternatives that best support it.
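As a concrete illustration of the EIP-1559 goal above, here is a minimal sketch (in Python, with simplified constants) of the base-fee update rule: the protocol adjusts a published base fee toward a per-block gas target, so that outside of demand spikes a user's "obvious optimal bid" is simply the current base fee plus a small tip.

```python
# Minimal sketch of the EIP-1559 base-fee update rule (simplified, illustrative values).
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8   # base fee moves by at most ~12.5% per block
ELASTICITY_MULTIPLIER = 2             # blocks may use up to 2x the gas target

def next_base_fee(base_fee: int, gas_used: int, gas_limit: int) -> int:
    gas_target = gas_limit // ELASTICITY_MULTIPLIER
    delta = gas_used - gas_target
    # The fee rises when blocks are fuller than the target and falls when they are
    # emptier, which is what makes "current base fee + small tip" an easy, near-optimal bid.
    adjustment = base_fee * delta // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR
    return max(base_fee + adjustment, 0)

fee = 10_000_000_000  # 10 gwei
for _ in range(5):
    # Persistently full blocks push the base fee up until demand backs off.
    fee = next_base_fee(fee, gas_used=30_000_000, gas_limit=30_000_000)
    print(fee)
```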
#2 Evaluate existing work from first principles
When creating something new, it's a good idea to start with what already exists. When you evaluate existing protocols and the existing literature, evaluate them objectively on their technical merits, not on valuation, popularity, or other superficial measures; those factors may have little to do with whether a token model can achieve its stated goals, and leaning on them can send builders down long detours. If you assume other token models work properly when in reality they don't, you may end up building a token model that is flawed from the start.
#3 Clarify your assumptions
Articulate your assumptions. When you're focused on building a token, it's easy to take basic assumptions for granted. It's also easy to misstate the assumptions you're actually making.
Take, for example, a new protocol that assumes its hardware bottleneck is computational speed. Making that assumption part of the token model (for example, by limiting the hardware cost required to participate in the protocol) can help align the design with the desired behavior.
Articulating your assumptions makes it easier to understand your token design and verify that it works. You also can't test assumptions you haven't made explicit.
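One lightweight way to keep assumptions visible is to pull them out of the logic and into named, documented values. The Python sketch below is purely illustrative (the names and numbers are hypothetical), but it shows how an assumption like "participation is bottlenecked by compute speed, not hardware cost" can live in one place where it can be reviewed, questioned, and tested.

```python
from dataclasses import dataclass

# Hypothetical, illustrative assumptions for a compute-bound protocol. Keeping
# them in one documented place makes them easy to review, question, and test.

@dataclass(frozen=True)
class ProtocolAssumptions:
    # Assumption: participation is bottlenecked by compute speed, not hardware price.
    hashes_per_second_commodity_gpu: float = 5e8
    # Assumption: the hardware needed to participate costs less than this (USD).
    max_hardware_cost_usd: float = 2_000
    # Assumption: demand for the protocol grows no faster than this per year.
    annual_demand_growth: float = 0.5

ASSUMPTIONS = ProtocolAssumptions()

def hardware_cost_within_assumption(hardware_cost_usd: float) -> bool:
    """Encodes the hardware-cost assumption as an explicit, testable check."""
    return hardware_cost_usd <= ASSUMPTIONS.max_hardware_cost_usd

print(hardware_cost_within_assumption(1_500))  # True under the assumed budget
```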
#4 Validate your assumptions
There is a saying: "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so."
Token designers can validate their assumptions in a number of ways. Rigorous statistical modeling, often in the form of agent-based models, can help test them. Assumptions about user behavior can often also be tested by talking to users, and preferably by observing what people actually do (as opposed to what they say they do). An incentivized testnet, which produces empirical results in a sandboxed environment, is especially useful here. Formal verification or intensive auditing will also help ensure that the code base behaves as expected.
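An agent-based model can be as simple as a loop over simulated participants who decide whether to stake or unstake at each step under a candidate reward schedule. The Python sketch below is a deliberately toy example (every parameter and behavioral rule is an assumption, not data), but it shows the kind of harness designers can use to stress-test a hypothesis before shipping it.

```python
import random

# Toy agent-based model: does a candidate reward rate keep enough tokens staked?
# Every number and behavioral rule here is an assumption that would need to be
# replaced with measured data or user research before drawing conclusions.

def simulate(steps=100, agents=1_000, reward_rate=0.05, sell_pressure=0.03, seed=0):
    rng = random.Random(seed)
    staked = [rng.random() < 0.5 for _ in range(agents)]  # half start staked
    history = []
    for _ in range(steps):
        for i in range(agents):
            if staked[i]:
                # Assumed rule: stakers unstake with probability = sell_pressure.
                if rng.random() < sell_pressure:
                    staked[i] = False
            else:
                # Assumed rule: non-stakers stake with probability = reward_rate.
                if rng.random() < reward_rate:
                    staked[i] = True
        history.append(sum(staked) / agents)
    return history

share_staked = simulate()
print(f"final share staked: {share_staked[-1]:.2%}")
```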
#5 Clarify "abstraction barriers"
An "abstraction barrier" is an interface between different layers of a system or protocol. It is used to separate different components of a system, allowing each component to be designed, implemented and modified independently. Clear abstraction barriers are useful in all fields of engineering, especially software design, but are even more necessary for decentralized development and large teams building complex systems that cannot be understood by individuals.
In token design, the goal of clear abstraction barriers is to minimize complexity. Reducing dependencies between the different components of a token model results in cleaner code, fewer bugs, and better token design.
As an example, many blockchains are built by large engineering teams. A team might make assumptions about hardware costs over time and use that to determine how many miners contribute hardware to the blockchain at a given token price. If another team relies on the token price as a parameter but doesn't know the first team's assumptions about hardware costs, they can easily make conflicting assumptions.
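One way to build such a barrier is to put the hardware-cost assumption behind an explicit interface, so that any team modeling miner participation consumes the same documented model instead of re-deriving its own. The Python sketch below is hypothetical (the class names, formula, and numbers are illustrative), but it shows the shape of that kind of barrier.

```python
from abc import ABC, abstractmethod

# Hypothetical abstraction barrier: the hardware-cost assumptions live behind one
# interface, so other teams depend on the interface rather than on each other's
# unstated assumptions.

class HardwareCostModel(ABC):
    @abstractmethod
    def monthly_cost_usd(self, year: int) -> float:
        """Expected cost of running one node for a month in the given year."""

class DecliningCostModel(HardwareCostModel):
    """Illustrative assumption: costs fall 20% per year from a $500/month base."""
    def monthly_cost_usd(self, year: int) -> float:
        return 500.0 * (0.8 ** (year - 2024))

def mining_is_profitable(token_price_usd: float, monthly_reward_tokens: float,
                         cost_model: HardwareCostModel, year: int) -> bool:
    """Teams modeling participation reason only through the cost-model interface."""
    revenue = token_price_usd * monthly_reward_tokens
    return revenue > cost_model.monthly_cost_usd(year)

print(mining_is_profitable(2.0, 300, DecliningCostModel(), 2026))
```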
At the application layer, clear abstraction barriers are critical to enabling composability. As more and more protocols are combined with each other, the ability to adapt, build on, extend, and remix will only become more important. Larger compositions bring greater possibility, but also greater complexity. When applications want to compose, they must understand the details of the protocols they compose with.
By creating clear abstraction barriers, token designers can more easily predict how a particular change will affect each part of the token design. Clear abstraction barriers also make it easier to scale a token or protocol and create a more inclusive and scalable Builder community.
#6 Reduce reliance on external parameters
External parameters are not inherent to the system, but they can affect its overall performance and success; examples include the cost of computing resources, transaction volume, or latency. A token model may look sound when it is created, but if it only works while those parameters stay within a limited range, it can exhibit unexpected behavior once they drift outside it.
To take another example, decentralized networks often rely on cryptographic algorithms or computational puzzles that are difficult, but not impossible, to solve. Difficulty often depends on an exogenous variable, such as how fast a computer can compute a hash function or a zero-knowledge proof. Consider a protocol that makes assumptions about how fast a given hash function can be computed and pays token rewards accordingly. If someone invents a faster way to compute the hash function, or simply has outsized resources relative to their actual work in the system, they can earn unexpectedly large token rewards.
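A minimal sketch of how that failure mode shows up in a reward schedule: if the protocol hard-codes an assumed hash rate to convert "puzzles solved" into "work done," a solver who is 10x faster than assumed is paid 10x the intended reward. All values below are illustrative assumptions, not taken from any real protocol.

```python
# Illustrative only: a reward schedule that bakes in an assumed (exogenous) hash rate.
ASSUMED_HASHES_PER_SECOND = 1e9        # assumption about participants' hardware
TOKENS_PER_SECOND_OF_WORK = 0.01       # intended reward for one second of honest work

def reward_for_solution(hashes_performed: float) -> float:
    # The protocol infers "seconds of work" from the assumed hash rate.
    inferred_seconds = hashes_performed / ASSUMED_HASHES_PER_SECOND
    return inferred_seconds * TOKENS_PER_SECOND_OF_WORK

# A solver whose hardware is 10x faster than assumed performs 10x the hashes in
# the same wall-clock second, and so collects 10x the intended reward.
print(reward_for_solution(1e9))    # intended payout for one second of work: 0.01
print(reward_for_solution(1e10))   # same second on 10x-faster hardware: 0.1
```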
#7 Revalidate assumptions
Designing a token means designing an adversarial system: user behavior will change as the way the token works changes. A common mistake is adjusting the token model without checking that arbitrary user actions still produce acceptable outcomes. Don't assume user behavior will stay the same after a token model change.
Usually this kind of mistake happens late in the design process: someone spends a lot of time defining the token's goals, defining its functionality, and validating that it works as intended. They then discover an edge case and change the token design to accommodate it, but forget to revalidate the entire token model. Fixing one special case produces another (or several other) unintended consequences.
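In practice, that means rerunning the same checks after every change, not just once. A lightweight, hypothetical version of this habit is to keep the token model's invariants as automated tests over the model (the function and numbers below are stand-ins, not a real protocol), so a late "special-case fix" cannot silently break a property that was already validated.

```python
# Hypothetical regression check: re-run the token model's invariants after every
# design change. "total_supply_after" stands in for whatever agent-based or
# statistical model the team already uses to validate the design.

def total_supply_after(params: dict, steps: int) -> float:
    emitted = params["emission_per_step"] * steps
    burned = params["burn_per_step"] * steps
    return params["initial_supply"] + emitted - burned

def test_supply_never_exceeds_cap():
    params = {
        "initial_supply": 1e9,
        "emission_per_step": 1.5e4,
        "burn_per_step": 1e4,
        "supply_cap": 2e9,
    }
    # Invariant: supply stays under the cap over the whole design horizon.
    # Re-run this after any "special case" change to the emission or burn rules.
    for steps in (10, 1_000, 100_000):
        assert total_supply_after(params, steps) <= params["supply_cap"]

test_supply_never_exceeds_cap()
```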