How does the Internet Computer lead the Web3 era from its technical foundations?
星际视界IPFSNEWS
2021-09-26 09:44
This article is about 12,246 characters; reading the full text takes about 49 minutes.
With the gradual popularization of mobile devices for communication and visualization, people are flocking to the concepts of Web3 and the metaverse.

But how do we reach this new world? There is no clear answer yet. Over five years of exploration, DFINITY has pursued the path of the "Internet Computer." Will it prove an effective solution?

Introduction

DFINITY is a non-profit organization headquartered in Switzerland. All of its income can be used for only one purpose: the research, development, and promotion of the Internet Computer, a decentralized open-source network project. Although the project is led by DFINITY, its governance system has been live since launch day, and the network's physical nodes are independently operated by many third-party operators.

The entire project belongs to the holders of its governance token, that is, to the whole community. DFINITY will continue to participate in the platform's development and promotion as a major technical contributor, but we are only one contributor among many. In just over three months since launch, many other community teams have already joined in. The platform's development is inseparable from the community's contributions, and further decentralization is our main goal at the moment.

As the creator of the Internet Computer platform, DFINITY's vision is the "blockchain singularity": every application that can run on the Internet should be built with blockchain technology.

To achieve this, we have added a protocol layer built on blockchain consensus technology, above the TCP/IP layer and below the application layer, which we call the Internet Computer Protocol (ICP). This protocol builds virtual subnets by exchanging data among multiple physical nodes (computers).

The nodes inside a subnet reach consensus on inputs and outputs, verify each other's computation results, and can communicate with other subnets. Multiple subnets combine into one virtual computer whose capacity grows as subnets are added. Anyone can run programs on it, access other people's programs, and so on.


But this sounds no different from our current Internet, especially given the concept of microservices. So why can't the current Internet be called an Internet Computer?

The difference lies in the ICP protocol. Its purpose is to ensure that all programs execute correctly, that their state cannot be tampered with, and that when one program calls another, it can trust the call will be carried out correctly. Because the current Internet lacks this protocol layer, every program has to solve cumbersome problems such as availability, reliability, and mutual authorization on its own, which brings all kinds of incompatibilities and security burdens.

Question 1

The Internet Computer provides a new paradigm for building programs and has its own set of "jargon." Can you briefly introduce these terms, and which parts of the infrastructure do you think are most useful to developers?

Answer: We can look at this from several angles; let's start with the end user's perspective.

Accessing an application on the Internet Computer is basically the same as accessing an ordinary website, and users pay no fees; as with traditional cloud services, the cost is borne by the project. Most other blockchains charge users gas fees and require pre-installed wallet software, which is a relatively high barrier to entry.

The cost of running an application, including compute and storage, is measured in cycles, the native token of the Internet Computer. The price of cycles is pegged to the SDR: 1 SDR = 1 trillion cycles. The SDR's value is a weighted basket of currencies set by the International Monetary Fund, including the US dollar and the RMB, and is relatively stable.
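As a concrete illustration of the peg just described, here is a small sketch converting a cycles bill into SDR and US dollars. The 1 SDR = 1 trillion cycles rate comes from the text above; the USD/SDR exchange rate and the sample bill are invented figures for illustration, not live quotes.

```python
CYCLES_PER_SDR = 1_000_000_000_000  # 1 trillion cycles per SDR (the peg above)

def cycles_to_sdr(cycles: int) -> float:
    return cycles / CYCLES_PER_SDR

def cycles_to_usd(cycles: int, usd_per_sdr: float) -> float:
    # usd_per_sdr must be supplied externally; the SDR floats against the dollar
    return cycles_to_sdr(cycles) * usd_per_sdr

monthly_bill_cycles = 2_500_000_000_000      # hypothetical 2.5T-cycle bill
print(cycles_to_sdr(monthly_bill_cycles))    # 2.5 (SDR)
print(cycles_to_usd(monthly_bill_cycles, 1.42))  # 3.55 (USD at a hypothetical rate)
```

Because the peg is to a currency basket rather than to a volatile token, a project's compute budget stays roughly stable in fiat terms.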

Returning to the user's point of view: users do not have to care about cycles at all. However, many applications need to handle user login, so the Internet Computer also provides an anonymous identity management system we call Internet Identity. It is built entirely on web standards, and users do not need to install wallet software to use it.

All of this lowers the barrier for users, so that blockchain applications can really reach a broad audience. Internet Identity mainly solves the problem of one identity logging in across multiple devices. Moreover, each application receives a different identifier for the same user, which prevents a user's behavior from being maliciously tracked across applications.
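The per-application identifiers can be illustrated with a toy derivation: one master secret yields a different, unlinkable identifier for each application origin. This is only a model of the idea; it is not Internet Identity's actual key derivation, and the seed and origins below are invented.

```python
import hashlib

def app_principal(master_seed: bytes, app_origin: str) -> str:
    """Derive a per-application pseudonym from one master secret (toy model)."""
    return hashlib.sha256(master_seed + app_origin.encode()).hexdigest()[:16]

seed = b"user-master-secret"               # hypothetical user secret
id_a = app_principal(seed, "https://app-a.example")
id_b = app_principal(seed, "https://app-b.example")

assert id_a != id_b                        # the two apps see unrelated identifiers
assert id_a == app_principal(seed, "https://app-a.example")  # stable per app
```

Because neither app can compute the other's identifier without the master secret, correlating one user's activity across applications is infeasible in this model.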

Finally, users may also be interested in participating in the Internet Computer's governance. This happens through a neuron-based voting system called the NNS, one of our innovations. It also lives at the application level, but it has special authority: it can manage all Internet Computer subnets and every aspect of the system, including the code running on nodes, version upgrades, creating new subnets, admitting new nodes, and so on.

To participate in voting, you first need to hold ICP tokens and lock up a certain amount to obtain a neuron. Voting weight depends on the amount locked, the length of the lock-up, and the age of the neuron. Voting is also rewarded, and the reward does not depend on whether you vote for or against; a neuron can also follow other neurons' decisions and vote automatically. Overall, these settings are designed to align users' voting behavior with the platform's long-term interests and to reward users for their contributions.
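As an illustration of how stake, lock-up length, and neuron age might combine into voting weight, here is a hedged sketch. The bonus multipliers and caps below are illustrative assumptions, not the NNS's actual parameters.

```python
def voting_power(stake_icp, lock_years, age_years,
                 max_lock_bonus=1.0,   # assumed: full lock-up doubles weight
                 max_age_bonus=0.25,   # assumed: full age adds 25%
                 lock_cap=8.0, age_cap=4.0):
    """Toy voting-weight formula: stake scaled by lock-up and age bonuses."""
    lock_bonus = max_lock_bonus * min(lock_years, lock_cap) / lock_cap
    age_bonus = max_age_bonus * min(age_years, age_cap) / age_cap
    return stake_icp * (1 + lock_bonus) * (1 + age_bonus)

print(voting_power(100, lock_years=8, age_years=0))  # 200.0: full lock bonus
print(voting_power(100, lock_years=8, age_years=4))  # 250.0: plus full age bonus
```

The shape of the formula is the point: weight grows with commitment, so a voter's influence is tied to how long their interests are bound to the platform.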

Having covered the user's perspective, let's look at the developer's.

An application running on the Internet Computer is encapsulated in a lightweight container called a canister. This differs a bit from the Docker containers everyone is familiar with: a canister encapsulates not only code but also automatically persists the container's state. It can be thought of simply as a long-running operating-system process whose state, including memory and message queues, is saved automatically and will not be lost across power cycles. This means the concept of a file system has been stripped from the Internet Computer; developers need not think about reading and writing files or disks to save data, which is a considerable simplification.

Another thing developers need to understand is that communication between canisters is asynchronous and follows the actor model. Each canister is its own process and communicates with other canisters by sending messages, i.e., asynchronous method calls. Each canister processes its internal message queue on a single thread, so there is no need to think about locks, and each method call is atomic. If you are familiar with actor-model programming, it is easy to get started.
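The single-threaded mailbox model just described can be sketched with asyncio. This is a conceptual toy, not the Internet Computer's runtime: the `Canister` class, the `increment` method, and the message format are all invented for illustration.

```python
import asyncio

class Canister:
    def __init__(self, name):
        self.name = name
        self.mailbox = asyncio.Queue()

    async def run(self):
        # Single consumer: messages are handled one at a time, so each
        # method call is atomic with respect to this canister's state.
        while True:
            method, arg, reply = await self.mailbox.get()
            reply.set_result(method(self, arg))

    async def call(self, target, method, arg):
        # An inter-canister call is a message plus a future for the reply,
        # not a synchronous stack call.
        reply = asyncio.get_running_loop().create_future()
        await target.mailbox.put((method, arg, reply))
        return await reply

def increment(canister, amount):
    canister.state = getattr(canister, "state", 0) + amount
    return canister.state

async def main():
    counter, frontend = Canister("counter"), Canister("frontend")
    worker = asyncio.create_task(counter.run())
    results = [await frontend.call(counter, increment, 1) for _ in range(3)]
    worker.cancel()
    return results

print(asyncio.run(main()))  # [1, 2, 3]
```

Because the counter drains its mailbox sequentially, the three increments can never interleave, which is the lock-free atomicity the text describes.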

To develop an application, the canister usually serves as the backend, while the frontend runs in the browser or as a separate app. As mentioned earlier, the Internet Computer can serve websites directly, which means a canister can implement the HTTP request interface itself and return web pages, including JavaScript, to the user's device. The frontend and backend can be packaged together into one canister and deployed on the Internet Computer.

We have ready-made libraries for frontend development in both JavaScript and Rust. When the frontend needs to call backend code, it simply makes an asynchronous function call with await; the underlying plumbing is handled by library functions. If you want to go deeper, there is an interface description and data encoding format called Candid, with implementations in multiple languages; canisters use Candid to describe their external interfaces and data types.

In general, what developers need to understand revolves around the concept of the canister: WebAssembly, the actor model, orthogonal persistence (automatic persistence), Motoko, and Candid. I also recommend studying the System API, the standard for Internet Computer interfaces: https://sdk.dfinity.org/docs/interface-spec/

This specification is very detailed, covering all aspects of the system, and we have put a great deal of formal effort into defining the semantics of the interface, which helps developers understand the system's behavior in depth.

Question 2

Compared with traditional platforms such as Alibaba Cloud, Tencent Cloud, and AWS, how is the Internet Computer different? Those are also cloud services built by companies, and they likewise use data centers, remote backups, and multi-node operation.

Answer: Current cloud service platforms all rest on one premise: you must rely on the platform provider to keep the platform secure, maintain network connectivity and uninterrupted computing, prevent data loss, and so on.

Although most of the time there is no conflict between a commercial platform's own interests and those of the users it serves, they are not perfectly aligned. There is a concept everyone should be familiar with, platform risk, so I won't belabor it here.

But the most important point is that these cloud infrastructure providers do not want to become commodities (replaceable goods), and they do their best to retain and lock in customers.

The Internet Computer exists first of all as a decentralized network. Its nodes are all operated by third parties and run in different data centers, and management of the actual network is handed over to users rather than dominated by node operators or data centers.

So there is no centralized business organization making all the decisions. The governance system is designed from a long-term perspective, aiming to keep user interests aligned with the platform's development. The platform pays the node operators, and it does not matter at all which particular operator runs a given node; it is a free market. So for the Internet Computer, infrastructure such as hardware and networking has become a commodity (a replaceable good).

Looking back at the history of the PC industry, we can see that infrastructure (such as PC hardware) becoming a commodity is an inevitable law of history, and I believe cloud services will be no exception.

It can be said that a computing platform like the Internet Computer has been decoupled from its hardware infrastructure. Such a business model would be unimaginable without decentralization and blockchain technology, and the fact that it can become reality today is the best testament to the progress of the times.

Along the way from Bitcoin to Ethereum, some people have turned negative on this emerging field after seeing only price hype and Ponzi schemes. In fact, the change of era is just around the corner.

Beyond aligned interests, another aspect is using more advanced technology to simplify system redundancy, saving costs for the whole platform, which in turn means savings for users.

We also mentioned many advantages of trustworthy computation earlier. There are also the advantages of distribution and of cutting-edge cryptography: they mean many traditional maintenance tasks, such as firewalls, are basically no longer necessary. To use today's cloud platforms well, a customer must invest heavily in operations and maintenance; the Internet Computer can save a great deal of cost in this area.

The third point is tokenization of applications. This can be said to be the next, unstoppable trend in Internet applications. Traditional cloud providers at best offer bridging components to blockchains, and a complete architecture built that way is inevitably bloated. Since the Internet Computer can run websites and applications directly, as a native blockchain it integrates tokenization very easily.

Question 3

Every smart contract on the Internet Computer is "scalable." Specifically, how does this scaling work at the technical level? Are there any examples of it in practice?

Answer: Scalability has several dimensions: one is storage space, another is network traffic, and another is computing power, i.e., how many transactions can be processed per unit time. Whether something is scalable mainly depends on whether known bottlenecks can be bypassed. On a public platform, we must also consider how to allocate limited resources among different users and applications.

The main design idea of the Internet Computer is to scale out, that is, to relieve bottlenecks by adding resources and creating new subnets. This idea is basically the same as that of mainstream web applications. When one canister cannot handle all user requests, it is reasonable to use multiple canisters at the application level, each handling part of the requests. In other words, you need to take this into account when designing the application, or at least leave open the possibility of migrating to such an architecture. As far as I know, OpenChat is designed around multiple canisters, and DSCVR leaves room for this as well, though it still centers on a single canister.
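The multi-canister scale-out pattern just described can be sketched as a hash-based router. The canister names and routing rule below are illustrative assumptions; a production design would likely prefer consistent hashing so that adding a shard moves as few users as possible.

```python
import hashlib

class ShardedService:
    """Route each user to one of several backend canisters by hashing the
    user id. The canister names are invented placeholders."""

    def __init__(self, shard_ids):
        self.shard_ids = shard_ids   # one backend canister id per shard

    def shard_for(self, user_id: str) -> str:
        digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
        return self.shard_ids[digest % len(self.shard_ids)]

service = ShardedService(["users-0", "users-1", "users-2"])
# Deterministic: a given user always lands on the same shard canister.
assert service.shard_for("alice") == service.shard_for("alice")
assert service.shard_for("alice") in service.shard_ids
```

The application decides the partitioning, which is exactly why "leaving room" for multiple canisters in the initial design matters: retrofitting a routing layer later is much harder than planning for one.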

At the system level, scaling across canisters makes it possible to surpass the current 4 GB memory limit. For compute, the guiding principle is concurrency: we did not choose Ethereum's globally atomic design, so different canisters process their own messages in their own threads, and as long as the hardware load allows, one canister does not affect the performance of others. As for the network, bandwidth essentially sets the ceiling on scaling; no blockchain can avoid this physical bottleneck. The only remedy is sharding, which for the Internet Computer corresponds to separate subnets.

Of course, various system-level optimizations can also improve performance, and we have been working on this, hoping to make full use of the hardware.

Question 4

What types of dapps are best suited to run on it? We found that there are relatively few DeFi protocols on the Internet Computer at present. What directions might dapps on the Internet Computer take in the future?

Answer: DeFi mainly needs liquidity to drive it. For security reasons, canisters are not yet allowed to transfer ICP, which also limits liquidity. However, this restriction is temporary. The network has been stable since launch, and I believe the restriction will be lifted through an NNS vote at an appropriate time. Many developers are already prepared, and an explosion of DeFi applications is only a matter of time.

Personally, I am very optimistic about the social dapps currently on the Internet Computer. Once this category gains the boost of tokenization, it will grow very rapidly, certainly not lagging behind DeFi or NFT games. Other blockchains also have dapps with social features, but they all suffer from onboarding friction; after all, the step of using a wallet correctly has stumped many users. Dapps on the Internet Computer use standard web technologies and can be accessed from any browser.

Another direction I am optimistic about is applications for individual users and small and medium-sized enterprises: project management, file sharing, the creator economy (podcasts, vlogs, web articles, etc.). Although relatively mature solutions exist on the Internet, platform risk is always present; I mentioned the platform risk of cloud services earlier, and I believe everyone has felt the monopoly of giants in various other fields. A decentralized architecture now offers a new possibility: the platform itself should become transparent, instead of sitting at the top of the food chain and devouring users' interests through one-sided terms.

In the final analysis, which direction has a future depends on whether its applications can quickly accumulate value. Value here does not mean how much is locked in your project, because that amount can change at any time; it means how many connections you have established with users and other applications. Those connections become more valuable as trust deepens and usage grows. Code can be copied, but these connections cannot. Used properly, tokens can accelerate the accumulation of value to some extent, but in the end it comes down to the project's intrinsic value.

Question 5

The canister, as a container running on WebAssembly, hosts the on-chain runtime environment for dapps. What's new with canisters recently?

Answer: Just this Monday, DFINITY released its development roadmap and welcomes community participation: https://dfinity.org/roadmap. The items related to canisters are:

1. Stable memory expansion

2. Threshold ECDSA signatures for canisters

3. Applying AMD SEV to protect data privacy

Capacity expansion currently focuses on stable memory, that is, memory that survives code upgrades. It used to be constrained by the Wasm virtual machine's 4 GB limit, but that constraint can now be lifted; the new ceiling is the subnet's total memory, currently about 300 GB.

Threshold ECDSA signatures mean, simply put, that each canister can sign data without the private key being stored anywhere, the signature can be verified against a public key, and each canister gets its own unique public key. This builds on the Chain Key technology we have already deployed and has a wide range of applications; for example, a canister can directly construct and sign a Bitcoin or Ethereum transaction.

Things that used to require handing a private key to a program in a trusted private environment can now be done in a decentralized one. The same capability can also be used for issuing SSL certificates, custom DNS domain names, and so on.
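To make "signing without any party holding the full private key" concrete, here is a toy n-of-n split-key scheme. This is a classroom sketch only: the Internet Computer uses real threshold ECDSA/BLS with distributed key generation and tolerance for offline signers, none of which this toy has. All constants and messages are invented.

```python
import hashlib, random

P = 2**127 - 1   # prime modulus of a toy multiplicative group
Q = P - 1        # exponents are reduced mod P - 1 (Fermat's little theorem)
G = 5

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % Q

# Each of three nodes picks its own key share; the full private key
# (the sum of the shares) is never assembled anywhere.
shares = [random.randrange(1, Q) for _ in range(3)]

public_key = 1
for s in shares:
    public_key = public_key * pow(G, s, P) % P   # product of per-node pubkeys

def partial_sign(share: int, msg: bytes) -> int:
    return share * h(msg) % Q        # each node signs with only its share

def combine(partials) -> int:
    return sum(partials) % Q         # partial signatures add up to the full one

def verify(sig: int, msg: bytes, pub: int) -> bool:
    # G^(s*h) == (G^s)^h, so one public key checks the combined signature
    return pow(G, sig, P) == pow(pub, h(msg), P)

msg = b"hypothetical bitcoin tx"
signature = combine(partial_sign(s, msg) for s in shares)
assert verify(signature, msg, public_key)
assert not verify(signature, b"tampered tx", public_key)
```

The verifier only ever needs the single combined public key, which is the property that lets external chains and browsers check a subnet's output cheaply.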

The use of AMD SEV technology is mainly to protect canister data privacy to a degree, so that even node operators cannot snoop on user data. We have been preparing for this, and it is quite difficult. Fortunately, the hardware current nodes use already supports SEV, so I hope it will be a smooth upgrade.

Question 6

"Open internet services" can expose permanent APIs, allowing developers to build on other services' data or functionality with confidence, without risk of the API being revoked. How are open internet services deployed on the Internet Computer?

Answer: The easiest way to provide a permanent API is to make a canister's code unmodifiable by setting its controller list to the empty set.

I personally also made a very simple canister called blackhole. Its main purpose is to let other canisters set it as their controller: not only does their code become unmodifiable, but blackhole also provides extra query functions, such as checking a canister's cycles balance or the hash of its code. The blackhole's own controller is set to itself, and its code is public, so it is easy to verify the correctness of the code hash. If you need your canister to be trusted by others, setting its controller to blackhole is a neat approach.
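The controller mechanism described above can be modeled in a few lines. The class and field names are invented; the point is only that upgrades are gated on membership in a controller set, so emptying that set (or pointing it at an inert blackhole canister) freezes the code.

```python
class ManagedCanister:
    """Toy model of canister controllers; names are illustrative."""

    def __init__(self, code, controllers):
        self.code = code
        self.controllers = set(controllers)

    def upgrade(self, caller, new_code):
        # Only a current controller may replace the code.
        if caller not in self.controllers:
            raise PermissionError(f"{caller} is not a controller")
        self.code = new_code

app = ManagedCanister(code="v1", controllers={"dev-principal"})
app.upgrade("dev-principal", "v2")   # allowed while the developer is a controller
app.controllers = set()              # "blackholed": empty set freezes the code
try:
    app.upgrade("dev-principal", "v3")
except PermissionError:
    pass
assert app.code == "v2"              # the later upgrade was refused
```

Pointing the controller at a blackhole canister rather than the empty set trades nothing in immutability but gains the read-only inspection endpoints the text mentions.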

But if you still need the ability to upgrade code, you need to introduce community governance. The Service Nervous System (SNS) we are developing allows an application to create neurons by locking tokens and then vote to manage all aspects of the application, including code upgrades.

Of course, the SNS is still under development, and there are no live examples yet. It is also only one candidate solution; the community has already made other attempts in this area, and I believe they will gradually mature.

Question 7

Security is an important issue for computers. What mechanisms does the Internet Computer use to replace functions such as firewalls? In terms of tamper resistance, how does DFINITY compare with other blockchain foundations?

Answer: One of a firewall's main functions is to prevent hackers from penetrating the system and gaining intranet access, and thereby stealing or tampering with data. First of all, dividing permissions into internal and external networks is itself quite fragile: once the perimeter is breached, every default permission on the intranet is exposed to the attacker. That is why many companies have abandoned this approach in favor of setting permissions per service and authorizing users through unified identity management.

The Internet Computer's counterpart is its identity management: a public key corresponds to a user identity, and each canister can obtain its caller's identity. Whether it is a user calling a canister or one canister calling another, this identity cannot be forged by a third party. The reason is that every such call passes through the consensus protocol; for a cross-subnet call in particular, both the initiating and the responding subnets must reach consensus, and the call is accepted and executed only after verification.

Quickly and efficiently verifying any subnet's signature requires the Chain Key technology we developed, which keeps the threshold-signature public key unchanged while supporting dynamic joining and removal of nodes. No other blockchain can currently do this, so the Internet Computer leads in transaction verification, and its subnets basically do not need to synchronize data with each other (apart from the necessary subnet and node public keys).

If you want to tamper with data on the Internet Computer, breaking into a single node is not enough; you must control more than 2/3 of the nodes in a subnet. So a subnet's security depends to some extent on its number of nodes, and the irregular rotation of nodes strengthens it further. Even if one subnet were compromised, it could not impersonate other subnets, so the scope of any loss is contained.
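The 2/3 figure above translates into simple arithmetic. Assuming, per the statement in the text, that forging a subnet's output requires strictly more than two thirds of its nodes:

```python
import math

def nodes_needed_to_forge(n: int) -> int:
    """Smallest number of corrupted nodes exceeding 2/3 of an n-node subnet
    (an illustration of the 2/3 claim above, not an exact protocol constant)."""
    return math.floor(2 * n / 3) + 1

for n in (7, 13, 28):
    print(f"{n}-node subnet: attacker needs {nodes_needed_to_forge(n)} nodes")
```

So a hypothetical 13-node subnet would require 9 simultaneously corrupted nodes, which is why larger subnets and node rotation raise the cost of an attack.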

Ensuring that data is authentic and tamper-proof is one thing; protecting data privacy is another. Most blockchains hold purely public data, so there is no privacy protection. True privacy protection can be achieved at the application level with technologies such as homomorphic encryption, but current efficiency is insufficient. So our current plan is to apply AMD SEV to encrypt at the hardware level; however, the overall security of the Internet Computer does not depend on hardware, and SEV's guarantee is a bonus.

Question 8

The DFINITY name actually appeared as early as six years ago. Although the road to mainnet launch was relatively slow, we can see that the DFINITY team really wants to do something disruptive, and the conviction is very strong. What factors drove the shift from "Ethereum's sister chain" to "world-scale Internet computer"?

Answer: The slogan of the "World Computer" was first put forward by Ethereum, and it has inspired many people, although Ethereum is now more focused on DeFi and digital assets. The direction of a "world-scale Internet computer" has always been DFINITY's goal; it is not a route changed after fundraising.

At first, constrained by the team's resources, we had clear innovations only in BLS and consensus protocols, so the first step was to start there: launch a chain and iterate gradually. But we then realized that without solving cross-subnet communication we would stay stuck in the rut of "yet another blockchain," and it would be hard to innovate. It was the team's persistence that produced the Chain Key breakthrough, which solved cross-subnet verification and delivered on the promise of scalability.

Looking back, we really only needed to keep asking ourselves one question: why can't a decentralized blockchain run a website?

First we had to solve an efficiency problem: visiting a website requires millisecond-level responses. How can that be done? Our answer is to separate read-only queries from state modifications, so that the 99% of network traffic that is read-only can be answered in milliseconds. For state modifications, innovations in the consensus protocol bring the response time down to two or three seconds.
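The read/write split just described can be modeled as follows. This is a conceptual sketch, not the actual replica code: queries read already-certified state immediately, while updates become visible only after a (here, simulated) consensus step.

```python
class Subnet:
    """Toy model of the query/update split; names are illustrative."""

    def __init__(self):
        self.state = {}
        self.pending = []                 # updates wait for consensus

    def query(self, key):
        # Read-only path: answered immediately from local state, no consensus.
        return self.state.get(key)

    def submit_update(self, key, value):
        # Write path: queued into the next block.
        self.pending.append((key, value))

    def finalize_block(self):
        # Stand-in for a consensus round: order and apply pending updates.
        for key, value in self.pending:
            self.state[key] = value
        self.pending.clear()

net = Subnet()
net.submit_update("greeting", "hello")
assert net.query("greeting") is None      # not yet agreed upon
net.finalize_block()
assert net.query("greeting") == "hello"   # visible after consensus
```

Because the overwhelming majority of web traffic takes the query path, most requests never pay the consensus latency at all.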

With efficiency achieved, how do you verify the correctness of the content, and how do you let ordinary browsers do it too? The conditions required for verification must be simplified. Can we discard historical blocks and rely on just a public key? How do we handle dynamically changing nodes with a BLS public key? How do we deal with centralized domain names and SSL certificates? How do we expand capacity when traffic grows? Where are the bottlenecks and limits of that expansion? What if the need to scale conflicts with synchronous contract calls?

As long as you keep asking questions and looking for answers, a practical solution will gradually emerge. That is what DFINITY has been doing these past few years.

Question 9

Ethereum has just completed the EIP-1559 upgrade, taking its first step toward deflation, and its token price has gradually risen. For decentralized infrastructure, which do you think matters more: token performance that incentivizes supporters, or technological disruption? How can the two be balanced?

Answer: Here is how I see it. A token's short-term performance depends on market participants' confidence and expectations; long-term performance must return to the value of the platform itself. Ethereum's technology can be said to have passed the test of time, and despite various shortcomings it has been recognized by the entire cryptocurrency market. As for deflation versus inflation, each has its drawbacks; I cannot quite agree with the rhetoric of BTC maximalists. DeFi's innovations in liquidity and incentives are also very exciting, but in the long run most projects do not actually add value and are more of a numbers game. Users acquired through a rising token price can likewise be lost to a falling price or to the rise of another project.

Technological innovations are also easily copied by competitors. Taken as a whole, though, these innovations keep pushing the industry forward. For a single project, it is genuinely hard to say whether it can profit from pure technological innovation. Everyone in the industry talks about ecosystem building, but how much protection an ecosystem actually gives a platform, and especially how to convince developers to commit to a fledgling platform, is no easy matter.

I think the direction most worth pursuing is expanding the circle. From payments and transfers, to DeFi, to NFTs and games, it has been a process of continuous expansion. Under this broad trend, we should try to extend blockchain technology into wider fields, such as the goal of running native websites on the blockchain, and use technological innovation and token incentives together to acquire new users, prosper the ecosystem, and grow value.

Question 10

Many people regard the Internet Computer as the main arena for Web3 applications. Each public chain has its own insights and technical paths toward Web3, such as Polkadot and Ethereum. What are DFINITY's insights and future plans/roadmap for the road to Web3?

Answer: DFINITY's aim is to throw off all unnecessary baggage and head for the destination of the blockchain singularity. The Internet Computer still has many imperfections, and there is some distance to go before this goal is fully realized. We hope more people will join in, advancing the platform's own technology and building ever more colorful projects on top of it to win over users.

Each public chain has a different focus. We believe that everything that can be built with blockchain will eventually be built with blockchain, so we do not rule out combining with other public chains' technologies. For example, the roadmap we released on Monday includes deep integration projects with Ethereum and Bitcoin, a perfect complement for both sides. This will further stimulate cross-chain asset flow and integration, simplify application architectures, and shed the centralized burden of cloud services, improving applications' overall security and robustness.

Running websites is an important step, but only the first step for the Internet Computer. I believe the foundation being laid today will become part of the grand puzzle of the blockchain singularity.

Question 11

What is a canister signature? Where does a canister store the private key used for signing? Do canisters support an event mechanism like Ethereum smart contracts, which can be subscribed to in order to observe update calls? Is the caller obtained from the return value? Finally, when will ordinary canisters be able to handle ICP tokens?

Answer: A canister signature means signing a canister's computation result (or contract state) under the subnet's public key. At present we use BLS threshold signatures, which have the nice property that the public key and signature are unique, something other aggregate-signature schemes lack (BLS can also do aggregate signatures, which we use in the protocol as well).

Threshold signatures, simply put, work like this: different nodes each sign the computation result with their own private keys. Once enough signatures (the threshold) are collected, a unique threshold signature can be assembled that verifies under a single public key, and that public key is treated as the subnet's public key. There is no corresponding subnet private key; each node's private key is stored separately and is different.

Many canisters run on one subnet. Using a Merkle tree, it is easy to obtain a path to any one canister's computation result, so the subnet's signature plus this path can be regarded as that canister's signature over a piece of data.
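The root-plus-path construction can be sketched with a standard Merkle tree. The hashing layout below is a generic textbook scheme, not the Internet Computer's exact state tree: the subnet signs only the root, and a single canister's result is checked against that root with a short sibling path. The canister results are invented examples.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])              # pad odd levels
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes from one leaf up to the root."""
    level, path = [H(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2 == 0))  # (sibling, we-are-left)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    node = H(leaf)
    for sibling, leaf_is_left in path:
        node = H(node + sibling) if leaf_is_left else H(sibling + node)
    return node == root

results = [b"canister-0: ok", b"canister-1: 42", b"canister-2: err"]
root = merkle_root(results)                  # this is what the subnet signs
path = merkle_path(results, 1)
assert verify(b"canister-1: 42", path, root)
assert not verify(b"canister-1: 43", path, root)
```

A browser therefore needs only the subnet's public key, the signed root, and a logarithmically short path to check one canister's output, with no block history at all.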

A canister signature is to some extent equivalent to an event log or receipt. Since we do not require nodes to keep all historical blocks, maintaining them solely for an event log would not make much sense; after all, the same functionality can be achieved, more powerfully, with query calls and certified variables.

Community Questions

Question: I've found that ICP developers prefer building social applications. Why is the ICP well suited to social applications, or why do developers like building them on it?

Answer: As I understand it, the aggregation of value begins with the aggregation of people. So once a blockchain platform can gather people directly, social projects are bound to be born on it. But in the current web environment it is not easy to produce a viral social hit, so these projects will certainly try different approaches. There is no fixed playbook for tokenization, and I look forward to seeing the innovations that emerge.

Question: I have a worry about development. For example, canister A calls canister B, and canister B calls canister C. If canister C updates its state and returns a value to B normally, but B then fails, A's call also fails. However, C's state has already changed. Is there a recommended solution to this kind of atomicity problem?

Answer: Ethereum's atomic design is that once any contract on the call stack rolls back its state, all contracts on the stack must roll back. This implies a global lock: processing one user transaction locks every contract involved until it completes, and during that time the locked contracts cannot process any other transactions. Such a design is convenient for developers, but its inherent defect is that performance cannot scale. So we abandoned this approach when we first designed the canister model.

If you need this kind of composition, traditional databases already have very mature solutions, such as two-phase commit. This can be achieved at the application level by agreeing on a standard canister interface; it does not necessarily need system-level support.
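As an illustration of that application-level approach, here is a minimal single-process sketch of the two-phase commit pattern (all names hypothetical; on the IC the prepare/commit/abort messages would be inter-canister calls, with the timeouts and persistence this toy omits):

```python
class Participant:
    """A party holding state that should change only if everyone agrees."""

    def __init__(self, name):
        self.name = name
        self.state = 0
        self.pending = None

    def prepare(self, delta):
        # Phase 1: stage the change tentatively and vote yes.
        self.pending = delta
        return True

    def commit(self):
        # Phase 2 (success): make the staged change permanent.
        self.state += self.pending
        self.pending = None

    def abort(self):
        # Phase 2 (failure): discard the staged change.
        self.pending = None

def two_phase_commit(delta, participants):
    """Apply delta to every participant atomically, or to none of them."""
    votes = [p.prepare(delta) for p in participants]   # phase 1: collect votes
    if all(votes):                                     # phase 2: commit or abort
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.abort()
    return False
```

Because each participant stages its change before anyone commits, a single "no" vote leaves every state untouched, which is exactly the A-B-C rollback guarantee asked about above.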

There are always trade-offs in system design. Sometimes it is better to add restrictions, sometimes better to provide choices. Our view is that a single method call being atomic is the right granularity, and there is no need to force the whole call stack to be atomic together.

Question: I am a newcomer to crypto, and I find the DFINITY ecosystem wallets too difficult to use. Will this change in the future?

Answer: First of all, Internet Identity (II), the current so-called wallet, is not required; it is an option. I don't use it in my own apps. Secondly, my understanding is that browser support across devices is fairly good, but supporting native mobile apps is harder and requires further R&D. Users on Android phones in China that cannot install Google Play services will not be able to use WebAuthn; this also needs a suitable solution, which we are investigating.

Question: I'm a developer building applications on the Internet Computer. When will the backend be able to make external http/https calls? Also, because of the 4 GB limit per canister, I have to implement my own distributed storage, so when will BigMap be released, or is there a better way to do scalable file storage?

Answer: External http/https calls should be considered separately for reads and writes. The former requires an oracle; the latter is entirely feasible as long as the other side handles re-entrancy correctly.

Question: Can a DApp on Ethereum be ported directly to the IC?

Answer: Solidity can be compiled to Wasm, but the programming model is different (for example, the granularity of atomicity), and the system interfaces that need to be supported differ as well, so some work is required to support this.

Question: A consensus question: does the consensus within a subnet count as PoS? If a subnet node misbehaves during consensus, is there a punishment mechanism?

Answer: It is PoS, though nodes are currently not required to stake tokens. Because there is an admission mechanism, nodes cannot be anonymous, so a penalty mechanism is relatively easy to implement. Nodes regularly receive tokens as wages, and the current penalty mechanism is to deduct them.

Question: A Stoic wallet mnemonic cannot be imported into the Plug wallet; the Stoic developers told me the two chose different encryption algorithms. The official wallet's mnemonic is different again. Will there be a unified standard for this? Will it be possible to derive multiple wallets from one mnemonic, as on Ethereum?

Answer: This does indeed cause an uncommon problem, so in the short term users can only be reminded to note which mnemonic phrase belongs to which wallet. In the long run, we hope the community can agree on a common standard.

If you are interested in the DFINITY ecosystem, follow the Interstellar Vision (IPFSNEWS) official account and reply "ICP" to join the DFINITY ecosystem discussion group.

——End——
