Appearing at the NVIDIA Conference, how did NEAR suddenly become a leading AI public chain?
Friends of Odaily Planet Daily
2024-03-13 05:22
This article is about 1,394 characters and takes roughly 6 minutes to read.
Backed by a high-performance chain, NEAR's technical extension and narrative pull in the AI direction look far stronger than pure chain abstraction.

Original author: Haotian (X: @tmel0211)

Recently, the news that NEAR founder @ilblackdragon would appear at the NVIDIA AI Conference has drawn a lot of attention to the NEAR public chain, and its price action has been gratifying. Many friends are wondering: isn't NEAR all in on chain abstraction? How did it suddenly become a leading AI public chain? Below I share my observations, along with some background on AI model training:

1) NEAR founder Illia Polosukhin has a long AI background and is a co-author of the Transformer architecture. The Transformer is the foundational architecture behind today's large language models (LLMs) such as ChatGPT, which is enough to show that NEAR's founder had genuine experience creating and leading large AI model systems before founding NEAR.

2) NEAR launched NEAR Tasks at NEARCON 2023, with the goal of training and improving AI models. Simply put, model-training demand-side parties (Vendors) can post task requests on the platform and upload basic data materials, while users (Taskers) take on the tasks and perform manual work such as text annotation and image recognition on the data. After a task is completed, the platform rewards the user with NEAR tokens, and the manually labeled data is used to train the corresponding AI model.

For example, suppose an AI model needs to improve its ability to recognize objects in images. A Vendor can upload a large number of raw images containing different objects to the Tasks platform; users then manually mark the positions of the objects in each image, generating a large volume of "image - object position" data that the AI can learn from on its own to improve its image recognition capability.
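A rough sketch in Python of this task-and-annotation flow. All of the names here (LabelingTask, vendor.near, the reward logic) are illustrative assumptions, not the actual NEAR Tasks API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BoundingBox:
    label: str            # e.g. "dog"
    x: float; y: float    # top-left corner, relative coordinates
    w: float; h: float    # width / height, relative coordinates

@dataclass
class LabelingTask:
    task_id: str
    vendor: str                     # demand side posting the task
    image_url: str                  # raw image uploaded by the Vendor
    reward_near: float              # NEAR tokens paid per accepted annotation
    annotations: List[BoundingBox] = field(default_factory=list)

def submit_annotation(task: LabelingTask, tasker: str, boxes: List[BoundingBox]) -> float:
    """A Tasker submits object positions and earns the task's reward (hypothetical logic)."""
    task.annotations.extend(boxes)
    return task.reward_near

# Usage: a Vendor posts an image, a Tasker marks one object and earns NEAR.
task = LabelingTask("t-001", "vendor.near", "ipfs://raw-image", reward_near=0.5)
reward = submit_annotation(task, "tasker.near", [BoundingBox("dog", 0.12, 0.30, 0.25, 0.40)])
print(f"annotations: {len(task.annotations)}, reward: {reward} NEAR")
```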

At first glance, NEAR Tasks just looks like crowdsourced manual labor providing basic services for AI models. Is that really so important? Here is a bit of background on AI models.

Normally, complete AI model training includes data collection, data preprocessing and annotation, model design and training, model tuning and fine-tuning, model validation and testing, model deployment, and model monitoring and updating. Data annotation and preprocessing are the manual part, while model training and optimization are the machine part.

Obviously, most people assume the machine part carries far more weight than the manual part, since it looks more high-tech, but in reality manual annotation is crucial to the entire training process.

Manual annotation can add labels to objects (people, places, things) in images to help computers train visual models; it can transcribe speech into text and mark specific syllables and phrases to help train speech recognition models; and it can tag text with emotions such as happiness, sadness, and anger to strengthen an AI's sentiment analysis capabilities.
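To make the three cases concrete, here are illustrative examples (not any specific platform's schema) of what such manually annotated records might look like:

```python
# Image annotation: object labels plus pixel bounding boxes drawn by a human.
image_annotation = {
    "image": "street_001.jpg",
    "objects": [
        {"label": "person", "bbox": [34, 80, 120, 260]},   # x, y, w, h in pixels
        {"label": "car",    "bbox": [300, 150, 220, 140]},
    ],
}

# Speech annotation: human transcript with word-level time alignment in seconds.
speech_annotation = {
    "audio": "clip_017.wav",
    "transcript": "turn left at the next intersection",
    "segments": [
        {"word": "turn", "start": 0.00, "end": 0.31},
        {"word": "left", "start": 0.31, "end": 0.62},
    ],
}

# Sentiment annotation: an emotion label chosen by a human annotator.
sentiment_annotation = {
    "text": "The delivery was late again and nobody answered my calls.",
    "emotion": "anger",
}
```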

It is not hard to see that manual annotation is the foundation of machine deep learning. Without high-quality annotated data, a model cannot learn effectively; and if the volume of annotated data is too small, model performance is also limited.

At present, many vertical AI niches are built by doing secondary fine-tuning or specialized training on top of large models such as ChatGPT. Essentially, on top of OpenAI's data, new data sources, especially manually labeled data, are added to train the model.

For example, if a medical company wants to train a model for medical imaging AI and offer hospitals an online AI consultation service, it only needs to upload a large amount of raw medical imaging data to the Tasks platform and have users complete the manual annotation tasks; fine-tuning and optimizing a large model such as ChatGPT with this data can turn a general-purpose AI tool into an expert in that vertical field.
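A minimal sketch of the general idea of "fine-tune a pretrained model with manually annotated data". Here a pretrained ResNet stands in for the general-purpose base model, and the image tensors and labels are placeholders rather than real medical data:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

NUM_CLASSES = 3                                   # e.g. normal / benign / malignant
model = models.resnet18(weights="IMAGENET1K_V1")  # general-purpose pretrained weights
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new head for the vertical task

# Placeholder for images annotated by Taskers: 32 fake 224x224 RGB scans plus labels.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (32,))
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(2):                            # a couple of passes for illustration
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```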

However, the Tasks platform alone is obviously not enough for NEAR to lead the AI public chain narrative. NEAR also provides AI Agent services in its ecosystem that automatically execute users' on-chain behaviors and operations: with a single authorization, users can let the agent buy and sell assets in the market on their behalf. This is somewhat similar to the Intent-centric approach, using AI-automated execution to improve users' on-chain interaction experience. In addition, NEAR's powerful DA (data availability) capabilities let it play a role in tracing AI data sources, tracking the validity and authenticity of AI model training data.
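A hypothetical sketch of that intent-centric agent flow. The account names, Intent fields, and execution stub are assumptions for illustration, not NEAR's actual agent or RPC interface:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Intent:
    account: str          # user's NEAR account, e.g. "alice.near"
    goal: str             # natural-language goal

@dataclass
class Action:
    contract: str         # contract the agent calls on the user's behalf
    method: str
    args: dict

def plan(intent: Intent) -> List[Action]:
    """An AI agent would translate the intent into concrete on-chain calls."""
    # Placeholder planning logic; a real agent would query prices, routes, etc.
    return [Action("dex.example.near", "swap", {"amount_in": "10", "token_out": "usdc"})]

def execute(intent: Intent, authorized: bool) -> None:
    if not authorized:
        raise PermissionError("user has not granted the agent authorization")
    for action in plan(intent):
        # Stub: in practice this would sign and submit a transaction for the user.
        print(f"{intent.account} -> {action.contract}.{action.method}({action.args})")

execute(Intent("alice.near", "swap 10 NEAR to USDC at the best price"), authorized=True)
```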

In short, backed by a high-performance chain, NEAR's technical extension and narrative pull in the AI direction look far stronger than pure chain abstraction.

When I analyzed NEAR's chain abstraction half a month ago, I already saw the advantages of NEAR's chain performance plus the team's strong web2 resource integration capabilities. I did not expect that before chain abstraction even caught on, NEAR would first reap the dividends of this AI wave, opening up room for imagination once again.

Note: In the long run, what matters is still NEAR's layout and product progress in chain abstraction; AI will be a nice bonus and a bull market catalyst!

Original link
