
Summary
Cryptocurrency fraud has entered a new era driven by AI deepfakes, social engineering, and fraudulent project packaging. This report, jointly produced by Bitget, SlowMist, and Elliptic, analyzes the most common fraud methods seen from 2024 to early 2025 and proposes joint defense measures for users and platforms.
The three most dangerous fraud types currently are:
1. Deepfake impersonation – synthetic videos used to promote fake investments;
2. Social engineering fraud – including Trojanized job offers, phishing bots, and fake staking schemes;
3. Modern Ponzi schemes – disguised as DeFi, NFT, or GameFi projects.
Modern fraud is shifting from exploiting technical vulnerabilities to attacking trust and psychology. From wallet hijacking to multi-million-dollar cons, attacks are becoming more personalized, more deceptive, and more covert.
In response, Bitget launched the "Anti-Scam Hub" action page, upgraded its platform protection systems, and partnered with SlowMist and Elliptic to trace stolen funds on-chain, dismantle phishing networks, and flag cross-chain fraud.
The report includes real case analyses, a fraud red-flag checklist, and protection guidelines for users and institutions.
Core conclusion: when AI can perfectly replicate anyone, the security baseline must start with skepticism and end with collective defense.
Table of contents
1. Core Summary
The escalating threat of AI-assisted crypto fraud, and the joint countermeasure mechanism built by Bitget, SlowMist, and Elliptic.
2. Introduction: Evolution of threats
How the growth of DeFi, the spread of AI, and cross-border convenience have created new breeding grounds for fraud, and the risks involved.
3. Anatomy of a Modern Crypto Fraud
Analysis of the most dangerous frauds today:
3.1 Deepfake impersonation
3.2 Social Engineering Strategies
AI arbitrage bot scams
Trojanized job traps
Social media phishing
Address poisoning attacks
Honeypot ("Pixiu") token scams
Fake staking reward platforms
Airdrop traps
3.3 Ponzi Schemes in the Web3 Era
4. Strengthening digital defenses: Bitget multi-layer security architecture
Detailed explanation of Bitget's real-time threat detection, token due diligence, dual audit mechanism and $300 million protection fund.
5. On-chain fraud tracking and fund forensics (by Elliptic)
How transaction monitoring, cross-chain bridge tracking and behavioral analysis can identify and block the flow of stolen money.
6. Protection recommendations and best practices (written by SlowMist)
A practical guide for users and enterprises: from phishing identification to fraud prevention habit formation and enterprise-level response framework.
7. Conclusion: The Road Ahead
How crypto security is moving from isolated defense to network immunity, and how Bitget is staying one step ahead of escalating threats.
Insight into the frontier: Uncovering new trends in crypto fraud
1. Core Summary
In January 2025, Hong Kong police busted a deepfake fraud ring and arrested 31 people. The gang had stolen HK$34 million by impersonating cryptocurrency executives – just one of 87 similar cases uncovered in Asia in the first quarter (SlowMist, 2025 Crypto Crime Report). These are not hypotheticals. From AI-synthesized videos of Singapore's Prime Minister to Musk's "fake endorsements", deepfake attacks on trust have become a daily threat.
This report, jointly produced by the three parties, reveals how crypto fraud has evolved from crude phishing into AI-enhanced psychological manipulation: nearly 40% of high-value fraud cases in 2024 involved deepfake technology. Whether it is a Trojanized job trap or a Ponzi "staking platform", behind each lies social engineering's precise exploitation of trust, fear, and greed.
Crypto fraud isn’t just about stealing money — it’s eroding the foundation of trust in the industry.
Bitget's security systems intercept a large volume of trust-abuse activity every day: abnormal logins, phishing attacks, and malware downloads. In response, we launched the Anti-Scam Hub, developed proactive protection tools, and partnered with leading global firms such as SlowMist and Elliptic to dismantle fraud networks and trace stolen funds.
This report maps the evolution of threats, reveals current high-risk tactics, and provides practical defense strategies for users and organizations. When AI can replicate anyone's face, security mechanisms must be fundamentally skeptical.
2. Introduction: Evolution of threats
The borderless nature of cryptocurrency is both its greatest strength and its greatest risk. As the total value locked in decentralized protocols exceeds $98 billion and institutional participation continues to grow, the same technology that is driving innovation is also fueling a new wave of cryptocurrency fraud.
These are no longer the basic phishing attacks of the past. The scale and sophistication of fraud increased dramatically from 2023 to 2025: in 2024, global users lost more than $4.6 billion to fraud, up 24% year-on-year (Chainalysis, "2025 Crypto Crime Report"). From deepfake impersonation to Ponzi ecosystems disguised as "staking income", scammers are using AI, psychological manipulation, and social platforms to deceive even experienced users.
Three major attack methods:
Deepfakes – impersonating public figures to endorse fake platforms.
Social engineering scams – including Trojanized job tests and phishing tweets.
Ponzi scheme variants – scams packaged as DeFi, GameFi, or NFT projects.
The most alarming thing is the escalation of psychological manipulation: victims are not simply deceived but gradually persuaded. Scammers not only steal passwords, but also design traps based on behavioral blind spots.
Defenses are evolving in parallel, and collaborative innovation across the ecosystem is accelerating: Bitget's behavioral analysis system flags suspicious patterns in real time; Elliptic's cross-chain forensics traces assets across multiple chains; SlowMist's threat intelligence has helped dismantle phishing gangs across Asia.
This report integrates real-life cases, field research, and third-party operational data to analyze the main causes of current asset losses and provide countermeasures for users, regulators, and platforms.
Fraudulent tactics continue to evolve, but so do defense mechanisms. This report details specific solutions.
3. Anatomy of a Modern Crypto Fraud: Top 10 Scams for 2024-2025
As blockchain technology goes mainstream and crypto assets grow in value, fraud has become more complex, more hidden, and more sophisticated, combining high-tech disguise, psychological manipulation, and on-chain luring. Over the past two years, fraudsters have fused AI, social engineering, and traditional fraud models into a more deceptive and destructive fraud ecosystem. Deepfakes, social engineering, and Ponzi variants are the most rampant.
3.1 Deepfakes: the collapse of the trust system
In 2024-2025, generative AI gave rise to a new type of trust-based fraud built on deepfake technology. Attackers use AI synthesis tools to forge audio and video of well-known project founders, exchange executives, or community KOLs to mislead users. The forged material is often indistinguishable from the real thing, imitating the target's facial expressions and voice and even placing "official logos" in the background, making it difficult for ordinary users to tell real from fake. Typical scenarios:
(1) Celebrity deepfakes promoting investments
With deepfake technology, scammers can easily manufacture "celebrity endorsements". Case: deepfake videos of Singapore's Prime Minister Lee Hsien Loong and Deputy Prime Minister Lawrence Wong were used to promote a "government-endorsed crypto platform".
https://www.zaobao.com.sg/realtime/singapore/story20231229-1458809
Tesla CEO Elon Musk's likeness has been used repeatedly in fake investment-reward scams.
https://www.rmit.edu.au/news/factlab-meta/elon-musk-used-in-fake-ai-videos-to-promote-financial-scam
Such videos are widely spread through social media platforms such as X/Facebook/Telegram. Scammers often turn off the comment function to create the illusion of "official authority" and trick users into clicking on malicious links or investing in specific tokens. This attack method takes advantage of users' inherent trust in "authorities" or "official channels" and is highly deceptive.
(2) Bypassing identity verification
Scammers use AI to forge dynamic facial videos (capable of responding to voice prompts) and combine them with victims' photos to defeat the identity-verification systems of exchanges and wallet platforms, hijack accounts, and steal assets.
(3) Virtual identity investment fraud
In 2024-2025, police in Hong Kong and Singapore broke up several deepfake fraud rings. In early 2025, for example, Hong Kong police arrested 31 suspects in a case involving up to HK$34 million, with victims spread across Singapore, Japan, Malaysia, and other Asian countries and regions. Characteristics of these criminal organizations:
Recruiting media-major graduates to build rich virtual identities and backstories;
Creating large numbers of phishing groups on Telegram and approaching targets with "well-educated, gentle, friendly" personas;
Luring users onto fake platforms through the script of "befriend → guide into investing → block withdrawals";
Forging chat records, customer-service conversations, and profit screenshots to manufacture authenticity and a false sense of trust;
Inducing repeated deposits under pretexts such as "activating computing power" or "withdrawal review" (a Ponzi structure).
https://user.guancha.cn/main/content?id=1367957
(4) Deepfake + Zoom phishing
Scammers impersonate Zoom and send fake meeting invitation links, tricking users into downloading "meeting software" that carries a Trojan. "Participants" in the meeting use deepfake video to impersonate executives or technical experts, manipulating victims into further clicks, authorizations, or transfers. Once the device is compromised, the scammers take remote control and steal cloud data or private keys.
https://x.com/evilcos/status/1920008072568963213
From a technical perspective, scammers use AI synthesis tools such as Synthesia, ElevenLabs, and HeyGen to generate high-definition audio and video in minutes, and spread them through platforms such as X/Telegram/YouTube Shorts.
Deepfake technology has become a core component of AI-driven fraud, and the credibility of audio-visual content has dropped sharply in the AI era. Users must verify any "authoritative information" about asset operations through multiple channels rather than blindly trusting a familiar face or voice. Project teams, in turn, should recognize the brand risk posed by AI forgery and establish a unique, verifiable information channel, or broadcast signed announcements on-chain for identity authentication, so that forgery is resisted at the mechanism level (a minimal sketch of verifying a signed announcement follows).
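For illustration, the sketch below shows what verifying a signed announcement can look like in practice. It is a minimal Python example using the eth_account library, and it assumes the team's signing address is already known from a trusted source (for example, published on the official site or in an on-chain registry); the address shown is a placeholder, not a real project key.

```python
# Minimal sketch: verify a project announcement against the team's known
# signing address. Requires eth-account (pip install eth-account).
from eth_account import Account
from eth_account.messages import encode_defunct

# Placeholder: the team's published signing address, obtained out of band.
KNOWN_TEAM_ADDRESS = "0x0000000000000000000000000000000000000000"

def announcement_is_authentic(text: str, signature_hex: str) -> bool:
    """Recover the signer of a signed announcement and compare it with the
    team's published address; any mismatch means the message is untrusted."""
    message = encode_defunct(text=text)
    recovered = Account.recover_message(message, signature=signature_hex)
    return recovered.lower() == KNOWN_TEAM_ADDRESS.lower()

# Example usage (the signature would be distributed alongside the announcement):
# print(announcement_is_authentic("We are migrating to contract 0xabc...", "0x..."))
```

Any announcement whose recovered signer does not match the published address should be treated as untrusted, however convincing the accompanying video or voice may appear.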
3.2 Social Engineering Strategies: Exploiting Psychological Vulnerabilities
Complementing high-tech means are low-tech but highly effective social engineering attacks. Human nature is the weakest and most easily overlooked link, which leads many users to underestimate the threat posed by social engineering. Scammers often manipulate user behavior through disguise, guidance, intimidation and other means, and use users' psychological weaknesses to gradually achieve fraudulent purposes.
(1) AI arbitrage bot scam
AI has become a signature productivity technology, and scammers were quick to ride the trend, labeling their scams "ChatGPT-generated" (a buzzword that sounds cutting-edge and believable) to lower users' guard.
The scam usually starts with a detailed video tutorial. In it, the scammers claim the arbitrage bot's code was generated by ChatGPT and can be deployed on blockchains such as Ethereum to monitor new token listings and price fluctuations, arbitraging via flash loans or price differences. They stress that "the bot automatically handles all the logic for you; you just wait for the profits to roll in." This pitch fits many users' preconception that "artificial intelligence = easy money", further lowering their vigilance.
Using simplified, non-technical language, the scammers guide users to a highly convincing imitation of the Remix IDE (in fact a fake page) that is hard to distinguish from the real interface. Users are asked to paste the supposed "contract code written by ChatGPT". Once deployment is complete, they are told to inject start-up funds into the contract address as the initial arbitrage principal, with the implication that "the more you invest, the higher the return". After the user completes these steps and clicks "Start", what awaits is not a stream of arbitrage profits but unrecoverable funds: the pasted code already contains the fraud logic, and once the contract is activated, the deposited ETH is immediately transferred to a wallet address preset by the scammer. In other words, the entire "arbitrage system" is essentially a beautifully packaged fund-draining tool. (The sketch below illustrates a simple pre-deployment check.)
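To make the failure mode concrete, here is a minimal, illustrative Python heuristic (not a tool from any of this report's authors) that scans pasted Solidity source for hardcoded addresses and raw value-transfer calls before deployment. The patterns and the toy contract are assumptions chosen to mirror the scam described above; passing this check does not make a contract safe, it only catches the crudest variants.

```python
# Minimal pre-deployment heuristic for the "ChatGPT arbitrage bot" scam:
# the malicious contracts typically hardcode an attacker address and forward
# deposited ETH to it, so hardcoded addresses and raw transfers are red flags.
import re

ADDRESS_RE = re.compile(r"0x[a-fA-F0-9]{40}")
SUSPICIOUS_CALLS = ("transfer(", "send(", "call{value", "selfdestruct(")

def flag_suspicious_solidity(source: str) -> list[str]:
    findings = []
    for addr in set(ADDRESS_RE.findall(source)):
        findings.append(f"hardcoded address {addr}: verify you control it")
    for pattern in SUSPICIOUS_CALLS:
        if pattern in source:
            findings.append(f"raw value transfer via '{pattern}': check the recipient")
    return findings

# Toy example of the kind of code victims are asked to paste.
pasted_code = """
contract ArbBot {
    address private owner = 0x1111111111111111111111111111111111111111;
    function start() external payable { payable(owner).transfer(msg.value); }
}
"""
for warning in flag_suspicious_solidity(pasted_code):
    print("WARNING:", warning)
```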
SlowMist's analysis shows that this type of scam follows a "wide net, small bait" strategy: each victim loses only tens to hundreds of dollars. Although the amount defrauded from any single user is small, the scammers earn steady, sizable illicit profits by spreading the tutorials widely and luring many users. Because individual losses are modest and the operation appears to be "self-inflicted" rather than a direct fraudulent transfer, most victims stay silent and do not pursue the matter. More worrying, these scams are trivially relaunched: by changing the bot's name or swapping a few page templates, the scammers can go back online and keep defrauding.
Other social engineering tactics include Trojanized job-hunting traps, fake interview programming tasks, phishing links sent via tweets or Telegram private messages, lookalike-address poisoning attacks, honeypot ("Pixiu") tokens that cannot be sold, and rebate scams disguised as staking platforms. These attacks exploit trust (private-chat contact), greed (high-yield promises), or confusion (fake interfaces and chat records), constantly change their packaging, and cause losses in covert ways that depend on the victims' own active cooperation.
3.3 Ponzi scheme: old wine in new bottles
The crypto ecosystem is developing rapidly, and traditional Ponzi schemes have kept pace rather than disappearing. They have undergone a "digital evolution" using on-chain tools, viral social growth, and AI-driven deepfakes. These scams typically disguise themselves as DeFi, NFT, or GameFi projects for fundraising, liquidity mining, or platform-token staking. In essence they remain "new money covering old debts" Ponzi structures, and they collapse once cash inflows break or the operators run off with the money.
The JPEX incident that shook Hong Kong in 2023 is a typical case. The platform called itself a "global exchange" and promoted its platform token JPC through offline advertising and celebrity endorsements, promising "high and stable returns" and attracting large numbers of users despite lacking regulatory approval and adequate disclosure. In September 2023, Hong Kong's Securities and Futures Commission flagged the platform as highly suspicious, and police arrested multiple people in "Operation Iron Gate". By the end of 2023 the case involved HK$1.6 billion and more than 2,600 victims, potentially one of the largest financial fraud cases in Hong Kong's history.
On-chain Ponzi projects are evolving as well. In 2024, blockchain analyst ZachXBT exposed a fraud group that deployed the Leaper Finance project on the Blast chain; the same group had run Magnate, Kokomo, Solfire, Lendora, and other projects to steal tens of millions of dollars. They forged identity documents and audit reports, pre-laundered funds, and inflated on-chain metrics to lure users into investing; once TVL reached millions of dollars, they quickly pulled the liquidity and ran off with the money.
Even more striking, the gang repeatedly targeted multiple mainstream chains, including Base, Solana, Scroll, Optimism, Avalanche, and Ethereum, running a rapid "reskin and rebrand" rotation scam.
For example, the TVL of their Zebra lending project deployed on the Base chain once reached more than $310,000; on Arbitrum, their Glori Finance project’s TVL peaked at $1.4 million. Both projects are forks of Compound V2. These projects used funds extracted from other scams such as Crolend, HashDAO, and HellHoundFi as seed funds, forming a closed loop of fraud.
Compared with traditional Ponzi schemes, digital scams have the following new features:
More covert technical disguise: using open-source contracts, NFT packaging, and padded on-chain activity to create the illusion of "technological innovation" and mislead users into believing these are legal, compliant DeFi products.
Complicated rebate structures: concealing fund flows behind "liquidity mining", "staking rewards", and "node dividends" while in reality siphoning funds at multiple levels and manipulating internal and external markets.
Viral social distribution: relying on WeChat groups, Telegram channels, and KOL livestreams to recruit new users, forming a typical pyramid-style referral model.
Gamified interfaces and forged identities: many projects use game UIs and NFT IP to appear "young" and "legitimate", and some combine AI face-swapping and deepfake technology to forge images or videos of founders or spokespersons to boost credibility.
For example, in February 2025, hackers hijacked the X account of Tanzanian tycoon Mohammed Dewji and used deep fake videos to promote the fake token $Tanzania, raising $1.48 million in a few hours. Similar counterfeiting techniques have been widely used to forge founder videos, fabricate meeting screenshots, and forge team photos, making it increasingly difficult for victims to distinguish the authenticity.
The fraud red-flag comparison table below summarizes the core warning signs and simple preventive measures for users' reference.
How to stay safe: Be wary of suspicious or unknown sources - whether it's through LinkedIn, Telegram or email; don't run unfamiliar code or install unknown files (especially under the pretext of work testing or app demos); bookmark official websites; use browser plugins such as Scam Sniffer; don't connect your wallet to unknown links. Trust in the crypto world needs to be actively verified rather than passively given.
4. Strengthening digital defenses: Bitget’s multi-layered security architecture
In the face of increasingly complex digital asset threats, Bitget has built a comprehensive security framework designed to protect every platform user. This section introduces the strategic measures implemented in account protection, investment review, and asset protection.
1. Account protection: block unauthorized access in real time
Bitget uses a full set of real-time monitoring tools to detect and alert users to any unusual activity. When logging in from a new device, users receive detailed email notifications that include anti-phishing codes, verification codes, login locations, IP addresses, and device details. This instant feedback enables users to detect and handle unauthorized access in a timely manner.
To curb impulsive actions that fraud may induce, Bitget has established a dynamic cooling-off period. The mechanism is triggered by indicators such as abnormal login locations or suspicious transactions and temporarily disables withdrawals for 1-24 hours, giving users time to re-evaluate and confirm whether the account activity is legitimate (a minimal sketch follows).
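As a rough illustration of how such a mechanism can work, the sketch below models a risk-triggered withdrawal lock in Python. The trigger signals (new device, country change), weights, and 1-24 hour durations are illustrative assumptions drawn from the description above, not Bitget's actual implementation.

```python
# Illustrative sketch of a dynamic cooling-off period: risky login signals
# temporarily lock withdrawals so the user can re-evaluate account activity.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class LoginEvent:
    user_id: str
    country: str
    new_device: bool

class CoolingOffPolicy:
    def __init__(self):
        self.last_country: dict[str, str] = {}
        self.locked_until: dict[str, datetime] = {}

    def on_login(self, event: LoginEvent) -> None:
        prev = self.last_country.get(event.user_id)
        country_changed = prev is not None and prev != event.country
        if event.new_device or country_changed:
            # Longer lock when both signals fire; 1h-24h range as in the text.
            hours = 24 if (event.new_device and country_changed) else 1
            self.locked_until[event.user_id] = datetime.utcnow() + timedelta(hours=hours)
        self.last_country[event.user_id] = event.country

    def can_withdraw(self, user_id: str) -> bool:
        until = self.locked_until.get(user_id)
        return until is None or datetime.utcnow() >= until

policy = CoolingOffPolicy()
policy.on_login(LoginEvent("u1", "SG", new_device=False))
policy.on_login(LoginEvent("u1", "RU", new_device=True))   # abnormal: lock withdrawals
print(policy.can_withdraw("u1"))  # False until the cooling-off window passes
```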
In addition, Bitget provides an official verification channel, enabling users to verify the content of communications and effectively prevent phishing attacks.
2. Investment review: strict evaluation of digital assets
Recognizing the surge in high-risk tokens in the crypto market, Bitget has developed a detailed due diligence process for listing assets, which includes conducting a comprehensive background check on the project team, in-depth analysis of token economics, evaluating valuation and distribution models, and assessing community participation.
To further ensure the accuracy of the assessment, Bitget has implemented a two-tier security audit system: internal blockchain security engineers conduct a thorough code review to identify vulnerabilities, while independent third-party auditors perform their own review to validate the findings.
After the asset is launched, Bitget's proprietary on-chain monitoring system will continue to monitor transactions and contract interactions in real time. The system is designed to adapt to new security threats, constantly evolve and update its threat model to quickly respond to emerging risks.
3. Asset protection: comprehensive protection of users’ assets
Bitget adopts a dual wallet strategy, using both hot wallets and cold wallets to improve security. Most digital assets are stored in offline, multi-signature cold wallets, greatly reducing the risk of cyber attacks.
In addition, Bitget also has a huge protection fund worth more than US$300 million to compensate users in the event of platform-related security incidents.
For Bitget Wallet users, the platform has adopted some additional security features, including phishing website alerts, built-in contract risk detection tools, and the innovative GetShield security engine. GetShield continuously scans decentralized applications, smart contracts, and websites to detect potential threats before users interact.
Through this multi-layered security architecture, Bitget not only protects users' assets but also strengthens user trust in the platform, setting a benchmark for security standards in the cryptocurrency exchange industry.
5. On-chain fraud tracking and fund forensics (by Elliptic)
The previous sections of this report have described how scammers use different methods to obtain cryptocurrency, including the use of deepfakes. Scammers often attempt to transfer the stolen funds and eventually convert them into fiat currency. These fund flows can be tracked - blockchain analysis tools are essential in this process. Such tools fall into three main categories: transaction monitoring, address screening, and investigation tools. This section focuses on how transaction monitoring tools can detect and flag fraud-related funds, making it more difficult to use the stolen funds.
Transaction monitoring tools have been widely adopted by cryptocurrency exchanges such as Bitget. These tools scan incoming and outgoing transactions to identify and flag potential risks. A typical use is screening every user deposit: most normal deposits carry no risk flags and are automatically credited to the user's account promptly, but if the deposited funds come from a known fraud address, they are flagged as high risk (a minimal sketch of this screening logic follows).
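The sketch below captures that screening flow, assuming a small in-memory label set and a simple score threshold; real systems rely on large, continuously updated datasets and far richer scoring than this.

```python
# Illustrative deposit screening against known risk labels. The addresses,
# labels, scores, and threshold are toy assumptions, not Elliptic's or
# Bitget's actual rules.
RISK_LABELS = {
    "0x1111111111111111111111111111111111111111": ("pig butchering scam", 10),
    "0x2222222222222222222222222222222222222222": ("mixer exposure", 7),
}
REVIEW_THRESHOLD = 7  # scores at or above this go to manual compliance review

def screen_deposit(source_address: str, amount: float) -> dict:
    label, score = RISK_LABELS.get(source_address.lower(), ("no known risk", 0))
    return {
        "amount": amount,
        "risk_label": label,
        "risk_score": score,
        "action": "manual_review" if score >= REVIEW_THRESHOLD else "auto_credit",
    }

print(screen_deposit("0x1111111111111111111111111111111111111111", 2500.0))
print(screen_deposit("0x3333333333333333333333333333333333333333", 100.0))
```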
Consider an actual transaction-monitoring case. The figure below shows a transaction monitoring tool's analysis of a user deposit at a cryptocurrency exchange: the deposit was identified as coming from an address associated with a "pig butchering" investment scam.
The tool assigned the highest risk score, 10/10, triggering a manual review process: the funds are not credited automatically, and the case is escalated to the compliance team for review.
Sophisticated criminal organizations understand transaction-monitoring mechanisms and use specific on-chain operations to obfuscate (i.e., hide) the flow of funds. The typical method is "layering": routing stolen funds through many levels of intermediate addresses in an attempt to sever the link to the source. Advanced transaction monitoring tools can trace through an unlimited number of intermediate addresses and pinpoint the criminal origin of the funds (a minimal sketch of such multi-hop tracing follows). Criminal organizations are also increasingly using cross-chain bridges, which are analyzed in the next section.
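The sketch below illustrates the idea of tracing through layered intermediate addresses: a breadth-first walk backwards through a toy funding graph until a labeled source is reached. The graph, labels, and hop limit are illustrative assumptions; production tools operate over full multi-chain ledgers.

```python
# Illustrative "layering" traversal: walk backwards through intermediate
# addresses until a labeled source is found.
from collections import deque

# incoming_edges[address] -> addresses that sent funds to it (toy data)
incoming_edges = {
    "exchange_deposit": ["hop3"],
    "hop3": ["hop2"],
    "hop2": ["hop1"],
    "hop1": ["scam_wallet"],
}
labels = {"scam_wallet": "investment scam"}

def trace_to_labeled_source(deposit_address: str, max_hops: int = 50):
    """Breadth-first walk up the funding graph; returns (label, path) or None."""
    queue = deque([(deposit_address, [deposit_address])])
    seen = {deposit_address}
    while queue:
        addr, path = queue.popleft()
        if addr in labels:
            return labels[addr], path
        if len(path) > max_hops:
            continue
        for sender in incoming_edges.get(addr, []):
            if sender not in seen:
                seen.add(sender)
                queue.append((sender, path + [sender]))
    return None

print(trace_to_labeled_source("exchange_deposit"))
# -> ('investment scam', ['exchange_deposit', 'hop3', 'hop2', 'hop1', 'scam_wallet'])
```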
5.1 Cross-chain Bridge
In the past few years, a variety of blockchains have been launched on the market. Users may be attracted to a blockchain because it hosts a specific cryptocurrency or decentralized application or other service. Cross-chain bridges allow users to transfer value across chains in near real time. Although ordinary blockchain users are the main users of cross-chain bridges, scammers are increasingly using them to transfer stolen funds between blockchains. Scammers usually use cross-chain bridges for the following reasons:
Obfuscation opportunities: certain obfuscation tools support only specific blockchains (for example, most mixing services handle only Bitcoin), so criminal groups bridge to the target chain to use the service and then move on to other chains.
Harder tracing: cross-chain transfers significantly increase the complexity of fund tracing. Even if investigators can manually follow a single bridge hop, repeated hops greatly slow an investigation, and if the funds are also split, manually tracking every lead becomes impractical (the case below shows that dedicated tools can trace funds seamlessly across chains).
Criminal organizations are well aware that some automated transaction monitoring tools stop tracing at cross-chain bridges. The upper part of the figure below shows such a tool halting at the bridge when tracing illicit activity, so the exchange sees only funds arriving from the bridge address and cannot trace the earlier path. The lower part shows the Elliptic transaction monitoring tool used by Bitget, which automatically traces through the bridge, reconstructs the full fund path, and exposes the illicit entities involved (a minimal sketch of bridge-hop matching follows).
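For intuition, the following sketch shows one simple way a bridge hop can be resolved: matching a source-chain bridge deposit to a destination-chain withdrawal by amount and time proximity. Real tools decode each bridge's own contracts and message formats; the matching rule, tolerance, and records here are illustrative assumptions only.

```python
# Illustrative bridge-hop resolution by amount/time matching.
from dataclasses import dataclass

@dataclass
class BridgeEvent:
    chain: str
    address: str
    amount: float      # normalized to a common unit, e.g. USD
    timestamp: int     # unix seconds

def match_bridge_hop(deposit: BridgeEvent, withdrawals: list[BridgeEvent],
                     amount_tol: float = 0.02, max_delay: int = 3600):
    """Return the destination-chain withdrawal most likely to correspond to
    the source-chain bridge deposit, or None if nothing fits."""
    candidates = [
        w for w in withdrawals
        if 0 <= w.timestamp - deposit.timestamp <= max_delay
        and abs(w.amount - deposit.amount) / deposit.amount <= amount_tol
    ]
    return min(candidates, key=lambda w: w.timestamp, default=None)

dep = BridgeEvent("ethereum", "0xSenderToBridge", 50_000.0, 1_700_000_000)
outs = [BridgeEvent("arbitrum", "0xReceiverOnL2", 49_900.0, 1_700_000_420)]
print(match_bridge_hop(dep, outs))
```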
The case study below describes how illicit entities have used a range of cross-chain bridges and blockchains to systematically attempt to launder cryptocurrency, and how certain tools can be used to identify this activity.
Case Study: The screenshot below from Elliptic’s investigation tool shows how a criminal organization used cross-chain bridges to move funds across multiple blockchains before ultimately depositing the funds into a cryptocurrency service.
The funds originate on the Bitcoin chain (left), bridge to Ethereum, pass through several Ethereum addresses in internal transfers, bridge to Arbitrum, then to Base, and are finally deposited at a cryptocurrency service. The image also highlights two other instances of the same pattern; although not fully displayed, the same routine appears more than ten times, reflecting the systematic nature of the laundering.
This behavior has two purposes: to slow down or interfere with investigators’ tracking; and to prevent the receiving exchange from identifying the illegal source of funds. However, blockchain investigation tools that support automatic cross-chain bridge tracking can seamlessly restore the complete path. Transaction monitoring tools with cross-chain tracking capabilities (such as the Elliptic system used by Bitget) can automatically identify the original connection between funds and criminal organizations.
5.2 How to detect fraudulent funds using behaviors and patterns
The preceding cases rely on labels for known illicit cryptocurrency addresses (such as pig-butchering addresses), usually collected from victim reports, law enforcement cooperation, and other channels. However, the sheer scale of fraud, combined with factors such as low victim reporting rates, makes it impossible for labels to cover every address.
Therefore, some advanced transaction monitoring tools add behavioral detection as a supplementary line of defense. By automatically analyzing behaviors and patterns, the system can infer whether a given address is performing on-chain operations that match known fraud characteristics and mark the related interactions as risky. This analysis is performed by dedicated behavioral detection models, some of which use machine learning. To date, Elliptic's behavioral detection can identify more than 15 fraud types (including pig-butchering scams, address poisoning, and ice phishing), and its coverage continues to expand (a minimal sketch of one such signal follows).
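As an example of a single behavioral signal, the sketch below flags a likely address-poisoning attempt: a dust-sized incoming transfer from an address whose visible prefix and suffix mimic one of the user's frequent counterparties. The thresholds are illustrative assumptions; production models combine many such features, some learned with machine learning.

```python
# Illustrative address-poisoning signal: dust transfers from lookalike addresses.
def looks_like(addr_a: str, addr_b: str, prefix: int = 6, suffix: int = 4) -> bool:
    """True if two distinct addresses share the same visible prefix and suffix."""
    a, b = addr_a.lower(), addr_b.lower()
    return a != b and a[:prefix] == b[:prefix] and a[-suffix:] == b[-suffix:]

def flag_poisoning(frequent_counterparties: list[str], new_sender: str,
                   amount: float, dust_threshold: float = 1.0) -> bool:
    """Flag tiny incoming transfers from lookalikes of known counterparties."""
    if amount > dust_threshold:
        return False
    return any(looks_like(new_sender, known) for known in frequent_counterparties)

known = ["0x12ab34cd000000000000000000000000000056ef"]
dust_sender = "0x12ab34cd999999999999999999999999999956ef"
print(flag_poisoning(known, dust_sender, amount=0.0))  # True -> warn the user
```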
The following example shows how behavioral detection can stop users from transferring money to a scam address. Three addresses are involved in the scam: the top and bottom addresses were identified and confirmed through victim reports, while the middle address was never reported but was flagged by the behavioral detection model as a likely scam-related address.
That address subsequently received a transfer from an exchange. Had the exchange enabled behavioral detection alerts, the risk could have been identified before the transfer, preventing the user's loss. Ultimately, the funds from all three pig-butchering addresses flowed to the same address, which Tether later blacklisted; all USDT held there was frozen, further confirming the illicit nature of the funds.
Click here to learn how Bitget increased its risk interception rate by 99% after connecting to the Elliptic blockchain analysis tool - this industry-leading tool supports more than 50 blockchains and has automated cross-chain bridge tracking and behavior detection capabilities.
6. Protection recommendations and best practices
In the face of continuously upgraded fraud techniques, users need to establish a clear self-protection awareness and technical identification capabilities. To this end, SlowMist proposes the following core anti-fraud suggestions:
(1) Improving the ability to identify fake content on social media
Never click links in comment sections or group chats, even if they look "official". For key actions such as binding a wallet, claiming an airdrop, or staking, always verify through the project's official website or trusted community channels. Installing security plug-ins such as Scam Sniffer to detect and block phishing links in real time reduces the risk of accidental clicks.
(2) Be wary of new risks introduced by AI tools
With the rapid development of large language model (LLM) technology, many new AI tools have emerged, and the Model Context Protocol (MCP) standard has become a key bridge connecting LLMs to external tools and data sources. But MCP's popularity also brings new security challenges. SlowMist has published a series of MCP security research articles and recommends that project teams perform self-inspections early and harden their defenses.
(3) Make good use of on-chain tools to identify risky addresses and Ponzi characteristics
For token projects suspected of rug pulls or fraud, it is recommended to use anti-money-laundering tracing tools such as MistTrack to check the risk of project-related addresses, or GoPlus token security detection tools for a quick assessment (a minimal sketch of a programmatic check follows). Use Etherscan, BscScan, and similar explorers to check warnings in the victim comment sections. Be highly vigilant about high-yield projects: abnormally high returns almost always carry extremely high risk.
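A minimal sketch of such a programmatic check against the public GoPlus token security API. The endpoint and field names follow GoPlus's publicly documented API and may change over time, so treat them as assumptions to verify against the current docs; a clean result is one signal among many, not proof of safety.

```python
# Illustrative pre-trade token check via the GoPlus token security API.
import requests

def check_token(chain_id: str, token_address: str) -> None:
    url = f"https://api.gopluslabs.io/api/v1/token_security/{chain_id}"
    resp = requests.get(url, params={"contract_addresses": token_address}, timeout=10)
    resp.raise_for_status()
    info = resp.json().get("result", {}).get(token_address.lower(), {})
    if not info:
        print("No data returned; treat the token as unverified.")
        return
    # Print a few common red-flag fields; "1" generally indicates the risk is present.
    for field in ("is_honeypot", "is_open_source", "buy_tax", "sell_tax"):
        print(f"{field}: {info.get(field, 'n/a')}")

# Example: chain_id "1" is Ethereum mainnet; the address below is a placeholder.
# check_token("1", "0x0000000000000000000000000000000000000000")
```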
(4) Do not blindly believe in “scale effects” and “successful cases”
Scammers often create a profiteering atmosphere through large Telegram groups, fake KOL endorsements, and fake profit screenshots. Generally speaking, the credibility of a project should be verified through transparent channels such as GitHub code base, on-chain contract audits, and official website announcements. Users need to develop the ability to independently verify information sources.
(5) Preventing social trust-based “file-induced” attacks
More and more attackers are using platforms such as Telegram, Discord, and LinkedIn to send malicious scripts disguised as job opportunities or technical test invitations, tricking users into running high-risk files.
User Protection Guide:
Be wary of suspicious job or freelance offers that require you to download/run code from platforms such as GitHub. Always verify the sender's identity through the company's website or email, and don't trust the "limited time high reward task" rhetoric.
When dealing with external code, strictly review the project source and author background, and refuse to run unverified high-risk projects. It is recommended to execute suspicious code in a virtual machine or sandbox environment to isolate risks.
Be cautious when handling files received from platforms such as Telegram/Discord: turn off the automatic download function, scan files manually, and be wary of script running requirements in the name of "technical testing".
Enable multi-factor authentication and regularly change strong passwords to avoid reusing passwords across platforms.
Please do not click on conference invitations or software download links from unknown sources, and develop the habit of verifying the authenticity of the domain name and confirming the source of the official platform.
Use hardware wallets or cold wallets to manage large assets and reduce the exposure of sensitive information on connected devices.
Update your operating system and antivirus software regularly to prevent new malicious programs and viruses.
If you suspect your device is infected, disconnect from the Internet immediately, transfer funds to a safe wallet, remove malicious programs, and reinstall the system if necessary to minimize losses.
Enterprise Protection Guide:
Regularly organize phishing attack and defense drills and train employees to identify forged domain names and suspicious requests.
Deploy an email security gateway to intercept malicious attachments and continuously monitor code repositories to prevent sensitive information leakage.
Establish a phishing incident response mechanism that combines technical defenses with employee awareness. This multi-layered strategy helps minimize the risk of data breaches and asset losses.
(6) Keep in mind the “basic principles” of investment judgment
High return promise = high risk: Any platform that claims "stable high returns" or "guaranteed profits" should be considered a high-risk platform.
Viral growth based on recruiting new people is a typical red flag: projects that set up a recruitment rebate mechanism or a tiered "team benefit" structure can be preliminarily judged as pyramid schemes.
Use on-chain analysis tools to identify abnormal fund flows: Platforms such as MistTrack can track large amounts of abnormal fund movements and analyze the team’s cash-out paths.
Verify the transparency of the auditing agency and team: Be wary of "fake audit reports" provided by some projects or formal endorsements by small auditing agencies. Users should confirm whether the smart contract has been audited by a trusted third party and the report is public.
In short, crypto fraud in the AI era has evolved from simple "technical vulnerability exploitation" to "technical + psychological" dual-dimensional manipulation. Users must improve their technical identification capabilities and strengthen their psychological defenses:
Verify more, be less impulsive: Do not lower your guard because of "acquaintances, authoritative videos, and official backgrounds."
Question more, transfer less: When it comes to asset operations, be sure to delve into the underlying logic, verify the source, and confirm safety.
Avoid greed and always be skeptical: the more tempting the project’s promise of “guaranteed profit” is, the more vigilant you need to be.
It is recommended to read the "Blockchain Dark Forest Self-Help Manual" written by Cos, the founder of SlowMist, to master the basic anti-fraud skills on the chain and enhance self-protection. In case of theft, users can seek assistance from the SlowMist team.
Only by thoroughly understanding how fraud works, improving information discrimination, building familiarity with security tools, and standardizing operating habits can users protect their assets in a digital era full of temptation and risk. Security is never a one-off effort; it requires continuous attention. A complete mental model and basic defensive habits are the surest beacon for moving forward steadily in the digital age and avoiding fraud traps.
7. Conclusion: The Road Ahead
Five years ago, fraud prevention meant “don’t click on suspicious links”; now it means “seeing is not believing”.
When AI-faked videos, fake recruitment processes, and tokenized Ponzi schemes weaponize trust against users, the next stage of crypto security depends not only on smarter technology but also on collective defense. Bitget, SlowMist, and Elliptic are building a joint defense network through shared threat intelligence, automated fund tracing, and cross-ecosystem risk tagging.
The conclusion is clear: security cannot rely on isolated measures, and a networked, continuous, user-centric system must be built.
To this end, Bitget will make every effort to advance in three major directions:
AI Red Team Attack and Defense Drill: Simulate new fraud methods to test system vulnerabilities.
Compliance Data Collaboration Network: Work with regulators and compliance partners to build an intelligence sharing ecosystem.
Promote security education: empower users with real-time threat awareness capabilities through the Anti-Fraud Center.
Scammers continue to evolve, and we need to upgrade and iterate as well. In this industry, the most precious currency has never been Bitcoin, but trust.
To download the full report, click here.