Artificial Intelligence (AI) is increasingly being used to commit crimes in the crypto asset ecosystem, as reported by Elliptic.
AI-crypto crimes
The report from Elliptic highlighted a rising trend of criminals using generative AI to create deepfakes and other deceptive materials for promoting crypto scams.
According to Elliptic:
“Doctored videos – or ‘deepfakes’ – of notable individuals promoting investment scams have targeted the likenesses of various prominent figures. These deepfakes are often shared on platforms like YouTube, TikTok, and other social media platforms.”
These deepfakes lure unsuspecting individuals into investing in fraudulent projects, leading to the theft of their funds.
Moreover, there has been a surge in AI-themed scam tokens, investment platforms, Ponzi schemes, and fake trading bots. Scammers leverage trendy technology and buzzwords to build fraudulent schemes that often end in exit scams. Elliptic pointed out:
“Scammers utilize AI to generate hype around dishonest investment platforms that claim to enhance trading capabilities through AI-based strategies.”
In one notable incident, the iEarn fake AI trading bot scam resulted in approximately $6 million in losses in 2023. The proliferation of such AI trading bots prompted the Commodity Futures Trading Commission (CFTC) to issue a warning in January.
Elliptic’s report also revealed that criminals are using large language model (LLM) AI programs to identify vulnerabilities in project code, consistent with a recent report by Microsoft and OpenAI documenting increased use of LLMs by cybercriminals.
Furthermore, AI technology is being employed to facilitate large-scale crypto scams and disinformation campaigns. Elliptic explained:
“AI can automate the generation and distribution of social media posts, accelerating disinformation campaigns.”
Despite the rising threats, Elliptic believes AI technology offers numerous benefits, and that mitigating its misuse will require collaborative efforts from law enforcement agencies, crypto compliance professionals, and AI users.