ElizaOS Vulnerability Shows How AI Can Be Gaslit Into Losing Millions

ElizaOS Vulnerability Shows How AI Can Be Gaslit Into Losing Millions

The post ElizaOS Vulnerability Shows How AI Can Be Gaslit Into Losing Millions appeared on BitcoinEthereumNews.com.

In brief The study highlights how memory injection attacks can be used to manipulate AI agents. AI agents that focus on online sentiment are most vulnerable to these attacks. Attackers use fake social media accounts and coordinated posts to trick agents into making trading decisions. AI agents, some managing millions of dollars in crypto, are vulnerable to a new undetectable attack that manipulates their memories, enabling unauthorized transfers to malicious actors. That’s according to a recent study by researchers from Princeton University and the Sentient Foundation, which claims to have found vulnerabilities in crypto-focused AI agents, such as those using the popular ElizaOS framework. ElizaOS’ popularity made it a perfect choice for the study, according to Princeton graduate student Atharv Patlan, who co-authored the paper. “ElizaOS is a popular Web3-based agent with around 15,000 stars on GitHub, so it’s widely used,” Patlan told Decrypt. “The fact that such a widely used agent has vulnerabilities made us want to explore it further.” Initially released as ai16z, Eliza Labs launched the project in October 2024. It is an open-source framework for creating AI agents that interact with and operate on blockchains. The platform was rebranded to ElizaOS in January 2025. An AI agent is an autonomous software program designed to perceive its environment, process information, and take action to achieve specific goals without human interaction. According to the study, these agents, widely used to automate financial tasks across blockchain platforms, can be deceived through “memory injection”—a novel attack vector that embeds malicious instructions into the agent’s persistent memory. “Eliza has a memory store, and we tried to input false memories through someone else conducting the injection on another social media platform,” Patlan said. AI agents that rely on social media sentiment are especially vulnerable to manipulation, the study found. Attackers can use…