LLM routers security risk in AI-powered crypto
Researchers warned of a critical LLM routers security risk in AI-powered crypto after finding that intermediary routers that forward requests to models such as OpenAI and Anthropic can access and modify data passing through them. The paper reports 26 routers injecting malicious tool calls, a client wallet drained for $500,000, and that attackers could take over roughly 400 hosts within several hours. McKinsey projects AI agents could mediate $3 trillion to $5 trillion of global consumer commerce by 2030.
LLM routers security risk in AI-powered crypto vulnerabilities
LLM routers are intermediary services that forward user requests to large language models such as OpenAI and Anthropic and can access and modify the data that passes through them. These routers operate in an environment where LLM agents have moved beyond conversational assistants into systems that book flights, execute code, and manage infrastructure on behalf of users. Intermediary LLM routers can see and exfiltrate credentials passing through them, and a single altered instruction can immediately compromise systems or funds. The paper highlights that these capabilities create a critical security gap in AI-powered crypto infrastructure.
The study found 26 LLM routers injecting malicious tool calls and linked those routers to credential theft and a wallet drain of $500,000. The researchers reported one client’s wallet was drained for $500k and described that attackers could, within several hours, take over approximately 400 hosts. The authors of the paper are affiliated with UC Santa Barbara, UC San Diego, Fuzzland and World Liberty Financial.
“A malicious router can replace a benign command with an attacker-controlled one or silently exfiltrate every credential that passes through it,” the researchers wrote. The paper notes that a single altered instruction can immediately compromise systems or funds, illustrating how an intermediary change can have direct financial and operational consequences. The researchers also reported they were able to poison routers to forward traffic to them, demonstrating practical routes for exploitation.
A group of security researchers affiliated with UC Santa Barbara, UC San Diego, Fuzzland and World Liberty Financial authored the paper describing vulnerabilities in AI agent infrastructure. The study’s findings were reported in an article that links to CoinDesk as its source. The authors detailed instances of compromised intermediary services that forward requests to models such as OpenAI and Anthropic.
The researchers note that LLM agents have moved beyond conversational assistants into autonomous systems that book flights, execute code, and manage infrastructure on behalf of users. These systems often rely on intermediary LLM routers that forward requests to underlying models and can access and modify data passing through them. The paper links these intermediary capabilities to credential theft and direct financial loss in crypto contexts.
The researchers wrote that “very soon” there will be more AI agents than humans making transactions on the internet and warned of increased transaction volume. They also wrote that there could be “one million times more payments than people, all in crypto.” The paper includes practical demonstrations and quotes illustrating how scale and autonomy increase risk as agents handle more transactions.
Intermediary LLM routers in AI-powered crypto environments can access and modify data passing through them, enabling credential theft, wallet drains, and the takeover of hosts when malicious modifications occur. Because LLM agents and routers can execute actions on behalf of users, a single altered instruction or malicious tool call can immediately compromise systems or funds, creating a critical security gap that affects authentication, transaction integrity, and infrastructure control at scale.


