Cybersecurity risks tied to Anthropic's Mythos AI were the subject of a high-level meeting convened by U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell with the CEOs of Citigroup, Bank of America, Wells Fargo, Morgan Stanley, and Goldman Sachs. Officials warned the bank leaders about risks tied to the model after testing showed Mythos had discovered thousands of previously unknown software vulnerabilities, including zero-day flaws in major operating systems and web browsers, and urged attendees to strengthen defenses against AI-assisted cyberattacks on financial infrastructure.
Security researchers warned that tools capable of automatically discovering vulnerabilities are inherently dual-use: the same capability can accelerate defensive security work or, if misused, malicious hacking. Anthropic reported that Mythos Preview's vulnerability-discovery capabilities were not intentionally trained but emerged from broader improvements in coding, reasoning, and autonomy.
Mythos first drew public attention in March, following the leak of draft materials, and has since been recognized as Anthropic's most capable AI model. During testing, Mythos identified thousands of previously unknown software vulnerabilities, including zero-day flaws, across major operating systems and web browsers. These capabilities were not the result of training targeted specifically at vulnerability discovery; instead, they stemmed from broader enhancements in coding, reasoning, and autonomy. As Anthropic put it, "The same improvements that make the model substantially more effective at patching vulnerabilities also make it substantially more effective at exploiting them," underscoring the dual-use nature of its capabilities.
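For readers unfamiliar with automated vulnerability discovery, the basic idea behind one common approach, fuzzing, can be sketched in a few lines. This is a deliberately simplified illustration of the general technique, not a description of how Mythos works; the `parse_header` target and its simulated bug are hypothetical.

```python
import random

def parse_header(data: bytes) -> str:
    # Toy parser with a deliberate bug: it fails on inputs longer
    # than 64 bytes (a stand-in for a real memory-safety flaw).
    if len(data) > 64:
        raise MemoryError("buffer overflow (simulated)")
    return data.decode("ascii", errors="replace")

def fuzz(target, trials: int = 1000, seed: int = 0) -> list:
    """Feed random byte strings to `target`; collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        size = rng.randint(0, 128)
        data = bytes(rng.randrange(256) for _ in range(size))
        try:
            target(data)
        except Exception:
            crashes.append(data)  # a crashing input is a candidate vulnerability
    return crashes

crashes = fuzz(parse_header)
print(f"{len(crashes)} crashing inputs found")
```

Real-world tools (and, reportedly, models like Mythos) go far beyond blind random inputs, but the dual-use tension is already visible here: the crashing inputs the loop collects are equally useful for writing a patch or building an exploit.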
Anthropic took a cautious approach, restricting access to Mythos to a small group of carefully selected cybersecurity organizations. "Given the strength of its capabilities, we're being deliberate about how we release it," the company said, describing the model as a "step change" in its development efforts.
To address the cybersecurity challenges, Anthropic launched Project Glasswing, a collaborative initiative with major technology and cybersecurity companies that aims to identify and patch vulnerabilities before attackers can exploit them. "Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone," the company said, pointing to the need for collective effort among AI developers, software companies, security researchers, and governments worldwide.
Taken together, the response so far — convening warnings, restricting access, and deliberate testing — stresses shared responsibility among frontier AI developers, other software companies, security researchers, open-source maintainers, and governments.


