- Key insight: Anthropic's Mythos model has demonstrated the ability to autonomously identify and exploit software vulnerabilities, raising alarms among global banking regulators.
- What's at stake: Capable AI agents could weaponize flaws across consolidated cloud service providers, potentially triggering catastrophic breaches in the banking system.
- The debate: Tech leaders warn that restricting access to top AI cybersecurity tools could give threat actors an early advantage during a dangerous transition period.
Overview bullets generated by AI with editorial review
Bank of England Governor Andrew Bailey warned Tuesday that Anthropic's new artificial intelligence model could "crack the whole cyber risk world open."
The governor's alarm over the model, known as Mythos, follows Anthropic's claims that the system can autonomously exploit software vulnerabilities, prompting urgent discussions among U.K. and U.S. financial regulators.
Reuters first reported Bailey's remarks, which he made at an event at Columbia University.
For U.S. banks managing complex technology stacks that blend state-of-the-art tools with decades-old software, the model underscores a rapidly shifting landscape in vulnerability management.
It also thrusts the financial industry into a heated debate over whether highly capable AI cybersecurity tools should be locked down by their creators or scrutinized openly by the broader security community.
Bailey noted in New York on Tuesday that regulators must urgently work out the extent to which the model can identify vulnerabilities in systems and exploit them for cyberattacks.
Anthropic markets the model as a watershed moment for the security ecosystem: Mythos Preview successfully identified and exploited zero-day vulnerabilities in every major operating system and web browser, company researchers wrote in an April 7 blog post.
Government evaluators, however, paint a more mixed picture. While the model successfully completed a 32-step corporate network attack simulation, it failed completely in operational technology environments, according to an April 13 report from the U.K.'s AI Security Institute.
The institute's simulated test ranges "lack security features that are often present, such as active defenders and defensive tooling," meaning the model's ability to compromise a well-defended financial system remains highly uncertain, according to the report.
Meanwhile, tech leaders are criticizing Anthropic's decision to heavily restrict access to the model through Project Glasswing, an initiative that limits private evaluation of Mythos to select partners.
IBM Senior Vice President Rob Thomas pushed back against this closed-door approach in an April 9 blog post. As AI reaches the scale of foundational infrastructure, "security improves more often through scrutiny than through concealment," Thomas said.
For U.S. bankers, Mythos serves as a reminder of the systemic threats that banks face. The financial industry relies heavily on cloud service providers to deliver basic banking services, and those providers operate in a highly consolidated market, creating resilience risk for banks.
Regulators worry that, if capable AI agents can weaponize flaws at these providers at scale, it could trigger catastrophic breaches across the heavily regulated banking system.
Central banks eye financial stability in the AI era
Bailey noted in his Tuesday speech that cyber risk is a perpetual threat.
"It's the one that never goes away," Bailey reportedly said, adding that regulators must continually mitigate the risk because threat actors will always evolve.
He tied these operational vulnerabilities directly to the broader mandates of central banks, arguing that financial stability policy ultimately protects the public's trust in money.
To address the specific risks posed by Anthropic's new technology, the Bank of England's Cross Market Operational Resilience Group and its AI Taskforce plan to discuss the Mythos model in meetings scheduled within the next two weeks, according to a Saturday Bloomberg report.
Across the Atlantic, U.S. regulators are taking similar steps to ensure financial stability. Last week, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street leaders to an urgent meeting regarding concerns that the Mythos model will usher in an era of heightened cyber risk.
The bank executives summoned to that meeting addressed the matter publicly on quarterly earnings calls this week. Goldman Sachs CEO David Solomon told analysts Monday that the bank has access to the model and has begun using it for defensive purposes.
"We're hyper-aware of the enhanced capabilities of these new models," he said. "We have the model. We're working closely with Anthropic and all of our security vendors to kind of harness frontier capabilities wherever it's possible."
JPMorganChase CEO Jamie Dimon offered a more sobering take on Tuesday, warning analysts that AI has made cyber risk "worse" and "harder" while acknowledging that Mythos reveals the scale of patching work that remains. "It shows a lot more vulnerabilities need to be fixed," he said.
JPMorganChase CFO Jeremy Barnum said tools such as Mythos "can make it easier to find vulnerabilities, but then also potentially be deployed by bad actors in attack mode."
Morgan Stanley CEO Ted Pick, whose firm is also testing a beta version of the model, cast the moment in more optimistic terms on Wednesday.
"AI is our friend," Pick said.
The U.S. Treasury Department's technology team is currently seeking direct access to the model so it can begin hunting for software flaws, per a Tuesday report from Bloomberg.
The agency is reportedly pursuing this access despite the Pentagon designating Anthropic a U.S. supply chain risk earlier this year following a dispute over military use of the company's AI.
Pushback against concentrated AI power
Anthropic will not make Mythos Preview generally available, according to the company's April 7 announcement about the model. Instead, only members of Project Glasswing, which include select technology companies and financial partners such as Morgan Stanley and JPMorganChase, can privately evaluate the model and prepare defenses, according to the announcement.
The private evaluation offers a "unique, early-stage opportunity to evaluate next-generation AI tools for defensive cybersecurity across critical infrastructure," according to Pat Opet, the bank's chief information security officer, as quoted in Anthropic's Glasswing announcement.
Security vendors within the Glasswing coalition defend this restricted strategy. CrowdStrike, a founding member of the project, argues that a clear division of labor is necessary, according to a blog post from last week.
"Model safety is the builder's responsibility," while deployment governance falls to security companies, according to CrowdStrike's post.
However, other technology leaders are highly skeptical of keeping the model locked down.
In his blog, IBM's Thomas strongly criticized the restricted access, warning against concentrating the understanding of these advanced systems within a small number of companies.
As AI transitions into foundational infrastructure, "opacity can no longer be the organizing principle for safety," according to Thomas. He asserted that open-source scrutiny is a prerequisite for resilience, adding that "security improves more often through scrutiny than through concealment."
IBM is not a member of Project Glasswing, but even within the coalition, tech industry leaders acknowledge the tension between closed models and the broader security community.
Linux Foundation CEO Jim Zemlin noted that open-source maintainers, whose code underpins critical systems like banking, have historically lacked the budgets of large security teams.
Zemlin warned that the industry is entering a dangerous transition period where threat actors might gain an early advantage.
To ensure a secure future for the software that powers the economy, access to top AI cybersecurity tools must be "evenly distributed and not concentrated in the hands of the few with the cash and the headcount," according to Zemlin.