Anthropic's AI tool sparks cybersecurity panic

George Kurtz, co-founder and CEO of Crowdstrike, left; and Nikesh Arora, CEO at Palo Alto Networks. The two men saw the stock prices of their cybersecurity companies fall on Friday following an Anthropic announcement.
Kent Nishimura/Bloomberg, Ruhani Kaur/Bloomberg
  • Key insight: While Anthropic's new AI tool excels at finding bugs in source code, it lacks the real-time monitoring required to defend live enterprise networks.
  • What's at stake: Replacing dedicated security platforms with general-purpose AI models could create systemic vulnerabilities and widespread disruptions across the financial system.
  • Expert quote: "Claude Code Security finds bugs in your source code before they're exploited ... CrowdStrike detects and responds to threats at runtime." —CrowdStrike CEO George Kurtz

Overview bullets generated by AI with editorial review


Anthropic's recent launch of an artificial intelligence vulnerability scanner sent cybersecurity stocks tumbling, but vendors and analysts are pushing back against the narrative that AI will render existing security platforms obsolete.

On Friday, the AI research company unveiled Claude Code Security, a tool that uses AI reasoning to spot complex vulnerabilities in software code. The announcement sparked a sudden market sell-off that battered the shares of cybersecurity giants including CrowdStrike and Palo Alto Networks, as well as software supply chain companies like JFrog.

For U.S. banks, the market turbulence raises critical questions about whether the security vendors protecting their networks will be replaced by AI models, and what the consequences of such a shift would be.

For example, shifting security responsibilities entirely to a handful of foundational AI models could run afoul of U.S. Treasury Department warnings about concentration risks, which caution that relying too heavily on a few AI providers could create systemic vulnerabilities and widespread disruptions across the financial system.

Anthropic's product release causes a sell-off

Claude Code Security is a limited-preview tool that scans codebases for security vulnerabilities and suggests targeted software patches.

Rather than scanning for known patterns like traditional static analysis tools do, Claude reads and reasons about code in a way that emulates a human security researcher's thinking, attempting to understand how data moves through an application to catch complex flaws.
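To see why that distinction matters, consider a hypothetical illustration (not Anthropic's actual tooling or test suite): a SQL injection where the tainted value passes through an innocuous-looking helper before reaching the query. A scanner matching patterns at the call site can miss it, while tracing how data flows through the application catches it.

```python
# Hypothetical illustration: an injection flaw that call-site pattern
# matching can miss because the tainted input passes through a helper.
import sqlite3

def normalize(value: str) -> str:
    # Looks like sanitization, but only trims whitespace; input stays tainted.
    return value.strip()

def find_user(conn: sqlite3.Connection, username: str):
    cleaned = normalize(username)
    # The string concatenation happens one step removed from the raw input,
    # so a scanner keyed on "execute(... + user_input)" may not flag it.
    query = "SELECT id FROM users WHERE name = '" + cleaned + "'"
    return conn.execute(query).fetchall()  # vulnerable: attacker controls `cleaned`

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: untrusted input never mixes into the SQL text,
    # which data-flow reasoning can confirm end to end.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With the classic payload `' OR '1'='1`, the concatenated version returns every row in the table, while the parameterized version returns nothing.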

Anthropic has been leaning heavily into cybersecurity. Alongside releasing its Claude Opus 4.6 model (which the company claims found over 500 zero-day vulnerabilities in well-tested open-source codebases), it recently partnered with the Pacific Northwest National Laboratory to explore using AI to defend critical infrastructure.

Investors reacted to Friday's product launch with alarm, interpreting the AI advancement as a direct threat to the business models of established security vendors.

The Global X Cybersecurity ETF (BUG) fell 3.7% on the day of the announcement, closing at its lowest level since November 2023.

Software supply chain and application security vendor JFrog suffered the steepest decline, dropping 24.6%.

Identity and access management firm Okta fell 8.3%, while CrowdStrike dropped 7.3% amid intense short-selling pressure.

Palo Alto Networks experienced a more moderate decline of 1.1%.

Security vendors push back against market sentiment

Following Anthropic's Friday announcement, executives at cybersecurity vendors quickly argued that the market fundamentally misunderstands the difference between scanning code during development and defending a live network.

CrowdStrike CEO George Kurtz said he directly asked Claude to build a tool to replace his company's platform, according to a post he made on LinkedIn. Claude responded that it could not replicate CrowdStrike's real-time endpoint monitoring or automated incident response with a simple script.

"Claude Code Security finds bugs in your source code before they're exploited — proactive, development-stage security," Kurtz wrote. "CrowdStrike detects and responds to threats at runtime across live endpoints — reactive, operational security."

Other application security vendors agreed that AI reasoning is powerful for discovering vulnerabilities but insufficient for enforcing enterprise security.

AI models frequently introduce business logic and authorization risks themselves, said Manoj Nair, chief innovation officer at developer security firm Snyk. Trust in the AI era, he argued, must be built on evidence-backed verification outside the AI model's cognitive loop.

"You can ask an AI to reason about a vulnerability," Nair wrote. "You cannot ask a probabilistic model to guarantee compliance, prove data flow or enforce enterprise policy across thousands of repositories."

Similarly, code quality vendor Sonar emphasized that Claude Code Security engages in a sampling-based, spot-checking approach, whereas enterprise security tools systematically evaluate every line of code to provide the consistent, auditable evidence required by compliance frameworks.

Palo Alto preempted the panic

Palo Alto Networks CEO Nikesh Arora got in an early word during his company's earnings call on Feb. 17, three days before Anthropic's announcement.

Responding to analyst questions about whether large language models pose an existential threat to security information and event management tools, Arora argued that AI is a net positive addition to security capabilities, not a replacement.

Security platforms require near-perfect precision to protect customers, a threshold that current AI models cannot meet on their own, he said.

"Until they get to 99%, 99.9% accuracy, [LLMs] are not a threat to delivering security," Arora said. "They are tools that can be used to summarize capabilities."

Arora also noted that security companies generate proprietary data by sitting at the network edges and blocking billions of attacks daily. Because a language model is not a system of record that generates this domain-specific threat data, it cannot replace the security product itself, according to Arora.

Instead of fragmenting the market, the rise of AI is actually driving enterprise customers to consolidate their security vendors into unified platforms, Arora said.

Analysts share vendors' skepticism

Financial analysts covering the cybersecurity sector echoed the vendors' defenses, viewing the market sell-off as an overreaction.

Bank of America Global Research analysts wrote in a Monday note that they disagree with investor concerns that AI will automate away meaningful parts of the security market. While AI tools like Claude represent progress in developer automation and code scanning, they remain too fragile for autonomous defense, according to the Bank of America report.

"Runtime security is continuous, always observing execution behavior; contextual, combining signals across endpoint, identity, network, and cloud; and defensive, with required accuracy hitting ~99.99% as false negatives have immediate consequences," the Bank of America analysts wrote. (Emphasis original)

Morningstar equity analyst Malik Ahmed Khan also pushed back against the panic, noting that the adversarial nature of cybersecurity means AI advancements will benefit both attackers and defenders.

"While LLMs can be trained on security methods and techniques, they simply don't have access to petabytes of real-time data gathered by large security vendors on a daily basis," Khan wrote.

How it all bears on systemic risk for banks

For the banking sector, the debate over replacing dedicated security platforms with general-purpose AI models touches on serious regulatory and systemic concerns.

U.S. banks are currently exploring how to integrate AI to improve efficiency and fraud detection, but the U.S. Treasury has explicitly warned about the dangers of industry-wide reliance on a small number of AI providers.

In a 2024 report on AI in the financial services sector, the Treasury Department noted that the high costs and computing power required to build generative AI models force smaller institutions to depend heavily on a few large tech companies.

"Respondents worried that this concentration risk could also lead to systemic and market vulnerabilities, as interruption at a single AI provider could create widespread disruptions across the financial system," according to the Treasury report.

If banks were to abandon their diversified, real-time cybersecurity platforms in favor of relying solely on a few foundational AI models for code analysis, they could exacerbate these exact systemic vulnerabilities while losing the runtime defenses required to stop active breaches.
