India’s cyber defence faces questions from Claude Mythos
Enterprises brace for Anthropic’s latest AI model that can discover software flaws faster than institutions can react
In a Nutshell
Anthropic’s new AI, Claude Mythos, can find software flaws faster than defenders can react, raising urgent concerns for India’s cyber preparedness. Experts urge continuous monitoring, automated responses, and data-centric governance to address AI-driven risks.
India’s next cyber crisis may not begin with a dramatic breach or a ransom note, but with an AI system finding a software flaw that even those responsible for defending it don’t know exists.
With Claude Mythos, Anthropic’s newest model, this concern has become far more immediate. Described as an unreleased, general-purpose frontier system, Mythos is said to possess exceptionally strong coding and cyber capabilities, and has reportedly identified thousands of high-severity vulnerabilities, including flaws in major operating systems and web browsers.
This means AI systems are getting better at doing what elite security researchers do: Finding hidden weaknesses, working out how they can be exploited, and sometimes chaining several flaws together into a working attack.
If machines can find software flaws faster than institutions can respond, what does cyber preparedness look like? This question was recently discussed at a high-level meeting in New Delhi, where Finance Minister Nirmala Sitharaman and Electronics and IT Minister Ashwini Vaishnaw met with bank chiefs and officials from the Reserve Bank of India, the National Payments Corporation of India, and CERT-In. The trigger was not a breach, but the recognition that the pace of cyber defence may have to change.
Most organisations already know software will have weaknesses—the challenge is to find, assess and fix them before attackers use them. With Mythos, that window is now getting shorter. “AI is reducing the time between discovery and exploitation, and that puts enormous pressure on enterprises,” says Yash Kadakia, founder of Security Brigade, a cyber security company.
“Many organisations are still built around security processes that take days or weeks: Identifying the issue, assessing its impact, prioritising it, getting approvals and then patching. In an AI-accelerated threat environment, that process may no longer be fast enough. Security can no longer be treated only as a prevention problem, but as a question of speed, coordination and resilience.”
“Organisations should revise their current approach through the introduction of continuous monitoring, automated reactions, and techniques such as virtual patching to minimise the risks before the vulnerabilities get addressed permanently,” says JP Mishra, founder and CEO of Deep Algorithms, an AI cybersecurity startup.
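The virtual patching Mishra refers to typically means placing an interim filtering rule in front of a vulnerable service so that known exploit traffic is blocked while the permanent vendor fix is pending. The sketch below is a minimal, hypothetical Python version; the patterns and names are illustrative, not drawn from any real advisory.

```python
import re

# Hypothetical interim rules for an unpatched flaw. In practice these
# would come from a vendor advisory or a WAF rule set, not be hand-written.
BLOCK_PATTERNS = [
    re.compile(r"\.\./"),           # crude path-traversal check
    re.compile(r"<script", re.I),   # naive script-injection check
]

def virtual_patch(request_path: str) -> bool:
    """Return True if the request should be blocked by the interim
    rule while a permanent patch is still being rolled out."""
    return any(p.search(request_path) for p in BLOCK_PATTERNS)
```

The point of the sketch is the trade-off the article describes: a rule like this can be deployed in minutes, whereas a tested vendor patch may take weeks.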
However, faster discovery does not automatically mean faster repair. Companies often depend on software vendors or original equipment manufacturers for patches. Those fixes can take days or weeks. Once they arrive, installing them across complex systems can create operational risk. A rushed patch can break another part of the system, while a delayed one can leave a known weakness exposed.
Banking and financial services are especially exposed because the financial sector is deeply digitised. Payments move instantly, identity checks happen online. Documents, authentication layers, bank accounts, wallets, apps and third-party services are increasingly connected. That makes the system efficient, but it also means a weakness in one part of the stack can create risk elsewhere.
“The RBI framework is particularly instructive because it tracks patch latency as an explicit metric while simultaneously holding banks accountable for system availability and business continuity,” says Ashish Tandon, founder & CEO, Indusface, an application security company.
“Patching fast risks downtime; protecting uptime delays patching. AI may help banks find vulnerabilities faster, but it can also leave them with a formal record of problems they have not yet fixed. That creates both cyber risk and compliance risk,” Tandon adds.
“Mythos is the Y2K moment for cybersecurity, and in regulated sectors like BFSI and fintech, the blast radius is amplified immediately by interconnected systems and exponential third-party supply chain risk,” says Pankit Desai, co-founder and CEO, Sequretek, a cybersecurity solutions company.
But Mythos is not only a banking problem. Any sector that depends on third-party software is exposed: Telecom, hospitals, logistics, retail, manufacturing, government services and cloud-based businesses. The risk often sits below the visible application layer, in operating systems, databases, middleware, APIs, identity systems and shared software components.
That matters because many organisations may use the same underlying tools. If a serious flaw is found in a widely used component, the impact can spread across sectors before each company has even finished checking whether it is affected.
For India, this makes preparedness a national coordination problem—banks, regulators, software vendors, cloud providers, public digital infrastructure operators and sectoral agencies may all need to move together.
There is another concern: AI systems may develop capabilities that were not fully expected in advance. This is what’s called emergent behaviour—when a system begins to show a capability that was not explicitly programmed for or anticipated by its creators.
“You cannot scan in advance for a capability that appeared without warning. The entire historical model of ‘identify the threat, build a defence’ breaks down when the threat emerges at runtime. That’s the uncomfortable reality Mythos forces organisations to face,” says Vrajesh Bhavsar, CEO of Operant AI, a runtime cybersecurity platform for Agentic AI.
“The capabilities that alarmed regulators weren’t programmed into Mythos; they emerged on their own. Nobody at Anthropic designed it to discover zero-days. It just did. That’s what emergent behaviour means in practice: An AI system crossing capability thresholds its creators never intended,” Bhavsar adds.
This is why some security experts argue that organisations cannot rely only on older methods, such as periodic audits or tools that look for known attack patterns, but will also need systems that monitor activity continuously and respond when something unusual happens.
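At its simplest, the continuous monitoring these experts describe means comparing live activity against a statistical baseline and reacting when it deviates. The sketch below is a deliberately crude stand-in, with made-up numbers and a simple standard-deviation threshold; production systems use far richer signals.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean. A minimal illustration of baseline-based
    anomaly detection, not a production detector."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if abs(x - mu) > threshold * sigma]

# Hypothetical request counts per minute: a stable baseline, then
# one reading that spikes well outside normal variation.
normal = [100, 102, 98, 101, 99, 100]
flagged = flag_anomalies(normal, [101, 250])
```

The value of even a simple rule like this is that it fires at runtime, without needing a signature for the specific attack, which is exactly the gap Bhavsar points to when a threat "emerges at runtime".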
“The path forward is safe enablement, building the governance architecture that allows AI to operate on enterprise data without losing control or accountability. This requires moving beyond perimeter-based security to a data-centric approach, where controls stay with the data wherever it flows,” says Vishal Gauri, CEO, Seclore, a security intelligence platform.
Organisations need to know where sensitive data sits, who can use it, how it moves across AI systems and third parties, and whether that usage can be audited. “With the DPDP Act and evolving RBI guidelines, this shift towards accountable, data-centric governance to enable the safe adoption of AI is becoming critical for banks,” Gauri adds.
Preparedness will also require companies to understand their own technology dependencies in far more detail. They will need live inventories of software components, faster vendor alerts, tested patching pipelines, virtual patching where permanent fixes are not yet available, and clear escalation rules for critical vulnerabilities.
“What enterprises must do right now is build granular, near-real-time component inventories, secure 24-hour OEM vulnerability notification rights contractually, and establish structured patch pipelines. Structured triage, not reactive urgency, is the only viable path forward,” says Desai of Sequretek.
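The component inventory Desai describes can be pictured as a cross-reference between what an organisation actually runs and what advisories say is affected. The sketch below uses hypothetical component names and versions; a real pipeline would consume a machine-readable SBOM and a vulnerability feed rather than hand-written dictionaries.

```python
# Illustrative inventory: component name -> deployed version.
inventory = {
    "openssl": "3.0.1",
    "log4j-core": "2.14.1",
    "nginx": "1.25.3",
}

# Illustrative advisory feed: component -> versions known to be affected.
advisories = {
    "log4j-core": {"2.14.1", "2.15.0"},
    "openssl": {"1.1.1"},
}

def affected_components(inventory, advisories):
    """Return the components whose deployed version appears in an
    advisory -- the escalation list a patch pipeline would act on."""
    return sorted(
        name for name, version in inventory.items()
        if version in advisories.get(name, set())
    )
```

The structured triage Desai advocates starts here: only components that actually match an advisory enter the patch pipeline, rather than every alert triggering reactive urgency.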
India’s Mythos challenge, then, is larger than one Anthropic model. It is about whether the country’s cyber institutions can operate at the speed that frontier AI systems are beginning to set.