Anthropic’s latest model, Claude Mythos, is now raising concerns among the UK’s financial institutions. According to a report by the Financial Times (FT), financial regulators are holding urgent discussions with the UK government’s main cybersecurity watchdog and the country’s biggest banks to assess risks linked to the model’s ability to identify vulnerabilities in key IT systems. Officials from the Bank of England, the Financial Conduct Authority, and HM Treasury are in talks with the National Cyber Security Centre, while leading British banks, insurers, and exchanges are expected to be warned about potential cybersecurity risks at a meeting scheduled in the coming fortnight.

The response follows a move by US Treasury Secretary Scott Bessent, who summoned leaders of major Wall Street banks to discuss the model’s advanced capability to detect cybersecurity weaknesses that could be exploited by bad actors. Anthropic, which recently released the Claude Mythos Preview to select customers, said the model has already “found thousands of high-severity vulnerabilities, including some in every major operating system and web browser”, some of which had gone undetected for decades. The company added that it may “not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely”, warning that “the fallout — for economies, public safety, and national security — could be severe.”
What UK financial organisations are expected to discuss about Anthropic’s new Claude Mythos model
The report says the potential impact of the new AI model will be discussed at the next meeting of the UK’s Cross Market Operational Resilience Group (CMORG), which brings together the UK’s regulators and financial services companies to address sector-wide risks. CMORG is co-chaired by Duncan Mackinnon, the Bank of England’s executive director for supervisory risk, and David Postings, head of the UK Finance trade body for banks. Its members also include senior representatives from eight major UK banks, four financial infrastructure providers, two insurers, the NCSC, the FCA, and HM Treasury.

David Raw, managing director for resilience at UK Finance, told the FT: “We are aware of the press reports on the Anthropic AI development and the risks highlighted. UK Finance engages with our members and, through our public/private partnerships, on any significant operational risks that could affect the resilience of the UK financial services sector.”

The Bank of England also has the option to convene a meeting with financial institutions within one to two hours through its Cross Market Business Continuity Group in the event of an urgent threat, although it has not done so in this instance. The discussions come after several major UK companies, including retailers M&S, the Co-op Group, and Harrods, as well as Jaguar Land Rover, were targeted by cyberattacks last year that disrupted their operations.

Meanwhile, the UK’s AI Security Institute, the government’s unit for testing and researching risks in advanced AI models, has been evaluating Anthropic’s Mythos alongside other models such as Claude and OpenAI’s ChatGPT, the report adds. The government is also considering introducing standardised testing for general-purpose AI models used by UK lenders, following concerns raised by the Bank of England last year.
The BoE’s Prudential Regulation Authority told bank executives at two meetings in October 2025 that their monitoring of AI models was “not frequent enough”, according to slides from the events cited in the report.















