Goldman Sachs has stopped employees in Hong Kong from using Anthropic's artificial intelligence model Claude.
On April 29, fintech outlet Finextra reported that Goldman Sachs employees in Hong Kong were able to use Claude through the firm's internal AI platform until a few weeks ago, but access is now blocked.
The restriction reportedly applies only to staff based in Hong Kong, who are said to retain access to other AI models such as OpenAI's ChatGPT and Google's Gemini. Claude is the only model to have been singled out.
Goldman Sachs reportedly consulted Anthropic and then adopted a strict reading of its contract, concluding that Hong Kong employees should not use any Anthropic products. The central question was how the contract defines the permitted scope of service in Hong Kong; the firm appears to have recently revised how it applies those terms.
The restriction comes as the conflict between the United States and China over AI technology intensifies. Last week the U.S. government issued a warning directed at Chinese companies accused of stealing AI technology. Against this backdrop, how to handle U.S. companies' AI models in regions close to China is becoming an internal-control issue across the financial industry.
U.S. companies' AI models are not currently offered in mainland China, though some U.S. firms have used them in Hong Kong. The episode highlights how operating standards differ from company to company on whether Hong Kong is treated as a separate market or as a risk region tied to the mainland.
Anthropic's official guidance is consistent with the move: on its website, Hong Kong is not listed as a supported market for its application programming interface or Claude.ai. Goldman Sachs appears to have adjusted internal access permissions to reflect those conditions. By blocking Claude for Hong Kong staff while keeping other models available, the firm signals that it is applying AI selectively, weighing supplier contracts and regional rules, rather than retreating from AI adoption itself.
Anthropic has also recently faced safety-related concerns over its latest model, Mythos, which is reported to be able to uncover cybersecurity vulnerabilities faster than humans can. As financial firms decide internally which AI models to allow, they are increasingly reviewing not only regional accessibility but also security risks and supplier control terms.
The case shows that global financial firms' use of AI has moved beyond adopting simple productivity tools to a stage that weighs contracts, regional regulation and geopolitical risk together. In places like Hong Kong, institutionally separate from mainland China yet caught up in U.S. companies' China risk management, decisions over which models to allow and which to block are likely to remain a sensitive operational issue.