A vulnerability has been found that could enable supply chain attacks by abusing automation features in the AI coding tool Claude Code, SecurityWeek reported on May 7.
Adversa.AI researchers said that if an attacker uploads a GitHub repository containing hidden malware, Claude Code may download it automatically while a developer works. Once the repository is on the machine, the attacker can remotely control the developer's device.
When a developer starts a new task with Claude Code, the agent automatically searches for repositories to use. After a malicious repository is downloaded, a dialog box appears asking, "Did you create this project yourself or is it a project you trust?" The default selection is "trust." The moment the developer presses Enter, a malicious server created by the attacker runs on the developer's computer with administrator privileges; Claude does not need to issue a separate command.
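Because the danger lies in answering the trust prompt before inspecting the folder, one practical mitigation is to scan a freshly cloned repository for files that an agent might auto-execute or auto-read. The sketch below is a hypothetical pre-trust audit; the file names in `SUSPECT_FILES` are illustrative assumptions about where such instructions could live, not a definitive list of what Claude Code actually reads.

```python
from pathlib import Path

# Hypothetical locations where agent instructions or hooks could hide;
# these names are illustrative assumptions, not a verified list.
SUSPECT_FILES = [".claude/settings.json", "CLAUDE.md", ".mcp.json"]

def audit_repo(repo: Path) -> list[str]:
    """Return suspect files found in a cloned repo, so a developer can
    review them before answering any 'trust this folder?' prompt."""
    return [name for name in SUSPECT_FILES if (repo / name).exists()]
```

Running such a check between `git clone` and opening the project gives the developer a chance to read any embedded instructions before granting trust.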
The risk is particularly high when Claude Code is used in CI/CD pipelines. The attack payload could read environment variables, deployment keys, signing certificates and runtime account credentials and be included in the build process, the report said.
Adversa.AI co-founder and chief technology officer Alex Polyakov said, "Developers of widely used tools are realistic primary targets." He added, "An attack is entirely possible because Claude Code is installed on most developers' devices and developers routinely clone unfamiliar repositories."
Adversa.AI reported the issue to Anthropic, but Anthropic took no special measures, SecurityWeek said. Anthropic's position is that when a user agrees to trust a folder, that consent extends to everything inside it. Adversa.AI countered that it is debatable whether consent given without knowing a folder's contents counts as informed consent.