Claude Code Vulnerabilities Highlight New AI-Powered Development Tool Risks
This disclosure underscores the balance required between the productivity benefits of AI-assisted development and the necessity of maintaining secure software supply chains.
Check Point Research has identified critical vulnerabilities in Anthropic’s Claude Code platform that allowed attackers to execute remote code and steal API credentials via malicious project configurations. These flaws exploited Hooks, Model Context Protocol (MCP) servers, and environment variables, enabling arbitrary shell command execution and exfiltration of Anthropic API keys when developers cloned and opened untrusted repositories. Following the discovery, Check Point worked closely with Anthropic to ensure all issues were patched before public disclosure.
Claude Code, an AI-powered command-line development tool, allows developers to delegate coding tasks from the terminal using natural language instructions. It supports operations such as file edits, Git repository management, automated testing, build integration, and shell command execution. The platform’s project-level configuration files, particularly .claude/settings.json, were found to be a potential attack vector because any contributor with commit access could define Hooks or modify MCP settings that execute automatically on collaborators’ machines.
The first vulnerability involved Hooks. Hooks are designed to trigger predefined commands at specific stages of a project workflow. Malicious actors could configure these Hooks to run shell commands without explicit user approval, enabling remote code execution. The second vulnerability leveraged MCP configuration files, which initially allowed commands to bypass user consent, further enabling arbitrary command execution. The third vulnerability involved the environment variable ANTHROPIC_BASE_URL, which could be manipulated to exfiltrate API keys before the user even interacted with the project, giving attackers full access to Claude Code Workspaces, including other developers’ shared files.
These flaws represented significant supply chain risks, as attackers could deliver malicious configurations through pull requests, publicly available repositories, or compromised internal enterprise codebases. The potential impact included unauthorized access to sensitive files, deletion or poisoning of workspace files, and unauthorized API usage leading to financial or operational consequences.
Anthropic addressed the issues by enhancing user consent dialogs, ensuring MCP servers could not execute without explicit approval, and blocking network operations, including API calls, until users approved the trust dialog. These fixes remediate the vulnerabilities described by Check Point Research.
The report emphasizes the growing security risks associated with AI-powered development tools. Configuration files, once considered passive metadata, now control active execution paths, creating new attack surfaces. Developers are urged to keep tools updated, scrutinize project configuration files, review repository changes carefully, and heed warnings about potentially unsafe files to mitigate these emerging threats.
This disclosure underscores the balance required between the productivity benefits of AI-assisted development and the necessity of maintaining secure software supply chains.

