CIS, Astrix, and Cequence Launch AI-Focused Cybersecurity Guidance to Secure Autonomous Systems
The Center for Internet Security (CIS®), Astrix Security, and Cequence Security have formed a strategic partnership to develop new cybersecurity guidance focused on the unique risks of artificial intelligence (AI) and agentic systems. The initiative expands the CIS Critical Security Controls® into AI environments, addressing challenges created by autonomous decision-making, API access, and automated threats. As part of the partnership, two companion guides will be created: one for AI Agent Environments, targeting security across the agent system lifecycle, and another for Model Context Protocol (MCP) environments, which tackles risks such as credential exposure, ungoverned local execution, unapproved third-party connections, and uncontrolled data flows between models and tools.
The guidance is designed to provide organizations with safeguards for environments where MCP agents, tools, and registries interact dynamically with enterprise systems. Astrix Security will focus on securing AI agents, MCP servers, and non-human identities, including API keys, service accounts, and OAuth tokens. Cequence Security will contribute its expertise in enterprise application and API security, ensuring visibility, governance, and control over AI agent operations. The partnership combines security standards, application defense, and AI-specific expertise to help organizations adopt AI responsibly and securely.
The new guidance is expected to be released in early 2026, supported by workshops, webinars, and additional resources from the three organizations. The initiative aims to translate cybersecurity recommendations into practical solutions, establishing a common framework for enterprises, vendors, and security leaders to secure AI ecosystems while promoting trust, transparency, and resilience across the artificial intelligence landscape.

