MCP Safety Audit: LLMs with the Model Context Protocol Allow Major Security Exploits
Abstract
To reduce development overhead and enable seamless integration between the components of a generative AI application, the Model Context Protocol (MCP) (Anthropic, 2024) has recently been released and widely adopted. The MCP is an open protocol that standardizes API calls to large language models (LLMs), data sources, and agentic tools. By connecting multiple MCP servers, each defined with a set of tools, resources, and prompts, users can define automated workflows fully driven by LLMs. However, we show that the current MCP design carries a wide range of security risks for end users. In particular, we demonstrate that industry-leading LLMs may be coerced into using MCP tools to compromise an AI developer's system through various attacks, such as malicious code execution, remote access control, and credential theft. To proactively mitigate these and related attacks, we introduce a safety auditing tool, MCPSafetyScanner, the first agentic tool to assess the security of an arbitrary MCP server. MCPSafetyScanner uses several agents to (a) automatically generate adversarial samples given an MCP server's tools and resources; (b) search for related vulnerabilities and remediations based on those samples; and (c) generate a security report detailing all findings. Our work highlights serious security issues with general-purpose agentic workflows while also providing a proactive tool to audit MCP server safety and address detected vulnerabilities before deployment. The described MCP server auditing tool, MCPSafetyScanner, is freely available at: https://github.com/johnhalloran321/mcpSafetyScanner
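The three-agent audit loop described in the abstract can be sketched as a simple pipeline: one stage proposes adversarial probes against a server's exposed tools, one maps probes to vulnerabilities and remediations, and one collates a report. The sketch below is a minimal, stdlib-only illustration of that flow; every name, data structure, and finding string here is hypothetical and does not reflect the actual MCPSafetyScanner implementation, which drives each stage with an LLM agent.

```python
from dataclasses import dataclass


@dataclass
class ServerSpec:
    """Hypothetical summary of an MCP server's exposed attack surface."""
    tools: list       # tool names the server advertises
    resources: list   # resources the server can read


def generate_adversarial_samples(spec: ServerSpec) -> list:
    # Stage (a): craft one probe prompt per exposed tool.
    return [f"coerce '{tool}' into running an unintended command"
            for tool in spec.tools]


def find_vulnerabilities(samples: list) -> list:
    # Stage (b): map each adversarial sample to a finding and a fix.
    # (A real auditor would search vulnerability databases here.)
    return [{"sample": s,
             "vulnerability": "unvalidated tool input",
             "remediation": "sanitize arguments and restrict tool permissions"}
            for s in samples]


def write_report(findings: list) -> str:
    # Stage (c): collate findings into a human-readable security report.
    lines = [f"- {f['vulnerability']}: {f['remediation']}" for f in findings]
    return "MCP Server Security Report\n" + "\n".join(lines)


spec = ServerSpec(tools=["run_shell", "read_file"], resources=["~/.ssh"])
report = write_report(find_vulnerabilities(generate_adversarial_samples(spec)))
print(report)
```

The key design point, which the sketch preserves, is that the audit is driven entirely by the server's own advertised tools and resources, so it can run against an arbitrary MCP server before deployment.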
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Model Context Protocol (MCP): Landscape, Security Threats, and Future Research Directions (2025)
- Enterprise-Grade Security for the Model Context Protocol (MCP): Frameworks and Mitigation Strategies (2025)
- MCP Bridge: A Lightweight, LLM-Agnostic RESTful Proxy for Model Context Protocol Servers (2025)
- Prompt Flow Integrity to Prevent Privilege Escalation in LLM Agents (2025)
- Output Constraints as Attack Surface: Exploiting Structured Generation to Bypass LLM Safety Mechanisms (2025)
- Do LLMs Consider Security? An Empirical Study on Responses to Programming Questions (2025)
- SoK: Understanding Vulnerabilities in the Large Language Model Supply Chain (2025)