---
id: red-teaming-vulnerabilities-prompt-leakage
title: Prompt Leakage
sidebar_label: Prompt Leakage
---

<head>
  <link
    rel="canonical"
    href="https://trydeepteam.com/docs/red-teaming-vulnerabilities-prompt-leakage"
  />
</head>

The prompt leakage vulnerability tests whether an LLM can resist revealing sensitive or internal details defined within its system prompt.

```bash
pip install deepteam
```

```python
from deepteam.vulnerabilities import PromptLeakage

# Target leakage of secrets and credentials embedded in the system prompt
prompt_leakage = PromptLeakage(types=["secrets and credentials"])
```
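As a sketch of how this vulnerability could plug into a full red teaming run (the `red_team` entry point, the `PromptInjection` attack, and the placeholder `model_callback` below follow DeepTeam's documented usage pattern; treat the exact signatures as assumptions and see DeepTeam's docs for the authoritative API):

```python
from deepteam import red_team
from deepteam.vulnerabilities import PromptLeakage
from deepteam.attacks.single_turn import PromptInjection

# Hypothetical callback wrapping the LLM system under test;
# replace this with a call into your own application.
async def model_callback(input: str) -> str:
    return f"I'm sorry but I can't answer this: {input}"

prompt_leakage = PromptLeakage(types=["secrets and credentials"])
prompt_injection = PromptInjection()

# Probes the target for system prompt leakage via prompt injection attacks
risk_assessment = red_team(
    model_callback=model_callback,
    vulnerabilities=[prompt_leakage],
    attacks=[prompt_injection],
)
```

Note that running this requires `deepteam` to be installed and an evaluation model configured (for example, an OpenAI API key), since red teaming uses an LLM to simulate attacks and score responses.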

Learn more about how to red team LLM systems using the prompt leakage vulnerability on [DeepTeam's docs.](https://trydeepteam.com/docs/red-teaming-vulnerabilities-prompt-leakage)

:::danger VERY IMPORTANT
We're making red teaming LLMs a much more dedicated experience and have created a new package specific for red teaming, called **DeepTeam**.

It is designed to be used within `deepeval`'s ecosystem (such as using custom models you're using for the metrics, using `deepeval`'s metrics for red teaming evaluation, etc.).

To begin, install `deepteam`:

```bash
pip install deepteam
```

You can read [DeepTeam's docs here.](https://trydeepteam.com/docs/red-teaming-vulnerabilities)
:::
