Update README.md
Browse files
README.md
CHANGED
@@ -9,20 +9,24 @@ tags:
|
|
9 |
---
|
10 |
# Model Card for DistilBERT-PromptInjectionDetectorForCVs
|
11 |
|
12 |
-
## Model
|
13 |
-
This DistilBERT
|
14 |
|
15 |
-
## Research
|
16 |
-
The
|
17 |
|
18 |
## Training Data
|
19 |
-
|
|
|
|
|
|
|
|
|
20 |
|
21 |
## Intended Use
|
22 |
-
This model is
|
23 |
|
24 |
-
## Limitations and
|
25 |
-
|
26 |
|
27 |
-
##
|
28 |
-
|
|
|
9 |
---
|
10 |
# Model Card for DistilBERT-PromptInjectionDetectorForCVs
|
11 |
|
12 |
+
## Model Overview
|
13 |
+
This model, leveraging the DistilBERT architecture, has been fine-tuned to demonstrate a strategy for mitigating prompt injection attacks. While it is specifically tailored for a synthetic application that handles CVs, the underlying research and methodology are intended to be applicable across various domains. This model serves as an example of how fine-tuning with domain-specific data can enhance the detection of prompt injection attempts in a targeted use case.
|
14 |
|
15 |
+
## Research Context
|
16 |
+
The development of this model was part of broader research into general strategies for mitigating prompt injection attacks in Large Language Models (LLMs). The detailed findings and methodology are discussed in our [research blog](http://placeholder), with the synthetic CV application available [here](http://placeholder) serving as a practical demonstration.
|
17 |
|
18 |
## Training Data
|
19 |
+
To fine-tune this model, we combined a domain-specific dataset (legitimate CVs) with examples of prompt injections, resulting in a custom dataset that provides a nuanced perspective on detecting prompt injection attacks. This approach leverages the strengths of both:
|
20 |
+
- **CV Dataset:** [Resume Dataset](https://huggingface.co/datasets/Lakshmi12/Resume_Dataset)
|
21 |
+
- **Prompt Injection Dataset:** [Prompt Injections](https://huggingface.co/datasets/deepset/prompt-injections)
|
22 |
+
|
23 |
+
The custom dataset includes legitimate CVs, pure prompt injection examples, and CVs embedded with prompt injection attempts, creating a rich training environment for the model.
|
24 |
|
25 |
## Intended Use
|
26 |
+
This model is a demonstration of how a domain-specific approach can be applied to mitigate prompt injection attacks within a particular context, in this case, a synthetic CV application. It is important to note that this model is not intended for direct production use but rather to serve as an example within a broader strategy for securing LLMs against such attacks.
|
27 |
|
28 |
+
## Limitations and Considerations
|
29 |
+
The challenge of prompt injection in LLMs is an ongoing research area, with no definitive solution currently available. While this model demonstrates a possible mitigation strategy within a specific domain, it is essential to recognize that it does not offer a comprehensive solution to the problem. Future prompt injection techniques may still succeed, underscoring the importance of continuous research and adaptation of mitigation strategies.
|
30 |
|
31 |
+
## Conclusion
|
32 |
+
Our research aims to contribute to the broader discussion on securing LLMs against prompt injection attacks. This model, while specific to a synthetic application, showcases a piece of the puzzle in addressing these challenges. We encourage further exploration and development of strategies to fortify models against evolving threats in this space.
|