Spaces:
Runtime error
Runtime error
DeWitt Gibson
commited on
Commit
·
b9edbb6
1
Parent(s):
f957ad6
add youtube videos
Browse files- README.md +9 -1
- docs/README.md +46 -0
README.md
CHANGED
|
@@ -19,7 +19,15 @@ license: apache-2.0
|
|
| 19 |
[](https://www.python.org/downloads/)
|
| 20 |
[](https://github.com/psf/black)
|
| 21 |
|
| 22 |
-
Comprehensive LLM AI Model protection toolset aligned to addressing OWASP vulnerabilities in Large Language Models
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 23 |
|
| 24 |
**Author:** [DeWitt Gibson](https://www.linkedin.com/in/dewitt-gibson/)
|
| 25 |
|
|
|
|
| 19 |
[](https://www.python.org/downloads/)
|
| 20 |
[](https://github.com/psf/black)
|
| 21 |
|
| 22 |
+
Comprehensive LLM AI Model protection toolset aligned to addressing OWASP vulnerabilities in Large Language Models.
|
| 23 |
+
|
| 24 |
+
LLMGuardian is a cybersecurity toolset designed to protect production Generative AI applications by addressing the OWASP LLM Top 10 vulnerabilities. This toolset offers comprehensive features like Prompt Injection Detection, Data Leakage Prevention, and a Streamlit Interactive Dashboard for monitoring threats. The OWASP Top 10 for LLM Applications 2025 comprehensively lists and explains the ten most critical security risks specific to LLMs, such as Prompt Injection, Sensitive Information Disclosure, Supply Chain vulnerabilities, and Excessive Agency.
|
| 25 |
+
|
| 26 |
+
## 🎥 Demo Video
|
| 27 |
+
|
| 28 |
+
Watch the LLMGuardian demonstration and walkthrough:
|
| 29 |
+
|
| 30 |
+
[](https://youtu.be/vzMJXuoS-ko?si=umzS-6eqKl8mMtY_)
|
| 31 |
|
| 32 |
**Author:** [DeWitt Gibson](https://www.linkedin.com/in/dewitt-gibson/)
|
| 33 |
|
docs/README.md
CHANGED
|
@@ -1,5 +1,51 @@
|
|
| 1 |
# LLM Guardian Documentation
|
| 2 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 3 |
# Command Line Interface
|
| 4 |
|
| 5 |
**cli_interface.py**
|
|
|
|
| 1 |
# LLM Guardian Documentation
|
| 2 |
|
| 3 |
+
## Overview
|
| 4 |
+
|
| 5 |
+
LLMGuardian is a comprehensive security framework designed to protect Large Language Model (LLM) applications from the top security risks outlined in the OWASP Top 10 for LLM Applications. Watch our introduction video to learn more:
|
| 6 |
+
|
| 7 |
+
[](https://youtu.be/ERy37m5_kuk?si=mkKEy01Z4__qvxlr)
|
| 8 |
+
|
| 9 |
+
## Key Features
|
| 10 |
+
|
| 11 |
+
- **Real-time Threat Detection**: Advanced pattern recognition for prompt injection, jailbreaking, and malicious inputs
|
| 12 |
+
- **Privacy Protection**: Comprehensive PII detection and data sanitization
|
| 13 |
+
- **Vector Security**: Embedding validation and RAG operation protection
|
| 14 |
+
- **Agency Control**: Permission management and action validation for LLM operations
|
| 15 |
+
- **Comprehensive Monitoring**: Usage tracking, behavior analysis, and audit logging
|
| 16 |
+
- **Multi-layered Defense**: Input sanitization, output validation, and content filtering
|
| 17 |
+
- **Enterprise Ready**: Scalable architecture with cloud deployment support
|
| 18 |
+
|
| 19 |
+
## Architecture
|
| 20 |
+
|
| 21 |
+
LLMGuardian follows a modular architecture with the following core packages:
|
| 22 |
+
|
| 23 |
+
- **Core**: Configuration management, security services, rate limiting, and logging
|
| 24 |
+
- **Defenders**: Input sanitization, output validation, content filtering, and token validation
|
| 25 |
+
- **Monitors**: Usage monitoring, behavior analysis, threat detection, and audit logging
|
| 26 |
+
- **Vectors**: Embedding validation, vector scanning, RAG protection, and storage security
|
| 27 |
+
- **Agency**: Permission management, action validation, and scope limitation
|
| 28 |
+
- **Dashboard**: Web-based monitoring and control interface
|
| 29 |
+
- **CLI**: Command-line interface for security operations
|
| 30 |
+
|
| 31 |
+
## Quick Start
|
| 32 |
+
|
| 33 |
+
```bash
|
| 34 |
+
# Install LLMGuardian
|
| 35 |
+
pip install llmguardian
|
| 36 |
+
|
| 37 |
+
# Basic usage
|
| 38 |
+
from llmguardian import LLMGuardian
|
| 39 |
+
|
| 40 |
+
guardian = LLMGuardian()
|
| 41 |
+
result = guardian.scan_prompt("Your prompt here")
|
| 42 |
+
|
| 43 |
+
if result.is_safe:
|
| 44 |
+
print("Prompt is safe to process")
|
| 45 |
+
else:
|
| 46 |
+
print(f"Security risks detected: {result.risks}")
|
| 47 |
+
```
|
| 48 |
+
|
| 49 |
# Command Line Interface
|
| 50 |
|
| 51 |
**cli_interface.py**
|