DeWitt Gibson commited on
Commit
b9edbb6
·
1 Parent(s): f957ad6

add youtube videos

Browse files
Files changed (2) hide show
  1. README.md +9 -1
  2. docs/README.md +46 -0
README.md CHANGED
@@ -19,7 +19,15 @@ license: apache-2.0
19
  [![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
20
  [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
21
 
22
- Comprehensive LLM AI Model protection toolset aligned to addressing OWASP vulnerabilities in Large Language Models
 
 
 
 
 
 
 
 
23
 
24
  **Author:** [DeWitt Gibson](https://www.linkedin.com/in/dewitt-gibson/)
25
 
 
19
  [![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
20
  [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
21
 
22
+ Comprehensive LLM AI Model protection toolset aligned to addressing OWASP vulnerabilities in Large Language Models.
23
+
24
+ LLMGuardian is a cybersecurity toolset designed to protect production Generative AI applications by addressing the OWASP LLM Top 10 vulnerabilities. This toolset offers comprehensive features like Prompt Injection Detection, Data Leakage Prevention, and a Streamlit Interactive Dashboard for monitoring threats. The OWASP Top 10 for LLM Applications 2025 comprehensively lists and explains the ten most critical security risks specific to LLMs, such as Prompt Injection, Sensitive Information Disclosure, Supply Chain vulnerabilities, and Excessive Agency.
25
+
26
+ ## 🎥 Demo Video
27
+
28
+ Watch the LLMGuardian demonstration and walkthrough:
29
+
30
+ [![LLMGuardian Demo](https://img.shields.io/badge/YouTube-Demo%20Video-red?style=for-the-badge&logo=youtube)](https://youtu.be/vzMJXuoS-ko?si=umzS-6eqKl8mMtY_)
31
 
32
  **Author:** [DeWitt Gibson](https://www.linkedin.com/in/dewitt-gibson/)
33
 
docs/README.md CHANGED
@@ -1,5 +1,51 @@
1
  # LLM Guardian Documentation
2
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
3
  # Command Line Interface
4
 
5
  **cli_interface.py**
 
1
  # LLM Guardian Documentation
2
 
3
+ ## Overview
4
+
5
+ LLMGuardian is a comprehensive security framework designed to protect Large Language Model (LLM) applications from the top security risks outlined in the OWASP Top 10 for LLM Applications. Watch our introduction video to learn more:
6
+
7
+ [![LLMGuardian Introduction](https://img.youtube.com/vi/ERy37m5_kuk/0.jpg)](https://youtu.be/ERy37m5_kuk?si=mkKEy01Z4__qvxlr)
8
+
9
+ ## Key Features
10
+
11
+ - **Real-time Threat Detection**: Advanced pattern recognition for prompt injection, jailbreaking, and malicious inputs
12
+ - **Privacy Protection**: Comprehensive PII detection and data sanitization
13
+ - **Vector Security**: Embedding validation and RAG operation protection
14
+ - **Agency Control**: Permission management and action validation for LLM operations
15
+ - **Comprehensive Monitoring**: Usage tracking, behavior analysis, and audit logging
16
+ - **Multi-layered Defense**: Input sanitization, output validation, and content filtering
17
+ - **Enterprise Ready**: Scalable architecture with cloud deployment support
18
+
19
+ ## Architecture
20
+
21
+ LLMGuardian follows a modular architecture with the following core packages:
22
+
23
+ - **Core**: Configuration management, security services, rate limiting, and logging
24
+ - **Defenders**: Input sanitization, output validation, content filtering, and token validation
25
+ - **Monitors**: Usage monitoring, behavior analysis, threat detection, and audit logging
26
+ - **Vectors**: Embedding validation, vector scanning, RAG protection, and storage security
27
+ - **Agency**: Permission management, action validation, and scope limitation
28
+ - **Dashboard**: Web-based monitoring and control interface
29
+ - **CLI**: Command-line interface for security operations
30
+
31
+ ## Quick Start
32
+
33
+ ```bash
34
+ # Install LLMGuardian
35
+ pip install llmguardian
36
+
37
+ # Basic usage
38
+ from llmguardian import LLMGuardian
39
+
40
+ guardian = LLMGuardian()
41
+ result = guardian.scan_prompt("Your prompt here")
42
+
43
+ if result.is_safe:
44
+ print("Prompt is safe to process")
45
+ else:
46
+ print(f"Security risks detected: {result.risks}")
47
+ ```
48
+
49
  # Command Line Interface
50
 
51
  **cli_interface.py**