Files changed:
- README.md (+86 / -33)
- app.py (+20 / -0)
- exposuregpt_simple.py (+1 / -7)
- gitattributes (+35 / -0)
- requirements.txt (+3 / -11)
README.md
CHANGED
@@ -1,49 +1,102 @@
-- "Agent working now..."
-- "🤖 LLM interpreting input..."
-- "🔍 Checking OSINT data..."
-- "🧠 AI generating security analysis..."
-- "📊 Compiling intelligence report..."
-- "✅ Intelligence gathering complete!"
-2. Set Repository Secrets:
-   - `OPENAI_API_KEY` = your OpenAI API key
-   - `SHODAN_API_KEY` = your Shodan API key
-3. Space will auto-rebuild and deploy
+---
+title: ExposureGPT
+emoji: 🎯
+colorFrom: blue
+colorTo: purple
+sdk: gradio
+sdk_version: "5.0.0"
+app_file: app.py
+pinned: false
+license: mit
+short_description: Simplified OSINT Intelligence Platform with MCP Support
+---
 
+# 🎯 ExposureGPT - Simplified OSINT Intelligence
 
+**Single MCP tool for comprehensive security intelligence using Shodan + OpenAI**
 
+[](https://huggingface.co/spaces/ACloudCenter/ExposureGPT)
+[](https://huggingface.co/spaces/ACloudCenter/ExposureGPT)
 
+## 🚀 Features
 
+- **Single Tool**: One comprehensive OSINT intelligence gathering function
+- **Shodan Integration**: Real infrastructure and device discovery
+- **AI Analysis**: GPT-4o-mini powered security insights
+- **MCP Server**: Built-in Model Context Protocol server for AI assistants
+- **Risk Assessment**: Automated security scoring and recommendations
+- **Simple Interface**: Single input, comprehensive output
 
+## 🔧 Configuration
 
+⚠️ **Required**: Set these environment variables in your Space settings:
 
+- `SHODAN_API_KEY` - Your Shodan API key (get from https://shodan.io)
+- `OPENAI_API_KEY` - Your OpenAI API key (get from https://openai.com)
 
+## 🤖 MCP Integration
 
+This Space automatically serves as an MCP server that AI assistants like Claude can use!
 
+**MCP Endpoint**: `https://acloudcenter-exposuregpt.hf.space/gradio_api/mcp/sse`
 
+**Claude Desktop Configuration**:
+```json
+{
+  "mcpServers": {
+    "exposuregpt": {
+      "command": "npx",
+      "args": ["mcp-remote", "https://acloudcenter-exposuregpt.hf.space/gradio_api/mcp/sse"]
+    }
+  }
+}
+```
 
+## 🔍 Available Tool
 
+**`intelligence_gathering(target: str)`**
+- Comprehensive OSINT analysis for any domain, IP address, or organization
+- Uses Shodan for infrastructure discovery and vulnerability detection
+- AI-powered analysis with actionable security recommendations
+- Returns detailed security report with risk assessment
 
+## 💡 Usage Examples
 
+### Web Interface
+- **Domain**: `google.com` - Analyze domain infrastructure
+- **IP Address**: `8.8.8.8` - Scan specific IP for services
+- **Organization**: `Microsoft Corp` - Corporate intelligence gathering
 
+### Via AI Assistant (Claude)
 ```
+"Analyze the security posture of example.com"
+"What are the security risks for tesla.com?"
+"Perform OSINT analysis on 1.1.1.1"
 ```
 
+## 🛡️ Security & Ethics
 
+This tool is designed for:
+- ✅ Security awareness and education
+- ✅ Authorized penetration testing
+- ✅ Risk assessment for your own organization
+- ✅ Academic research
 
+**Not for:**
+- ❌ Unauthorized reconnaissance
+- ❌ Malicious activities
+- ❌ Privacy violations
 
+## 🔄 How It Works
 
+1. **Input Analysis**: Automatically detects if target is domain, IP, or organization
+2. **Shodan Query**: Searches for exposed infrastructure and services
+3. **Risk Assessment**: Analyzes vulnerabilities and calculates risk scores
+4. **AI Analysis**: GPT-4o-mini generates security insights and recommendations
+5. **Comprehensive Report**: Formatted intelligence report with actionable findings
 
+Perfect for security researchers, penetration testers, and AI assistants needing OSINT capabilities.
 
+---
 
+*Built for the 2025 Gradio Agents & MCP Hackathon*
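For readers who want to exercise the documented `intelligence_gathering` tool outside of Claude, the Space can also be called with the Gradio Python client. The sketch below is illustrative only: the `api_name` value is an assumption, so confirm the real endpoint name on the Space's "Use via API" page.

```python
# Minimal sketch: call the ExposureGPT Space from Python with gradio_client.
# Assumption: the single tool is exposed under api_name "/intelligence_gathering";
# verify the actual name on the Space's "Use via API" page before relying on it.
from gradio_client import Client

client = Client("ACloudCenter/ExposureGPT")
report = client.predict(
    "example.com",                       # target: domain, IP address, or organization
    api_name="/intelligence_gathering",  # assumed endpoint name
)
print(report)
```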
app.py
ADDED
@@ -0,0 +1,20 @@
+#!/usr/bin/env python3
+"""
+ExposureGPT - HuggingFace Spaces App
+Simplified OSINT Intelligence Platform with MCP Support
+"""
+
+import os
+import sys
+
+# Set up environment for HuggingFace Spaces
+os.environ.setdefault('GRADIO_SERVER_NAME', '0.0.0.0')
+os.environ.setdefault('GRADIO_SERVER_PORT', '7860')
+
+# Import and run the simplified version
+from exposuregpt_simple import main
+
+if __name__ == "__main__":
+    # Force web mode for HuggingFace Spaces
+    sys.argv = ['app.py', '--port', '7860', '--share']
+    main()
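app.py only delegates to `main()` in exposuregpt_simple.py, whose launch code is not part of this diff. Given `gradio[mcp]` in requirements.txt and the README's MCP claims, the launch presumably looks roughly like the hedged sketch below; the component choices and labels are assumptions, and only `launch(mcp_server=True)` reflects Gradio's documented MCP switch.

```python
# Hedged sketch of what exposuregpt_simple.main() plausibly does; names, labels,
# and component choices are assumptions, not the repository's actual code.
import gradio as gr

def main():
    demo = gr.Interface(
        fn=intelligence_gathering,  # the module's single OSINT tool (see diff below)
        inputs=gr.Textbox(label="Target (domain, IP, or organization)"),
        outputs=gr.Markdown(label="Intelligence Report"),
        title="🎯 ExposureGPT",
    )
    # With gradio[mcp] installed, this launch also exposes the registered function
    # over MCP at /gradio_api/mcp/sse, matching the endpoint listed in the README.
    demo.launch(server_name="0.0.0.0", server_port=7860, mcp_server=True)
```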
exposuregpt_simple.py
CHANGED
@@ -52,7 +52,7 @@ if OpenAI and OPENAI_API_KEY:
         logger.error(f"❌ OpenAI connection failed: {e}")
 
 
-def intelligence_gathering(target: str, progress=gr.Progress()) -> str:
+def intelligence_gathering(target: str) -> str:
     """
     Comprehensive OSINT intelligence gathering for domains, IPs, or organizations.
 
@@ -67,10 +67,8 @@ def intelligence_gathering(target: str, progress=gr.Progress()) -> str:
     """
     try:
         logger.info(f"🎯 Starting intelligence gathering for: {target}")
-        progress(0, desc="Agent working now...")
 
         # Step 1: LLM interprets and clarifies user input
-        progress(0.1, desc="🤖 LLM interpreting input...")
         interpreted_target = _interpret_user_input(target)
 
         # Check if LLM needs clarification
@@ -82,7 +80,6 @@ def intelligence_gathering(target: str, progress=gr.Progress()) -> str:
         logger.info(f"🤖 LLM interpreted '{target}' as '{interpreted_target}'")
 
         # Step 2: Gather raw intelligence data
-        progress(0.3, desc="🔍 Checking OSINT data...")
         shodan_data = _gather_shodan_intelligence(interpreted_target)
 
         # Check if we have any data to work with
@@ -90,18 +87,15 @@ def intelligence_gathering(target: str, progress=gr.Progress()) -> str:
             return f"❌ Cannot analyze {interpreted_target}: {shodan_data['error']}\n\nPlease configure API keys and try again."
 
         # Step 3: Generate AI-powered analysis
-        progress(0.6, desc="🧠 AI generating security analysis...")
         ai_analysis = _generate_ai_analysis(interpreted_target, shodan_data)
 
         # Step 4: Format comprehensive report
-        progress(0.9, desc="📊 Compiling intelligence report...")
         report = _format_intelligence_report(interpreted_target, shodan_data, ai_analysis)
 
         # Add interpretation note if target was changed
         if interpreted_target != target:
             report = f"🤖 **LLM Interpretation**: Analyzed '{interpreted_target}' based on your query: '{target}'\n\n" + report
 
-        progress(1.0, desc="✅ Intelligence gathering complete!")
         logger.info(f"✅ Intelligence gathering completed for {interpreted_target}")
         return report
 
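The hunks above call several helpers that this diff does not show (`_interpret_user_input`, `_gather_shodan_intelligence`, `_generate_ai_analysis`, `_format_intelligence_report`). Purely as a hedged illustration of the Shodan step in the README's "How It Works" pipeline, a helper of that kind might branch between a direct host lookup and a search query; the repository's actual implementation may differ.

```python
# Illustrative sketch of a Shodan lookup in the spirit of _gather_shodan_intelligence;
# this is not the repository's code, just one plausible shape for that step.
import ipaddress
import os

import shodan

def gather_shodan_intelligence(target: str) -> dict:
    api = shodan.Shodan(os.getenv("SHODAN_API_KEY", ""))
    try:
        ipaddress.ip_address(target)  # raises ValueError if target is not an IP address
        is_ip = True
    except ValueError:
        is_ip = False
    try:
        if is_ip:
            host = api.host(target)  # direct host lookup for an IP
            return {
                "ip": host.get("ip_str"),
                "ports": host.get("ports", []),
                "vulns": sorted(host.get("vulns", [])),
            }
        results = api.search(f"hostname:{target}")  # domain/organization-style query
        return {"total": results.get("total", 0), "matches": results.get("matches", [])[:5]}
    except shodan.APIError as exc:
        return {"error": str(exc)}
```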
gitattributes
ADDED
@@ -0,0 +1,35 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
requirements.txt
CHANGED
@@ -1,12 +1,4 @@
-
-
-openai>=1.0.0
+gradio[mcp]
+openai
 shodan
-
-pandas
-numpy
-beautifulsoup4
-dnspython
-python-dotenv
-pydantic
-cryptography
+python-dotenv
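The trimmed requirements keep `python-dotenv`, which suggests local runs load the two keys the README requires from a `.env` file, while a deployed Space reads them from repository secrets. A small hedged sketch of that configuration step (the variable handling shown here is an assumption, not the repository's exact code):

```python
# Hedged sketch of loading the required API keys; assumes python-dotenv usage
# for local development. In a HuggingFace Space the same variables come from
# Settings > Variables and secrets instead of a .env file.
import os

from dotenv import load_dotenv

load_dotenv()  # no-op if there is no local .env file

SHODAN_API_KEY = os.getenv("SHODAN_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

if not (SHODAN_API_KEY and OPENAI_API_KEY):
    raise SystemExit("Set SHODAN_API_KEY and OPENAI_API_KEY before launching ExposureGPT.")
```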