ACloudCenter committed on
Commit 53193f8 · verified · 1 Parent(s): 3256216
Files changed (5)
  1. README.md +86 -33
  2. app.py +20 -0
  3. exposuregpt_simple.py +1 -7
  4. gitattributes +35 -0
  5. requirements.txt +3 -11
README.md CHANGED
@@ -1,49 +1,102 @@
- # ExposureGPT - Manual Upload v2 (Progress Indicators)

- ## NEW: Native Progress Indicators ✨

- This version includes native Gradio progress indicators showing real-time status:
- - "Agent working now..."
- - "🤖 LLM interpreting input..."
- - "🔍 Checking OSINT data..."
- - "🧠 AI generating security analysis..."
- - "📋 Compiling intelligence report..."
- - "✅ Intelligence gathering complete!"

- Users now see exactly what's happening and won't repeatedly click the button!

- ## Files for HuggingFace Spaces Upload

- 1. **app.py** - HuggingFace Spaces entry point
- 2. **exposuregpt_simple.py** - Main application with progress indicators
- 3. **requirements.txt** - Dependencies for HF Spaces
- 4. **README.md** - This file

- ## HuggingFace Spaces Setup

- 1. Upload all 4 files to your HuggingFace Space
- 2. Set Repository Secrets:
-    - `OPENAI_API_KEY` = your OpenAI API key
-    - `SHODAN_API_KEY` = your Shodan API key
- 3. Space will auto-rebuild and deploy

- ## MCP Endpoint

- Once deployed, your MCP endpoint will be:
  ```
- https://your-username-spacename.hf.space/gradio_api/mcp/sse
  ```

- ## What's New in v2

- - ✅ Native Gradio progress indicators
- - ✅ Real-time status updates during analysis
- - ✅ Better user experience (no more button mashing!)
- - ✅ Same powerful OSINT + LLM analysis
- - ✅ Full MCP server compatibility

- ## Working Example

- Live at: https://acloudcenter-exposuregpt.hf.space

- Enjoy the enhanced user experience! 🚀
+ ---
+ title: ExposureGPT
+ emoji: 🎯
+ colorFrom: blue
+ colorTo: purple
+ sdk: gradio
+ sdk_version: "5.0.0"
+ app_file: app.py
+ pinned: false
+ license: mit
+ short_description: Simplified OSINT Intelligence Platform with MCP Support
+ ---

+ # 🎯 ExposureGPT - Simplified OSINT Intelligence

+ **Single MCP tool for comprehensive security intelligence using Shodan + OpenAI**

+ [![🚀 Live on HuggingFace](https://img.shields.io/badge/🚀-Live%20on%20HuggingFace-blue)](https://huggingface.co/spaces/ACloudCenter/ExposureGPT)
+ [![MCP Server](https://img.shields.io/badge/🤖-MCP%20Server-green)](https://huggingface.co/spaces/ACloudCenter/ExposureGPT)

+ ## 🚀 Features

+ - **Single Tool**: One comprehensive OSINT intelligence gathering function
+ - **Shodan Integration**: Real infrastructure and device discovery
+ - **AI Analysis**: GPT-4o-mini powered security insights
+ - **MCP Server**: Built-in Model Context Protocol server for AI assistants
+ - **Risk Assessment**: Automated security scoring and recommendations
+ - **Simple Interface**: Single input, comprehensive output

+ ## 🔧 Configuration

+ ⚠️ **Required**: Set these environment variables in your Space settings:

+ - `SHODAN_API_KEY` - Your Shodan API key (get from https://shodan.io)
+ - `OPENAI_API_KEY` - Your OpenAI API key (get from https://openai.com)

+ ## 🤖 MCP Integration
+
+ This Space automatically serves as an MCP server that AI assistants like Claude can use!
+
+ **MCP Endpoint**: `https://acloudcenter-exposuregpt.hf.space/gradio_api/mcp/sse`
+
+ **Claude Desktop Configuration**:
+ ```json
+ {
+   "mcpServers": {
+     "exposuregpt": {
+       "command": "npx",
+       "args": ["mcp-remote", "https://acloudcenter-exposuregpt.hf.space/gradio_api/mcp/sse"]
+     }
+   }
+ }
+ ```
+
+ ## 📊 Available Tool
+
+ **`intelligence_gathering(target: str)`**
+ - Comprehensive OSINT analysis for any domain, IP address, or organization
+ - Uses Shodan for infrastructure discovery and vulnerability detection
+ - AI-powered analysis with actionable security recommendations
+ - Returns a detailed security report with risk assessment
+
+ ## 💡 Usage Examples
+
+ ### Web Interface
+ - **Domain**: `google.com` - Analyze domain infrastructure
+ - **IP Address**: `8.8.8.8` - Scan specific IP for services
+ - **Organization**: `Microsoft Corp` - Corporate intelligence gathering
+
+ ### Via AI Assistant (Claude)
  ```
+ "Analyze the security posture of example.com"
+ "What are the security risks for tesla.com?"
+ "Perform OSINT analysis on 1.1.1.1"
  ```

+ ## 🛡️ Security & Ethics
+
+ This tool is designed for:
+ - ✅ Security awareness and education
+ - ✅ Authorized penetration testing
+ - ✅ Risk assessment for your own organization
+ - ✅ Academic research
+
+ **Not for:**
+ - ❌ Unauthorized reconnaissance
+ - ❌ Malicious activities
+ - ❌ Privacy violations
+
+ ## 🔍 How It Works

+ 1. **Input Analysis**: Automatically detects whether the target is a domain, an IP, or an organization
+ 2. **Shodan Query**: Searches for exposed infrastructure and services
+ 3. **Risk Assessment**: Analyzes vulnerabilities and calculates risk scores
+ 4. **AI Analysis**: GPT-4o-mini generates security insights and recommendations
+ 5. **Comprehensive Report**: Formatted intelligence report with actionable findings

+ Perfect for security researchers, penetration testers, and AI assistants needing OSINT capabilities.

+ ---

+ *Built for the 2025 Gradio Agents & MCP Hackathon*
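
The new README documents a single tool, `intelligence_gathering(target: str)`, reachable through the web UI and the MCP endpoint. For callers who prefer plain Python over an MCP client, below is a minimal sketch using `gradio_client`; the Space ID comes from the README, but the `api_name` is an assumption based on the function name, not something the diff confirms.

```python
# Minimal sketch: calling the Space's single tool from Python via gradio_client.
# Assumption: the tool is exposed under api_name "/intelligence_gathering".
from gradio_client import Client

client = Client("ACloudCenter/ExposureGPT")

# The tool takes one string (domain, IP, or organization) and returns a
# markdown intelligence report as a single string.
report = client.predict(
    "example.com",
    api_name="/intelligence_gathering",  # assumed endpoint name
)
print(report)
```

If the endpoint name differs, `client.view_api()` lists the routes the Space actually exposes.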
app.py ADDED
@@ -0,0 +1,20 @@
+ #!/usr/bin/env python3
+ """
+ ExposureGPT - HuggingFace Spaces App
+ Simplified OSINT Intelligence Platform with MCP Support
+ """
+
+ import os
+ import sys
+
+ # Set up environment for HuggingFace Spaces
+ os.environ.setdefault('GRADIO_SERVER_NAME', '0.0.0.0')
+ os.environ.setdefault('GRADIO_SERVER_PORT', '7860')
+
+ # Import and run the simplified version
+ from exposuregpt_simple import main
+
+ if __name__ == "__main__":
+     # Force web mode for HuggingFace Spaces
+     sys.argv = ['app.py', '--port', '7860', '--share']
+     main()
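
app.py delegates everything to `exposuregpt_simple.main`, which this commit does not show. For orientation only, here is a hypothetical sketch of how such a `main` could wire `intelligence_gathering` into a Gradio app with the MCP server enabled, as the `gradio[mcp]` dependency and the README's MCP endpoint imply; the names and layout are illustrative, not the actual file contents.

```python
# Hypothetical sketch of exposuregpt_simple.main(); the real implementation
# is not part of this diff. Assumes gradio[mcp] is installed.
import gradio as gr


def intelligence_gathering(target: str) -> str:
    """Placeholder standing in for the real OSINT pipeline."""
    return f"Report for {target}"


def main() -> None:
    demo = gr.Interface(
        fn=intelligence_gathering,
        inputs=gr.Textbox(label="Domain, IP, or organization"),
        outputs=gr.Markdown(label="Intelligence report"),
        title="ExposureGPT",
    )
    # mcp_server=True exposes the function as an MCP tool at
    # /gradio_api/mcp/sse, matching the endpoint named in the README.
    demo.launch(server_name="0.0.0.0", server_port=7860, mcp_server=True)


if __name__ == "__main__":
    main()
```

The real `main()` evidently also parses command-line flags (app.py injects `--port` and `--share` via `sys.argv`), which this sketch omits.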
exposuregpt_simple.py CHANGED
@@ -52,7 +52,7 @@ if OpenAI and OPENAI_API_KEY:
          logger.error(f"❌ OpenAI connection failed: {e}")


- def intelligence_gathering(target: str, progress=gr.Progress()) -> str:
+ def intelligence_gathering(target: str) -> str:
      """
      Comprehensive OSINT intelligence gathering for domains, IPs, or organizations.

@@ -67,10 +67,8 @@ def intelligence_gathering(target: str, progress=gr.Progress()) -> str:
      """
      try:
          logger.info(f"🎯 Starting intelligence gathering for: {target}")
-         progress(0, desc="Agent working now...")

          # Step 1: LLM interprets and clarifies user input
-         progress(0.1, desc="🤖 LLM interpreting input...")
          interpreted_target = _interpret_user_input(target)

          # Check if LLM needs clarification
@@ -82,7 +80,6 @@ def intelligence_gathering(target: str, progress=gr.Progress()) -> str:
          logger.info(f"🤖 LLM interpreted '{target}' as '{interpreted_target}'")

          # Step 2: Gather raw intelligence data
-         progress(0.3, desc="🔍 Checking OSINT data...")
          shodan_data = _gather_shodan_intelligence(interpreted_target)

          # Check if we have any data to work with
@@ -90,18 +87,15 @@ def intelligence_gathering(target: str, progress=gr.Progress()) -> str:
              return f"❌ Cannot analyze {interpreted_target}: {shodan_data['error']}\n\nPlease configure API keys and try again."

          # Step 3: Generate AI-powered analysis
-         progress(0.6, desc="🧠 AI generating security analysis...")
          ai_analysis = _generate_ai_analysis(interpreted_target, shodan_data)

          # Step 4: Format comprehensive report
-         progress(0.9, desc="📋 Compiling intelligence report...")
          report = _format_intelligence_report(interpreted_target, shodan_data, ai_analysis)

          # Add interpretation note if target was changed
          if interpreted_target != target:
              report = f"🤖 **LLM Interpretation**: Analyzed '{interpreted_target}' based on your query: '{target}'\n\n" + report

-         progress(1.0, desc="✅ Intelligence gathering complete!")
          logger.info(f"✅ Intelligence gathering completed for {interpreted_target}")
          return report
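
The removed lines use Gradio's progress API: a `gr.Progress()` default argument turns the parameter into a tracker the UI reads, and each `progress(fraction, desc=...)` call updates the on-screen status. Below is a minimal self-contained sketch of that pattern; the task and step names are illustrative, not taken from exposuregpt_simple.py.

```python
# Minimal sketch of the gr.Progress pattern used by the old signature.
# slow_task and its steps are illustrative placeholders.
import time

import gradio as gr


def slow_task(target: str, progress=gr.Progress()) -> str:
    progress(0.0, desc="Starting...")
    time.sleep(0.5)  # stand-in for real work
    progress(0.5, desc="Gathering data...")
    time.sleep(0.5)
    progress(1.0, desc="Done")
    return f"Finished analysis of {target}"


demo = gr.Interface(fn=slow_task, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()
```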
 
gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
requirements.txt CHANGED
@@ -1,12 +1,4 @@
- # Core Dependencies for HuggingFace Spaces
- gradio[mcp]>=5.0.0
- openai>=1.0.0
+ gradio[mcp]
+ openai
  shodan
- requests
- pandas
- numpy
- beautifulsoup4
- dnspython
- python-dotenv
- pydantic
- cryptography
+ python-dotenv
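
The trimmed requirements keep `python-dotenv` alongside `shodan` and `openai`, which lines up with the two environment variables the README requires. Below is a minimal sketch of how those keys are typically loaded and turned into clients; the setup code in exposuregpt_simple.py is not shown in this diff, so the variable handling here is an assumption.

```python
# Minimal sketch: loading SHODAN_API_KEY and OPENAI_API_KEY as the README
# requires. Client setup is illustrative, not the repository's actual code.
import os

from dotenv import load_dotenv  # provided by python-dotenv
from openai import OpenAI
import shodan

load_dotenv()  # reads a local .env if present; Space secrets arrive as env vars

SHODAN_API_KEY = os.getenv("SHODAN_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

shodan_client = shodan.Shodan(SHODAN_API_KEY) if SHODAN_API_KEY else None
openai_client = OpenAI(api_key=OPENAI_API_KEY) if OPENAI_API_KEY else None
```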