---
title: System Performance Monitor UI
emoji: 📊
colorFrom: blue
colorTo: gray
sdk: gradio
sdk_version: 5.35.0
app_file: app.py
python_version: "3.10"
license: mit
---
# ProMonitor 🌖

## Features
ProMonitor offers a comprehensive interface for system monitoring and management, with a focus on security and usability:
- Dashboard: Displays real-time system metrics (CPU, memory, GPU, disk usage) and health status (Healthy, Degraded, Unhealthy) with a refreshable view.
- History: Visualizes historical CPU, memory, and GPU usage over a user-specified time range (1–1440 minutes) using tables and line charts.
- Resource Limits: Allows setting and viewing CPU, memory, and GPU power limits, with validation and feedback.
- Alerts: Shows active system alerts with severity levels (critical, warning) in a clear, color-coded table.
- Processes: Lists top CPU- and memory-intensive processes, with options to kill, suspend, or resume processes (secured with API key).
- System Info: Provides static system information (platform, CPU, memory, disk, GPU) in a structured format.
- Emergency Stop: Enables termination of high-memory processes with confirmation and API key authentication.
## Prerequisites

- Python: 3.10+ (as specified in `.hf_space.yml`)
- FastAPI Backend: The backend (`app/main.py`) must be running and accessible (default: http://localhost:8000).
- Dependencies: Listed in `requirements.txt` (see Installation).
## Installation

1. Clone the Repository:

   ```bash
   git clone https://huggingface.co/spaces/<username>/ProMonitor
   cd ProMonitor
   ```

2. Install Dependencies:

   ```bash
   pip install -r requirements.txt
   ```

   Dependencies include:
   - gradio==5.35.0
   - requests==2.32.3
   - pydantic==2.9.2
   - python-dotenv==1.0.1
   - logging==0.5.1.2

3. Configure Environment: Create a `.env` file in the project root:

   ```
   API_BASE_URL=http://localhost:8000
   API_KEY=your_secure_api_key
   ```

   - Replace `your_secure_api_key` with the API key required by the FastAPI backend.
   - Ensure `.env` is listed in `.gitignore` to prevent committing sensitive data.
   - A sketch of how `app.py` can load these values appears after this list.

4. Run the FastAPI Backend: Start the backend (assumed to be in `app/main.py`):

   ```bash
   uvicorn app.main:app --host 0.0.0.0 --port 8000
   ```

   If the backend is hosted elsewhere, update `API_BASE_URL` in `.env`.

5. Run the UI: Launch the Gradio app:

   ```bash
   python app.py
   ```

   The UI will be available at http://localhost:7860.
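How `app.py` consumes these settings is an implementation detail; the snippet below is a minimal sketch, assuming it reads them via `python-dotenv` and `os.getenv` (the fallback values are illustrative):

```python
import os

from dotenv import load_dotenv

# Loads .env for local runs; on Hugging Face Spaces the same variables are
# injected as Secrets, so load_dotenv() is effectively a no-op there.
load_dotenv()

API_BASE_URL = os.getenv("API_BASE_URL", "http://localhost:8000")
API_KEY = os.getenv("API_KEY", "")
```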
## Usage

1. Access the UI: Open http://localhost:7860 in a web browser to access the ProMonitor interface.
2. Navigate Tabs:
   - Dashboard: View real-time metrics and health status. Click “Refresh Metrics” to update.
   - History: Enter a time range (1–1440 minutes) to view historical data in tables and charts.
   - Resource Limits: Set CPU, memory, and GPU limits using the input fields. View current limits.
   - Alerts: Monitor active alerts with severity indicators (red for critical, yellow for warning).
   - Processes: View top processes and perform actions (kill, suspend, resume) with a valid API key.
   - System Info: Review static system details (platform, CPU, memory, etc.).
   - Emergency Stop: Trigger an emergency stop to terminate high-memory processes (requires API key and confirmation).
3. Secure Actions:
   - For process actions and emergency stop, provide the API key in the respective input fields (a sketch of how the UI can forward it to the backend follows this list).
   - Ensure the backend validates the API key to prevent unauthorized access.
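The exact request logic lives in `app.py`; the sketch below shows one way an `APIClient`-style wrapper built on `requests` could attach the key as a Bearer token. The method name, payload shape, and constructor arguments are assumptions for illustration, not the actual interface:

```python
import requests


class APIClient:
    """Minimal sketch of a client that forwards the API key to the backend."""

    def __init__(self, base_url: str, api_key: str = ""):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def _headers(self) -> dict:
        # The backend expects "Authorization: Bearer <API_KEY>" (see Security Considerations).
        return {"Authorization": f"Bearer {self.api_key}"} if self.api_key else {}

    def process_action(self, pid: int, action: str) -> dict:
        # Hypothetical payload shape; /processes/action is the endpoint named in this README.
        resp = requests.post(
            f"{self.base_url}/processes/action",
            json={"pid": pid, "action": action},
            headers=self._headers(),
            timeout=10,
        )
        return resp.json()
```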
## Deployment on Hugging Face Spaces

ProMonitor is configured for deployment on Hugging Face Spaces with the following setup:

1. Push to Hugging Face Spaces:
   - Create a new Space on Hugging Face: https://huggingface.co/new
   - Select “Gradio” as the SDK and configure it as a public or private Space.
   - Push the repository containing `app.py`, `requirements.txt`, `.hf_space.yml`, and this `README.md`.
2. Configure Secrets:
   - In the Space’s Settings > Secrets, add:
     - `API_BASE_URL`: URL of the FastAPI backend (e.g., https://your-backend.com).
     - `API_KEY`: The API key for secure endpoint access.
   - Do not include `.env` in the repository; use Spaces secrets for sensitive data.
3. Build and Deploy:
   - Hugging Face Spaces will automatically build the container from `requirements.txt` and run `app.py`.
   - Check the Space’s logs for any errors during startup.
   - Access the deployed UI at https://huggingface.co/spaces/<username>/ProMonitor.
4. Backend Hosting:
   - The FastAPI backend must be hosted separately (e.g., on another Hugging Face Space, AWS, or a local server with tunneling via ngrok).
   - Update `API_BASE_URL` in Spaces secrets to point to the backend.
   - Ensure the backend implements API key authentication for endpoints like `/processes/action` and `/emergency/stop`.
## Security Considerations

- API Key Authentication:
  - The backend (`app/main.py`) must validate the `Authorization` header (e.g., `Bearer <API_KEY>`).
  - Update the backend to include middleware for API key verification:

    ```python
    import os

    from fastapi import Depends, HTTPException, Security
    from fastapi.security import APIKeyHeader

    # The header carries the full "Bearer <API_KEY>" value.
    api_key_header = APIKeyHeader(name="Authorization")

    def verify_api_key(api_key: str = Security(api_key_header)):
        if api_key != f"Bearer {os.getenv('API_KEY')}":
            raise HTTPException(status_code=403, detail="Invalid API key")
        return api_key
    ```

  - Apply it to sensitive endpoints: `@app.post("/emergency/stop", dependencies=[Depends(verify_api_key)])`.
- CORS:
  - In the FastAPI backend, replace `allow_origins=["*"]` with specific origins (e.g., the Hugging Face Space URL) to prevent unauthorized access.
  - Example: `allow_origins=["https://<username>-promonitor.hf.space"]`; a middleware sketch follows.
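    A minimal sketch of that configuration, assuming the standard `CORSMiddleware` shipped with FastAPI (the origin URL is a placeholder for the deployed Space):

    ```python
    from fastapi import FastAPI
    from fastapi.middleware.cors import CORSMiddleware

    app = FastAPI()

    # Only the deployed Gradio UI may call the backend from a browser.
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["https://<username>-promonitor.hf.space"],  # placeholder Space URL
        allow_methods=["GET", "POST"],
        allow_headers=["Authorization", "Content-Type"],
    )
    ```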
- Sensitive Data:
  - Do not commit `.env` or other sensitive configuration files to the repository.
  - Use Hugging Face Spaces secrets for `API_KEY` and `API_BASE_URL`.
- Rate Limiting:
  - Implement rate limiting in the Gradio UI (e.g., using the `ratelimit` library) to prevent overwhelming the backend.
  - Consider adding rate limiting to the FastAPI backend using `slowapi`; a sketch follows below.
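A minimal sketch of backend rate limiting with `slowapi`; the endpoint path, response shape, and limit are illustrative assumptions, not the backend's actual definitions:

```python
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

# Key requests by client IP and return HTTP 429 when a limit is exceeded.
limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.get("/metrics")
@limiter.limit("30/minute")  # illustrative limit; tune to the UI's refresh rate
async def get_metrics(request: Request):  # slowapi requires access to the Request
    return {"status": "success", "data": {}}
```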
## Testing

To ensure the UI works correctly, test it locally before deploying:

1. Install Testing Dependencies:

   ```bash
   pip install pytest pytest-httpx pytest-asyncio
   ```

2. Create a Test File (`test_ui.py`):

   ```python
   import pytest
   from httpx import AsyncClient
   from app import APIClient

   @pytest.mark.asyncio
   async def test_get_metrics():
       async with AsyncClient() as client:
           async def mock_get(*args, **kwargs):
               return type("Response", (), {"json": lambda: {"status": "success", "data": {}}, "status_code": 200})()
           client.get = mock_get
           api_client = APIClient("http://test")
           response = api_client.get_metrics()
           assert response["status"] == "success"

   @pytest.mark.asyncio
   async def test_get_history():
       async with AsyncClient() as client:
           async def mock_get(*args, **kwargs):
               return type("Response", (), {"json": lambda: {"status": "success", "data": []}, "status_code": 200})()
           client.get = mock_get
           api_client = APIClient("http://test")
           response = api_client.get_history(60)
           assert response["status"] == "success"
   ```

3. Run Tests:

   ```bash
   pytest test_ui.py
   ```

4. Test Locally:
   - Start the FastAPI backend: `uvicorn app.main:app --host 0.0.0.0 --port 8000`
   - Run the Gradio UI: `python app.py`
   - Verify all tabs and functionality in the browser.
## Troubleshooting

- Backend Unreachable:
  - Ensure the FastAPI backend is running and accessible at `API_BASE_URL`.
  - Check network connectivity and firewall settings.
  - Use a tunneling service like ngrok for local testing: `ngrok http 8000`.
- API Key Errors:
  - Verify that `API_KEY` matches the backend’s expected value.
  - Check backend logs for authentication errors.
- Gradio UI Issues:
  - Review `ui.log` for error messages.
  - Ensure `gradio==5.35.0` is installed correctly.
  - Check Hugging Face Spaces logs for deployment issues.
## Contributing

Contributions are welcome! To contribute:

1. Fork the repository.
2. Create a feature branch: `git checkout -b feature/your-feature`.
3. Commit your changes: `git commit -m "Add your feature"`.
4. Push to the branch: `git push origin feature/your-feature`.
5. Open a pull request on the Hugging Face Space repository.
Please include tests and follow PEP 8 guidelines.
## License
This project is licensed under the MIT License. See the LICENSE file for details.
## Acknowledgments
- Built with Gradio for the UI and FastAPI for the backend.
- Designed for cybersecurity and system performance monitoring on HP Z440 workstations.
- Hosted on Hugging Face Spaces.