Yaswanth-Bolla committed
Commit 4af0351 · 1 Parent(s): a0e2f2d

Updated api

Files changed (5):
  1. README.md +156 -0
  2. app.py +18 -10
  3. dashboard_analytics.py +18 -12
  4. enterprise_ra.py +17 -9
  5. threat_ra.py +17 -9
README.md CHANGED
@@ -845,6 +845,162 @@ The API includes intelligent fallback responses when AI services are unavailable
  - **Risk Mitigation**: Provides category-based mitigation strategies
  - **Geographic Assessment**: Returns general location risk factors
 
+ ## 🔧 Deployment Troubleshooting
+
+ ### Hugging Face Spaces Deployment Issues
+
+ #### **Problem**: "Connection error" or "Error generating risks: Connection error."
+ **Symptoms**:
+ - 500 Internal Server Error responses
+ - API endpoints failing with connection errors
+ - "GROQ API connection failed" messages
+
+ **Solutions**:
+
+ 1. **Check GROQ API Key Configuration**:
+    - Go to your Hugging Face Space settings
+    - Navigate to "Variables and secrets"
+    - Ensure `GROQ_API_KEY` is set as a **Secret** (not a variable)
+    - Verify the API key is valid and has sufficient credits
+    - Restart the Space after adding/updating the key
+
+ 2. **Verify Environment Variable Format**:
+    ```bash
+    # Correct format in HF Spaces:
+    GROQ_API_KEY=gsk_your_actual_api_key_here
+    ```
+
+ 3. **Test API Key Validity**:
+    ```bash
+    curl -X GET "https://api.groq.com/openai/v1/models" \
+      -H "Authorization: Bearer gsk_your_api_key_here"
+    ```
+
+ 4. **Check Space Logs**:
+    - Go to your Space's "Logs" tab
+    - Look for startup errors or API connection failures
+    - Restart the Space if environment variables were updated
+
+ #### **Problem**: 404 Not Found on API endpoints
+ **Symptoms**:
+ - Endpoints returning 404 errors
+ - API documentation not showing expected routes
+
+ **Solutions**:
+
+ 1. **Verify Endpoint URLs**:
+    ```bash
+    # Correct URLs for HF Spaces:
+    https://your-space-name.hf.space/enterprise/api/enterprise-ra/generate-risks
+    https://your-space-name.hf.space/threat/api/threat-ra/generate-threat-risks
+    https://your-space-name.hf.space/dashboard/api/dashboard/generate-kpis
+    ```
+
+ 2. **Check API Documentation**:
+    ```bash
+    # Access Swagger UI:
+    https://your-space-name.hf.space/docs
+    ```
+
+ #### **Problem**: Space not starting or constant restarts
+ **Symptoms**:
+ - Space shows "Building" status indefinitely
+ - Frequent application restarts
+ - Import errors in logs
+
+ **Solutions**:
+
+ 1. **Check Dependencies**:
+    - Ensure all packages in `requirements.txt` are compatible
+    - Verify Python version compatibility (use Python 3.10)
+
+ 2. **Review Dockerfile**:
+    ```dockerfile
+    FROM python:3.10-slim
+    RUN mkdir -p /data/huggingface && chmod -R 777 /data/huggingface
+    ENV HF_HOME=/data/huggingface
+    WORKDIR /app
+    COPY requirements.txt .
+    RUN pip install --no-cache-dir -r requirements.txt
+    COPY . .
+    CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
+    ```
+
+ 3. **Port Configuration**:
+    - Hugging Face Spaces require port 7860
+    - Ensure the Dockerfile uses the correct port
+
+ ### Local Development Issues
+
+ #### **Problem**: Import errors or module not found
+ **Solutions**:
+ ```bash
+ # Ensure all dependencies are installed:
+ pip install -r requirements.txt
+
+ # Verify Python path:
+ export PYTHONPATH="${PYTHONPATH}:$(pwd)"
+ ```
+
+ #### **Problem**: GROQ API rate limits
+ **Solutions**:
+ - Implement exponential backoff
+ - Consider upgrading your GROQ plan for higher rate limits
+ - Use fallback responses during high traffic
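The exponential backoff suggested above can be sketched as a small retry helper (illustrative only, not part of this commit; `fn` stands in for any GROQ call such as `generate_response`):

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn, retrying with exponential backoff plus jitter on failure."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the last error
            # Wait 1s, 2s, 4s, ... plus up to 0.5s of jitter before retrying
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

The jitter spreads out retries from concurrent requests so they do not all hit the rate limit again at the same instant.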
+
+ ### Testing Deployment
+
+ #### **Test Endpoints Programmatically**:
+ ```python
+ import requests
+
+ # Test an HF Space endpoint
+ base_url = "https://your-space-name.hf.space"
+ test_data = {
+     "category": "Financial",
+     "department": "Finance",
+     "business_context": "Test deployment",
+     "specific_concerns": "API connectivity test",
+     "number_of_risks": 1
+ }
+
+ response = requests.post(
+     f"{base_url}/enterprise/api/enterprise-ra/generate-risks",
+     json=test_data,
+     timeout=30
+ )
+
+ print(f"Status: {response.status_code}")
+ print(f"Response: {response.text}")
+ ```
+
+ #### **Quick Health Check**:
+ ```bash
+ # Check if the Space is responding:
+ curl -X GET "https://your-space-name.hf.space/docs"
+
+ # Test a simple endpoint:
+ curl -X POST "https://your-space-name.hf.space/enterprise/api/enterprise-ra/generate-risks" \
+   -H "Content-Type: application/json" \
+   -d '{"category":"Test","department":"Test","business_context":"Health check","specific_concerns":"Deployment test","number_of_risks":1}'
+ ```
+
+ ### Performance Optimization
+
+ 1. **Cold Start Mitigation**:
+    - HF Spaces may have cold starts after inactivity
+    - The first request after an idle period may be slower
+    - Consider implementing a keepalive mechanism
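A minimal keepalive sketch (illustrative, not part of this commit; it pings the Space's `/docs` page from daemon timers so the Space never idles long enough to spin down):

```python
import threading
import urllib.request

def start_keepalive(url, interval_seconds=600, fetch=None):
    """Ping `url` immediately and then every `interval_seconds`.

    `fetch` defaults to a plain HTTP GET; it is injectable for testing.
    """
    if fetch is None:
        fetch = lambda u: urllib.request.urlopen(u, timeout=10)

    def ping():
        try:
            fetch(url)
        except Exception:
            pass  # best-effort: a failed ping just waits for the next interval
        timer = threading.Timer(interval_seconds, ping)
        timer.daemon = True  # don't keep the process alive just for the timer
        timer.start()

    ping()
```

Run this from a separate machine or cron-like service; a keepalive inside the Space itself cannot prevent the Space from sleeping.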
+
+ 2. **Timeout Settings**:
+    - Set appropriate timeouts for API calls (30+ seconds)
+    - Handle timeouts gracefully with fallback responses
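Graceful timeout handling can look like the sketch below (standard library only, not part of this commit; `FALLBACK` is a placeholder for the API's real fallback payload):

```python
import json
import urllib.error
import urllib.request

FALLBACK = {"risks": [], "note": "AI service unavailable; fallback response"}

def post_with_fallback(url, payload, timeout=30, fallback=FALLBACK):
    """POST JSON to `url`; on timeout or connection failure return `fallback`
    instead of raising, so callers always receive a usable response."""
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return json.loads(response.read())
    except (urllib.error.URLError, TimeoutError):
        return fallback
```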
+
+ 3. **Resource Monitoring**:
+    - Monitor Space resource usage in the HF interface
+    - Consider upgrading to faster hardware tiers for production
+
 ## 🔧 Technical Architecture
 
 ### Components
app.py CHANGED
@@ -19,20 +19,28 @@ app.include_router(threat_ra_router, prefix="/threat")
 app.include_router(dashboard_router, prefix="/dashboard")
 
 # Environment Variables
-GROQ_API_KEY = os.environ.get("GROQ_API_KEY").strip()
+GROQ_API_KEY = os.environ.get("GROQ_API_KEY")
+if GROQ_API_KEY:
+    GROQ_API_KEY = GROQ_API_KEY.strip()
 
 # Model Setup
 def generate_response(system_prompt: str, user_message: str):
+    if not GROQ_API_KEY:
+        raise Exception("GROQ_API_KEY environment variable is not set")
+
     client = openai.OpenAI(api_key=GROQ_API_KEY, base_url="https://api.groq.com/openai/v1")
-    response = client.chat.completions.create(
-        model="llama3-8b-8192",  # Try this supported model
-        messages=[
-            {"role": "system", "content": system_prompt},
-            {"role": "user", "content": user_message}
-        ],
-        temperature=0.4
-    )
-    return response.choices[0].message.content
+    try:
+        response = client.chat.completions.create(
+            model="llama3-8b-8192",  # Try this supported model
+            messages=[
+                {"role": "system", "content": system_prompt},
+                {"role": "user", "content": user_message}
+            ],
+            temperature=0.4
+        )
+        return response.choices[0].message.content
+    except Exception as e:
+        raise Exception(f"GROQ API connection failed: {str(e)}")
 
 # Request models
 class Message(BaseModel):
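The None-guarded `.strip()` in this change fixes a crash when `GROQ_API_KEY` is unset: the old `os.environ.get("GROQ_API_KEY").strip()` raised `AttributeError` on `None`. The pattern can be exercised standalone (the `DEMO_` variable name is just for illustration):

```python
import os

def load_key(name="GROQ_API_KEY"):
    """Read an env var and strip stray whitespace, tolerating an unset value."""
    value = os.environ.get(name)
    return value.strip() if value else None

# Unset variable: the old one-liner crashed here; the guarded version returns None
os.environ.pop("DEMO_GROQ_API_KEY", None)
assert load_key("DEMO_GROQ_API_KEY") is None

# A trailing newline (easy to paste into HF "Variables and secrets") is stripped
os.environ["DEMO_GROQ_API_KEY"] = "gsk_example\n"
assert load_key("DEMO_GROQ_API_KEY") == "gsk_example"
```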
dashboard_analytics.py CHANGED
@@ -8,21 +8,27 @@ import json
 from datetime import datetime, timedelta
 import random
 
 # Environment Variables
 GROQ_API_KEY = os.environ.get("GROQ_API_KEY")
-
-# Model Setup
+if GROQ_API_KEY:
+    GROQ_API_KEY = GROQ_API_KEY.strip()# Model Setup
 def generate_response(system_prompt: str, user_message: str):
+    if not GROQ_API_KEY:
+        raise Exception("GROQ_API_KEY environment variable is not set")
+
     client = openai.OpenAI(api_key=GROQ_API_KEY, base_url="https://api.groq.com/openai/v1")
-    response = client.chat.completions.create(
-        model="llama3-8b-8192",
-        messages=[
-            {"role": "system", "content": system_prompt},
-            {"role": "user", "content": user_message}
-        ],
-        temperature=0.4
-    )
-    return response.choices[0].message.content
+    try:
+        response = client.chat.completions.create(
+            model="llama3-8b-8192",
+            messages=[
+                {"role": "system", "content": system_prompt},
+                {"role": "user", "content": user_message}
+            ],
+            temperature=0.4
+        )
+        return response.choices[0].message.content
+    except Exception as e:
+        raise Exception(f"GROQ API connection failed: {str(e)}")
 
 # Request Models
 class DashboardAnalyticsRequest(BaseModel):
enterprise_ra.py CHANGED
@@ -9,19 +9,27 @@ import uuid
 
 # Environment Variables
 GROQ_API_KEY = os.environ.get("GROQ_API_KEY")
+if GROQ_API_KEY:
+    GROQ_API_KEY = GROQ_API_KEY.strip()
 
 # Model Setup
 def generate_response(system_prompt: str, user_message: str):
+    if not GROQ_API_KEY:
+        raise Exception("GROQ_API_KEY environment variable is not set")
+
     client = openai.OpenAI(api_key=GROQ_API_KEY, base_url="https://api.groq.com/openai/v1")
-    response = client.chat.completions.create(
-        model="llama3-8b-8192",
-        messages=[
-            {"role": "system", "content": system_prompt},
-            {"role": "user", "content": user_message}
-        ],
-        temperature=0.4
-    )
-    return response.choices[0].message.content
+    try:
+        response = client.chat.completions.create(
+            model="llama3-8b-8192",
+            messages=[
+                {"role": "system", "content": system_prompt},
+                {"role": "user", "content": user_message}
+            ],
+            temperature=0.4
+        )
+        return response.choices[0].message.content
+    except Exception as e:
+        raise Exception(f"GROQ API connection failed: {str(e)}")
 
 # Request Models
 class RiskGenerationRequest(BaseModel):
threat_ra.py CHANGED
@@ -9,19 +9,27 @@ import uuid
 
 # Environment Variables
 GROQ_API_KEY = os.environ.get("GROQ_API_KEY")
+if GROQ_API_KEY:
+    GROQ_API_KEY = GROQ_API_KEY.strip()
 
 # Model Setup
 def generate_response(system_prompt: str, user_message: str):
+    if not GROQ_API_KEY:
+        raise Exception("GROQ_API_KEY environment variable is not set")
+
     client = openai.OpenAI(api_key=GROQ_API_KEY, base_url="https://api.groq.com/openai/v1")
-    response = client.chat.completions.create(
-        model="llama3-8b-8192",
-        messages=[
-            {"role": "system", "content": system_prompt},
-            {"role": "user", "content": user_message}
-        ],
-        temperature=0.4
-    )
-    return response.choices[0].message.content
+    try:
+        response = client.chat.completions.create(
+            model="llama3-8b-8192",
+            messages=[
+                {"role": "system", "content": system_prompt},
+                {"role": "user", "content": user_message}
+            ],
+            temperature=0.4
+        )
+        return response.choices[0].message.content
+    except Exception as e:
+        raise Exception(f"GROQ API connection failed: {str(e)}")
 
 # Request Models
 class ThreatRiskGenerationRequest(BaseModel):