root commited on
Commit
94c8770
·
1 Parent(s): 702e569

Revamp app with chat studio and tooling

Browse files
Dockerfile CHANGED
@@ -7,4 +7,6 @@ RUN pip install --no-cache-dir --upgrade -r requirements.txt
7
 
8
  COPY . .
9
 
 
 
10
  CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
 
7
 
8
  COPY . .
9
 
10
+ RUN mkdir -p templates static/css static/js
11
+
12
  CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
README.md CHANGED
@@ -1,5 +1,5 @@
1
  ---
2
- title: Sheikh LLM
3
  emoji: 🚀
4
  colorFrom: blue
5
  colorTo: purple
@@ -7,29 +7,38 @@ sdk: docker
7
  pinned: false
8
  ---
9
 
10
- # Sheikh LLM Space
11
 
12
- This is an automated FastAPI application deployed on Hugging Face Spaces.
13
 
14
  ## Features
15
- - FastAPI backend with auto-generated docs
16
- - Docker deployment
17
- - Health monitoring endpoints
18
- - Ready for LLM integration
 
19
 
20
- ## API Documentation
21
- Visit `/docs` for interactive Swagger UI documentation.
 
22
 
23
- ## Endpoints
24
- - `GET /` - Homepage with UI
25
- - `GET /health` - Health check
26
- - `GET /api/status` - API status
27
- - `POST /api/chat` - Chat endpoint
28
-
29
- ## Local Development
30
  ```bash
31
  git clone git@hf.co:spaces/RecentCoders/sheikh-llm
32
  cd sheikh-llm
 
 
33
  pip install -r requirements.txt
34
  uvicorn app:app --reload --port 7860
35
- ```
 
 
 
 
 
 
 
 
 
 
 
 
1
  ---
2
+ title: Sheikh LLM Studio
3
  emoji: 🚀
4
  colorFrom: blue
5
  colorTo: purple
 
7
  pinned: false
8
  ---
9
 
10
+ # Sheikh LLM Studio
11
 
12
+ Full-featured FastAPI application deployed on Hugging Face Spaces with multi-model chat, tooling, and model-creation workflows.
13
 
14
  ## Features
15
+ - Web UI with real-time chat experience and adjustable generation settings
16
+ - Multi-model support backed by Hugging Face gated models via `InferenceClient`
17
+ - Tool integration endpoints for search and code execution prototypes
18
+ - Model Studio workflow to queue fine-tuning jobs and monitor status
19
+ - WebSocket endpoint for streaming-style interactions
20
 
21
+ ## Configuration
22
+ 1. Add an `HF_TOKEN` repository secret in your Space with access to the desired gated models.
23
+ 2. Optional: adjust available models in `app.py` under `Config.AVAILABLE_MODELS`.
24
 
25
+ ## Development
 
 
 
 
 
 
26
  ```bash
27
  git clone git@hf.co:spaces/RecentCoders/sheikh-llm
28
  cd sheikh-llm
29
+ python -m venv .venv
30
+ source .venv/bin/activate
31
  pip install -r requirements.txt
32
  uvicorn app:app --reload --port 7860
33
+ ```
34
+
35
+ ## Deployment
36
+ ```bash
37
+ ./deploy.sh
38
+ ```
39
+
40
+ After pushing, monitor the build logs on your Space and test the endpoints:
41
+ - `https://recentcoders-sheikh-llm.hf.space/`
42
+ - `https://recentcoders-sheikh-llm.hf.space/chat`
43
+ - `https://recentcoders-sheikh-llm.hf.space/docs`
44
+ - `https://recentcoders-sheikh-llm.hf.space/health`
app.py CHANGED
@@ -1,136 +1,226 @@
1
- from fastapi import FastAPI, HTTPException
2
- from fastapi.responses import HTMLResponse, JSONResponse
3
- from pydantic import BaseModel
 
 
4
  import os
5
- from transformers import AutoModelForCausalLM, AutoTokenizer
6
- import torch
 
 
 
 
 
 
 
7
 
8
  app = FastAPI(
9
- title="Sheikh LLM API",
10
- description="A powerful LLM API deployed on Hugging Face Spaces",
11
- version="1.0.0"
12
  )
13
 
14
- # Load model and tokenizer
15
- tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
16
- model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
17
 
18
  class ChatRequest(BaseModel):
19
  message: str
20
- max_tokens: int = 100
 
 
 
 
21
 
22
  class ChatResponse(BaseModel):
23
  response: str
 
24
  status: str
25
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
26
  @app.get("/", response_class=HTMLResponse)
27
- def home():
28
- return """
29
- <!DOCTYPE html>
30
- <html>
31
- <head>
32
- <title>Sheikh LLM</title>
33
- <style>
34
- body {
35
- font-family: Arial, sans-serif;
36
- margin: 0;
37
- padding: 20px;
38
- background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
39
- color: white;
40
- }
41
- .container {
42
- max-width: 800px;
43
- margin: 0 auto;
44
- background: rgba(255,255,255,0.1);
45
- padding: 30px;
46
- border-radius: 15px;
47
- backdrop-filter: blur(10px);
48
- }
49
- .header {
50
- text-align: center;
51
- margin-bottom: 30px;
52
- }
53
- .endpoints {
54
- background: rgba(255,255,255,0.2);
55
- padding: 20px;
56
- border-radius: 10px;
57
- margin: 20px 0;
58
- }
59
- a { color: #ffd700; text-decoration: none; }
60
- a:hover { text-decoration: underline; }
61
- </style>
62
- </head>
63
- <body>
64
- <div class="container">
65
- <div class="header">
66
- <h1>🚀 Sheikh LLM Space</h1>
67
- <p>Welcome to your automated Hugging Face Space!</p>
68
- </div>
69
-
70
- <div class="endpoints">
71
- <h2>📡 API Endpoints:</h2>
72
- <ul>
73
- <li><a href="/health" target="_blank">GET /health</a> - Health check</li>
74
- <li><a href="/api/status" target="_blank">GET /api/status</a> - API status</li>
75
- <li><a href="/docs" target="_blank">GET /docs</a> - Interactive API documentation</li>
76
- </ul>
77
- </div>
78
-
79
- <div class="endpoints">
80
- <h2>⚡ Quick Test:</h2>
81
- <p>Try this curl command to test the API:</p>
82
- <code style="background: black; padding: 10px; display: block; border-radius: 5px;">
83
- curl -X GET "https://recentcoders-sheikh-llm.hf.space/health"
84
- </code>
85
- </div>
86
- </div>
87
- </body>
88
- </html>
89
- """
90
 
91
- @app.get("/health")
92
- async def health_check():
93
- return JSONResponse({
94
- "status": "healthy",
95
- "service": "sheikh-llm",
96
- "version": "1.0.0",
97
- "environment": "production"
98
- })
99
-
100
- @app.get("/api/status")
101
- async def api_status():
102
- return {
103
- "service": "sheikh-llm",
104
- "version": "1.0.0",
105
- "status": "running",
106
- "endpoints": [
107
- {"path": "/", "method": "GET", "description": "Homepage"},
108
- {"path": "/health", "method": "GET", "description": "Health check"},
109
- {"path": "/api/status", "method": "GET", "description": "API status"},
110
- {"path": "/docs", "method": "GET", "description": "Swagger UI"}
111
- ]
112
- }
113
 
114
  @app.post("/api/chat", response_model=ChatResponse)
115
- async def chat_endpoint(request: ChatRequest):
116
- """Chat endpoint that uses a Hugging Face model"""
117
  if not request.message.strip():
118
  raise HTTPException(status_code=400, detail="Message cannot be empty")
119
 
120
- # Encode the new user input, add the eos_token and return a tensor in Pytorch
121
- new_user_input_ids = tokenizer.encode(request.message + tokenizer.eos_token, return_tensors='pt')
 
 
 
 
 
 
122
 
123
- # Generate a response
124
- chat_history_ids = model.generate(new_user_input_ids, max_length=request.max_tokens, pad_token_id=tokenizer.eos_token_id)
 
125
 
126
- # Decode the response
127
- response_text = tokenizer.decode(chat_history_ids[:, new_user_input_ids.shape[-1]:][0], skip_special_tokens=True)
128
-
129
- return ChatResponse(
130
- response=response_text,
131
- status="success"
132
- )
 
 
 
 
 
 
 
 
 
 
 
 
 
133
 
134
- if __name__ == "__main__":
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
135
  import uvicorn
136
- uvicorn.run(app, host="0.0.0.0", port=7860)
 
 
1
+ from __future__ import annotations
2
+
3
+ import asyncio
4
+ import json
5
+ import logging
6
  import os
7
+ import time
8
+ from typing import Any, Dict, List, Optional
9
+
10
+ from fastapi import FastAPI, HTTPException, Request, WebSocket, WebSocketDisconnect
11
+ from fastapi.responses import HTMLResponse
12
+ from fastapi.staticfiles import StaticFiles
13
+ from fastapi.templating import Jinja2Templates
14
+ from huggingface_hub import InferenceClient
15
+ from pydantic import BaseModel
16
 
17
  app = FastAPI(
18
+ title="Sheikh LLM Studio",
19
+ description="Advanced LLM platform with chat, tools, and model workflows",
20
+ version="2.0.0",
21
  )
22
 
23
+ logging.basicConfig(level=logging.INFO)
24
+ logger = logging.getLogger(__name__)
25
+
26
+ STATIC_DIR = "static"
27
+ TEMPLATES_DIR = "templates"
28
+ app.mount("/static", StaticFiles(directory=STATIC_DIR), name="static")
29
+ templates = Jinja2Templates(directory=TEMPLATES_DIR)
30
+
31
+
32
+ class Config:
33
+ HF_TOKEN: Optional[str] = os.getenv("HF_TOKEN")
34
+ AVAILABLE_MODELS: Dict[str, str] = {
35
+ "mistral-small": "mistralai/Mistral-Small-3.1-24B-Instruct-2503",
36
+ "mistral-large": "mistralai/Mistral-Large-Instruct-2411",
37
+ "mistral-7b": "mistralai/Mistral-7B-Instruct-v0.3",
38
+ "baby-grok": "IntelligentEstate/Baby_Grok3-1.5b-iQ4_K_M-GGUF",
39
+ }
40
+
41
 
42
  class ChatRequest(BaseModel):
43
  message: str
44
+ model: str = "mistral-small"
45
+ max_tokens: int = 500
46
+ temperature: float = 0.7
47
+ stream: bool = False
48
+
49
 
50
  class ChatResponse(BaseModel):
51
  response: str
52
+ model: str
53
  status: str
54
 
55
+
56
+ class ToolRequest(BaseModel):
57
+ tool: str
58
+ parameters: Dict[str, Any]
59
+
60
+
61
+ class ModelConfig(BaseModel):
62
+ base_model: str
63
+ dataset_path: str
64
+ training_config: Dict[str, Any]
65
+
66
+
67
+ connected_clients: List[WebSocket] = []
68
+
69
+
70
+ @app.on_event("startup")
71
+ async def startup_event() -> None:
72
+ logger.info("Starting Sheikh LLM Studio")
73
+ if not Config.HF_TOKEN:
74
+ logger.warning("HF_TOKEN not set; gated models will not be accessible.")
75
+
76
+
77
  @app.get("/", response_class=HTMLResponse)
78
+ async def home(request: Request) -> HTMLResponse:
79
+ return templates.TemplateResponse("index.html", {"request": request})
80
+
81
+
82
+ @app.get("/chat", response_class=HTMLResponse)
83
+ async def chat_interface(request: Request) -> HTMLResponse:
84
+ return templates.TemplateResponse("chat.html", {"request": request})
85
+
86
+
87
+ @app.get("/studio", response_class=HTMLResponse)
88
+ async def model_studio(request: Request) -> HTMLResponse:
89
+ return templates.TemplateResponse("studio.html", {"request": request})
90
+
91
+
92
+ @app.get("/api/models")
93
+ async def get_available_models() -> Dict[str, Any]:
94
+ return {"models": Config.AVAILABLE_MODELS, "status": "success"}
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
95
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
96
 
97
  @app.post("/api/chat", response_model=ChatResponse)
98
+ async def chat_completion(request: ChatRequest) -> ChatResponse:
 
99
  if not request.message.strip():
100
  raise HTTPException(status_code=400, detail="Message cannot be empty")
101
 
102
+ if request.model not in Config.AVAILABLE_MODELS:
103
+ raise HTTPException(status_code=400, detail="Unknown model selection")
104
+
105
+ if Config.HF_TOKEN is None:
106
+ raise HTTPException(status_code=500, detail="HF_TOKEN environment variable is not set")
107
+
108
+ model_id = Config.AVAILABLE_MODELS[request.model]
109
+ client = InferenceClient(model=model_id, token=Config.HF_TOKEN)
110
 
111
+ prompt = request.message
112
+ if "mistral" in request.model:
113
+ prompt = f"<s>[INST] {request.message.strip()} [/INST]"
114
 
115
+ try:
116
+ if request.stream:
117
+ generated_text = ""
118
+ for chunk in client.text_generation(
119
+ prompt,
120
+ max_new_tokens=request.max_tokens,
121
+ temperature=request.temperature,
122
+ stream=True,
123
+ ):
124
+ generated_text += getattr(chunk, "token", "")
125
+ await asyncio.sleep(0)
126
+ else:
127
+ generated_text = client.text_generation(
128
+ prompt,
129
+ max_new_tokens=request.max_tokens,
130
+ temperature=request.temperature,
131
+ )
132
+ except Exception as exc: # pragma: no cover - external service
133
+ logger.error("Chat generation failed: %s", exc)
134
+ raise HTTPException(status_code=502, detail=f"Model error: {exc}") from exc
135
 
136
+ return ChatResponse(response=generated_text, model=request.model, status="success")
137
+
138
+
139
+ @app.websocket("/ws/chat")
140
+ async def websocket_chat(websocket: WebSocket) -> None:
141
+ await websocket.accept()
142
+ connected_clients.append(websocket)
143
+ try:
144
+ while True:
145
+ data = await websocket.receive_text()
146
+ message_data = json.loads(data)
147
+ user_message = message_data.get("message", "")
148
+ response_text = f"Echo: {user_message}"
149
+ for index in range(1, len(response_text) + 1):
150
+ await websocket.send_text(json.dumps({"chunk": response_text[:index], "done": False}))
151
+ await asyncio.sleep(0.1)
152
+ await websocket.send_text(json.dumps({"chunk": response_text, "done": True}))
153
+ except WebSocketDisconnect:
154
+ connected_clients.remove(websocket)
155
+
156
+
157
+ @app.post("/api/tools/search")
158
+ async def search_tool(request: ToolRequest) -> Dict[str, Any]:
159
+ if request.tool != "web_search":
160
+ raise HTTPException(status_code=400, detail="Unknown tool")
161
+
162
+ query = request.parameters.get("query", "")
163
+ return {
164
+ "tool": "web_search",
165
+ "results": [
166
+ {"title": f"Result 1 for {query}", "url": "#"},
167
+ {"title": f"Result 2 for {query}", "url": "#"},
168
+ ],
169
+ "status": "success",
170
+ }
171
+
172
+
173
+ @app.post("/api/tools/code")
174
+ async def code_tool(request: ToolRequest) -> Dict[str, Any]:
175
+ if request.tool != "execute_python":
176
+ raise HTTPException(status_code=400, detail="Unknown tool")
177
+
178
+ code = request.parameters.get("code", "")
179
+ return {
180
+ "tool": "execute_python",
181
+ "output": f"Executed code: {code}",
182
+ "status": "success",
183
+ }
184
+
185
+
186
+ @app.post("/api/studio/create-model")
187
+ async def create_model(config: ModelConfig) -> Dict[str, Any]:
188
+ job_id = f"train_{int(time.time())}"
189
+ training_job = {
190
+ "job_id": job_id,
191
+ "status": "queued",
192
+ "base_model": config.base_model,
193
+ "dataset_path": config.dataset_path,
194
+ "config": config.training_config,
195
+ }
196
+ return {
197
+ "job": training_job,
198
+ "message": "Training job queued successfully",
199
+ "status": "success",
200
+ }
201
+
202
+
203
+ @app.get("/api/studio/jobs/{job_id}")
204
+ async def get_training_job(job_id: str) -> Dict[str, Any]:
205
+ return {
206
+ "job_id": job_id,
207
+ "status": "completed",
208
+ "progress": 100,
209
+ "model_url": f"https://huggingface.co/RecentCoders/{job_id}",
210
+ }
211
+
212
+
213
+ @app.get("/health")
214
+ async def health_check() -> Dict[str, Any]:
215
+ return {
216
+ "status": "healthy",
217
+ "service": "sheikh-llm-studio",
218
+ "version": "2.0.0",
219
+ "features": ["chat", "tools", "model_studio", "websockets"],
220
+ }
221
+
222
+
223
+ if __name__ == "__main__": # pragma: no cover
224
  import uvicorn
225
+
226
+ uvicorn.run(app, host="0.0.0.0", port=7860)
deploy.sh CHANGED
@@ -1,22 +1,23 @@
1
  #!/bin/bash
2
 
3
- echo "🚀 Starting deployment to Hugging Face Space..."
4
 
5
- # Check if we're in the right directory
6
  if [ ! -f "app.py" ]; then
7
- echo "❌ Error: app.py not found. Make sure you're in the sheikh-llm directory"
8
  exit 1
9
  fi
10
 
11
- # Add all changes
12
  git add .
13
 
14
- # Commit with timestamp
15
- git commit -m "Auto-deploy: $(date '+%Y-%m-%d %H:%M:%S')" || true
16
 
17
- # Push to Hugging Face
18
  git push origin main
19
 
20
  echo "✅ Deployment completed!"
21
- echo "📊 Space URL: https://huggingface.co/spaces/RecentCoders/sheikh-llm"
22
- echo " Build will start automatically. Check logs in your Space."
 
 
 
 
 
 
1
  #!/bin/bash
2
 
3
+ echo "🚀 Deploying Sheikh LLM Studio to Hugging Face Space..."
4
 
 
5
  if [ ! -f "app.py" ]; then
6
+ echo "❌ Error: Must be in sheikh-llm directory"
7
  exit 1
8
  fi
9
 
 
10
  git add .
11
 
12
+ git commit -m "Deploy v2.0: Chat interface + Model Studio + Tools - $(date '+%Y-%m-%d %H:%M:%S')" || true
 
13
 
 
14
  git push origin main
15
 
16
  echo "✅ Deployment completed!"
17
+ echo "🌐 Space URL: https://huggingface.co/spaces/RecentCoders/sheikh-llm"
18
+ echo " Features deployed:"
19
+ echo " - Multi-model chat interface"
20
+ echo " - Model creation studio"
21
+ echo " - Tool integration (search, code)"
22
+ echo " - WebSocket support"
23
+ echo " - Training job management"
requirements.txt CHANGED
@@ -2,5 +2,10 @@ fastapi==0.104.1
2
  uvicorn[standard]==0.24.0
3
  pydantic==2.5.0
4
  python-multipart==0.0.6
5
- transformers
6
- torch
 
 
 
 
 
 
2
  uvicorn[standard]==0.24.0
3
  pydantic==2.5.0
4
  python-multipart==0.0.6
5
+ transformers==4.37.0
6
+ torch==2.1.0
7
+ accelerate==0.24.0
8
+ huggingface_hub==0.20.0
9
+ requests==2.31.0
10
+ aiofiles==23.2.0
11
+ jinja2==3.1.2
static/css/style.css ADDED
@@ -0,0 +1,2 @@
 
 
 
1
+ /* Placeholder stylesheet for Sheikh LLM Studio */
2
+ body {}
static/js/app.js ADDED
@@ -0,0 +1 @@
 
 
1
+ // Placeholder script for Sheikh LLM Studio enhancements
templates/base.html ADDED
@@ -0,0 +1,141 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!DOCTYPE html>
2
+ <html lang="en">
3
+ <head>
4
+ <meta charset="UTF-8">
5
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
6
+ <title>{% block title %}Sheikh LLM Studio{% endblock %}</title>
7
+ <link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/css/all.min.css" rel="stylesheet">
8
+ <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
9
+ <style>
10
+ :root {
11
+ --primary: #667eea;
12
+ --secondary: #764ba2;
13
+ --success: #10b981;
14
+ --warning: #f59e0b;
15
+ --error: #ef4444;
16
+ }
17
+ * {
18
+ margin: 0;
19
+ padding: 0;
20
+ box-sizing: border-box;
21
+ }
22
+ body {
23
+ font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
24
+ background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
25
+ min-height: 100vh;
26
+ color: #333;
27
+ }
28
+ .navbar {
29
+ background: rgba(255, 255, 255, 0.95);
30
+ backdrop-filter: blur(10px);
31
+ padding: 1rem 2rem;
32
+ box-shadow: 0 4px 6px -1px rgba(0, 0, 0, 0.1);
33
+ }
34
+ .nav-container {
35
+ max-width: 1200px;
36
+ margin: 0 auto;
37
+ display: flex;
38
+ justify-content: space-between;
39
+ align-items: center;
40
+ }
41
+ .logo {
42
+ font-size: 1.5rem;
43
+ font-weight: bold;
44
+ color: var(--primary);
45
+ }
46
+ .nav-links {
47
+ display: flex;
48
+ gap: 2rem;
49
+ }
50
+ .nav-links a {
51
+ text-decoration: none;
52
+ color: #666;
53
+ font-weight: 500;
54
+ transition: color 0.3s;
55
+ }
56
+ .nav-links a:hover,
57
+ .nav-links a.active {
58
+ color: var(--primary);
59
+ }
60
+ .container {
61
+ max-width: 1200px;
62
+ margin: 2rem auto;
63
+ padding: 0 1rem;
64
+ }
65
+ .card {
66
+ background: rgba(255, 255, 255, 0.95);
67
+ backdrop-filter: blur(10px);
68
+ border-radius: 15px;
69
+ padding: 2rem;
70
+ box-shadow: 0 10px 25px rgba(0, 0, 0, 0.1);
71
+ margin-bottom: 2rem;
72
+ }
73
+ .btn {
74
+ background: var(--primary);
75
+ color: white;
76
+ border: none;
77
+ padding: 0.75rem 1.5rem;
78
+ border-radius: 8px;
79
+ cursor: pointer;
80
+ font-weight: 500;
81
+ transition: all 0.3s;
82
+ }
83
+ .btn:hover {
84
+ background: var(--secondary);
85
+ transform: translateY(-2px);
86
+ }
87
+ .feature-grid {
88
+ display: grid;
89
+ grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
90
+ gap: 1.5rem;
91
+ margin-top: 2rem;
92
+ }
93
+ .feature-card {
94
+ background: white;
95
+ padding: 1.5rem;
96
+ border-radius: 10px;
97
+ text-align: center;
98
+ transition: transform 0.3s;
99
+ }
100
+ .feature-card:hover {
101
+ transform: translateY(-5px);
102
+ }
103
+ .feature-icon {
104
+ font-size: 2rem;
105
+ color: var(--primary);
106
+ margin-bottom: 1rem;
107
+ }
108
+ </style>
109
+ </head>
110
+ <body>
111
+ <nav class="navbar">
112
+ <div class="nav-container">
113
+ <div class="logo">
114
+ <i class="fas fa-robot"></i> Sheikh LLM Studio
115
+ </div>
116
+ <div class="nav-links">
117
+ <a href="/" class="{% if request.url.path == '/' %}active{% endif %}">
118
+ <i class="fas fa-home"></i> Home
119
+ </a>
120
+ <a href="/chat" class="{% if request.url.path == '/chat' %}active{% endif %}">
121
+ <i class="fas fa-comments"></i> Chat
122
+ </a>
123
+ <a href="/studio" class="{% if request.url.path == '/studio' %}active{% endif %}">
124
+ <i class="fas fa-cogs"></i> Model Studio
125
+ </a>
126
+ </div>
127
+ </div>
128
+ </nav>
129
+ <main>
130
+ {% block content %}{% endblock %}
131
+ </main>
132
+ <script>
133
+ function showNotification(message, type = 'info') {
134
+ console.log(`${type}: ${message}`);
135
+ }
136
+ function formatTimestamp(date = new Date()) {
137
+ return date.toLocaleString();
138
+ }
139
+ </script>
140
+ </body>
141
+ </html>
templates/chat.html ADDED
@@ -0,0 +1,137 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {% extends "base.html" %}
2
+
3
+ {% block title %}Chat Interface - Sheikh LLM Studio{% endblock %}
4
+
5
+ {% block content %}
6
+ <div class="container">
7
+ <div class="card">
8
+ <h1><i class="fas fa-comments"></i> AI Chat Interface</h1>
9
+ <p>Chat with multiple AI models and use advanced tools</p>
10
+ </div>
11
+ <div class="card">
12
+ <div class="chat-container" style="display: grid; grid-template-columns: 1fr 300px; gap: 2rem;">
13
+ <div class="chat-messages" id="chatMessages" style="height: 500px; overflow-y: auto; border: 1px solid #e1e5e9; border-radius: 10px; padding: 1rem; background: #f8f9fa;">
14
+ <div class="welcome-message">
15
+ <div style="text-align: center; color: #666; margin-top: 2rem;">
16
+ <i class="fas fa-robot fa-3x" style="color: #667eea; margin-bottom: 1rem;"></i>
17
+ <h3>Welcome to Sheikh LLM Chat!</h3>
18
+ <p>Select a model and start chatting. You can also use tools like web search and code execution.</p>
19
+ </div>
20
+ </div>
21
+ </div>
22
+ <div class="controls-panel">
23
+ <div style="margin-bottom: 1.5rem;">
24
+ <label for="modelSelect"><strong>Select Model:</strong></label>
25
+ <select id="modelSelect" class="btn" style="width: 100%; margin-top: 0.5rem;">
26
+ <option value="mistral-small">Mistral Small (24B)</option>
27
+ <option value="mistral-large">Mistral Large</option>
28
+ <option value="mistral-7b">Mistral 7B</option>
29
+ <option value="baby-grok">Baby Grok (1.5B)</option>
30
+ </select>
31
+ </div>
32
+ <div style="margin-bottom: 1.5rem;">
33
+ <label><strong>Tools:</strong></label>
34
+ <div style="display: flex; flex-direction: column; gap: 0.5rem; margin-top: 0.5rem;">
35
+ <button class="btn" onclick="useTool('search')" style="background: #10b981;">
36
+ <i class="fas fa-search"></i> Web Search
37
+ </button>
38
+ <button class="btn" onclick="useTool('code')" style="background: #f59e0b;">
39
+ <i class="fas fa-code"></i> Code Execution
40
+ </button>
41
+ </div>
42
+ </div>
43
+ <div style="margin-bottom: 1.5rem;">
44
+ <label><strong>Parameters:</strong></label>
45
+ <div style="margin-top: 0.5rem;">
46
+ <label>Max Tokens: <span id="maxTokensValue">500</span></label>
47
+ <input type="range" id="maxTokens" min="100" max="2000" value="500" style="width: 100%; margin: 0.5rem 0;">
48
+ <label>Temperature: <span id="tempValue">0.7</span></label>
49
+ <input type="range" id="temperature" min="0" max="1" step="0.1" value="0.7" style="width: 100%; margin: 0.5rem 0;">
50
+ </div>
51
+ </div>
52
+ </div>
53
+ </div>
54
+ <div style="display: flex; gap: 1rem; margin-top: 1rem;">
55
+ <input type="text" id="messageInput" placeholder="Type your message here..." style="flex: 1; padding: 1rem; border: 1px solid #e1e5e9; border-radius: 8px;" onkeypress="if(event.key === 'Enter') sendMessage()">
56
+ <button class="btn" onclick="sendMessage()" style="padding: 1rem 2rem;">
57
+ <i class="fas fa-paper-plane"></i> Send
58
+ </button>
59
+ </div>
60
+ </div>
61
+ </div>
62
+ <script>
63
+ let chatHistory = [];
64
+ function updateSliderValues() {
65
+ document.getElementById('maxTokensValue').textContent = document.getElementById('maxTokens').value;
66
+ document.getElementById('tempValue').textContent = document.getElementById('temperature').value;
67
+ }
68
+ document.getElementById('maxTokens').addEventListener('input', updateSliderValues);
69
+ document.getElementById('temperature').addEventListener('input', updateSliderValues);
70
+ function addMessage(role, content) {
71
+ const messagesDiv = document.getElementById('chatMessages');
72
+ const messageDiv = document.createElement('div');
73
+ messageDiv.className = `message ${role}`;
74
+ messageDiv.innerHTML = `
75
+ <div style="margin: 1rem 0; padding: 1rem; border-radius: 10px; ${role === 'user' ? 'background: #667eea; color: white; margin-left: 2rem;' : 'background: white; border: 1px solid #e1e5e9; margin-right: 2rem;'}">
76
+ <strong>${role === 'user' ? 'You' : 'AI'}:</strong>
77
+ <div>${content}</div>
78
+ <small style="opacity: 0.7; font-size: 0.8rem;">${new Date().toLocaleTimeString()}</small>
79
+ </div>
80
+ `;
81
+ messagesDiv.appendChild(messageDiv);
82
+ messagesDiv.scrollTop = messagesDiv.scrollHeight;
83
+ }
84
+ async function sendMessage() {
85
+ const input = document.getElementById('messageInput');
86
+ const message = input.value.trim();
87
+ if (!message) return;
88
+ addMessage('user', message);
89
+ input.value = '';
90
+ const model = document.getElementById('modelSelect').value;
91
+ const maxTokens = document.getElementById('maxTokens').value;
92
+ const temperature = document.getElementById('temperature').value;
93
+ try {
94
+ const response = await fetch('/api/chat', {
95
+ method: 'POST',
96
+ headers: {'Content-Type': 'application/json'},
97
+ body: JSON.stringify({
98
+ message: message,
99
+ model: model,
100
+ max_tokens: parseInt(maxTokens),
101
+ temperature: parseFloat(temperature),
102
+ stream: false
103
+ })
104
+ });
105
+ const data = await response.json();
106
+ if (response.ok && data.status === 'success') {
107
+ addMessage('assistant', data.response);
108
+ } else {
109
+ addMessage('assistant', `Error: ${data.detail || 'Unknown error'}`);
110
+ }
111
+ } catch (error) {
112
+ addMessage('assistant', `Connection error: ${error.message}`);
113
+ }
114
+ }
115
+ async function useTool(tool) {
116
+ const message = prompt(`Enter parameters for ${tool}:`);
117
+ if (!message) return;
118
+ try {
119
+ const endpoint = tool === 'search' ? '/api/tools/search' : '/api/tools/code';
120
+ const response = await fetch(endpoint, {
121
+ method: 'POST',
122
+ headers: {'Content-Type': 'application/json'},
123
+ body: JSON.stringify({
124
+ tool: tool === 'search' ? 'web_search' : 'execute_python',
125
+ parameters: { query: message, code: message }
126
+ })
127
+ });
128
+ const data = await response.json();
129
+ addMessage('assistant', `Tool result (${tool}): ${JSON.stringify(data, null, 2)}`);
130
+ } catch (error) {
131
+ addMessage('assistant', `Tool error: ${error.message}`);
132
+ }
133
+ }
134
+ updateSliderValues();
135
+ document.getElementById('messageInput').focus();
136
+ </script>
137
+ {% endblock %}
templates/index.html ADDED
@@ -0,0 +1,37 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {% extends "base.html" %}
2
+
3
+ {% block title %}Home - Sheikh LLM Studio{% endblock %}
4
+
5
+ {% block content %}
6
+ <div class="container">
7
+ <div class="card">
8
+ <h1><i class="fas fa-rocket"></i> Sheikh LLM Studio</h1>
9
+ <p>Deploy, experiment, and build with advanced language models inside your Hugging Face Space.</p>
10
+ <div style="margin-top: 1.5rem; display: flex; gap: 1rem; flex-wrap: wrap;">
11
+ <a class="btn" href="/chat"><i class="fas fa-comments"></i> Launch Chat</a>
12
+ <a class="btn" href="/studio" style="background: var(--success);"><i class="fas fa-cogs"></i> Open Model Studio</a>
13
+ <a class="btn" href="/docs" style="background: var(--warning);"><i class="fas fa-book"></i> View API Docs</a>
14
+ </div>
15
+ </div>
16
+ <div class="card">
17
+ <h2>Platform Capabilities</h2>
18
+ <div class="feature-grid">
19
+ <div class="feature-card">
20
+ <div class="feature-icon"><i class="fas fa-robot"></i></div>
21
+ <h3>Multi-Model Chat</h3>
22
+ <p>Switch between gated Hugging Face models and customize temperature and token limits.</p>
23
+ </div>
24
+ <div class="feature-card">
25
+ <div class="feature-icon"><i class="fas fa-toolbox"></i></div>
26
+ <h3>Tool Integrations</h3>
27
+ <p>Extend responses with web search prototypes and code execution utilities.</p>
28
+ </div>
29
+ <div class="feature-card">
30
+ <div class="feature-icon"><i class="fas fa-flask"></i></div>
31
+ <h3>Model Studio</h3>
32
+ <p>Draft new training jobs and track deployment progress from one interface.</p>
33
+ </div>
34
+ </div>
35
+ </div>
36
+ </div>
37
+ {% endblock %}
templates/studio.html ADDED
@@ -0,0 +1,126 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {% extends "base.html" %}
2
+
3
+ {% block title %}Model Studio - Sheikh LLM Studio{% endblock %}
4
+
5
+ {% block content %}
6
+ <div class="container">
7
+ <div class="card">
8
+ <h1><i class="fas fa-cogs"></i> Model Creation Studio</h1>
9
+ <p>Create, train, and deploy your own AI models</p>
10
+ </div>
11
+ <div class="feature-grid">
12
+ <div class="feature-card">
13
+ <div class="feature-icon"><i class="fas fa-plus-circle"></i></div>
14
+ <h3>Create New Model</h3>
15
+ <p>Fine-tune existing models with your data</p>
16
+ <button class="btn" onclick="showCreateModal()">Create Model</button>
17
+ </div>
18
+ <div class="feature-card">
19
+ <div class="feature-icon"><i class="fas fa-list"></i></div>
20
+ <h3>My Models</h3>
21
+ <p>Manage your created models</p>
22
+ <button class="btn" onclick="loadMyModels()">View Models</button>
23
+ </div>
24
+ <div class="feature-card">
25
+ <div class="feature-icon"><i class="fas fa-chart-line"></i></div>
26
+ <h3>Training Jobs</h3>
27
+ <p>Monitor model training progress</p>
28
+ <button class="btn" onclick="loadTrainingJobs()">View Jobs</button>
29
+ </div>
30
+ </div>
31
+ <div class="card" id="modelCreationForm" style="display: none;">
32
+ <h3>Create New Model</h3>
33
+ <form id="createModelForm">
34
+ <div style="display: grid; grid-template-columns: 1fr 1fr; gap: 1rem; margin-bottom: 1rem;">
35
+ <div>
36
+ <label><strong>Base Model:</strong></label>
37
+ <select id="baseModel" class="btn" style="width: 100%; margin-top: 0.5rem;">
38
+ <option value="mistralai/Mistral-7B-Instruct-v0.3">Mistral 7B Instruct</option>
39
+ <option value="microsoft/DialoGPT-medium">DialoGPT Medium</option>
40
+ </select>
41
+ </div>
42
+ <div>
43
+ <label><strong>Model Name:</strong></label>
44
+ <input type="text" id="modelName" placeholder="my-custom-model" style="width: 100%; padding: 0.75rem; border: 1px solid #e1e5e9; border-radius: 8px; margin-top: 0.5rem;">
45
+ </div>
46
+ </div>
47
+ <div style="margin-bottom: 1rem;">
48
+ <label><strong>Training Dataset:</strong></label>
49
+ <input type="text" id="datasetPath" placeholder="path/to/dataset" style="width: 100%; padding: 0.75rem; border: 1px solid #e1e5e9; border-radius: 8px; margin-top: 0.5rem;">
50
+ </div>
51
+ <div style="margin-bottom: 1rem;">
52
+ <label><strong>Training Configuration:</strong></label>
53
+ <textarea id="trainingConfig" style="width: 100%; height: 100px; padding: 0.75rem; border: 1px solid #e1e5e9; border-radius: 8px; margin-top: 0.5rem;" placeholder='{"epochs": 3, "learning_rate": 2e-5, "batch_size": 4}'></textarea>
54
+ </div>
55
+ <div style="display: flex; gap: 1rem;">
56
+ <button type="button" class="btn" onclick="createModel()" style="background: var(--success);">
57
+ <i class="fas fa-rocket"></i> Start Training
58
+ </button>
59
+ <button type="button" class="btn" onclick="hideCreateModal()" style="background: var(--error);">
60
+ <i class="fas fa-times"></i> Cancel
61
+ </button>
62
+ </div>
63
+ </form>
64
+ </div>
65
+ <div class="card" id="jobsDisplay" style="display: none;">
66
+ <h3>Training Jobs</h3>
67
+ <div id="jobsList"></div>
68
+ </div>
69
+ </div>
70
+ <script>
71
+ function showCreateModal() {
72
+ document.getElementById('modelCreationForm').style.display = 'block';
73
+ document.getElementById('jobsDisplay').style.display = 'none';
74
+ }
75
+ function hideCreateModal() {
76
+ document.getElementById('modelCreationForm').style.display = 'none';
77
+ }
78
+ async function createModel() {
79
+ const baseModel = document.getElementById('baseModel').value;
80
+ const modelName = document.getElementById('modelName').value;
81
+ const datasetPath = document.getElementById('datasetPath').value;
82
+ const trainingConfig = document.getElementById('trainingConfig').value;
83
+ if (!modelName || !datasetPath) {
84
+ alert('Please fill in all required fields');
85
+ return;
86
+ }
87
+ try {
88
+ const config = trainingConfig ? JSON.parse(trainingConfig) : { epochs: 3, learning_rate: 2e-5, batch_size: 4 };
89
+ const response = await fetch('/api/studio/create-model', {
90
+ method: 'POST',
91
+ headers: { 'Content-Type': 'application/json' },
92
+ body: JSON.stringify({
93
+ base_model: baseModel,
94
+ dataset_path: datasetPath,
95
+ training_config: config
96
+ })
97
+ });
98
+ const data = await response.json();
99
+ if (data.status === 'success') {
100
+ alert(`Training started! Job ID: ${data.job.job_id}`);
101
+ hideCreateModal();
102
+ loadTrainingJobs();
103
+ } else {
104
+ alert('Error starting training: ' + data.detail);
105
+ }
106
+ } catch (error) {
107
+ alert('Error: ' + error.message);
108
+ }
109
+ }
110
+ async function loadTrainingJobs() {
111
+ document.getElementById('jobsDisplay').style.display = 'block';
112
+ document.getElementById('modelCreationForm').style.display = 'none';
113
+ const jobsList = document.getElementById('jobsList');
114
+ jobsList.innerHTML = `
115
+ <div style="padding: 1rem; background: #f8f9fa; border-radius: 8px; margin: 0.5rem 0;">
116
+ <strong>Job: train_12345</strong>
117
+ <div>Status: <span style="color: var(--success);">Completed</span></div>
118
+ <div>Model: <a href="#">RecentCoders/my-custom-model</a></div>
119
+ </div>
120
+ `;
121
+ }
122
+ function loadMyModels() {
123
+ alert('My Models feature coming soon!');
124
+ }
125
+ </script>
126
+ {% endblock %}