Commit 0e33469
Parent(s): ef1c061
Rm submodule
- audio.md +188 -0
- requirements.txt +3 -0
- src/api/routes/audio.py +141 -0
- src/main.py +2 -1
- src/services/audio_transcription.py +177 -0
- static/css/styles.css +104 -0
- static/js/app.js +25 -0
- static/js/audio/recorder.js +417 -0
- static/js/chat/messaging.js +0 -162
- static/js/chat/sessions.js +0 -164
- static/js/ui/doctor.js +0 -124
- static/js/ui/handlers.js +0 -68
- static/js/ui/patient.js +0 -246
- static/js/ui/settings.js +0 -74
audio.md
ADDED
@@ -0,0 +1,188 @@

# Audio Transcription Feature

This document describes the audio transcription functionality integrated into the Medical AI Assistant.

## Overview

The audio transcription feature allows users to record voice input using their microphone and automatically transcribe it to text using NVIDIA's Riva API. The transcribed text is then inserted into the chat input field for editing before submission.

## Features

- **Voice Recording**: Click and hold the microphone button to record audio
- **Real-time Feedback**: Visual indicators show recording and processing states
- **Automatic Transcription**: Audio is automatically sent to the NVIDIA Riva API for transcription
- **Text Editing**: Transcribed text is inserted into the chat input for review and editing
- **Error Handling**: Graceful error handling with user-friendly messages

## Technical Implementation

### Backend Components

1. **Audio Transcription Service** (`src/services/audio_transcription.py`)
   - Handles communication with the NVIDIA Riva API
   - Supports WAV, OPUS, and FLAC audio formats
   - Includes audio format validation
   - Provides both file- and bytes-based transcription

2. **Audio API Routes** (`src/api/routes/audio.py`)
   - `/audio/transcribe` - POST endpoint for audio transcription
   - `/audio/supported-formats` - GET endpoint for supported formats
   - `/audio/health` - GET endpoint for service health check

### Frontend Components

1. **Audio Recorder** (`static/js/audio/recorder.js`)
   - `AudioRecorder` class for handling audio recording
   - `AudioRecordingUI` class for UI management
   - MediaRecorder API integration
   - Real-time visual feedback

2. **CSS Styles** (`static/css/styles.css`)
   - Recording state indicators
   - Visual feedback for transcribed text
   - Responsive design for mobile devices

## API Configuration

### NVIDIA Riva API Setup

1. **API Keys**: Set environment variables for NVIDIA API keys:
   ```bash
   export NVIDIA_API_1="your_api_key_1"
   export NVIDIA_API_2="your_api_key_2"
   # ... up to NVIDIA_API_5
   ```

2. **Function ID**: The service uses the Whisper model function ID:
   ```
   b702f636-f60c-4a3d-a6f4-f3568c13bd7d
   ```

3. **Server Endpoint**:
   ```
   grpc.nvcf.nvidia.com:443
   ```

### Supported Audio Formats

- **WAV**: 16-bit, mono, 16 kHz recommended
- **OPUS**: WebM container with Opus codec
- **FLAC**: Lossless compression

## Usage

### For Users

1. **Start Recording**: Click and hold the microphone button
2. **Speak**: Record your voice input (up to 30 seconds)
3. **Stop Recording**: Release the button to stop recording
4. **Review Text**: Transcribed text appears in the chat input
5. **Edit if Needed**: Modify the text before sending
6. **Send Message**: Submit the message as usual

### For Developers

1. **Install Dependencies**:
   ```bash
   pip install nvidia-riva-client
   ```

2. **Test the Service**:
   ```bash
   python test_audio_transcription.py
   ```

3. **Start the Server**:
   ```bash
   python start.py
   ```

## Browser Compatibility

- **Chrome/Chromium**: Full support
- **Firefox**: Full support
- **Safari**: Full support (iOS 14.3+)
- **Edge**: Full support

## Security Considerations

- **Microphone Access**: Requires user permission
- **Audio Data**: Transmitted securely to the NVIDIA API
- **No Local Storage**: Audio data is not stored locally
- **API Key Rotation**: Automatic key rotation for reliability

## Troubleshooting

### Common Issues

1. **Microphone Access Denied**
   - Ensure the browser has microphone permissions
   - Check browser settings for site permissions

2. **Transcription Fails**
   - Verify NVIDIA API keys are set correctly
   - Check network connectivity
   - Ensure the audio format is supported

3. **No Audio Detected**
   - Check that the microphone is working
   - Ensure audio levels are adequate
   - Try speaking closer to the microphone

### Debug Mode

Enable debug logging by opening the browser developer tools and checking the console for detailed error messages.

## Future Enhancements

- **Language Selection**: Support for multiple languages
- **Real-time Transcription**: Live transcription during recording
- **Audio Quality Settings**: Configurable audio parameters
- **Offline Support**: Local transcription capabilities
- **Voice Commands**: Special voice commands for UI control

## API Reference

### POST /audio/transcribe

Transcribe an audio file to text.

**Request:**
- `file`: Audio file (multipart/form-data)
- `language_code`: Language code (form data, default: "en")

**Response:**
```json
{
  "success": true,
  "transcribed_text": "Hello, this is a test.",
  "language_code": "en",
  "file_name": "recording.webm"
}
```

### GET /audio/supported-formats

Get list of supported audio formats.

**Response:**
```json
{
  "supported_formats": [".wav", ".opus", ".flac"],
  "description": "Supported audio formats for transcription"
}
```

### GET /audio/health

Check audio transcription service health.

**Response:**
```json
{
  "service": "audio_transcription",
  "status": "available",
  "nvidia_keys_available": true,
  "supported_formats": [".wav", ".opus", ".flac"]
}
```
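The `POST /audio/transcribe` endpoint above takes a multipart upload plus a `language_code` form field. A minimal client-side sketch of assembling that request (the base URL is an assumption; the format-to-MIME mapping mirrors the supported formats listed above):

```python
# Dotted extensions from /audio/supported-formats mapped to plausible MIME types
SUPPORTED = {".wav": "audio/wav", ".opus": "audio/opus", ".flac": "audio/flac"}

def build_transcribe_request(base_url: str, audio_path: str, language_code: str = "en") -> dict:
    """Assemble the URL, multipart file tuple, and form data for /audio/transcribe."""
    ext = "." + audio_path.rsplit(".", 1)[-1].lower()
    if ext not in SUPPORTED:
        raise ValueError(f"unsupported audio format: {ext}")
    return {
        "url": f"{base_url}/audio/transcribe",
        # (field name, (filename, content type)); the caller supplies the open file handle
        "file_field": ("file", (audio_path, SUPPORTED[ext])),
        "data": {"language_code": language_code},
    }
```

With `requests`, this would map to something like `requests.post(req["url"], files={"file": (name, fh, mime)}, data=req["data"])`; the actual send is omitted here since it needs a running server.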
requirements.txt
CHANGED
@@ -45,6 +45,9 @@ pytest-asyncio==0.21.1
 black==23.11.0
 flake8==6.1.0
 
+# NVIDIA Riva client for speech-to-text
+nvidia-riva-client
+
 # Optional: GPU support for embeddings (uncomment if using CUDA)
 # torch==2.1.1+cu118 --index-url https://download.pytorch.org/whl/cu118
 # faiss-gpu==1.7.4
src/api/routes/audio.py
ADDED
@@ -0,0 +1,141 @@

# api/routes/audio.py

from fastapi import APIRouter, Depends, HTTPException, UploadFile, File, Form
from fastapi.responses import JSONResponse

from src.core.state import MedicalState, get_state
from src.services.audio_transcription import (
    transcribe_audio_bytes,
    validate_audio_format,
    get_supported_formats
)
from src.utils.logger import get_logger

logger = get_logger("AUDIO_API", __name__)

router = APIRouter(prefix="/audio", tags=["audio"])

@router.post("/transcribe")
async def transcribe_audio(
    file: UploadFile = File(...),
    language_code: str = Form(default="en"),
    state: MedicalState = Depends(get_state)
) -> JSONResponse:
    """
    Transcribe audio file to text using NVIDIA Riva API.

    Args:
        file: Audio file (WAV, OPUS, or FLAC format)
        language_code: Language code for transcription (default: 'en')
        state: Application state

    Returns:
        JSON response with transcribed text
    """
    try:
        # Validate file type
        if not file.content_type or not any(
            file.content_type.startswith(f"audio/{fmt}")
            for fmt in ["wav", "opus", "flac"]
        ):
            # Also check file extension as fallback; prepend a dot so it
            # matches the dotted entries returned by get_supported_formats()
            file_extension = '.' + file.filename.rsplit('.', 1)[-1].lower() if file.filename else ""
            if file_extension not in get_supported_formats():
                raise HTTPException(
                    status_code=400,
                    detail=f"Unsupported audio format. Supported formats: {', '.join(get_supported_formats())}"
                )

        # Read audio data
        audio_bytes = await file.read()

        if len(audio_bytes) == 0:
            raise HTTPException(status_code=400, detail="Empty audio file")

        # Validate audio format
        if not validate_audio_format(audio_bytes):
            raise HTTPException(
                status_code=400,
                detail="Invalid audio format. Please ensure the file is a valid WAV, OPUS, or FLAC file."
            )

        # Transcribe audio
        logger.info(f"Transcribing audio file: {file.filename} (language: {language_code})")
        transcribed_text = await transcribe_audio_bytes(
            audio_bytes,
            language_code,
            state.nvidia_rotator
        )

        if transcribed_text is None:
            raise HTTPException(
                status_code=500,
                detail="Transcription failed. Please try again or check your audio file."
            )

        return JSONResponse(
            status_code=200,
            content={
                "success": True,
                "transcribed_text": transcribed_text,
                "language_code": language_code,
                "file_name": file.filename
            }
        )

    except HTTPException:
        raise
    except Exception as e:
        logger.error(f"Audio transcription error: {e}")
        raise HTTPException(
            status_code=500,
            detail="Internal server error during transcription"
        )

@router.get("/supported-formats")
async def get_audio_formats() -> JSONResponse:
    """
    Get list of supported audio formats for transcription.

    Returns:
        JSON response with supported formats
    """
    return JSONResponse(
        status_code=200,
        content={
            "supported_formats": get_supported_formats(),
            "description": "Supported audio formats for transcription"
        }
    )

@router.get("/health")
async def audio_health_check(state: MedicalState = Depends(get_state)) -> JSONResponse:
    """
    Check if audio transcription service is available.

    Returns:
        JSON response with service status
    """
    try:
        # Check if NVIDIA API keys are available
        nvidia_keys_available = len([k for k in state.nvidia_rotator.keys if k]) > 0

        return JSONResponse(
            status_code=200,
            content={
                "service": "audio_transcription",
                "status": "available" if nvidia_keys_available else "unavailable",
                "nvidia_keys_available": nvidia_keys_available,
                "supported_formats": get_supported_formats()
            }
        )
    except Exception as e:
        logger.error(f"Audio health check error: {e}")
        return JSONResponse(
            status_code=500,
            content={
                "service": "audio_transcription",
                "status": "error",
                "error": str(e)
            }
        )
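The route's extension fallback only works if the extracted extension is normalized to the same dotted form that `get_supported_formats()` returns. That normalization can be isolated into a small standalone helper (the hard-coded list mirrors `get_supported_formats()`):

```python
# Mirrors get_supported_formats() in src/services/audio_transcription.py
SUPPORTED_FORMATS = ['.wav', '.opus', '.flac']

def extension_allowed(filename: str) -> bool:
    """Normalize a filename's extension to lowercase '.ext' and check the allow-list."""
    if not filename or '.' not in filename:
        return False
    ext = '.' + filename.rsplit('.', 1)[-1].lower()
    return ext in SUPPORTED_FORMATS
```

Comparing a bare `"wav"` against `['.wav', ...]` would reject every file, which is why the leading dot matters.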
src/main.py
CHANGED
@@ -17,7 +17,7 @@ except ImportError:
 except Exception as e:
     print(f"⚠️ Error loading .env file: {e}")
 
-from src.api.routes import chat, session, static, system, user
+from src.api.routes import audio, chat, session, static, system, user
 from src.core.state import MedicalState
 from src.utils.logger import get_logger
 
@@ -107,3 +107,4 @@ app.include_router(user.router)
 app.include_router(session.router)
 app.include_router(system.router)
 app.include_router(static.router)
+app.include_router(audio.router)
src/services/audio_transcription.py
ADDED
@@ -0,0 +1,177 @@

# services/audio_transcription.py

import os
import tempfile
from typing import Optional

from src.utils.logger import get_logger
from src.utils.rotator import APIKeyRotator

logger = get_logger("AUDIO_TRANSCRIPTION", __name__)

# NVIDIA Riva configuration
RIVA_SERVER = "grpc.nvcf.nvidia.com:443"
RIVA_FUNCTION_ID = "b702f636-f60c-4a3d-a6f4-f3568c13bd7d"

async def transcribe_audio_file(
    audio_file_path: str,
    language_code: str,
    rotator: APIKeyRotator
) -> Optional[str]:
    """
    Transcribe audio file using NVIDIA Riva API.

    Args:
        audio_file_path: Path to the audio file (WAV, OPUS, or FLAC format)
        language_code: Language code for transcription (e.g., 'en', 'fr', 'multi')
        rotator: API key rotator for authentication

    Returns:
        Transcribed text or None if transcription fails
    """
    try:
        # Check if file exists
        if not os.path.exists(audio_file_path):
            logger.error(f"Audio file not found: {audio_file_path}")
            return None

        # Get API key
        api_key = rotator.get_key()
        if not api_key:
            logger.error("No NVIDIA API key available for transcription")
            return None

        # Prepare the request
        url = f"https://{RIVA_SERVER}/v1/speech/transcribe"

        headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/octet-stream"
        }

        # Read audio file
        with open(audio_file_path, 'rb') as audio_file:
            audio_data = audio_file.read()

        # Prepare metadata
        metadata = {
            "function-id": RIVA_FUNCTION_ID,
            "language-code": language_code
        }

        # Add metadata to headers
        for key, value in metadata.items():
            headers[f"x-{key}"] = value

        # Make the request
        logger.info(f"Transcribing audio file: {audio_file_path} (language: {language_code})")

        # Use httpx for binary data upload
        import httpx

        async with httpx.AsyncClient(timeout=60) as client:
            response = await client.post(
                url,
                headers=headers,
                content=audio_data
            )

            if response.status_code in (401, 403, 429) or (500 <= response.status_code < 600):
                logger.warning(f"HTTP {response.status_code} from Riva API. Rotating key and retrying...")
                rotator.rotate()
                # Retry with new key
                api_key = rotator.get_key()
                if api_key:
                    headers["Authorization"] = f"Bearer {api_key}"
                    response = await client.post(url, headers=headers, content=audio_data)

            response.raise_for_status()

            # Parse response
            result = response.json()
            transcribed_text = result.get("transcript", "").strip()

            if transcribed_text:
                logger.info(f"Successfully transcribed audio: {len(transcribed_text)} characters")
                return transcribed_text
            else:
                logger.warning("Transcription returned empty result")
                return None

    except Exception as e:
        logger.error(f"Audio transcription failed: {e}")
        return None

async def transcribe_audio_bytes(
    audio_bytes: bytes,
    language_code: str,
    rotator: APIKeyRotator
) -> Optional[str]:
    """
    Transcribe audio bytes using NVIDIA Riva API.

    Args:
        audio_bytes: Audio data as bytes
        language_code: Language code for transcription (e.g., 'en', 'fr', 'multi')
        rotator: API key rotator for authentication

    Returns:
        Transcribed text or None if transcription fails
    """
    try:
        # Create temporary file
        with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as temp_file:
            temp_file.write(audio_bytes)
            temp_file_path = temp_file.name

        try:
            # Transcribe the temporary file
            result = await transcribe_audio_file(temp_file_path, language_code, rotator)
            return result
        finally:
            # Clean up temporary file
            try:
                os.unlink(temp_file_path)
            except OSError:
                pass

    except Exception as e:
        logger.error(f"Audio bytes transcription failed: {e}")
        return None

def validate_audio_format(audio_bytes: bytes) -> bool:
    """
    Validate if the audio bytes are in a supported format.

    Args:
        audio_bytes: Audio data as bytes

    Returns:
        True if format is supported, False otherwise
    """
    # Check for common audio file signatures
    if len(audio_bytes) < 12:
        return False

    # WAV format check (RIFF header)
    if audio_bytes[:4] == b'RIFF' and audio_bytes[8:12] == b'WAVE':
        return True

    # OPUS format check (OggS header)
    if audio_bytes[:4] == b'OggS':
        return True

    # FLAC format check (fLaC header)
    if audio_bytes[:4] == b'fLaC':
        return True

    return False

def get_supported_formats() -> list[str]:
    """
    Get list of supported audio formats.

    Returns:
        List of supported format extensions
    """
    return ['.wav', '.opus', '.flac']
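The rotate-and-retry behaviour in `transcribe_audio_file` (rotate the key on 401/403/429 or any 5xx, then retry once) can be isolated into a small synchronous sketch. `StubRotator` and the injected `post` callable are stand-ins for illustration, not the real `APIKeyRotator` or the httpx client:

```python
# Status codes that trigger a key rotation, matching the service's check
RETRYABLE = {401, 403, 429}

class StubRotator:
    """Minimal stand-in for APIKeyRotator: cycles through a fixed key list."""
    def __init__(self, keys):
        self.keys = keys
        self.idx = 0

    def get_key(self):
        return self.keys[self.idx]

    def rotate(self):
        self.idx = (self.idx + 1) % len(self.keys)

def post_with_rotation(post, rotator):
    """POST once; on an auth/rate-limit/server error, rotate the key and retry once."""
    status = post(rotator.get_key())
    if status in RETRYABLE or 500 <= status < 600:
        rotator.rotate()
        status = post(rotator.get_key())
    return status
```

For example, with a `post` stub that returns 401 for an exhausted key and 200 for a fresh one, the helper rotates once and succeeds on the second attempt.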
static/css/styles.css
CHANGED
@@ -497,6 +497,7 @@ body {
     border-radius: 12px;
     padding: var(--spacing-md);
     transition: border-color var(--transition-fast);
+    position: relative;
 }
 
 .input-wrapper:focus-within {
@@ -583,6 +584,109 @@ body {
     color: var(--text-primary);
 }
 
+/* Audio recording states */
+.microphone-btn.recording-ready {
+    color: var(--text-primary);
+}
+
+.microphone-btn.recording-active {
+    background-color: var(--error-color);
+    color: white;
+    animation: pulse 1s infinite;
+}
+
+.microphone-btn.recording-processing {
+    background-color: var(--warning-color);
+    color: white;
+    animation: spin 1s linear infinite;
+}
+
+@keyframes pulse {
+    0% { opacity: 1; }
+    50% { opacity: 0.7; }
+    100% { opacity: 1; }
+}
+
+@keyframes spin {
+    from { transform: rotate(0deg); }
+    to { transform: rotate(360deg); }
+}
+
+/* Recording indicator */
+.recording-indicator {
+    position: absolute;
+    top: -40px;
+    left: 50%;
+    transform: translateX(-50%);
+    background-color: var(--error-color);
+    color: white;
+    padding: 8px 16px;
+    border-radius: 20px;
+    font-size: 0.875rem;
+    font-weight: 500;
+    display: none;
+    z-index: 1000;
+    box-shadow: var(--shadow-md);
+}
+
+.recording-indicator i {
+    margin-right: 6px;
+    animation: pulse 1s infinite;
+}
+
+/* Audio messages */
+.audio-error-message,
+.audio-success-message {
+    position: absolute;
+    top: -40px;
+    left: 50%;
+    transform: translateX(-50%);
+    padding: 8px 16px;
+    border-radius: 20px;
+    font-size: 0.875rem;
+    font-weight: 500;
+    display: none;
+    z-index: 1000;
+    box-shadow: var(--shadow-md);
+}
+
+.audio-error-message {
+    background-color: var(--error-color);
+    color: white;
+}
+
+.audio-success-message {
+    background-color: var(--success-color);
+    color: white;
+}
+
+/* Transcribed text highlighting */
+.chat-input.transcribed {
+    border-color: var(--success-color);
+    background-color: rgba(34, 197, 94, 0.05);
+}
+
+.chat-input.transcribed::placeholder {
+    color: var(--success-color);
+}
+
+/* Audio recording button enhancements */
+.microphone-btn.recording-enabled {
+    position: relative;
+}
+
+.microphone-btn.recording-enabled::after {
+    content: '';
+    position: absolute;
+    top: -2px;
+    right: -2px;
+    width: 8px;
+    height: 8px;
+    background-color: var(--success-color);
+    border-radius: 50%;
+    border: 2px solid var(--bg-primary);
+}
+
 /* Modals */
 .modal {
     display: none;
static/js/app.js
CHANGED
@@ -8,6 +8,9 @@
 // import { attachSettingsUI } from './ui/settings.js';
 // import { attachSessionsUI } from './chat/sessions.js';
 // import { attachMessagingUI } from './chat/messaging.js';
+// Audio recording functionality will be incorporated below
+
+// Audio recording functionality will be incorporated below
 
 class MedicalChatbotApp {
     constructor() {
@@ -18,6 +21,7 @@ class MedicalChatbotApp {
         this.memory = new Map(); // In-memory storage for STM/demo
         this.isLoading = false;
         this.doctors = this.loadDoctors();
+        this.audioRecorder = null; // Audio recording UI
 
         this.init();
     }
@@ -53,6 +57,9 @@ class MedicalChatbotApp {
         const prefs = JSON.parse(localStorage.getItem('medicalChatbotPreferences') || '{}');
         this.setTheme(prefs.theme || 'auto');
         this.setupTheme();
+
+        // Initialize audio recording
+        this.initializeAudioRecording();
     }
 
     setupEventListeners() {
@@ -1625,6 +1632,24 @@ How can I assist you today?`;
         if (createBtn) createBtn.addEventListener('click', () => modal.classList.remove('show'));
     }
 
+    // ================================================================================
+    // AUDIO RECORDING FUNCTIONALITY
+    // ================================================================================
+    async initializeAudioRecording() {
+        try {
+            this.audioRecorder = new AudioRecordingUI(this);
+            const success = await this.audioRecorder.initialize();
+
+            if (success) {
+                console.log('[Audio] Audio recording initialized successfully');
+            } else {
+                console.warn('[Audio] Audio recording initialization failed');
+            }
+        } catch (error) {
+            console.error('[Audio] Failed to initialize audio recording:', error);
+        }
+    }
+
     // ================================================================================
     // MESSAGING.JS FUNCTIONALITY
     // ================================================================================
static/js/audio/recorder.js
ADDED
|
@@ -0,0 +1,417 @@
```javascript
// audio/recorder.js
// Audio recording and transcription functionality

export class AudioRecorder {
    constructor() {
        this.mediaRecorder = null;
        this.audioChunks = [];
        this.isRecording = false;
        this.audioContext = null;
        this.audioStream = null;
        this._stopDone = null;    // resolves once the final chunk has been flushed
        this._resolveStop = null;
    }

    async initialize() {
        try {
            // Request microphone access
            this.audioStream = await navigator.mediaDevices.getUserMedia({
                audio: {
                    sampleRate: 16000,  // 16 kHz for better speech recognition
                    channelCount: 1,    // Mono
                    echoCancellation: true,
                    noiseSuppression: true,
                    autoGainControl: true
                }
            });

            // Create MediaRecorder
            this.mediaRecorder = new MediaRecorder(this.audioStream, {
                mimeType: 'audio/webm;codecs=opus'
            });

            // Collect chunks as they are produced
            this.mediaRecorder.ondataavailable = (event) => {
                if (event.data.size > 0) {
                    this.audioChunks.push(event.data);
                }
            };

            // Signal that the recorder has fully stopped. Transcription itself is
            // driven by processRecording(), which awaits this signal so the final
            // chunk is never read before ondataavailable has delivered it.
            this.mediaRecorder.onstop = () => {
                if (this._resolveStop) {
                    this._resolveStop();
                    this._resolveStop = null;
                }
            };

            return true;
        } catch (error) {
            console.error('Failed to initialize audio recorder:', error);
            throw new Error('Microphone access denied or not available');
        }
    }

    startRecording() {
        if (!this.mediaRecorder || this.isRecording) {
            return false;
        }

        try {
            this.audioChunks = [];
            this._stopDone = null;
            this.mediaRecorder.start();
            this.isRecording = true;
            console.log('Audio recording started');
            return true;
        } catch (error) {
            console.error('Failed to start recording:', error);
            return false;
        }
    }

    stopRecording() {
        if (!this.mediaRecorder || !this.isRecording) {
            return false;
        }

        try {
            // processRecording() waits on this promise, which onstop resolves.
            this._stopDone = new Promise((resolve) => { this._resolveStop = resolve; });
            this.mediaRecorder.stop();
            this.isRecording = false;
            console.log('Audio recording stopped');
            return true;
        } catch (error) {
            console.error('Failed to stop recording:', error);
            return false;
        }
    }

    async processRecording() {
        // Wait until onstop has fired so all chunks have been delivered.
        if (this._stopDone) {
            await this._stopDone;
            this._stopDone = null;
        }

        if (this.audioChunks.length === 0) {
            console.warn('No audio data recorded');
            return null;
        }

        try {
            // Create audio blob
            const audioBlob = new Blob(this.audioChunks, { type: 'audio/webm' });

            // Convert to WAV format for better compatibility
            const wavBlob = await this.convertToWav(audioBlob);

            // Transcribe audio
            const transcribedText = await this.transcribeAudio(wavBlob);

            return transcribedText;
        } catch (error) {
            console.error('Failed to process recording:', error);
            throw error;
        }
    }

    async convertToWav(webmBlob) {
        // For now the WebM blob is used directly. A production build could
        // re-encode to WAV here, which requires an additional encoder library.
        return webmBlob;
    }

    async transcribeAudio(audioBlob) {
        try {
            const formData = new FormData();
            formData.append('file', audioBlob, 'recording.webm');
            formData.append('language_code', 'en');

            const response = await fetch('/audio/transcribe', {
                method: 'POST',
                body: formData
            });

            if (!response.ok) {
                const errorData = await response.json();
                throw new Error(errorData.detail || 'Transcription failed');
            }

            const result = await response.json();
            return result.transcribed_text;
        } catch (error) {
            console.error('Transcription failed:', error);
            throw error;
        }
    }

    cleanup() {
        if (this.audioStream) {
            this.audioStream.getTracks().forEach(track => track.stop());
            this.audioStream = null;
        }

        if (this.audioContext) {
            this.audioContext.close();
            this.audioContext = null;
        }

        this.mediaRecorder = null;
        this.audioChunks = [];
        this.isRecording = false;
    }

    isAvailable() {
        return !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia);
    }
}

// Audio recording UI controller
export class AudioRecordingUI {
    constructor(app) {
        this.app = app;
        this.recorder = new AudioRecorder();
        this.microphoneBtn = null;
        this.isInitialized = false;
        this.recordingTimeout = null;
    }

    async initialize() {
        if (!this.recorder.isAvailable()) {
            console.warn('Audio recording not supported in this browser');
            return false;
        }

        try {
            await this.recorder.initialize();
            this.setupUI();
            this.isInitialized = true;
            return true;
        } catch (error) {
            console.error('Failed to initialize audio recording UI:', error);
            this.showError('Microphone access denied. Please allow microphone access to use voice input.');
            return false;
        }
    }

    setupUI() {
        this.microphoneBtn = document.getElementById('microphoneBtn');
        if (!this.microphoneBtn) {
            console.error('Microphone button not found');
            return;
        }

        // Add recording state classes
        this.microphoneBtn.classList.add('recording-enabled');

        // Hold-to-record: mouse events
        this.microphoneBtn.addEventListener('mousedown', (e) => this.startRecording(e));
        this.microphoneBtn.addEventListener('mouseup', (e) => this.stopRecording(e));
        this.microphoneBtn.addEventListener('mouseleave', (e) => this.stopRecording(e));

        // Touch events for mobile
        this.microphoneBtn.addEventListener('touchstart', (e) => {
            e.preventDefault();
            this.startRecording(e);
        });
        this.microphoneBtn.addEventListener('touchend', (e) => {
            e.preventDefault();
            this.stopRecording(e);
        });

        // Update button appearance
        this.updateButtonState('ready');
    }

    async startRecording(event) {
        if (!this.isInitialized || this.recorder.isRecording) {
            return;
        }

        event.preventDefault();

        try {
            const success = this.recorder.startRecording();
            if (success) {
                this.updateButtonState('recording');
                this.showRecordingIndicator();

                // Auto-stop after 30 seconds to prevent very long recordings
                this.recordingTimeout = setTimeout(() => {
                    this.stopRecording(event);
                }, 30000);
            }
        } catch (error) {
            console.error('Failed to start recording:', error);
            this.showError('Failed to start recording. Please try again.');
        }
    }

    async stopRecording(event) {
        if (!this.isInitialized || !this.recorder.isRecording) {
            return;
        }

        event.preventDefault();

        try {
            const success = this.recorder.stopRecording();
            if (success) {
                this.updateButtonState('processing');
                this.hideRecordingIndicator();

                if (this.recordingTimeout) {
                    clearTimeout(this.recordingTimeout);
                    this.recordingTimeout = null;
                }

                // Process the recording
                const transcribedText = await this.recorder.processRecording();

                if (transcribedText) {
                    this.insertTranscribedText(transcribedText);
                    this.showSuccess('Audio transcribed successfully!');
                } else {
                    this.showError('No speech detected. Please try again.');
                }

                this.updateButtonState('ready');
            }
        } catch (error) {
            console.error('Failed to stop recording:', error);
            this.showError('Transcription failed. Please try again.');
            this.updateButtonState('ready');
        }
    }

    insertTranscribedText(text) {
        const chatInput = document.getElementById('chatInput');
        if (!chatInput) {
            console.error('Chat input not found');
            return;
        }

        // Append transcribed text to existing content
        const currentText = chatInput.value.trim();
        const newText = currentText ? `${currentText} ${text}` : text;

        chatInput.value = newText;

        // Add visual feedback for transcribed text
        chatInput.classList.add('transcribed');

        // Remove the highlighting after a few seconds
        setTimeout(() => {
            chatInput.classList.remove('transcribed');
        }, 3000);

        // Trigger input event to update UI
        chatInput.dispatchEvent(new Event('input', { bubbles: true }));

        // Focus the input
        chatInput.focus();

        // Auto-resize if needed
        if (this.app && this.app.autoResizeTextarea) {
            this.app.autoResizeTextarea(chatInput);
        }
    }

    updateButtonState(state) {
        if (!this.microphoneBtn) return;

        // Remove all state classes
        this.microphoneBtn.classList.remove('recording-ready', 'recording-active', 'recording-processing');

        // Add appropriate state class
        switch (state) {
            case 'ready':
                this.microphoneBtn.classList.add('recording-ready');
                this.microphoneBtn.title = 'Hold to record voice input';
                break;
            case 'recording':
                this.microphoneBtn.classList.add('recording-active');
                this.microphoneBtn.title = 'Recording... Release to stop';
                break;
            case 'processing':
                this.microphoneBtn.classList.add('recording-processing');
                this.microphoneBtn.title = 'Processing audio...';
                break;
        }
    }

    showRecordingIndicator() {
        // Create or update recording indicator
        let indicator = document.getElementById('recordingIndicator');
        if (!indicator) {
            indicator = document.createElement('div');
            indicator.id = 'recordingIndicator';
            indicator.className = 'recording-indicator';
            indicator.innerHTML = '<i class="fas fa-microphone"></i> Recording...';

            const chatInputContainer = document.querySelector('.chat-input-container');
            if (chatInputContainer) {
                chatInputContainer.appendChild(indicator);
            }
        }

        indicator.style.display = 'block';
    }

    hideRecordingIndicator() {
        const indicator = document.getElementById('recordingIndicator');
        if (indicator) {
            indicator.style.display = 'none';
        }
    }

    showError(message) {
        // Create or update error message
        let errorMsg = document.getElementById('audioError');
        if (!errorMsg) {
            errorMsg = document.createElement('div');
            errorMsg.id = 'audioError';
            errorMsg.className = 'audio-error-message';

            const chatInputContainer = document.querySelector('.chat-input-container');
            if (chatInputContainer) {
                chatInputContainer.appendChild(errorMsg);
            }
        }

        errorMsg.textContent = message;
        errorMsg.style.display = 'block';

        // Hide after 5 seconds
        setTimeout(() => {
            errorMsg.style.display = 'none';
        }, 5000);
    }

    showSuccess(message) {
        // Create or update success message
        let successMsg = document.getElementById('audioSuccess');
        if (!successMsg) {
            successMsg = document.createElement('div');
            successMsg.id = 'audioSuccess';
            successMsg.className = 'audio-success-message';

            const chatInputContainer = document.querySelector('.chat-input-container');
            if (chatInputContainer) {
                chatInputContainer.appendChild(successMsg);
            }
        }

        successMsg.textContent = message;
        successMsg.style.display = 'block';

        // Hide after 3 seconds
        setTimeout(() => {
            successMsg.style.display = 'none';
        }, 3000);
    }

    cleanup() {
        if (this.recorder) {
            this.recorder.cleanup();
        }

        if (this.recordingTimeout) {
            clearTimeout(this.recordingTimeout);
            this.recordingTimeout = null;
        }
    }
}
```
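`insertTranscribedText` above appends rather than replaces: any draft already in the input keeps its text, and the transcription is joined to it with a single space. That rule reduces to a small pure function, extracted here purely for illustration (`appendTranscription` is not a name used in the codebase):

```javascript
// Append rule used when inserting a transcription into the chat input:
// keep the trimmed draft, then join the new text with one space.
function appendTranscription(currentValue, transcribedText) {
    const currentText = currentValue.trim();
    return currentText ? `${currentText} ${transcribedText}` : transcribedText;
}
```

Because the draft is trimmed first, trailing whitespace in the input never produces a double space before the transcribed text.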
static/js/chat/messaging.js
DELETED
```javascript
// chat/messaging.js
// Send, API call, add/display message, summariseTitle, format content/time

export function attachMessagingUI(app) {
    app.sendMessage = async function () {
        const input = document.getElementById('chatInput');
        const message = input.value.trim();
        if (!message || app.isLoading) return;
        if (!app.currentPatientId) {
            const status = document.getElementById('patientStatus');
            if (status) { status.textContent = 'Select a patient before chatting.'; status.style.color = 'var(--warning-color)'; }
            return;
        }
        input.value = '';
        app.autoResizeTextarea(input);
        app.addMessage('user', message);
        app.showLoading(true);
        try {
            const response = await app.callMedicalAPI(message);
            app.addMessage('assistant', response);
            app.updateCurrentSession();
        } catch (error) {
            console.error('Error sending message:', error);
            let errorMessage = 'I apologize, but I encountered an error processing your request.';
            if (error.message.includes('500')) errorMessage = 'The server encountered an internal error. Please try again in a moment.';
            else if (error.message.includes('404')) errorMessage = 'The requested service was not found. Please check your connection.';
            else if (error.message.includes('fetch')) errorMessage = 'Unable to connect to the server. Please check your internet connection.';
            app.addMessage('assistant', errorMessage);
        } finally {
            app.showLoading(false);
        }
    };

    app.callMedicalAPI = async function (message) {
        try {
            const response = await fetch('/chat', {
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify({
                    user_id: app.currentUser.id,
                    patient_id: app.currentPatientId,
                    doctor_id: app.currentUser.id,
                    session_id: app.currentSession?.id || 'default',
                    message: message,
                    user_role: app.currentUser.role,
                    user_specialty: app.currentUser.specialty,
                    title: app.currentSession?.title || 'New Chat'
                })
            });
            if (!response.ok) throw new Error(`HTTP error! status: ${response.status}`);
            const data = await response.json();
            return data.response || 'I apologize, but I received an empty response. Please try again.';
        } catch (error) {
            console.error('API call failed:', error);
            console.error('Error details:', {
                message: error.message,
                stack: error.stack,
                user: app.currentUser,
                session: app.currentSession,
                patientId: app.currentPatientId
            });
            if (error.name === 'TypeError' && error.message.includes('fetch')) return app.generateMockResponse(message);
            throw error;
        }
    };

    app.generateMockResponse = function (message) {
        const responses = [
            "Based on your question about medical topics, I can provide general information. However, please remember that this is for educational purposes only and should not replace professional medical advice.",
            "That's an interesting medical question. While I can offer some general insights, it's important to consult with healthcare professionals for personalized medical advice.",
            "I understand your medical inquiry. For accurate diagnosis and treatment recommendations, please consult with qualified healthcare providers who can assess your specific situation.",
            "Thank you for your medical question. I can provide educational information, but medical decisions should always be made in consultation with healthcare professionals.",
            "I appreciate your interest in medical topics. Remember that medical information found online should be discussed with healthcare providers for proper evaluation."
        ];
        return responses[Math.floor(Math.random() * responses.length)];
    };

    app.addMessage = function (role, content) {
        if (!app.currentSession) app.startNewChat();
        const message = { id: app.generateId(), role, content, timestamp: new Date().toISOString() };
        app.currentSession.messages.push(message);
        app.displayMessage(message);
        if (role === 'user' && app.currentSession.messages.length === 2) app.summariseAndSetTitle(content);
    };

    app.summariseAndSetTitle = async function (text) {
        const applyFallback = () => {
            const fallback = text.length > 50 ? text.substring(0, 50) + '...' : text;
            app.currentSession.title = fallback;
            app.updateCurrentSession();
            app.updateChatTitle();
            app.loadChatSessions();
        };
        try {
            const resp = await fetch('/summarise', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ text, max_words: 5 }) });
            if (resp.ok) {
                const data = await resp.json();
                app.currentSession.title = (data.title || 'New Chat').trim();
                app.updateCurrentSession();
                app.updateChatTitle();
                app.loadChatSessions();
            } else {
                applyFallback();
            }
        } catch (e) {
            applyFallback();
        }
    };

    app.displayMessage = function (message) {
        const chatMessages = document.getElementById('chatMessages');
        const messageElement = document.createElement('div');
        messageElement.className = `message ${message.role}-message fade-in`;
        messageElement.id = `message-${message.id}`;
        const avatar = message.role === 'user' ? '<i class="fas fa-user"></i>' : '<i class="fas fa-robot"></i>';
        const time = app.formatTime(message.timestamp);
        messageElement.innerHTML = `
            <div class="message-avatar">${avatar}</div>
            <div class="message-content">
                <div class="message-text">${app.formatMessageContent(message.content)}</div>
                <div class="message-time">${time}</div>
            </div>`;
        chatMessages.appendChild(messageElement);
        chatMessages.scrollTop = chatMessages.scrollHeight;
        if (app.currentSession) app.currentSession.lastActivity = new Date().toISOString();
    };

    app.formatMessageContent = function (content) {
        return content
            // Handle headers (1-6 # symbols)
            .replace(/^#{1,6}\s+(.+)$/gm, (match, text) => {
                const level = match.match(/^#+/)[0].length;
                return `<h${level}>${text}</h${level}>`;
            })
            // Handle bold text
            .replace(/\*\*(.*?)\*\*/g, '<strong>$1</strong>')
            // Handle italic text
            .replace(/\*(.*?)\*/g, '<em>$1</em>')
            // Handle line breaks
            .replace(/\n/g, '<br>')
            // Handle emojis with colors
            .replace(/🔍/g, '<span style="color: var(--primary-color);">🔍</span>')
            .replace(/📋/g, '<span style="color: var(--secondary-color);">📋</span>')
            .replace(/💊/g, '<span style="color: var(--accent-color);">💊</span>')
            .replace(/📚/g, '<span style="color: var(--success-color);">📚</span>')
            .replace(/⚠️/g, '<span style="color: var(--warning-color);">⚠️</span>');
    };

    app.formatTime = function (timestamp) {
        const date = new Date(timestamp);
        const now = new Date();
        const diff = now - date;
        if (diff < 60000) return 'Just now';
        if (diff < 3600000) { const minutes = Math.floor(diff / 60000); return `${minutes} minute${minutes > 1 ? 's' : ''} ago`; }
        if (diff < 86400000) { const hours = Math.floor(diff / 3600000); return `${hours} hour${hours > 1 ? 's' : ''} ago`; }
        return date.toLocaleDateString();
    };
}
```
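The removed `formatMessageContent` helper rendered a small markdown subset with chained regex replaces. A simplified standalone sketch of the same idea (`formatBasicMarkdown` is an illustrative name, not part of the app; headers and emoji styling are omitted):

```javascript
// Minimal markdown-to-HTML rendering via chained regex replaces:
// **bold**, *italic*, and newline-to-<br> conversion.
function formatBasicMarkdown(content) {
    return content
        .replace(/\*\*(.*?)\*\*/g, '<strong>$1</strong>') // **bold**
        .replace(/\*(.*?)\*/g, '<em>$1</em>')             // *italic*
        .replace(/\n/g, '<br>');                          // line breaks
}
```

The bold replace must run before the italic one; otherwise `/\*(.*?)\*/` would consume the leading `**` pair first and `**bold**` would never match.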
static/js/chat/sessions.js
DELETED
```javascript
// chat/sessions.js
// Session sidebar rendering, context menu, rename/delete, local storage helpers

export function attachSessionsUI(app) {
    app.getChatSessions = function () {
        const sessions = localStorage.getItem(`chatSessions_${app.currentUser.id}`);
        return sessions ? JSON.parse(sessions) : [];
    };

    app.saveCurrentSession = function () {
        if (!app.currentSession) return;
        if (app.currentSession.source === 'backend') return; // do not persist backend sessions locally here
        const sessions = app.getChatSessions();
        const existingIndex = sessions.findIndex(s => s.id === app.currentSession.id);
        if (existingIndex >= 0) sessions[existingIndex] = { ...app.currentSession };
        else sessions.unshift(app.currentSession);
        localStorage.setItem(`chatSessions_${app.currentUser.id}`, JSON.stringify(sessions));
    };

    app.updateCurrentSession = function () {
        if (app.currentSession) {
            app.currentSession.lastActivity = new Date().toISOString();
            app.saveCurrentSession();
        }
    };

    app.updateChatTitle = function () {
        const titleElement = document.getElementById('chatTitle');
        if (app.currentSession) titleElement.textContent = app.currentSession.title;
        else titleElement.textContent = 'Medical AI Assistant';
    };

    app.loadChatSession = function (sessionId) {
        const sessions = app.getChatSessions();
        const session = sessions.find(s => s.id === sessionId);
        if (!session) return;
        app.currentSession = session;
        app.clearChatMessages();
        session.messages.forEach(message => app.displayMessage(message));
        app.updateChatTitle();
        app.loadChatSessions();
    };

    app.deleteChatSession = function (sessionId) {
        const sessions = app.getChatSessions();
        const index = sessions.findIndex(s => s.id === sessionId);
        if (index === -1) return;
        const confirmDelete = confirm('Delete this chat session? This cannot be undone.');
        if (!confirmDelete) return;
        sessions.splice(index, 1);
        localStorage.setItem(`chatSessions_${app.currentUser.id}`, JSON.stringify(sessions));
        if (app.currentSession && app.currentSession.id === sessionId) {
            if (sessions.length > 0) {
                app.currentSession = sessions[0];
                app.clearChatMessages();
                app.currentSession.messages.forEach(m => app.displayMessage(m));
                app.updateChatTitle();
            } else {
                app.currentSession = null;
                app.clearChatMessages();
                app.updateChatTitle();
            }
        }
        app.loadChatSessions();
    };

    app.renameChatSession = function (sessionId, newTitle) {
        const sessions = app.getChatSessions();
        const idx = sessions.findIndex(s => s.id === sessionId);
        if (idx === -1) return;
        sessions[idx] = { ...sessions[idx], title: newTitle };
        localStorage.setItem(`chatSessions_${app.currentUser.id}`, JSON.stringify(sessions));
        if (app.currentSession && app.currentSession.id === sessionId) {
            app.currentSession.title = newTitle;
            app.updateChatTitle();
        }
        app.loadChatSessions();
    };

    app.showSessionMenu = function (anchorEl, sessionId) {
        // Remove existing popover
        document.querySelectorAll('.chat-session-menu-popover').forEach(p => p.remove());
        const rect = anchorEl.getBoundingClientRect();
        const pop = document.createElement('div');
        pop.className = 'chat-session-menu-popover show';
        pop.innerHTML = `
            <div class="chat-session-menu-item" data-action="edit" data-session-id="${sessionId}"><i class="fas fa-pen"></i> Edit Name</div>
            <div class="chat-session-menu-item" data-action="delete" data-session-id="${sessionId}"><i class="fas fa-trash"></i> Delete</div>
        `;
        document.body.appendChild(pop);
        pop.style.top = `${rect.bottom + window.scrollY + 6}px`;
        pop.style.left = `${rect.right + window.scrollX - pop.offsetWidth}px`;
        const onDocClick = (ev) => {
            if (!pop.contains(ev.target) && ev.target !== anchorEl) {
                pop.remove();
                document.removeEventListener('click', onDocClick);
            }
        };
        setTimeout(() => document.addEventListener('click', onDocClick), 0);
        pop.querySelectorAll('.chat-session-menu-item').forEach(item => {
            item.addEventListener('click', (e) => {
                const action = item.getAttribute('data-action');
                const id = item.getAttribute('data-session-id');
                if (action === 'delete') app.deleteChatSession(id);
                else if (action === 'edit') {
                    app._pendingEditSessionId = id;
                    const sessions = app.getChatSessions();
                    const s = sessions.find(x => x.id === id);
                    const input = document.getElementById('editSessionTitleInput');
                    if (input) input.value = s ? s.title : '';
                    app.showModal('editTitleModal');
                }
                pop.remove();
            });
        });
    };

    app.loadChatSessions = function () {
        const sessionsContainer = document.getElementById('chatSessions');
        sessionsContainer.innerHTML = '';
        const sessions = (app.backendSessions && app.backendSessions.length > 0) ? app.backendSessions : app.getChatSessions();
        if (sessions.length === 0) {
            sessionsContainer.innerHTML = '<div class="no-sessions">No chat sessions yet</div>';
            return;
        }
        sessions.forEach(session => {
            const sessionElement = document.createElement('div');
            sessionElement.className = `chat-session ${session.id === app.currentSession?.id ? 'active' : ''}`;
            sessionElement.addEventListener('click', async () => {
                if (session.source === 'backend') {
                    app.currentSession = { ...session };
                    await app.hydrateMessagesForSession(session.id);
                } else {
                    app.loadChatSession(session.id);
                }
            });
            const time = app.formatTime(session.lastActivity);
            sessionElement.innerHTML = `
                <div class="chat-session-row">
                    <div class="chat-session-meta">
                        <div class="chat-session-title">${session.title}</div>
                        <div class="chat-session-time">${time}</div>
                    </div>
                    <div class="chat-session-actions">
                        <button class="chat-session-menu" title="Options" aria-label="Options" data-session-id="${session.id}">
                            <i class="fas fa-ellipsis-vertical"></i>
                        </button>
                    </div>
                </div>
            `;
            sessionsContainer.appendChild(sessionElement);
            const menuBtn = sessionElement.querySelector('.chat-session-menu');
            if (session.source !== 'backend') {
                menuBtn.addEventListener('click', (e) => {
                    e.stopPropagation();
```
|
| 155 |
-
app.showSessionMenu(e.currentTarget, session.id);
|
| 156 |
-
});
|
| 157 |
-
} else {
|
| 158 |
-
menuBtn.disabled = true;
|
| 159 |
-
menuBtn.style.opacity = 0.5;
|
| 160 |
-
menuBtn.title = 'Options available for local sessions only';
|
| 161 |
-
}
|
| 162 |
-
});
|
| 163 |
-
};
|
| 164 |
-
}
|
static/js/ui/doctor.js DELETED
@@ -1,124 +0,0 @@
-// ui/doctor.js
-// Doctor list load/save, dropdown populate, create-flow, show/save profile
-
-export function attachDoctorUI(app) {
-    // Model: list of doctors persisted in localStorage
-    app.loadDoctors = function () {
-        try {
-            const raw = localStorage.getItem('medicalChatbotDoctors');
-            const arr = raw ? JSON.parse(raw) : [];
-            const seen = new Set();
-            return arr.filter(x => x && x.name && !seen.has(x.name) && seen.add(x.name));
-        } catch { return []; }
-    };
-
-    app.saveDoctors = function () {
-        localStorage.setItem('medicalChatbotDoctors', JSON.stringify(app.doctors));
-    };
-
-    app.populateDoctorSelect = function () {
-        const sel = document.getElementById('profileNameSelect');
-        const newSec = document.getElementById('newDoctorSection');
-        if (!sel) return;
-        sel.innerHTML = '';
-        const createOpt = document.createElement('option');
-        createOpt.value = '__create__';
-        createOpt.textContent = 'Create doctor user...';
-        sel.appendChild(createOpt);
-        // Ensure no duplicates, include current doctor
-        const names = new Set(app.doctors.map(d => d.name));
-        if (app.currentUser?.name && !names.has(app.currentUser.name)) {
-            app.doctors.unshift({ name: app.currentUser.name });
-            names.add(app.currentUser.name);
-            app.saveDoctors();
-        }
-        app.doctors.forEach(d => {
-            const opt = document.createElement('option');
-            opt.value = d.name;
-            opt.textContent = d.name;
-            if (app.currentUser?.name === d.name) opt.selected = true;
-            sel.appendChild(opt);
-        });
-        sel.addEventListener('change', () => {
-            if (sel.value === '__create__') {
-                newSec.style.display = '';
-                const input = document.getElementById('newDoctorName');
-                if (input) input.value = '';
-            } else {
-                newSec.style.display = 'none';
-            }
-        });
-        const cancelBtn = document.getElementById('cancelNewDoctor');
-        const confirmBtn = document.getElementById('confirmNewDoctor');
-        if (cancelBtn) cancelBtn.onclick = () => { newSec.style.display = 'none'; sel.value = app.currentUser?.name || ''; };
-        if (confirmBtn) confirmBtn.onclick = () => {
-            const name = (document.getElementById('newDoctorName').value || '').trim();
-            if (!name) return;
-            if (!app.doctors.find(d => d.name === name)) {
-                app.doctors.unshift({ name });
-                app.saveDoctors();
-            }
-            app.populateDoctorSelect();
-            sel.value = name;
-            newSec.style.display = 'none';
-        };
-    };
-
-    app.showUserModal = function () {
-        app.populateDoctorSelect();
-        const sel = document.getElementById('profileNameSelect');
-        if (sel && sel.options.length === 0) {
-            const createOpt = document.createElement('option');
-            createOpt.value = '__create__';
-            createOpt.textContent = 'Create doctor user...';
-            sel.appendChild(createOpt);
-        }
-        if (sel && !sel.value) sel.value = app.currentUser?.name || '__create__';
-        document.getElementById('profileRole').value = app.currentUser.role;
-        document.getElementById('profileSpecialty').value = app.currentUser.specialty || '';
-        app.showModal('userModal');
-    };
-
-    app.saveUserProfile = function () {
-        const nameSel = document.getElementById('profileNameSelect');
-        const name = nameSel ? nameSel.value : '';
-        const role = document.getElementById('profileRole').value;
-        const specialty = document.getElementById('profileSpecialty').value.trim();
-
-        if (!name || name === '__create__') {
-            alert('Please select or create a doctor name.');
-            return;
-        }
-
-        if (!app.doctors.find(d => d.name === name)) {
-            app.doctors.unshift({ name });
-            app.saveDoctors();
-        }
-
-        app.currentUser.name = name;
-        app.currentUser.role = role;
-        app.currentUser.specialty = specialty;
-
-        app.saveUser();
-        app.updateUserDisplay();
-        app.hideModal('userModal');
-    };
-
-    // Doctor modal open/close wiring
-    document.addEventListener('DOMContentLoaded', () => {
-        const doctorCard = document.getElementById('userProfile');
-        const userModal = document.getElementById('userModal');
-        const closeBtn = document.getElementById('userModalClose');
-        const cancelBtn = document.getElementById('userModalCancel');
-        if (doctorCard && userModal) {
-            doctorCard.addEventListener('click', () => userModal.classList.add('show'));
-        }
-        if (closeBtn) closeBtn.addEventListener('click', () => userModal.classList.remove('show'));
-        if (cancelBtn) cancelBtn.addEventListener('click', () => userModal.classList.remove('show'));
-        if (userModal) {
-            userModal.addEventListener('click', (e) => { if (e.target === userModal) userModal.classList.remove('show'); });
-        }
-    });
-}
static/js/ui/handlers.js DELETED
@@ -1,68 +0,0 @@
-// ui/handlers.js
-// DOM wiring helpers: sidebar open/close, modal wiring, textarea autosize, export/clear
-
-export function attachUIHandlers(app) {
-    // Sidebar toggle implementation
-    app.toggleSidebar = function () {
-        const sidebar = document.getElementById('sidebar');
-        console.log('[DEBUG] toggleSidebar called');
-        if (sidebar) {
-            const wasOpen = sidebar.classList.contains('show');
-            sidebar.classList.toggle('show');
-            const isNowOpen = sidebar.classList.contains('show');
-            console.log('[DEBUG] Sidebar toggled - was open:', wasOpen, 'now open:', isNowOpen);
-        } else {
-            console.error('[DEBUG] Sidebar element not found');
-        }
-    };
-
-    // Textarea autosize
-    app.autoResizeTextarea = function (textarea) {
-        if (!textarea) return;
-        textarea.style.height = 'auto';
-        textarea.style.height = Math.min(textarea.scrollHeight, 120) + 'px';
-    };
-
-    // Export current chat as JSON
-    app.exportChat = function () {
-        if (!app.currentSession || app.currentSession.messages.length === 0) {
-            alert('No chat to export.');
-            return;
-        }
-        const chatData = {
-            user: app.currentUser?.name || 'Unknown',
-            session: app.currentSession.title,
-            date: new Date().toISOString(),
-            messages: app.currentSession.messages
-        };
-        const blob = new Blob([JSON.stringify(chatData, null, 2)], { type: 'application/json' });
-        const url = URL.createObjectURL(blob);
-        const a = document.createElement('a');
-        a.href = url;
-        a.download = `medical-chat-${app.currentSession.title.replace(/[^a-z0-9]/gi, '-')}.json`;
-        document.body.appendChild(a);
-        a.click();
-        document.body.removeChild(a);
-        URL.revokeObjectURL(url);
-    };
-
-    // Clear current chat
-    app.clearChat = function () {
-        if (confirm('Are you sure you want to clear this chat? This action cannot be undone.')) {
-            app.clearChatMessages();
-            if (app.currentSession) {
-                app.currentSession.messages = [];
-                app.currentSession.title = 'New Chat';
-                app.updateChatTitle();
-            }
-        }
-    };
-
-    // Generic modal helpers
-    app.showModal = function (modalId) {
-        document.getElementById(modalId)?.classList.add('show');
-    };
-    app.hideModal = function (modalId) {
-        document.getElementById(modalId)?.classList.remove('show');
-    };
-}
static/js/ui/patient.js DELETED
@@ -1,246 +0,0 @@
-// ui/patient.js
-// Patient selection, typeahead search, load/hydrate, patient modal wiring
-
-export function attachPatientUI(app) {
-    // State helpers
-    app.loadSavedPatientId = function () {
-        const pid = localStorage.getItem('medicalChatbotPatientId');
-        if (pid && /^\d{8}$/.test(pid)) {
-            app.currentPatientId = pid;
-            const status = document.getElementById('patientStatus');
-            if (status) {
-                status.textContent = `Patient: ${pid}`;
-                status.style.color = 'var(--text-secondary)';
-            }
-            const input = document.getElementById('patientIdInput');
-            if (input) input.value = pid;
-        }
-    };
-
-    app.savePatientId = function () {
-        if (app.currentPatientId) localStorage.setItem('medicalChatbotPatientId', app.currentPatientId);
-        else localStorage.removeItem('medicalChatbotPatientId');
-    };
-
-    app.loadPatient = async function () {
-        console.log('[DEBUG] loadPatient called');
-        const input = document.getElementById('patientIdInput');
-        const status = document.getElementById('patientStatus');
-        const id = (input?.value || '').trim();
-        console.log('[DEBUG] Patient ID from input:', id);
-        if (!/^\d{8}$/.test(id)) {
-            console.log('[DEBUG] Invalid patient ID format');
-            if (status) { status.textContent = 'Invalid patient ID. Use 8 digits.'; status.style.color = 'var(--warning-color)'; }
-            return;
-        }
-        console.log('[DEBUG] Setting current patient ID:', id);
-        app.currentPatientId = id;
-        app.savePatientId();
-        if (status) { status.textContent = `Patient: ${id}`; status.style.color = 'var(--text-secondary)'; }
-        await app.fetchAndRenderPatientSessions();
-    };
-
-    app.fetchAndRenderPatientSessions = async function () {
-        if (!app.currentPatientId) return;
-        try {
-            const resp = await fetch(`/patients/${app.currentPatientId}/sessions`);
-            if (resp.ok) {
-                const data = await resp.json();
-                const sessions = Array.isArray(data.sessions) ? data.sessions : [];
-                app.backendSessions = sessions.map(s => ({
-                    id: s.session_id,
-                    title: s.title || 'New Chat',
-                    messages: [],
-                    createdAt: s.created_at || new Date().toISOString(),
-                    lastActivity: s.last_activity || new Date().toISOString(),
-                    source: 'backend'
-                }));
-                if (app.backendSessions.length > 0) {
-                    app.currentSession = app.backendSessions[0];
-                    await app.hydrateMessagesForSession(app.currentSession.id);
-                }
-            } else {
-                console.warn('Failed to fetch patient sessions', resp.status);
-                app.backendSessions = [];
-            }
-        } catch (e) {
-            console.error('Failed to load patient sessions', e);
-            app.backendSessions = [];
-        }
-        app.loadChatSessions();
-    };
-
-    app.hydrateMessagesForSession = async function (sessionId) {
-        try {
-            const resp = await fetch(`/sessions/${sessionId}/messages?patient_id=${app.currentPatientId}&limit=1000`);
-            if (!resp.ok) return;
-            const data = await resp.json();
-            const msgs = Array.isArray(data.messages) ? data.messages : [];
-            const normalized = msgs.map(m => ({
-                id: m._id || app.generateId(),
-                role: m.role,
-                content: m.content,
-                timestamp: m.timestamp
-            }));
-            if (app.currentSession && app.currentSession.id === sessionId) {
-                app.currentSession.messages = normalized;
-                app.clearChatMessages();
-                app.currentSession.messages.forEach(m => app.displayMessage(m));
-                app.updateChatTitle();
-            }
-        } catch (e) {
-            console.error('Failed to hydrate session messages', e);
-        }
-    };
-
-    // Bind patient input + typeahead + load button
-    app.bindPatientHandlers = function () {
-        console.log('[DEBUG] bindPatientHandlers called');
-        const loadBtn = document.getElementById('loadPatientBtn');
-        console.log('[DEBUG] Load button found:', !!loadBtn);
-        if (loadBtn) loadBtn.addEventListener('click', () => app.loadPatient());
-        const patientInput = document.getElementById('patientIdInput');
-        const suggestionsEl = document.getElementById('patientSuggestions');
-        console.log('[DEBUG] Patient input found:', !!patientInput);
-        console.log('[DEBUG] Suggestions element found:', !!suggestionsEl);
-        if (!patientInput) return;
-        let debounceTimer;
-        const hideSuggestions = () => { if (suggestionsEl) suggestionsEl.style.display = 'none'; };
-        const renderSuggestions = (items) => {
-            if (!suggestionsEl) return;
-            if (!items || items.length === 0) { hideSuggestions(); return; }
-            suggestionsEl.innerHTML = '';
-            items.forEach(p => {
-                const div = document.createElement('div');
-                div.className = 'patient-suggestion';
-                div.textContent = `${p.name || 'Unknown'} (${p.patient_id})`;
-                div.addEventListener('click', async () => {
-                    app.currentPatientId = p.patient_id;
-                    app.savePatientId();
-                    patientInput.value = p.patient_id;
-                    hideSuggestions();
-                    const status = document.getElementById('patientStatus');
-                    if (status) { status.textContent = `Patient: ${p.patient_id}`; status.style.color = 'var(--text-secondary)'; }
-                    await app.fetchAndRenderPatientSessions();
-                });
-                suggestionsEl.appendChild(div);
-            });
-            suggestionsEl.style.display = 'block';
-        };
-        patientInput.addEventListener('input', () => {
-            const q = patientInput.value.trim();
-            console.log('[DEBUG] Patient input changed:', q);
-            clearTimeout(debounceTimer);
-            if (!q) { hideSuggestions(); return; }
-            debounceTimer = setTimeout(async () => {
-                try {
-                    console.log('[DEBUG] Searching patients with query:', q);
-                    const resp = await fetch(`/patients/search?q=${encodeURIComponent(q)}&limit=8`, { headers: { 'Accept': 'application/json' } });
-                    console.log('[DEBUG] Search response status:', resp.status);
-                    if (resp.ok) {
-                        const data = await resp.json();
-                        console.log('[DEBUG] Search results:', data);
-                        renderSuggestions(data.results || []);
-                    } else {
-                        console.warn('Search request failed', resp.status);
-                    }
-                } catch (e) {
-                    console.error('[DEBUG] Search error:', e);
-                }
-            }, 200);
-        });
-        patientInput.addEventListener('keydown', async (e) => {
-            if (e.key === 'Enter') {
-                const value = patientInput.value.trim();
-                console.log('[DEBUG] Patient input Enter pressed with value:', value);
-                if (/^\d{8}$/.test(value)) {
-                    console.log('[DEBUG] Loading patient with 8-digit ID');
-                    await app.loadPatient();
-                    hideSuggestions();
-                } else {
-                    console.log('[DEBUG] Searching for patient by name/partial ID');
-                    try {
-                        const resp = await fetch(`/patients/search?q=${encodeURIComponent(value)}&limit=1`);
-                        console.log('[DEBUG] Search response status:', resp.status);
-                        if (resp.ok) {
-                            const data = await resp.json();
-                            console.log('[DEBUG] Search results for Enter:', data);
-                            const first = (data.results || [])[0];
-                            if (first) {
-                                console.log('[DEBUG] Found patient, setting as current:', first);
-                                app.currentPatientId = first.patient_id;
-                                app.savePatientId();
-                                patientInput.value = first.patient_id;
-                                hideSuggestions();
-                                const status = document.getElementById('patientStatus');
-                                if (status) { status.textContent = `Patient: ${first.patient_id}`; status.style.color = 'var(--text-secondary)'; }
-                                await app.fetchAndRenderPatientSessions();
-                                return;
-                            }
-                        }
-                    } catch (e) {
-                        console.error('[DEBUG] Search error on Enter:', e);
-                    }
-                    const status = document.getElementById('patientStatus');
-                    if (status) { status.textContent = 'No matching patient found'; status.style.color = 'var(--warning-color)'; }
-                }
-            }
-        });
-        document.addEventListener('click', (ev) => {
-            if (!suggestionsEl) return;
-            if (!suggestionsEl.contains(ev.target) && ev.target !== patientInput) hideSuggestions();
-        });
-    };
-
-    // Patient modal wiring
-    document.addEventListener('DOMContentLoaded', () => {
-        const profileBtn = document.getElementById('patientMenuBtn');
-        const modal = document.getElementById('patientModal');
-        const closeBtn = document.getElementById('patientModalClose');
-        const logoutBtn = document.getElementById('patientLogoutBtn');
-        const createBtn = document.getElementById('patientCreateBtn');
-        if (profileBtn && modal) {
-            profileBtn.addEventListener('click', async () => {
-                const pid = app?.currentPatientId;
-                if (pid) {
-                    try {
-                        const resp = await fetch(`/patients/${pid}`);
-                        if (resp.ok) {
-                            const p = await resp.json();
-                            const name = p.name || 'Unknown';
-                            const age = typeof p.age === 'number' ? p.age : '-';
-                            const sex = p.sex || '-';
-                            const meds = Array.isArray(p.medications) && p.medications.length > 0 ? p.medications.join(', ') : '-';
-                            document.getElementById('patientSummary').textContent = `${name} — ${sex}, ${age}`;
-                            document.getElementById('patientMedications').textContent = meds;
-                            document.getElementById('patientAssessment').textContent = p.past_assessment_summary || '-';
-                        }
-                    } catch (e) {
-                        console.error('Failed to load patient profile', e);
-                    }
-                }
-                modal.classList.add('show');
-            });
-        }
-        if (closeBtn && modal) {
-            closeBtn.addEventListener('click', () => modal.classList.remove('show'));
-            modal.addEventListener('click', (e) => { if (e.target === modal) modal.classList.remove('show'); });
-        }
-        if (logoutBtn) {
-            logoutBtn.addEventListener('click', () => {
-                if (confirm('Log out current patient?')) {
-                    app.currentPatientId = null;
-                    localStorage.removeItem('medicalChatbotPatientId');
-                    const status = document.getElementById('patientStatus');
-                    if (status) { status.textContent = 'No patient selected'; status.style.color = 'var(--text-secondary)'; }
-                    const input = document.getElementById('patientIdInput');
-                    if (input) input.value = '';
-                    modal.classList.remove('show');
-                }
-            });
-        }
-        if (createBtn) createBtn.addEventListener('click', () => modal.classList.remove('show'));
-    });
-}
static/js/ui/settings.js DELETED
@@ -1,74 +0,0 @@
-// ui/settings.js
-// Settings UI: theme/font preferences, showLoading overlay, settings modal wiring
-
-export function attachSettingsUI(app) {
-    app.loadUserPreferences = function () {
-        const preferences = localStorage.getItem('medicalChatbotPreferences');
-        if (preferences) {
-            const prefs = JSON.parse(preferences);
-            app.setTheme(prefs.theme || 'auto');
-            app.setFontSize(prefs.fontSize || 'medium');
-        }
-    };
-
-    app.setupTheme = function () {
-        if (window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches) {
-            app.setTheme('auto');
-        }
-    };
-
-    app.setTheme = function (theme) {
-        const root = document.documentElement;
-        if (theme === 'auto') {
-            const isDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
-            root.setAttribute('data-theme', isDark ? 'dark' : 'light');
-        } else {
-            root.setAttribute('data-theme', theme);
-        }
-        const sel = document.getElementById('themeSelect');
-        if (sel) sel.value = theme;
-        app.savePreferences();
-    };
-
-    app.setFontSize = function (size) {
-        const root = document.documentElement;
-        root.style.fontSize = size === 'small' ? '14px' : size === 'large' ? '18px' : '16px';
-        app.savePreferences();
-    };
-
-    app.savePreferences = function () {
-        const preferences = {
-            theme: document.getElementById('themeSelect')?.value,
-            fontSize: document.getElementById('fontSize')?.value,
-            autoSave: document.getElementById('autoSave')?.checked,
-            notifications: document.getElementById('notifications')?.checked
-        };
-        localStorage.setItem('medicalChatbotPreferences', JSON.stringify(preferences));
-    };
-
-    app.showLoading = function (show) {
-        app.isLoading = show;
-        const overlay = document.getElementById('loadingOverlay');
-        const sendBtn = document.getElementById('sendBtn');
-        if (!overlay || !sendBtn) return;
-        if (show) {
-            overlay.classList.add('show');
-            sendBtn.disabled = true;
-        } else {
-            overlay.classList.remove('show');
-            sendBtn.disabled = false;
-        }
-    };
-
-    // Settings modal open/close wiring
-    document.addEventListener('DOMContentLoaded', () => {
-        const settingsBtn = document.getElementById('settingsBtn');
-        const modal = document.getElementById('settingsModal');
-        const closeBtn = document.getElementById('settingsModalClose');
-        const cancelBtn = document.getElementById('settingsModalCancel');
-        if (settingsBtn && modal) settingsBtn.addEventListener('click', () => modal.classList.add('show'));
-        if (closeBtn) closeBtn.addEventListener('click', () => modal.classList.remove('show'));
-        if (cancelBtn) cancelBtn.addEventListener('click', () => modal.classList.remove('show'));
-        if (modal) modal.addEventListener('click', (e) => { if (e.target === modal) modal.classList.remove('show'); });
-    });
-}