AbdullahIsaMarkus committed · verified
Commit be716ff · 1 Parent(s): 5895c9e

Upload folder using huggingface_hub
.gitignore ADDED
@@ -0,0 +1,57 @@
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # Virtual environments
+ venv/
+ ENV/
+ env/
+ .venv
+
+ # IDE
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+ *~
+
+ # OS
+ .DS_Store
+ Thumbs.db
+
+ # Node
+ node_modules/
+ npm-debug.log*
+ yarn-debug.log*
+ yarn-error.log*
+
+ # Gradio
+ flagged/
+ gradio_cached_examples/
+
+ # Project specific
+ *.dcm
+ *.dicom
+ IM_*
+ test_images/
+ temp/
+ .gradio/
README.md CHANGED
@@ -1,12 +1,1026 @@
 
  ---
- title: Gradio Medical Image Analyzer
- emoji: 🚀
- colorFrom: purple
- colorTo: indigo
  sdk: gradio
  sdk_version: 5.33.0
- app_file: app.py
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

---
title: Medical Image Analyzer Component
emoji: 🏥
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.33.0
app_file: demo/app.py
pinned: false
license: apache-2.0
tags:
- custom-component-track
- medical-imaging
- gradio-custom-component
- hackathon-2025
- ai-agents
---

# `gradio_medical_image_analyzer`
<img alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.1%20-%20orange"> <a href="https://github.com/markusclauss/gradio-medical-image-analyzer/issues" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/Issues-white?logo=github&logoColor=black"></a>

AI-agent optimized medical image analysis component for Gradio

## ⚠️ IMPORTANT MEDICAL DISCLAIMER ⚠️

**THIS SOFTWARE IS FOR RESEARCH AND EDUCATIONAL PURPOSES ONLY**

🚨 **DO NOT USE FOR CLINICAL DIAGNOSIS OR MEDICAL DECISION MAKING** 🚨

This component is in **EARLY DEVELOPMENT** and is intended as a **proof of concept** for medical image analysis integration with Gradio. The results produced by this software:

- **ARE NOT** validated for clinical use
- **ARE NOT** FDA approved or CE marked
- **SHOULD NOT** be used for patient diagnosis or treatment decisions
- **SHOULD NOT** replace professional medical judgment
- **MAY CONTAIN** significant errors or inaccuracies
- **ARE PROVIDED** without any warranty of accuracy or fitness for medical purposes

**ALWAYS CONSULT QUALIFIED HEALTHCARE PROFESSIONALS** for medical image interpretation and clinical decisions. This software is intended solely for:
- Research and development purposes
- Educational demonstrations
- Technical integration testing
- Non-clinical experimental use

By using this software, you acknowledge that you understand these limitations and agree not to use it for any clinical or medical diagnostic purposes.

## Installation

```bash
pip install gradio_medical_image_analyzer
```

## Usage

```python
#!/usr/bin/env python3
"""
Demo for MedicalImageAnalyzer - Enhanced with file upload and overlay visualization
"""

import gradio as gr
import numpy as np
import sys
import os
import cv2
from pathlib import Path

# Add backend to path
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'backend'))

from gradio_medical_image_analyzer import MedicalImageAnalyzer

def draw_roi_on_image(image, roi_x, roi_y, roi_radius):
    """Draw ROI circle on the image"""
    # Convert to RGB if grayscale
    if len(image.shape) == 2:
        image_rgb = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
    else:
        image_rgb = image.copy()

    # Draw ROI circle
    center = (int(roi_x), int(roi_y))
    radius = int(roi_radius)

    # Draw outer circle (white)
    cv2.circle(image_rgb, center, radius, (255, 255, 255), 2)
    # Draw inner circle (red)
    cv2.circle(image_rgb, center, radius - 1, (255, 0, 0), 2)
    # Draw center cross
    cv2.line(image_rgb, (center[0] - 5, center[1]), (center[0] + 5, center[1]), (255, 0, 0), 2)
    cv2.line(image_rgb, (center[0], center[1] - 5), (center[0], center[1] + 5), (255, 0, 0), 2)

    return image_rgb

def create_fat_overlay(base_image, segmentation_results):
    """Create overlay image with fat segmentation highlighted"""
    # Convert to RGB
    if len(base_image.shape) == 2:
        overlay_img = cv2.cvtColor(base_image, cv2.COLOR_GRAY2RGB)
    else:
        overlay_img = base_image.copy()

    # Check if we have segmentation masks
    if not segmentation_results or 'segments' not in segmentation_results:
        return overlay_img

    segments = segmentation_results.get('segments', {})

    # Apply subcutaneous fat overlay (yellow)
    if 'subcutaneous' in segments and segments['subcutaneous'].get('mask') is not None:
        mask = segments['subcutaneous']['mask']
        yellow_overlay = np.zeros_like(overlay_img)
        yellow_overlay[mask > 0] = [255, 255, 0]  # Yellow
        overlay_img = cv2.addWeighted(overlay_img, 0.7, yellow_overlay, 0.3, 0)

    # Apply visceral fat overlay (red)
    if 'visceral' in segments and segments['visceral'].get('mask') is not None:
        mask = segments['visceral']['mask']
        red_overlay = np.zeros_like(overlay_img)
        red_overlay[mask > 0] = [255, 0, 0]  # Red
        overlay_img = cv2.addWeighted(overlay_img, 0.7, red_overlay, 0.3, 0)

    # Add legend
    cv2.putText(overlay_img, "Yellow: Subcutaneous Fat", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
    cv2.putText(overlay_img, "Red: Visceral Fat", (10, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)

    return overlay_img

def process_and_analyze(file_obj, modality, task, roi_x, roi_y, roi_radius, symptoms, show_overlay=False):
    """
    Processes uploaded file and performs analysis
    """
    if file_obj is None:
        return None, "No file selected", None, {}, None

    # Create analyzer instance
    analyzer = MedicalImageAnalyzer(
        analysis_mode="structured",
        include_confidence=True,
        include_reasoning=True
    )

    try:
        # Process the file (DICOM or image)
        file_path = file_obj.name if hasattr(file_obj, 'name') else str(file_obj)
        pixel_array, display_array, metadata = analyzer.process_file(file_path)

        # Update modality from file metadata if it's a DICOM
        if metadata.get('file_type') == 'DICOM' and 'modality' in metadata:
            modality = metadata['modality']

        # Prepare analysis parameters
        analysis_params = {
            "image": pixel_array,
            "modality": modality,
            "task": task
        }

        # Add ROI if applicable
        if task in ["analyze_point", "full_analysis"]:
            # Scale ROI coordinates to image size
            h, w = pixel_array.shape
            roi_x_scaled = int(roi_x * w / 512)  # Assuming slider max is 512
            roi_y_scaled = int(roi_y * h / 512)

            analysis_params["roi"] = {
                "x": roi_x_scaled,
                "y": roi_y_scaled,
                "radius": roi_radius
            }

        # Add clinical context
        if symptoms:
            analysis_params["clinical_context"] = {"symptoms": symptoms}

        # Perform analysis
        results = analyzer.analyze_image(**analysis_params)

        # Create visual report
        visual_report = create_visual_report(results, metadata)

        # Add metadata info
        info = f"📄 {metadata.get('file_type', 'Unknown')} | "
        info += f"🏥 {modality} | "
        info += f"📐 {metadata.get('shape', 'Unknown')}"

        if metadata.get('window_center'):
            info += f" | Window C:{metadata['window_center']:.0f} W:{metadata['window_width']:.0f}"

        # Create overlay image if requested
        overlay_image = None
        if show_overlay:
            # For ROI visualization
            if task in ["analyze_point", "full_analysis"] and roi_x is not None and roi_y is not None:
                overlay_image = draw_roi_on_image(display_array.copy(), roi_x_scaled, roi_y_scaled, roi_radius)
            # For fat segmentation overlay (simplified version since we don't have masks in current implementation)
            elif task == "segment_fat" and 'segmentation' in results and modality == 'CT':
                # For now, just draw ROI since we don't have actual masks
                overlay_image = display_array.copy()
                if len(overlay_image.shape) == 2:
                    overlay_image = cv2.cvtColor(overlay_image, cv2.COLOR_GRAY2RGB)
                # Add text overlay about fat percentages
                if 'statistics' in results['segmentation']:
                    stats = results['segmentation']['statistics']
                    cv2.putText(overlay_image, f"Total Fat: {stats.get('total_fat_percentage', 0):.1f}%",
                                (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
                    cv2.putText(overlay_image, f"Subcutaneous: {stats.get('subcutaneous_fat_percentage', 0):.1f}%",
                                (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
                    cv2.putText(overlay_image, f"Visceral: {stats.get('visceral_fat_percentage', 0):.1f}%",
                                (10, 90), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)

        return display_array, info, visual_report, results, overlay_image

    except Exception as e:
        error_msg = f"Error: {str(e)}"
        return None, error_msg, f"<div style='color: red;'>❌ {error_msg}</div>", {"error": error_msg}, None

def create_visual_report(results, metadata):
    """Creates a visual HTML report with improved styling"""
    html = f"""
    <div class='medical-report' style='font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
                                       padding: 24px;
                                       background: #ffffff;
                                       border-radius: 12px;
                                       max-width: 100%;
                                       box-shadow: 0 2px 8px rgba(0,0,0,0.1);
                                       color: #1a1a1a !important;'>

        <h2 style='color: #1e40af !important;
                   border-bottom: 3px solid #3b82f6;
                   padding-bottom: 12px;
                   margin-bottom: 20px;
                   font-size: 24px;
                   font-weight: 600;'>
            🏥 Medical Image Analysis Report
        </h2>

        <div style='background: #f0f9ff;
                    padding: 20px;
                    margin: 16px 0;
                    border-radius: 8px;
                    box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
            <h3 style='color: #1e3a8a !important;
                       font-size: 18px;
                       font-weight: 600;
                       margin-bottom: 12px;'>
                📋 Metadata
            </h3>
            <table style='width: 100%; border-collapse: collapse;'>
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>File Type:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('file_type', 'Unknown')}</td>
                </tr>
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Modality:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('modality', 'Unknown')}</td>
                </tr>
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Image Size:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('shape', 'Unknown')}</td>
                </tr>
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Timestamp:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('timestamp', 'N/A')}</td>
                </tr>
            </table>
        </div>
    """

    # Point Analysis
    if 'point_analysis' in results:
        pa = results['point_analysis']
        tissue = pa.get('tissue_type', {})

        html += f"""
        <div style='background: #f0f9ff;
                    padding: 20px;
                    margin: 16px 0;
                    border-radius: 8px;
                    box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
            <h3 style='color: #1e3a8a !important;
                       font-size: 18px;
                       font-weight: 600;
                       margin-bottom: 12px;'>
                🎯 Point Analysis
            </h3>
            <table style='width: 100%; border-collapse: collapse;'>
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>Position:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>({pa.get('location', {}).get('x', 'N/A')}, {pa.get('location', {}).get('y', 'N/A')})</td>
                </tr>
        """

        if results.get('modality') == 'CT':
            html += f"""
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>HU Value:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important; font-weight: 500;'>{pa.get('hu_value', 'N/A'):.1f}</td>
                </tr>
            """
        else:
            html += f"""
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Intensity:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('intensity', 'N/A'):.3f}</td>
                </tr>
            """

        html += f"""
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Tissue Type:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>
                        <span style='font-size: 1.3em; vertical-align: middle;'>{tissue.get('icon', '')}</span>
                        <span style='font-weight: 500; text-transform: capitalize;'>{tissue.get('type', 'Unknown').replace('_', ' ')}</span>
                    </td>
                </tr>
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Confidence:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('confidence', 'N/A')}</td>
                </tr>
            </table>
        """

        if 'reasoning' in pa:
            html += f"""
            <div style='margin-top: 12px;
                        padding: 12px;
                        background: #dbeafe;
                        border-left: 3px solid #3b82f6;
                        border-radius: 4px;'>
                <p style='margin: 0; color: #1e40af !important; font-style: italic;'>
                    💭 {pa['reasoning']}
                </p>
            </div>
            """

        html += "</div>"

    # Segmentation Results
    if 'segmentation' in results and results['segmentation']:
        seg = results['segmentation']

        if 'statistics' in seg:
            # Fat segmentation for CT
            stats = seg['statistics']
            html += f"""
            <div style='background: #f0f9ff;
                        padding: 20px;
                        margin: 16px 0;
                        border-radius: 8px;
                        box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
                <h3 style='color: #1e3a8a !important;
                           font-size: 18px;
                           font-weight: 600;
                           margin-bottom: 12px;'>
                    🔬 Fat Segmentation Analysis
                </h3>
                <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 16px;'>
                    <div style='padding: 16px; background: #ffffff; border-radius: 6px; border: 1px solid #e5e7eb;'>
                        <h4 style='color: #6b7280 !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Total Fat</h4>
                        <p style='color: #1f2937 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('total_fat_percentage', 0):.1f}%</p>
                    </div>
                    <div style='padding: 16px; background: #fffbeb; border-radius: 6px; border: 1px solid #fbbf24;'>
                        <h4 style='color: #92400e !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Subcutaneous</h4>
                        <p style='color: #d97706 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('subcutaneous_fat_percentage', 0):.1f}%</p>
                    </div>
                    <div style='padding: 16px; background: #fef2f2; border-radius: 6px; border: 1px solid #fca5a5;'>
                        <h4 style='color: #991b1b !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Visceral</h4>
                        <p style='color: #dc2626 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_fat_percentage', 0):.1f}%</p>
                    </div>
                    <div style='padding: 16px; background: #eff6ff; border-radius: 6px; border: 1px solid #93c5fd;'>
                        <h4 style='color: #1e3a8a !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>V/S Ratio</h4>
                        <p style='color: #1e40af !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_subcutaneous_ratio', 0):.2f}</p>
                    </div>
                </div>
            """

            if 'interpretation' in seg:
                interp = seg['interpretation']
                obesity_color = "#16a34a" if interp.get("obesity_risk") == "normal" else "#d97706" if interp.get("obesity_risk") == "moderate" else "#dc2626"
                visceral_color = "#16a34a" if interp.get("visceral_risk") == "normal" else "#d97706" if interp.get("visceral_risk") == "moderate" else "#dc2626"

                html += f"""
                <div style='margin-top: 16px; padding: 16px; background: #f3f4f6; border-radius: 6px;'>
                    <h4 style='color: #374151 !important; font-size: 16px; font-weight: 600; margin-bottom: 8px;'>Risk Assessment</h4>
                    <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 12px;'>
                        <div>
                            <span style='color: #6b7280 !important; font-size: 14px;'>Obesity Risk:</span>
                            <span style='color: {obesity_color} !important; font-weight: 600; margin-left: 8px;'>{interp.get('obesity_risk', 'N/A').upper()}</span>
                        </div>
                        <div>
                            <span style='color: #6b7280 !important; font-size: 14px;'>Visceral Risk:</span>
                            <span style='color: {visceral_color} !important; font-weight: 600; margin-left: 8px;'>{interp.get('visceral_risk', 'N/A').upper()}</span>
                        </div>
                    </div>
                """

                if interp.get('recommendations'):
                    html += """
                    <div style='margin-top: 12px; padding-top: 12px; border-top: 1px solid #e5e7eb;'>
                        <h5 style='color: #374151 !important; font-size: 14px; font-weight: 600; margin-bottom: 8px;'>💡 Recommendations</h5>
                        <ul style='margin: 0; padding-left: 20px; color: #4b5563 !important;'>
                    """
                    for rec in interp['recommendations']:
                        html += f"<li style='margin: 4px 0;'>{rec}</li>"
                    html += "</ul></div>"

                html += "</div>"
            html += "</div>"

    # Quality Assessment
    if 'quality_metrics' in results:
        quality = results['quality_metrics']
        quality_colors = {
            'excellent': '#16a34a',
            'good': '#16a34a',
            'fair': '#d97706',
            'poor': '#dc2626',
            'unknown': '#6b7280'
        }
        q_color = quality_colors.get(quality.get('overall_quality', 'unknown'), '#6b7280')

        html += f"""
        <div style='background: #f0f9ff;
                    padding: 20px;
                    margin: 16px 0;
                    border-radius: 8px;
                    box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
            <h3 style='color: #1e3a8a !important;
                       font-size: 18px;
                       font-weight: 600;
                       margin-bottom: 12px;'>
                📊 Image Quality Assessment
            </h3>
            <div style='display: flex; align-items: center; gap: 16px;'>
                <div>
                    <span style='color: #4b5563 !important; font-size: 14px;'>Overall Quality:</span>
                    <span style='color: {q_color} !important;
                                 font-size: 18px;
                                 font-weight: 700;
                                 margin-left: 8px;'>
                        {quality.get('overall_quality', 'unknown').upper()}
                    </span>
                </div>
            </div>
        """

        if quality.get('issues'):
            html += f"""
            <div style='margin-top: 12px;
                        padding: 12px;
                        background: #fef3c7;
                        border-left: 3px solid #f59e0b;
                        border-radius: 4px;'>
                <strong style='color: #92400e !important;'>Issues Detected:</strong>
                <ul style='margin: 4px 0 0 0; padding-left: 20px; color: #92400e !important;'>
            """
            for issue in quality['issues']:
                html += f"<li style='margin: 2px 0;'>{issue}</li>"
            html += "</ul></div>"

        html += "</div>"

    html += "</div>"
    return html

def create_demo():
    with gr.Blocks(
        title="Medical Image Analyzer - Enhanced Demo",
        theme=gr.themes.Soft(
            primary_hue="blue",
            secondary_hue="blue",
            neutral_hue="slate",
            text_size="md",
            spacing_size="md",
            radius_size="md",
        ).set(
            # Medical blue theme colors
            body_background_fill="*neutral_950",
            body_background_fill_dark="*neutral_950",
            block_background_fill="*neutral_900",
            block_background_fill_dark="*neutral_900",
            border_color_primary="*primary_600",
            border_color_primary_dark="*primary_600",
            # Text colors for better contrast
            body_text_color="*neutral_100",
            body_text_color_dark="*neutral_100",
            body_text_color_subdued="*neutral_300",
            body_text_color_subdued_dark="*neutral_300",
            # Button colors
            button_primary_background_fill="*primary_600",
            button_primary_background_fill_dark="*primary_600",
            button_primary_text_color="white",
            button_primary_text_color_dark="white",
        ),
        css="""
        /* Medical blue theme with high contrast */
        :root {
            --medical-blue: #1e40af;
            --medical-blue-light: #3b82f6;
            --medical-blue-dark: #1e3a8a;
            --text-primary: #f9fafb;
            --text-secondary: #e5e7eb;
            --bg-primary: #0f172a;
            --bg-secondary: #1e293b;
            --bg-tertiary: #334155;
        }

        /* Override default text colors for medical theme */
        * {
            color: var(--text-primary) !important;
        }

        /* Style the file upload area */
        .file-upload {
            border: 2px dashed var(--medical-blue-light) !important;
            border-radius: 8px !important;
            padding: 20px !important;
            text-align: center !important;
            background: var(--bg-secondary) !important;
            transition: all 0.3s ease !important;
            color: var(--text-primary) !important;
        }

        .file-upload:hover {
            border-color: var(--medical-blue) !important;
            background: var(--bg-tertiary) !important;
            box-shadow: 0 0 20px rgba(59, 130, 246, 0.2) !important;
        }

        /* Ensure report text is readable with white background */
        .medical-report {
            background: #ffffff !important;
            border: 2px solid var(--medical-blue-light) !important;
            border-radius: 8px !important;
            padding: 16px !important;
            color: #1a1a1a !important;
        }

        .medical-report * {
            color: #1f2937 !important; /* Dark gray text */
        }

        .medical-report h2 {
            color: #1e40af !important; /* Medical blue for main heading */
        }

        .medical-report h3, .medical-report h4 {
            color: #1e3a8a !important; /* Darker medical blue for subheadings */
        }

        .medical-report strong {
            color: #374151 !important; /* Darker gray for labels */
        }

        .medical-report td {
            color: #1f2937 !important; /* Ensure table text is dark */
        }

        /* Report sections with light blue background */
        .medical-report > div {
            background: #f0f9ff !important;
            color: #1f2937 !important;
        }

        /* Medical blue accents for UI elements */
        .gr-button-primary {
            background: var(--medical-blue) !important;
            border-color: var(--medical-blue) !important;
        }

        .gr-button-primary:hover {
            background: var(--medical-blue-dark) !important;
            border-color: var(--medical-blue-dark) !important;
        }

        /* Tab styling */
        .gr-tab-item {
            border-color: var(--medical-blue-light) !important;
        }

        .gr-tab-item.selected {
            background: var(--medical-blue) !important;
            color: white !important;
        }

        /* Accordion styling */
        .gr-accordion {
            border-color: var(--medical-blue-light) !important;
        }

        /* Slider track in medical blue */
        input[type="range"]::-webkit-slider-track {
            background: var(--bg-tertiary) !important;
        }

        input[type="range"]::-webkit-slider-thumb {
            background: var(--medical-blue) !important;
        }
        """
    ) as demo:
        gr.Markdown("""
        # 🏥 Medical Image Analyzer

        Supports **DICOM** (.dcm) and all image formats with automatic modality detection!
        """)

        with gr.Row():
            with gr.Column(scale=1):
                # File upload - no file type restrictions
                with gr.Group():
                    gr.Markdown("### 📤 Upload Medical Image")
                    file_input = gr.File(
                        label="Select Medical Image File (.dcm, .dicom, IM_*, .png, .jpg, etc.)",
                        file_count="single",
                        type="filepath",
                        elem_classes="file-upload"
                        # Note: NO file_types parameter = accepts ALL files
                    )
                    gr.Markdown("""
                    <small style='color: #666;'>
                    Accepts: DICOM (.dcm, .dicom), Images (.png, .jpg, .jpeg, .tiff, .bmp),
                    and files without extensions (e.g., IM_0001, IM_0002, etc.)
                    </small>
                    """)

                # Modality selection
                modality = gr.Radio(
                    choices=["CT", "CR", "DX", "RX", "DR"],
                    value="CT",
                    label="Modality",
                    info="Will be auto-detected for DICOM files"
                )

                # Task selection
                task = gr.Dropdown(
                    choices=[
                        ("🎯 Point Analysis", "analyze_point"),
                        ("🔬 Fat Segmentation (CT only)", "segment_fat"),
                        ("📊 Full Analysis", "full_analysis")
                    ],
                    value="full_analysis",
                    label="Analysis Task"
                )

                # ROI settings
                with gr.Accordion("🎯 Region of Interest (ROI)", open=True):
                    roi_x = gr.Slider(0, 512, 256, label="X Position", step=1)
                    roi_y = gr.Slider(0, 512, 256, label="Y Position", step=1)
                    roi_radius = gr.Slider(5, 50, 10, label="Radius", step=1)

                # Clinical context
                with gr.Accordion("🏥 Clinical Context", open=False):
                    symptoms = gr.CheckboxGroup(
                        choices=[
                            "dyspnea", "chest_pain", "abdominal_pain",
                            "trauma", "obesity_screening", "routine_check"
                        ],
                        label="Symptoms/Indication"
                    )

                # Visualization options
                with gr.Accordion("🎨 Visualization Options", open=True):
                    show_overlay = gr.Checkbox(
                        label="Show ROI/Segmentation Overlay",
                        value=True,
                        info="Display ROI circle or fat segmentation info on the image"
                    )

                analyze_btn = gr.Button("🔬 Analyze", variant="primary", size="lg")

            with gr.Column(scale=2):
                # Results with tabs for different views
                with gr.Tab("🖼️ Original Image"):
                    image_display = gr.Image(label="Medical Image", type="numpy")

                with gr.Tab("🎯 Overlay View"):
                    overlay_display = gr.Image(label="Image with Overlay", type="numpy")

                file_info = gr.Textbox(label="File Information", lines=1)

                with gr.Tab("📊 Visual Report"):
                    report_html = gr.HTML()

                with gr.Tab("🔧 JSON Output"):
                    json_output = gr.JSON(label="Structured Data for AI Agents")

        # Examples and help
        with gr.Row():
            gr.Markdown("""
            ### 📁 Supported Formats
            - **DICOM**: Automatic HU value extraction and modality detection
            - **PNG/JPG**: Interpreted based on selected modality
            - **All Formats**: Automatic grayscale conversion
            - **Files without extension**: Supported (e.g., IM_0001) - will try DICOM first

            ### 🎯 Usage
            1. Upload a medical image file
            2. Select modality (auto-detected for DICOM)
            3. Choose analysis task
            4. Adjust ROI position for point analysis
            5. Click "Analyze"

            ### 💡 Features
            - **ROI Visualization**: See the exact area being analyzed
            - **Fat Segmentation**: Visual percentages for CT scans
            - **Multi-format Support**: Works with any medical image format
            - **AI Agent Ready**: Structured JSON output for integration
            """)

        # Connect the interface
        analyze_btn.click(
            fn=process_and_analyze,
            inputs=[file_input, modality, task, roi_x, roi_y, roi_radius, symptoms, show_overlay],
            outputs=[image_display, file_info, report_html, json_output, overlay_display]
        )

        # Auto-update ROI limits when image is loaded
        def update_roi_on_upload(file_obj):
            if file_obj is None:
                return gr.update(), gr.update()

            try:
                analyzer = MedicalImageAnalyzer()
                _, _, metadata = analyzer.process_file(file_obj.name if hasattr(file_obj, 'name') else str(file_obj))

                if 'shape' in metadata:
                    h, w = metadata['shape']
                    return gr.update(maximum=w - 1, value=w // 2), gr.update(maximum=h - 1, value=h // 2)
            except Exception:
                pass

            return gr.update(), gr.update()

        file_input.change(
            fn=update_roi_on_upload,
            inputs=[file_input],
            outputs=[roi_x, roi_y]
        )

    return demo

if __name__ == "__main__":
    demo = create_demo()
    demo.launch()
```
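
The demo above drives everything through a UI, but for scripting or AI-agent use the analyzer can be called directly. The following is a minimal sketch based solely on the calls the demo makes (`process_file` and `analyze_image`); `scan.dcm` is a placeholder path:

```python
from gradio_medical_image_analyzer import MedicalImageAnalyzer

analyzer = MedicalImageAnalyzer(
    analysis_mode="structured",
    include_confidence=True,
    include_reasoning=True,
)

# process_file() returns the raw pixel array (HU values for CT DICOMs),
# a display-normalized array, and file metadata.
pixel_array, display_array, metadata = analyzer.process_file("scan.dcm")  # placeholder path

# analyze_image() returns a plain dict with keys such as 'point_analysis',
# 'segmentation', and 'quality_metrics', as consumed by the demo above.
results = analyzer.analyze_image(
    image=pixel_array,
    modality=metadata.get("modality", "CT"),
    task="full_analysis",
    roi={"x": pixel_array.shape[1] // 2, "y": pixel_array.shape[0] // 2, "radius": 10},
)
print(results.get("quality_metrics", {}))
```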

## `MedicalImageAnalyzer`

### Initialization

<table>
<thead>
<tr>
<th align="left">name</th>
<th align="left" style="width: 25%;">type</th>
<th align="left">default</th>
<th align="left">description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><code>value</code></td>
<td align="left" style="width: 25%;">

```python
typing.Optional[typing.Dict[str, typing.Any]]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>label</code></td>
<td align="left" style="width: 25%;">

```python
typing.Optional[str]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>info</code></td>
<td align="left" style="width: 25%;">

```python
typing.Optional[str]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>every</code></td>
<td align="left" style="width: 25%;">

```python
typing.Optional[float]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>show_label</code></td>
<td align="left" style="width: 25%;">

```python
typing.Optional[bool]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>container</code></td>
<td align="left" style="width: 25%;">

```python
typing.Optional[bool]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>scale</code></td>
<td align="left" style="width: 25%;">

```python
typing.Optional[int]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>min_width</code></td>
<td align="left" style="width: 25%;">

```python
typing.Optional[int]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>visible</code></td>
<td align="left" style="width: 25%;">

```python
typing.Optional[bool]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>elem_id</code></td>
<td align="left" style="width: 25%;">

```python
typing.Optional[str]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>elem_classes</code></td>
<td align="left" style="width: 25%;">

```python
typing.Union[typing.List[str], str, NoneType]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>render</code></td>
<td align="left" style="width: 25%;">

```python
typing.Optional[bool]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>key</code></td>
<td align="left" style="width: 25%;">

```python
typing.Union[int, str, NoneType]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">None</td>
</tr>

<tr>
<td align="left"><code>analysis_mode</code></td>
<td align="left" style="width: 25%;">

```python
str
```

</td>
<td align="left"><code>"structured"</code></td>
<td align="left">"structured" for AI agents, "visual" for human interpretation</td>
</tr>

<tr>
<td align="left"><code>include_confidence</code></td>
<td align="left" style="width: 25%;">

```python
bool
```

</td>
<td align="left"><code>True</code></td>
<td align="left">Include confidence scores in results</td>
</tr>

<tr>
<td align="left"><code>include_reasoning</code></td>
<td align="left" style="width: 25%;">

```python
bool
```

</td>
<td align="left"><code>True</code></td>
<td align="left">Include reasoning/explanation for findings</td>
</tr>

<tr>
<td align="left"><code>segmentation_types</code></td>
<td align="left" style="width: 25%;">

```python
typing.List[str]
```

</td>
<td align="left"><code>None</code></td>
<td align="left">List of segmentation types to perform</td>
</tr>
</tbody></table>
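
A minimal construction sketch using the component-specific options from the table above (the `segmentation_types` value shown is illustrative):

```python
import gradio as gr
from gradio_medical_image_analyzer import MedicalImageAnalyzer

with gr.Blocks() as demo:
    analyzer = MedicalImageAnalyzer(
        label="Analyzer",
        analysis_mode="structured",   # "structured" for AI agents, "visual" for humans
        include_confidence=True,      # attach confidence scores to findings
        include_reasoning=True,       # attach reasoning/explanations to findings
        segmentation_types=["fat"],   # illustrative value; see the table above
    )
```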

### Events

| name | description |
|:-----|:------------|
| `change` | Triggered when the value of the MedicalImageAnalyzer changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input. |
| `select` | Event listener for when the user selects or deselects the MedicalImageAnalyzer. Uses event data gradio.SelectData to carry `value` referring to the label of the MedicalImageAnalyzer, and `selected` to refer to state of the MedicalImageAnalyzer. See EventData documentation on how to use this event data. |
| `upload` | This listener is triggered when the user uploads a file into the MedicalImageAnalyzer. |
| `clear` | This listener is triggered when the user clears the MedicalImageAnalyzer using the clear button for the component. |
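
Continuing the sketch above, a `change` listener can be wired like any other Gradio event (the `log_value` handler is hypothetical; the wiring must sit inside the `gr.Blocks` context):

```python
def log_value(value):
    # `value` is the component's dict payload (see "User function" below)
    print("MedicalImageAnalyzer value changed:", value)
    return value

analyzer.change(fn=log_value, inputs=analyzer, outputs=analyzer)
```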

### User function

The impact on the user's predict function varies depending on whether the component is used as an input or output for an event (or both).

- When used as an input, the component only impacts the input signature of the user function.
- When used as an output, the component only impacts the return signature of the user function.

The code snippet below is accurate in cases where the component is used as both an input and an output.

```python
def predict(
    value: typing.Dict[str, typing.Any]
) -> typing.Dict[str, typing.Any]:
    return value
```

---

Developed for veterinary medicine with ❤️ and cutting-edge web technology

**Gradio Agents & MCP Hackathon 2025 - Track 2 Submission**
app.py ADDED
@@ -0,0 +1,693 @@
#!/usr/bin/env python3
"""
Demo for MedicalImageAnalyzer - Enhanced with file upload and overlay visualization
"""

import gradio as gr
import numpy as np
import sys
import os
import cv2
from pathlib import Path

# Add backend to path
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'backend'))

from gradio_medical_image_analyzer import MedicalImageAnalyzer

def draw_roi_on_image(image, roi_x, roi_y, roi_radius):
    """Draw ROI circle on the image"""
    # Convert to RGB if grayscale
    if len(image.shape) == 2:
        image_rgb = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
    else:
        image_rgb = image.copy()

    # Draw ROI circle
    center = (int(roi_x), int(roi_y))
    radius = int(roi_radius)

    # Draw outer circle (white)
    cv2.circle(image_rgb, center, radius, (255, 255, 255), 2)
    # Draw inner circle (red)
    cv2.circle(image_rgb, center, radius - 1, (255, 0, 0), 2)
    # Draw center cross
    cv2.line(image_rgb, (center[0] - 5, center[1]), (center[0] + 5, center[1]), (255, 0, 0), 2)
    cv2.line(image_rgb, (center[0], center[1] - 5), (center[0], center[1] + 5), (255, 0, 0), 2)

    return image_rgb

def create_fat_overlay(base_image, segmentation_results):
    """Create overlay image with fat segmentation highlighted"""
    # Convert to RGB
    if len(base_image.shape) == 2:
        overlay_img = cv2.cvtColor(base_image, cv2.COLOR_GRAY2RGB)
    else:
        overlay_img = base_image.copy()

    # Check if we have segmentation masks
    if not segmentation_results or 'segments' not in segmentation_results:
        return overlay_img

    segments = segmentation_results.get('segments', {})

    # Apply subcutaneous fat overlay (yellow)
    if 'subcutaneous' in segments and segments['subcutaneous'].get('mask') is not None:
        mask = segments['subcutaneous']['mask']
        yellow_overlay = np.zeros_like(overlay_img)
        yellow_overlay[mask > 0] = [255, 255, 0]  # Yellow
        overlay_img = cv2.addWeighted(overlay_img, 0.7, yellow_overlay, 0.3, 0)

    # Apply visceral fat overlay (red)
    if 'visceral' in segments and segments['visceral'].get('mask') is not None:
        mask = segments['visceral']['mask']
        red_overlay = np.zeros_like(overlay_img)
        red_overlay[mask > 0] = [255, 0, 0]  # Red
        overlay_img = cv2.addWeighted(overlay_img, 0.7, red_overlay, 0.3, 0)

    # Add legend
    cv2.putText(overlay_img, "Yellow: Subcutaneous Fat", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
    cv2.putText(overlay_img, "Red: Visceral Fat", (10, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)

    return overlay_img

def process_and_analyze(file_obj, modality, task, roi_x, roi_y, roi_radius, symptoms, show_overlay=False):
    """
    Processes uploaded file and performs analysis
    """
    if file_obj is None:
        return None, "No file selected", None, {}, None

    # Create analyzer instance
    analyzer = MedicalImageAnalyzer(
        analysis_mode="structured",
        include_confidence=True,
        include_reasoning=True
    )

    try:
        # Process the file (DICOM or image)
        file_path = file_obj.name if hasattr(file_obj, 'name') else str(file_obj)
        pixel_array, display_array, metadata = analyzer.process_file(file_path)

        # Update modality from file metadata if it's a DICOM
        if metadata.get('file_type') == 'DICOM' and 'modality' in metadata:
            modality = metadata['modality']

        # Prepare analysis parameters
        analysis_params = {
            "image": pixel_array,
            "modality": modality,
            "task": task
        }

        # Add ROI if applicable
        if task in ["analyze_point", "full_analysis"]:
            # Scale ROI coordinates to image size
            h, w = pixel_array.shape
            roi_x_scaled = int(roi_x * w / 512)  # Assuming slider max is 512
            roi_y_scaled = int(roi_y * h / 512)

            analysis_params["roi"] = {
                "x": roi_x_scaled,
                "y": roi_y_scaled,
                "radius": roi_radius
            }

        # Add clinical context
        if symptoms:
            analysis_params["clinical_context"] = {"symptoms": symptoms}

        # Perform analysis
        results = analyzer.analyze_image(**analysis_params)

        # Create visual report
        visual_report = create_visual_report(results, metadata)

        # Add metadata info
        info = f"📄 {metadata.get('file_type', 'Unknown')} | "
        info += f"🏥 {modality} | "
        info += f"📐 {metadata.get('shape', 'Unknown')}"

        if metadata.get('window_center'):
            info += f" | Window C:{metadata['window_center']:.0f} W:{metadata['window_width']:.0f}"

        # Create overlay image if requested
        overlay_image = None
        if show_overlay:
            # For ROI visualization
            if task in ["analyze_point", "full_analysis"] and roi_x is not None and roi_y is not None:
                overlay_image = draw_roi_on_image(display_array.copy(), roi_x_scaled, roi_y_scaled, roi_radius)
            # For fat segmentation overlay (simplified version since we don't have masks in current implementation)
            elif task == "segment_fat" and 'segmentation' in results and modality == 'CT':
                # For now, just draw ROI since we don't have actual masks
                overlay_image = display_array.copy()
                if len(overlay_image.shape) == 2:
                    overlay_image = cv2.cvtColor(overlay_image, cv2.COLOR_GRAY2RGB)
                # Add text overlay about fat percentages
                if 'statistics' in results['segmentation']:
                    stats = results['segmentation']['statistics']
                    cv2.putText(overlay_image, f"Total Fat: {stats.get('total_fat_percentage', 0):.1f}%",
                                (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
                    cv2.putText(overlay_image, f"Subcutaneous: {stats.get('subcutaneous_fat_percentage', 0):.1f}%",
                                (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
                    cv2.putText(overlay_image, f"Visceral: {stats.get('visceral_fat_percentage', 0):.1f}%",
                                (10, 90), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)

        return display_array, info, visual_report, results, overlay_image

    except Exception as e:
        error_msg = f"Error: {str(e)}"
        return None, error_msg, f"<div style='color: red;'>❌ {error_msg}</div>", {"error": error_msg}, None

def create_visual_report(results, metadata):
    """Creates a visual HTML report with improved styling"""
    html = f"""
    <div class='medical-report' style='font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
                                       padding: 24px;
                                       background: #ffffff;
                                       border-radius: 12px;
                                       max-width: 100%;
                                       box-shadow: 0 2px 8px rgba(0,0,0,0.1);
                                       color: #1a1a1a !important;'>

        <h2 style='color: #1e40af !important;
                   border-bottom: 3px solid #3b82f6;
                   padding-bottom: 12px;
                   margin-bottom: 20px;
                   font-size: 24px;
                   font-weight: 600;'>
            🏥 Medical Image Analysis Report
        </h2>

        <div style='background: #f0f9ff;
                    padding: 20px;
                    margin: 16px 0;
                    border-radius: 8px;
                    box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
            <h3 style='color: #1e3a8a !important;
                       font-size: 18px;
                       font-weight: 600;
                       margin-bottom: 12px;'>
                📋 Metadata
            </h3>
            <table style='width: 100%; border-collapse: collapse;'>
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>File Type:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('file_type', 'Unknown')}</td>
                </tr>
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Modality:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('modality', 'Unknown')}</td>
                </tr>
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Image Size:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('shape', 'Unknown')}</td>
                </tr>
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Timestamp:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('timestamp', 'N/A')}</td>
                </tr>
            </table>
        </div>
    """

    # Point Analysis
    if 'point_analysis' in results:
        pa = results['point_analysis']
        tissue = pa.get('tissue_type', {})

        html += f"""
        <div style='background: #f0f9ff;
                    padding: 20px;
                    margin: 16px 0;
                    border-radius: 8px;
                    box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
            <h3 style='color: #1e3a8a !important;
                       font-size: 18px;
                       font-weight: 600;
                       margin-bottom: 12px;'>
                🎯 Point Analysis
            </h3>
            <table style='width: 100%; border-collapse: collapse;'>
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>Position:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>({pa.get('location', {}).get('x', 'N/A')}, {pa.get('location', {}).get('y', 'N/A')})</td>
                </tr>
        """

        if results.get('modality') == 'CT':
            html += f"""
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>HU Value:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important; font-weight: 500;'>{pa.get('hu_value', 'N/A'):.1f}</td>
                </tr>
            """
        else:
            html += f"""
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Intensity:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('intensity', 'N/A'):.3f}</td>
                </tr>
            """

        html += f"""
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Tissue Type:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>
                        <span style='font-size: 1.3em; vertical-align: middle;'>{tissue.get('icon', '')}</span>
                        <span style='font-weight: 500; text-transform: capitalize;'>{tissue.get('type', 'Unknown').replace('_', ' ')}</span>
                    </td>
                </tr>
                <tr>
                    <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Confidence:</strong></td>
                    <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('confidence', 'N/A')}</td>
                </tr>
            </table>
        """

        if 'reasoning' in pa:
            html += f"""
            <div style='margin-top: 12px;
                        padding: 12px;
                        background: #dbeafe;
                        border-left: 3px solid #3b82f6;
                        border-radius: 4px;'>
                <p style='margin: 0; color: #1e40af !important; font-style: italic;'>
                    💭 {pa['reasoning']}
                </p>
            </div>
            """

        html += "</div>"

    # Segmentation Results
    if 'segmentation' in results and results['segmentation']:
        seg = results['segmentation']

        if 'statistics' in seg:
            # Fat segmentation for CT
            stats = seg['statistics']
            html += f"""
            <div style='background: #f0f9ff;
                        padding: 20px;
                        margin: 16px 0;
                        border-radius: 8px;
                        box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
                <h3 style='color: #1e3a8a !important;
                           font-size: 18px;
                           font-weight: 600;
                           margin-bottom: 12px;'>
                    🔬 Fat Segmentation Analysis
                </h3>
                <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 16px;'>
                    <div style='padding: 16px; background: #ffffff; border-radius: 6px; border: 1px solid #e5e7eb;'>
                        <h4 style='color: #6b7280 !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Total Fat</h4>
                        <p style='color: #1f2937 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('total_fat_percentage', 0):.1f}%</p>
                    </div>
                    <div style='padding: 16px; background: #fffbeb; border-radius: 6px; border: 1px solid #fbbf24;'>
                        <h4 style='color: #92400e !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Subcutaneous</h4>
                        <p style='color: #d97706 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('subcutaneous_fat_percentage', 0):.1f}%</p>
                    </div>
                    <div style='padding: 16px; background: #fef2f2; border-radius: 6px; border: 1px solid #fca5a5;'>
                        <h4 style='color: #991b1b !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Visceral</h4>
                        <p style='color: #dc2626 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_fat_percentage', 0):.1f}%</p>
                    </div>
                    <div style='padding: 16px; background: #eff6ff; border-radius: 6px; border: 1px solid #93c5fd;'>
                        <h4 style='color: #1e3a8a !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>V/S Ratio</h4>
                        <p style='color: #1e40af !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_subcutaneous_ratio', 0):.2f}</p>
                    </div>
                </div>
            """

            if 'interpretation' in seg:
                interp = seg['interpretation']
                obesity_color = "#16a34a" if interp.get("obesity_risk") == "normal" else "#d97706" if interp.get("obesity_risk") == "moderate" else "#dc2626"
                visceral_color = "#16a34a" if interp.get("visceral_risk") == "normal" else "#d97706" if interp.get("visceral_risk") == "moderate" else "#dc2626"

                html += f"""
                <div style='margin-top: 16px; padding: 16px; background: #f3f4f6; border-radius: 6px;'>
                    <h4 style='color: #374151 !important; font-size: 16px; font-weight: 600; margin-bottom: 8px;'>Risk Assessment</h4>
                    <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 12px;'>
                        <div>
                            <span style='color: #6b7280 !important; font-size: 14px;'>Obesity Risk:</span>
                            <span style='color: {obesity_color} !important; font-weight: 600; margin-left: 8px;'>{interp.get('obesity_risk', 'N/A').upper()}</span>
                        </div>
                        <div>
                            <span style='color: #6b7280 !important; font-size: 14px;'>Visceral Risk:</span>
                            <span style='color: {visceral_color} !important; font-weight: 600; margin-left: 8px;'>{interp.get('visceral_risk', 'N/A').upper()}</span>
                        </div>
                    </div>
                """

                if interp.get('recommendations'):
                    html += """
                    <div style='margin-top: 12px; padding-top: 12px; border-top: 1px solid #e5e7eb;'>
                        <h5 style='color: #374151 !important; font-size: 14px; font-weight: 600; margin-bottom: 8px;'>💡 Recommendations</h5>
                        <ul style='margin: 0; padding-left: 20px; color: #4b5563 !important;'>
                    """
                    for rec in interp['recommendations']:
                        html += f"<li style='margin: 4px 0;'>{rec}</li>"
                    html += "</ul></div>"

                html += "</div>"
            html += "</div>"

    # Quality Assessment
    if 'quality_metrics' in results:
        quality = results['quality_metrics']
        quality_colors = {
            'excellent': '#16a34a',
            'good': '#16a34a',
            'fair': '#d97706',
            'poor': '#dc2626',
            'unknown': '#6b7280'
        }
        q_color = quality_colors.get(quality.get('overall_quality', 'unknown'), '#6b7280')

        html += f"""
        <div style='background: #f0f9ff;
                    padding: 20px;
                    margin: 16px 0;
                    border-radius: 8px;
                    box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
            <h3 style='color: #1e3a8a !important;
                       font-size: 18px;
                       font-weight: 600;
                       margin-bottom: 12px;'>
                📊 Image Quality Assessment
            </h3>
            <div style='display: flex; align-items: center; gap: 16px;'>
                <div>
                    <span style='color: #4b5563 !important; font-size: 14px;'>Overall Quality:</span>
                    <span style='color: {q_color} !important;
                                 font-size: 18px;
                                 font-weight: 700;
                                 margin-left: 8px;'>
                        {quality.get('overall_quality', 'unknown').upper()}
                    </span>
                </div>
            </div>
        """

        if quality.get('issues'):
            html += f"""
            <div style='margin-top: 12px;
                        padding: 12px;
                        background: #fef3c7;
                        border-left: 3px solid #f59e0b;
                        border-radius: 4px;'>
                <strong style='color: #92400e !important;'>Issues Detected:</strong>
                <ul style='margin: 4px 0 0 0; padding-left: 20px; color: #92400e !important;'>
            """
            for issue in quality['issues']:
                html += f"<li style='margin: 2px 0;'>{issue}</li>"
            html += "</ul></div>"

        html += "</div>"

    html += "</div>"
    return html

def create_demo():
    with gr.Blocks(
        title="Medical Image Analyzer - Enhanced Demo",
        theme=gr.themes.Soft(
            primary_hue="blue",
            secondary_hue="blue",
            neutral_hue="slate",
            text_size="md",
            spacing_size="md",
            radius_size="md",
        ).set(
            # Medical blue theme colors
            body_background_fill="*neutral_950",
            body_background_fill_dark="*neutral_950",
            block_background_fill="*neutral_900",
            block_background_fill_dark="*neutral_900",
            border_color_primary="*primary_600",
            border_color_primary_dark="*primary_600",
            # Text colors for better contrast
            body_text_color="*neutral_100",
            body_text_color_dark="*neutral_100",
            body_text_color_subdued="*neutral_300",
            body_text_color_subdued_dark="*neutral_300",
            # Button colors
            button_primary_background_fill="*primary_600",
            button_primary_background_fill_dark="*primary_600",
            button_primary_text_color="white",
            button_primary_text_color_dark="white",
        ),
        css="""
        /* Medical blue theme with high contrast */
        :root {
            --medical-blue: #1e40af;
            --medical-blue-light: #3b82f6;
            --medical-blue-dark: #1e3a8a;
            --text-primary: #f9fafb;
            --text-secondary: #e5e7eb;
            --bg-primary: #0f172a;
            --bg-secondary: #1e293b;
            --bg-tertiary: #334155;
        }

        /* Override default text colors for medical theme */
        * {
            color: var(--text-primary) !important;
        }

        /* Style the file upload area */
        .file-upload {
            border: 2px dashed var(--medical-blue-light) !important;
            border-radius: 8px !important;
            padding: 20px !important;
            text-align: center !important;
            background: var(--bg-secondary) !important;
            transition: all 0.3s ease !important;
            color: var(--text-primary) !important;
        }

        .file-upload:hover {
            border-color: var(--medical-blue) !important;
            background: var(--bg-tertiary) !important;
            box-shadow: 0 0 20px rgba(59, 130, 246, 0.2) !important;
        }

        /* Ensure report text is readable with white background */
        .medical-report {
            background: #ffffff !important;
            border: 2px solid var(--medical-blue-light) !important;
            border-radius: 8px !important;
            padding: 16px !important;
            color: #1a1a1a !important;
        }

        .medical-report * {
            color: #1f2937 !important; /* Dark gray text */
        }

        .medical-report h2 {
            color: #1e40af !important; /* Medical blue for main heading */
        }

        .medical-report h3, .medical-report h4 {
            color: #1e3a8a !important; /* Darker medical blue for subheadings */
        }

        .medical-report strong {
            color: #374151 !important; /* Darker gray for labels */
        }

        .medical-report td {
            color: #1f2937 !important; /* Ensure table text is dark */
        }

        /* Report sections with light blue background */
        .medical-report > div {
            background: #f0f9ff !important;
            color: #1f2937 !important;
        }

        /* Medical blue accents for UI elements */
        .gr-button-primary {
            background: var(--medical-blue) !important;
            border-color: var(--medical-blue) !important;
        }

        .gr-button-primary:hover {
            background: var(--medical-blue-dark) !important;
            border-color: var(--medical-blue-dark) !important;
        }

        /* Tab styling */
526
+ .gr-tab-item {
527
+ border-color: var(--medical-blue-light) !important;
528
+ }
529
+
530
+ .gr-tab-item.selected {
531
+ background: var(--medical-blue) !important;
532
+ color: white !important;
533
+ }
534
+
535
+ /* Accordion styling */
536
+ .gr-accordion {
537
+ border-color: var(--medical-blue-light) !important;
538
+ }
539
+
540
+ /* Slider track in medical blue */
541
+ input[type="range"]::-webkit-slider-track {
542
+ background: var(--bg-tertiary) !important;
543
+ }
544
+
545
+ input[type="range"]::-webkit-slider-thumb {
546
+ background: var(--medical-blue) !important;
547
+ }
548
+ """
549
+ ) as demo:
550
+ gr.Markdown("""
551
+ # πŸ₯ Medical Image Analyzer
552
+
553
+ Supports **DICOM** (.dcm) and all image formats with automatic modality detection!
554
+ """)
555
+
556
+ with gr.Row():
557
+ with gr.Column(scale=1):
558
+ # File upload - no file type restrictions
559
+ with gr.Group():
560
+ gr.Markdown("### πŸ“€ Upload Medical Image")
561
+ file_input = gr.File(
562
+ label="Select Medical Image File (.dcm, .dicom, IM_*, .png, .jpg, etc.)",
563
+ file_count="single",
564
+ type="filepath",
565
+ elem_classes="file-upload"
566
+ # Note: NO file_types parameter = accepts ALL files
567
+ )
568
+ gr.Markdown("""
569
+ <small style='color: #666;'>
570
+ Accepts: DICOM (.dcm, .dicom), Images (.png, .jpg, .jpeg, .tiff, .bmp),
571
+ and files without extensions (e.g., IM_0001, IM_0002, etc.)
572
+ </small>
573
+ """)
574
+
575
+ # Modality selection
576
+ modality = gr.Radio(
577
+ choices=["CT", "CR", "DX", "RX", "DR"],
578
+ value="CT",
579
+ label="Modality",
580
+ info="Will be auto-detected for DICOM files"
581
+ )
582
+
583
+ # Task selection
584
+ task = gr.Dropdown(
585
+ choices=[
586
+ ("🎯 Point Analysis", "analyze_point"),
587
+ ("πŸ”¬ Fat Segmentation (CT only)", "segment_fat"),
588
+ ("πŸ“Š Full Analysis", "full_analysis")
589
+ ],
590
+ value="full_analysis",
591
+ label="Analysis Task"
592
+ )
593
+
594
+ # ROI settings
595
+ with gr.Accordion("🎯 Region of Interest (ROI)", open=True):
596
+ roi_x = gr.Slider(0, 512, 256, label="X Position", step=1)
597
+ roi_y = gr.Slider(0, 512, 256, label="Y Position", step=1)
598
+ roi_radius = gr.Slider(5, 50, 10, label="Radius", step=1)
599
+
600
+ # Clinical context
601
+ with gr.Accordion("πŸ₯ Clinical Context", open=False):
602
+ symptoms = gr.CheckboxGroup(
603
+ choices=[
604
+ "dyspnea", "chest_pain", "abdominal_pain",
605
+ "trauma", "obesity_screening", "routine_check"
606
+ ],
607
+ label="Symptoms/Indication"
608
+ )
609
+
610
+ # Visualization options
611
+ with gr.Accordion("🎨 Visualization Options", open=True):
612
+ show_overlay = gr.Checkbox(
613
+ label="Show ROI/Segmentation Overlay",
614
+ value=True,
615
+ info="Display ROI circle or fat segmentation info on the image"
616
+ )
617
+
618
+ analyze_btn = gr.Button("πŸ”¬ Analyze", variant="primary", size="lg")
619
+
620
+ with gr.Column(scale=2):
621
+ # Results with tabs for different views
622
+ with gr.Tab("πŸ–ΌοΈ Original Image"):
623
+ image_display = gr.Image(label="Medical Image", type="numpy")
624
+
625
+ with gr.Tab("🎯 Overlay View"):
626
+ overlay_display = gr.Image(label="Image with Overlay", type="numpy")
627
+
628
+ file_info = gr.Textbox(label="File Information", lines=1)
629
+
630
+ with gr.Tab("πŸ“Š Visual Report"):
631
+ report_html = gr.HTML()
632
+
633
+ with gr.Tab("πŸ”§ JSON Output"):
634
+ json_output = gr.JSON(label="Structured Data for AI Agents")
635
+
636
+ # Examples and help
637
+ with gr.Row():
638
+ gr.Markdown("""
639
+ ### πŸ“ Supported Formats
640
+ - **DICOM**: Automatic HU value extraction and modality detection
641
+ - **PNG/JPG**: Interpreted based on selected modality
642
+ - **All Formats**: Automatic grayscale conversion
643
+ - **Files without extension**: Supported (e.g., IM_0001) - will try DICOM first
644
+
645
+ ### 🎯 Usage
646
+ 1. Upload a medical image file
647
+ 2. Select modality (auto-detected for DICOM)
648
+ 3. Choose analysis task
649
+ 4. Adjust ROI position for point analysis
650
+ 5. Click "Analyze"
651
+
652
+ ### πŸ’‘ Features
653
+ - **ROI Visualization**: See the exact area being analyzed
654
+ - **Fat Segmentation**: Visual percentages for CT scans
655
+ - **Multi-format Support**: Works with any medical image format
656
+ - **AI Agent Ready**: Structured JSON output for integration
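+ 
+             ### 🧾 Example Structured Output
+             A sketch of the JSON structure (keys vary by task; values here are illustrative):
+             ```python
+             {
+                 "modality": "CT",
+                 "point_analysis": {
+                     "location": {"x": 256, "y": 256},
+                     "hu_value": -75.0,
+                     "tissue_type": {"icon": "🟑", "type": "fat"}
+                 },
+                 "segmentation": {
+                     "statistics": {"total_fat_percentage": 30.5},
+                     "interpretation": {"obesity_risk": "moderate"}
+                 },
+                 "quality_metrics": {"overall_quality": "good", "issues": []}
+             }
+             ```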
+             """)
+ 
+         # Connect the interface
+         analyze_btn.click(
+             fn=process_and_analyze,
+             inputs=[file_input, modality, task, roi_x, roi_y, roi_radius, symptoms, show_overlay],
+             outputs=[image_display, file_info, report_html, json_output, overlay_display]
+         )
+ 
+         # Auto-update ROI limits when image is loaded
+         def update_roi_on_upload(file_obj):
+             if file_obj is None:
+                 return gr.update(), gr.update()
+ 
+             try:
+                 analyzer = MedicalImageAnalyzer()
+                 _, _, metadata = analyzer.process_file(file_obj.name if hasattr(file_obj, 'name') else str(file_obj))
+ 
+                 if 'shape' in metadata:
+                     h, w = metadata['shape']
+                     return gr.update(maximum=w-1, value=w//2), gr.update(maximum=h-1, value=h//2)
+             except Exception:
+                 # Keep the default slider limits if the file cannot be read
+                 pass
+ 
+             return gr.update(), gr.update()
+ 
+         file_input.change(
+             fn=update_roi_on_upload,
+             inputs=[file_input],
+             outputs=[roi_x, roi_y]
+         )
+ 
+     return demo
+ 
+ if __name__ == "__main__":
+     demo = create_demo()
+     demo.launch()
app_with_frontend.py ADDED
@@ -0,0 +1,197 @@
+ #!/usr/bin/env python3
+ """
+ Demo for MedicalImageAnalyzer with the frontend component.
+ Shows how to use the complete Gradio custom component.
+ """
+ 
+ import gradio as gr
+ import numpy as np
+ import sys
+ import os
+ from pathlib import Path
+ 
+ # Add backend to path
+ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'backend'))
+ 
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
+ 
+ # Example data for demos
+ EXAMPLE_DATA = [
+     {
+         "image": {"url": "examples/ct_chest.png"},
+         "analysis": {
+             "modality": "CT",
+             "point_analysis": {
+                 "tissue_type": {"icon": "🟑", "type": "fat"},
+                 "hu_value": -75.0
+             },
+             "segmentation": {
+                 "interpretation": {
+                     "obesity_risk": "moderate"
+                 }
+             }
+         }
+     },
+     {
+         "image": {"url": "examples/xray_chest.png"},
+         "analysis": {
+             "modality": "CR",
+             "point_analysis": {
+                 "tissue_type": {"icon": "🦴", "type": "bone"}
+             }
+         }
+     }
+ ]
+ 
+ def create_demo():
+     with gr.Blocks(title="Medical Image Analyzer - Component Demo") as demo:
+         gr.Markdown("""
+         # πŸ₯ Medical Image Analyzer - Frontend Component Demo
+ 
+         This demo shows the complete Gradio custom component with frontend integration.
+         Supports DICOM files and all common image formats.
+         """)
+ 
+         with gr.Row():
+             with gr.Column():
+                 # Configuration
+                 gr.Markdown("### βš™οΈ Configuration")
+ 
+                 analysis_mode = gr.Radio(
+                     choices=["structured", "visual"],
+                     value="structured",
+                     label="Analysis Mode",
+                     info="structured: for AI agents, visual: for humans"
+                 )
+ 
+                 include_confidence = gr.Checkbox(
+                     value=True,
+                     label="Include confidence scores"
+                 )
+ 
+                 include_reasoning = gr.Checkbox(
+                     value=True,
+                     label="Include reasoning"
+                 )
+ 
+             with gr.Column(scale=2):
+                 # The custom component
+                 analyzer = MedicalImageAnalyzer(
+                     label="Medical Image Analyzer",
+                     analysis_mode="structured",
+                     include_confidence=True,
+                     include_reasoning=True,
+                     elem_id="medical-analyzer"
+                 )
+ 
+         # Examples section
+         gr.Markdown("### πŸ“ Examples")
+ 
+         examples = gr.Examples(
+             examples=EXAMPLE_DATA,
+             inputs=analyzer,
+             label="Example Analyses"
+         )
+ 
+         # Info section
+         gr.Markdown("""
+         ### πŸ“ Usage
+ 
+         1. **Upload a file**: Drag a DICOM or image file into the upload area
+         2. **Choose a modality**: CT, CR, DX, RX, or DR
+         3. **Pick an analysis task**: point analysis, fat segmentation, or full analysis
+         4. **Set an ROI**: Click on the image to choose an analysis point
+ 
+         ### πŸ”§ Features
+ 
+         - **DICOM Support**: Automatic detection of modality and HU values
+         - **Multi-Tissue Segmentation**: Detects bone, soft tissue, air, metal, fat, fluid
+         - **Clinical Assessment**: Obesity risk, tissue distribution, anomaly detection
+         - **AI-Agent Ready**: Structured JSON output for integration
+ 
+         ### πŸ”— Integration
+ 
+         ```python
+         import gradio as gr
+         from gradio_medical_image_analyzer import MedicalImageAnalyzer
+ 
+         analyzer = MedicalImageAnalyzer(
+             analysis_mode="structured",
+             include_confidence=True
+         )
+ 
+         # Use in your Gradio app
+         with gr.Blocks() as app:
+             analyzer_component = analyzer
+             # ... rest of your app
+         ```
+         """)
+ 
+         # Event handlers
+         def update_config(mode, conf, reason):
+             # This would update the component configuration.
+             # In a real implementation, this is handled by the component itself.
+             return gr.update(
+                 analysis_mode=mode,
+                 include_confidence=conf,
+                 include_reasoning=reason
+             )
+ 
+         # Connect configuration changes
+         for config in [analysis_mode, include_confidence, include_reasoning]:
+             config.change(
+                 fn=update_config,
+                 inputs=[analysis_mode, include_confidence, include_reasoning],
+                 outputs=analyzer
+             )
+ 
+         # Handle analysis results
+         def handle_analysis_complete(data):
+             if data and "analysis" in data:
+                 analysis = data["analysis"]
+                 report = data.get("report", "")
+ 
+                 # Log to console for debugging
+                 print("Analysis completed:")
+                 print(f"Modality: {analysis.get('modality', 'Unknown')}")
+                 if "point_analysis" in analysis:
+                     print(f"Tissue: {analysis['point_analysis'].get('tissue_type', {}).get('type', 'Unknown')}")
+ 
+                 return data
+             return data
+ 
+         analyzer.change(
+             fn=handle_analysis_complete,
+             inputs=analyzer,
+             outputs=analyzer
+         )
+ 
+     return demo
+ 
+ 
+ def create_simple_demo():
+     """Simple demo without much configuration"""
+     with gr.Blocks(title="Medical Image Analyzer - Simple Demo") as demo:
+         gr.Markdown("# πŸ₯ Medical Image Analyzer")
+ 
+         analyzer = MedicalImageAnalyzer(
+             label="Upload a medical image (DICOM, PNG, JPG)",
+             analysis_mode="visual",  # Visual mode for human-readable output
+             elem_id="analyzer"
+         )
+ 
+         # Auto-analyze on upload
+         @analyzer.upload
+         def auto_analyze(file_data):
+             # The component handles the analysis internally
+             return file_data
+ 
+     return demo
+ 
+ 
+ if __name__ == "__main__":
+     # You can switch between demos
+     # demo = create_demo()  # Full demo with configuration
+     demo = create_simple_demo()  # Simple demo
+ 
+     demo.launch()
css.css ADDED
@@ -0,0 +1,157 @@
+ html {
+   font-family: Inter;
+   font-size: 16px;
+   font-weight: 400;
+   line-height: 1.5;
+   -webkit-text-size-adjust: 100%;
+   background: #fff;
+   color: #323232;
+   -webkit-font-smoothing: antialiased;
+   -moz-osx-font-smoothing: grayscale;
+   text-rendering: optimizeLegibility;
+ }
+ 
+ :root {
+   --space: 1;
+   --vspace: calc(var(--space) * 1rem);
+   --vspace-0: calc(3 * var(--space) * 1rem);
+   --vspace-1: calc(2 * var(--space) * 1rem);
+   --vspace-2: calc(1.5 * var(--space) * 1rem);
+   --vspace-3: calc(0.5 * var(--space) * 1rem);
+ }
+ 
+ .app {
+   max-width: 748px !important;
+ }
+ 
+ .prose p {
+   margin: var(--vspace) 0;
+   line-height: calc(var(--vspace) * 2);
+   font-size: 1rem;
+ }
+ 
+ code {
+   font-family: "Inconsolata", monospace;
+   font-size: 16px;
+ }
+ 
+ h1,
+ h1 code {
+   font-weight: 400;
+   line-height: calc(2.5 / var(--space) * var(--vspace));
+ }
+ 
+ h1 code {
+   background: none;
+   border: none;
+   letter-spacing: 0.05em;
+   padding-bottom: 5px;
+   position: relative;
+   padding: 0;
+ }
+ 
+ h2 {
+   margin: var(--vspace-1) 0 var(--vspace-2) 0;
+   line-height: 1em;
+ }
+ 
+ h3,
+ h3 code {
+   margin: var(--vspace-1) 0 var(--vspace-2) 0;
+   line-height: 1em;
+ }
+ 
+ h4,
+ h5,
+ h6 {
+   margin: var(--vspace-3) 0 var(--vspace-3) 0;
+   line-height: var(--vspace);
+ }
+ 
+ .bigtitle,
+ h1,
+ h1 code {
+   font-size: calc(8px * 4.5);
+   word-break: break-word;
+ }
+ 
+ .title,
+ h2,
+ h2 code {
+   font-size: calc(8px * 3.375);
+   font-weight: lighter;
+   word-break: break-word;
+   border: none;
+   background: none;
+ }
+ 
+ .subheading1,
+ h3,
+ h3 code {
+   font-size: calc(8px * 1.8);
+   font-weight: 600;
+   border: none;
+   background: none;
+   letter-spacing: 0.1em;
+   text-transform: uppercase;
+ }
+ 
+ h2 code {
+   padding: 0;
+   position: relative;
+   letter-spacing: 0.05em;
+ }
+ 
+ blockquote {
+   font-size: calc(8px * 1.1667);
+   font-style: italic;
+   line-height: calc(1.1667 * var(--vspace));
+   margin: var(--vspace-2) var(--vspace-2);
+ }
+ 
+ .subheading2,
+ h4 {
+   font-size: calc(8px * 1.4292);
+   text-transform: uppercase;
+   font-weight: 600;
+ }
+ 
+ .subheading3,
+ h5 {
+   font-size: calc(8px * 1.2917);
+   line-height: calc(1.2917 * var(--vspace));
+   font-weight: lighter;
+   text-transform: uppercase;
+   letter-spacing: 0.15em;
+ }
+ 
+ h6 {
+   font-size: 1.1667em;
+   font-weight: normal;
+   font-style: italic;
+   font-family: "le-monde-livre-classic-byol", serif !important;
+   letter-spacing: 0px !important;
+ }
+ 
+ #start .md > *:first-child {
+   margin-top: 0;
+ }
+ 
+ h2 + h3 {
+   margin-top: 0;
+ }
+ 
+ .md hr {
+   border: none;
+   border-top: 1px solid var(--block-border-color);
+   margin: var(--vspace-2) 0 var(--vspace-2) 0;
+ }
+ 
+ .prose ul {
+   margin: var(--vspace-2) 0 var(--vspace-1) 0;
+ }
+ 
+ .gap {
+   gap: 0;
+ }
space.py ADDED
@@ -0,0 +1,813 @@
+ 
+ import gradio as gr
+ from app import demo as app
+ import os
+ 
+ _docs = {'MedicalImageAnalyzer': {'description': 'A Gradio component for AI-agent compatible medical image analysis.\n\nProvides structured output for:\n- HU value analysis (CT only)\n- Tissue classification\n- Fat segmentation (subcutaneous, visceral)\n- Confidence scores and reasoning', 'members': {'__init__': {'value': {'type': 'typing.Optional[typing.Dict[str, typing.Any]][\n typing.Dict[str, typing.Any][str, typing.Any], None\n]', 'default': 'None', 'description': None}, 'label': {'type': 'typing.Optional[str][str, None]', 'default': 'None', 'description': None}, 'info': {'type': 'typing.Optional[str][str, None]', 'default': 'None', 'description': None}, 'every': {'type': 'typing.Optional[float][float, None]', 'default': 'None', 'description': None}, 'show_label': {'type': 'typing.Optional[bool][bool, None]', 'default': 'None', 'description': None}, 'container': {'type': 'typing.Optional[bool][bool, None]', 'default': 'None', 'description': None}, 'scale': {'type': 'typing.Optional[int][int, None]', 'default': 'None', 'description': None}, 'min_width': {'type': 'typing.Optional[int][int, None]', 'default': 'None', 'description': None}, 'visible': {'type': 'typing.Optional[bool][bool, None]', 'default': 'None', 'description': None}, 'elem_id': {'type': 'typing.Optional[str][str, None]', 'default': 'None', 'description': None}, 'elem_classes': {'type': 'typing.Union[typing.List[str], str, NoneType][\n typing.List[str][str], str, None\n]', 'default': 'None', 'description': None}, 'render': {'type': 'typing.Optional[bool][bool, None]', 'default': 'None', 'description': None}, 'key': {'type': 'typing.Union[int, str, NoneType][int, str, None]', 'default': 'None', 'description': None}, 'analysis_mode': {'type': 'str', 'default': '"structured"', 'description': '"structured" for AI agents, "visual" for human interpretation'}, 'include_confidence': {'type': 'bool', 'default': 'True', 'description': 'Include confidence scores in results'}, 'include_reasoning': {'type': 'bool', 'default': 'True', 'description': 'Include reasoning/explanation for findings'}, 'segmentation_types': {'type': 'typing.List[str][str]', 'default': 'None', 'description': 'List of segmentation types to perform'}}, 'postprocess': {'value': {'type': 'typing.Dict[str, typing.Any][str, typing.Any]', 'description': None}}, 'preprocess': {'return': {'type': 'typing.Dict[str, typing.Any][str, typing.Any]', 'description': None}, 'value': None}}, 'events': {'change': {'type': None, 'default': None, 'description': 'Triggered when the value of the MedicalImageAnalyzer changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input.'}, 'select': {'type': None, 'default': None, 'description': 'Event listener for when the user selects or deselects the MedicalImageAnalyzer. Uses event data gradio.SelectData to carry `value` referring to the label of the MedicalImageAnalyzer, and `selected` to refer to state of the MedicalImageAnalyzer. See EventData documentation on how to use this event data'}, 'upload': {'type': None, 'default': None, 'description': 'This listener is triggered when the user uploads a file into the MedicalImageAnalyzer.'}, 'clear': {'type': None, 'default': None, 'description': 'This listener is triggered when the user clears the MedicalImageAnalyzer using the clear button for the component.'}}}, '__meta__': {'additional_interfaces': {}, 'user_fn_refs': {'MedicalImageAnalyzer': []}}}
+ 
+ abs_path = os.path.join(os.path.dirname(__file__), "css.css")
+ 
+ with gr.Blocks(
+     css=abs_path,
+     theme=gr.themes.Default(
+         font_mono=[
+             gr.themes.GoogleFont("Inconsolata"),
+             "monospace",
+         ],
+     ),
+ ) as demo:
+     gr.Markdown(
+ """
+ # `gradio_medical_image_analyzer`
+ 
+ <div style="display: flex; gap: 7px;">
+ <img alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.1%20-%20orange"> <a href="https://github.com/yourusername/gradio-medical-image-analyzer/issues" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/Issues-white?logo=github&logoColor=black"></a>
+ </div>
+ 
+ AI-agent optimized medical image analysis component for Gradio
+ """, elem_classes=["md-custom"], header_links=True)
+     app.render()
+     gr.Markdown(
+ """
+ ## Installation
+ 
+ ```bash
+ pip install gradio_medical_image_analyzer
+ ```
+ 
+ ## Usage
+ 
+ ```python
+ #!/usr/bin/env python3
+ \"\"\"
+ Demo for MedicalImageAnalyzer - Enhanced with file upload and overlay visualization
+ \"\"\"
+ 
+ import gradio as gr
+ import numpy as np
+ import sys
+ import os
+ import cv2
+ from pathlib import Path
+ 
+ # Add backend to path
+ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'backend'))
+ 
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
+ 
+ def draw_roi_on_image(image, roi_x, roi_y, roi_radius):
+     \"\"\"Draw ROI circle on the image\"\"\"
+     # Convert to RGB if grayscale
+     if len(image.shape) == 2:
+         image_rgb = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
+     else:
+         image_rgb = image.copy()
+ 
+     # Draw ROI circle
+     center = (int(roi_x), int(roi_y))
+     radius = int(roi_radius)
+ 
+     # Draw outer circle (white)
+     cv2.circle(image_rgb, center, radius, (255, 255, 255), 2)
+     # Draw inner circle (red)
+     cv2.circle(image_rgb, center, radius-1, (255, 0, 0), 2)
+     # Draw center cross
+     cv2.line(image_rgb, (center[0]-5, center[1]), (center[0]+5, center[1]), (255, 0, 0), 2)
+     cv2.line(image_rgb, (center[0], center[1]-5), (center[0], center[1]+5), (255, 0, 0), 2)
+ 
+     return image_rgb
+ 
+ def create_fat_overlay(base_image, segmentation_results):
+     \"\"\"Create overlay image with fat segmentation highlighted\"\"\"
+     # Convert to RGB
+     if len(base_image.shape) == 2:
+         overlay_img = cv2.cvtColor(base_image, cv2.COLOR_GRAY2RGB)
+     else:
+         overlay_img = base_image.copy()
+ 
+     # Check if we have segmentation masks
+     if not segmentation_results or 'segments' not in segmentation_results:
+         return overlay_img
+ 
+     segments = segmentation_results.get('segments', {})
+ 
+     # Apply subcutaneous fat overlay (yellow)
+     if 'subcutaneous' in segments and segments['subcutaneous'].get('mask') is not None:
+         mask = segments['subcutaneous']['mask']
+         yellow_overlay = np.zeros_like(overlay_img)
+         yellow_overlay[mask > 0] = [255, 255, 0]  # Yellow
+         overlay_img = cv2.addWeighted(overlay_img, 0.7, yellow_overlay, 0.3, 0)
+ 
+     # Apply visceral fat overlay (red)
+     if 'visceral' in segments and segments['visceral'].get('mask') is not None:
+         mask = segments['visceral']['mask']
+         red_overlay = np.zeros_like(overlay_img)
+         red_overlay[mask > 0] = [255, 0, 0]  # Red
+         overlay_img = cv2.addWeighted(overlay_img, 0.7, red_overlay, 0.3, 0)
+ 
+     # Add legend
+     cv2.putText(overlay_img, "Yellow: Subcutaneous Fat", (10, 30),
+                 cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
+     cv2.putText(overlay_img, "Red: Visceral Fat", (10, 60),
+                 cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
+ 
+     return overlay_img
+ 
+ def process_and_analyze(file_obj, modality, task, roi_x, roi_y, roi_radius, symptoms, show_overlay=False):
+     \"\"\"
+     Processes uploaded file and performs analysis
+     \"\"\"
+     if file_obj is None:
+         return None, "No file selected", None, {}, None
+ 
+     # Create analyzer instance
+     analyzer = MedicalImageAnalyzer(
+         analysis_mode="structured",
+         include_confidence=True,
+         include_reasoning=True
+     )
+ 
+     try:
+         # Process the file (DICOM or image)
+         file_path = file_obj.name if hasattr(file_obj, 'name') else str(file_obj)
+         pixel_array, display_array, metadata = analyzer.process_file(file_path)
+ 
+         # Update modality from file metadata if it's a DICOM
+         if metadata.get('file_type') == 'DICOM' and 'modality' in metadata:
+             modality = metadata['modality']
+ 
+         # Prepare analysis parameters
+         analysis_params = {
+             "image": pixel_array,
+             "modality": modality,
+             "task": task
+         }
+ 
+         # Add ROI if applicable
+         if task in ["analyze_point", "full_analysis"]:
+             # Scale ROI coordinates to image size
+             h, w = pixel_array.shape
+             roi_x_scaled = int(roi_x * w / 512)  # Assuming slider max is 512
+             roi_y_scaled = int(roi_y * h / 512)
+ 
+             analysis_params["roi"] = {
+                 "x": roi_x_scaled,
+                 "y": roi_y_scaled,
+                 "radius": roi_radius
+             }
+ 
+         # Add clinical context
+         if symptoms:
+             analysis_params["clinical_context"] = {"symptoms": symptoms}
+ 
+         # Perform analysis
+         results = analyzer.analyze_image(**analysis_params)
+ 
+         # Create visual report
+         visual_report = create_visual_report(results, metadata)
+ 
+         # Add metadata info
+         info = f"πŸ“„ {metadata.get('file_type', 'Unknown')} | "
+         info += f"πŸ₯ {modality} | "
+         info += f"πŸ“ {metadata.get('shape', 'Unknown')}"
+ 
+         if metadata.get('window_center'):
+             info += f" | Window C:{metadata['window_center']:.0f} W:{metadata['window_width']:.0f}"
+ 
+         # Create overlay image if requested
+         overlay_image = None
+         if show_overlay:
+             # For ROI visualization
+             if task in ["analyze_point", "full_analysis"] and roi_x and roi_y:
+                 overlay_image = draw_roi_on_image(display_array.copy(), roi_x_scaled, roi_y_scaled, roi_radius)
+ 
+             # For fat segmentation overlay (simplified version since we don't have masks in current implementation)
+             elif task == "segment_fat" and 'segmentation' in results and modality == 'CT':
+                 # For now, just draw ROI since we don't have actual masks
+                 overlay_image = display_array.copy()
+                 if len(overlay_image.shape) == 2:
+                     overlay_image = cv2.cvtColor(overlay_image, cv2.COLOR_GRAY2RGB)
+                 # Add text overlay about fat percentages
+                 if 'statistics' in results['segmentation']:
+                     stats = results['segmentation']['statistics']
+                     cv2.putText(overlay_image, f"Total Fat: {stats.get('total_fat_percentage', 0):.1f}%",
+                                 (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
+                     cv2.putText(overlay_image, f"Subcutaneous: {stats.get('subcutaneous_fat_percentage', 0):.1f}%",
+                                 (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
+                     cv2.putText(overlay_image, f"Visceral: {stats.get('visceral_fat_percentage', 0):.1f}%",
+                                 (10, 90), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
+ 
+         return display_array, info, visual_report, results, overlay_image
+ 
+     except Exception as e:
+         error_msg = f"Error: {str(e)}"
+         return None, error_msg, f"<div style='color: red;'>❌ {error_msg}</div>", {"error": error_msg}, None
+ 
+ def create_visual_report(results, metadata):
+     \"\"\"Creates a visual HTML report with improved styling\"\"\"
+     html = f\"\"\"
+     <div class='medical-report' style='font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
+                 padding: 24px;
+                 background: #ffffff;
+                 border-radius: 12px;
+                 max-width: 100%;
+                 box-shadow: 0 2px 8px rgba(0,0,0,0.1);
+                 color: #1a1a1a !important;'>
+ 
+         <h2 style='color: #1e40af !important;
+                    border-bottom: 3px solid #3b82f6;
+                    padding-bottom: 12px;
+                    margin-bottom: 20px;
+                    font-size: 24px;
+                    font-weight: 600;'>
+             πŸ₯ Medical Image Analysis Report
+         </h2>
+ 
+         <div style='background: #f0f9ff;
+                     padding: 20px;
+                     margin: 16px 0;
+                     border-radius: 8px;
+                     box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
+             <h3 style='color: #1e3a8a !important;
+                        font-size: 18px;
+                        font-weight: 600;
+                        margin-bottom: 12px;'>
+                 πŸ“‹ Metadata
+             </h3>
+             <table style='width: 100%; border-collapse: collapse;'>
+                 <tr>
+                     <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>File Type:</strong></td>
+                     <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('file_type', 'Unknown')}</td>
+                 </tr>
+                 <tr>
+                     <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Modality:</strong></td>
+                     <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('modality', 'Unknown')}</td>
+                 </tr>
+                 <tr>
+                     <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Image Size:</strong></td>
+                     <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('shape', 'Unknown')}</td>
+                 </tr>
+                 <tr>
+                     <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Timestamp:</strong></td>
+                     <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('timestamp', 'N/A')}</td>
+                 </tr>
+             </table>
+         </div>
+     \"\"\"
+ 
+     # Point Analysis
+     if 'point_analysis' in results:
+         pa = results['point_analysis']
+         tissue = pa.get('tissue_type', {})
+ 
+         html += f\"\"\"
+         <div style='background: #f0f9ff;
+                     padding: 20px;
+                     margin: 16px 0;
+                     border-radius: 8px;
+                     box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
+             <h3 style='color: #1e3a8a !important;
+                        font-size: 18px;
+                        font-weight: 600;
+                        margin-bottom: 12px;'>
+                 🎯 Point Analysis
+             </h3>
+             <table style='width: 100%; border-collapse: collapse;'>
+                 <tr>
+                     <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>Position:</strong></td>
+                     <td style='padding: 8px 0; color: #1f2937 !important;'>({pa.get('location', {}).get('x', 'N/A')}, {pa.get('location', {}).get('y', 'N/A')})</td>
+                 </tr>
+         \"\"\"
+ 
+         if results.get('modality') == 'CT':
+             html += f\"\"\"
+                 <tr>
+                     <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>HU Value:</strong></td>
+                     <td style='padding: 8px 0; color: #1f2937 !important; font-weight: 500;'>{pa.get('hu_value', 'N/A'):.1f}</td>
+                 </tr>
+             \"\"\"
+         else:
+             html += f\"\"\"
+                 <tr>
+                     <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Intensity:</strong></td>
+                     <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('intensity', 'N/A'):.3f}</td>
+                 </tr>
+             \"\"\"
+ 
+         html += f\"\"\"
+                 <tr>
+                     <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Tissue Type:</strong></td>
+                     <td style='padding: 8px 0; color: #1f2937 !important;'>
+                         <span style='font-size: 1.3em; vertical-align: middle;'>{tissue.get('icon', '')}</span>
+                         <span style='font-weight: 500; text-transform: capitalize;'>{tissue.get('type', 'Unknown').replace('_', ' ')}</span>
+                     </td>
+                 </tr>
+                 <tr>
+                     <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Confidence:</strong></td>
+                     <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('confidence', 'N/A')}</td>
+                 </tr>
+             </table>
+         \"\"\"
+ 
+         if 'reasoning' in pa:
+             html += f\"\"\"
+             <div style='margin-top: 12px;
+                         padding: 12px;
+                         background: #dbeafe;
+                         border-left: 3px solid #3b82f6;
+                         border-radius: 4px;'>
+                 <p style='margin: 0; color: #1e40af !important; font-style: italic;'>
+                     πŸ’­ {pa['reasoning']}
+                 </p>
+             </div>
+             \"\"\"
+ 
+         html += "</div>"
+ 
+     # Segmentation Results
+     if 'segmentation' in results and results['segmentation']:
+         seg = results['segmentation']
+ 
+         if 'statistics' in seg:
+             # Fat segmentation for CT
+             stats = seg['statistics']
+             html += f\"\"\"
+             <div style='background: #f0f9ff;
+                         padding: 20px;
+                         margin: 16px 0;
+                         border-radius: 8px;
+                         box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
+                 <h3 style='color: #1e3a8a !important;
+                            font-size: 18px;
+                            font-weight: 600;
+                            margin-bottom: 12px;'>
+                     πŸ”¬ Fat Segmentation Analysis
+                 </h3>
+                 <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 16px;'>
+                     <div style='padding: 16px; background: #ffffff; border-radius: 6px; border: 1px solid #e5e7eb;'>
+                         <h4 style='color: #6b7280 !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Total Fat</h4>
+                         <p style='color: #1f2937 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('total_fat_percentage', 0):.1f}%</p>
+                     </div>
+                     <div style='padding: 16px; background: #fffbeb; border-radius: 6px; border: 1px solid #fbbf24;'>
+                         <h4 style='color: #92400e !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Subcutaneous</h4>
+                         <p style='color: #d97706 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('subcutaneous_fat_percentage', 0):.1f}%</p>
+                     </div>
+                     <div style='padding: 16px; background: #fef2f2; border-radius: 6px; border: 1px solid #fca5a5;'>
+                         <h4 style='color: #991b1b !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Visceral</h4>
+                         <p style='color: #dc2626 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_fat_percentage', 0):.1f}%</p>
+                     </div>
+                     <div style='padding: 16px; background: #eff6ff; border-radius: 6px; border: 1px solid #93c5fd;'>
+                         <h4 style='color: #1e3a8a !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>V/S Ratio</h4>
+                         <p style='color: #1e40af !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_subcutaneous_ratio', 0):.2f}</p>
+                     </div>
+                 </div>
+             \"\"\"
+ 
+             if 'interpretation' in seg:
+                 interp = seg['interpretation']
+                 obesity_color = "#16a34a" if interp.get("obesity_risk") == "normal" else "#d97706" if interp.get("obesity_risk") == "moderate" else "#dc2626"
+                 visceral_color = "#16a34a" if interp.get("visceral_risk") == "normal" else "#d97706" if interp.get("visceral_risk") == "moderate" else "#dc2626"
+ 
+                 html += f\"\"\"
+                 <div style='margin-top: 16px; padding: 16px; background: #f3f4f6; border-radius: 6px;'>
+                     <h4 style='color: #374151 !important; font-size: 16px; font-weight: 600; margin-bottom: 8px;'>Risk Assessment</h4>
+                     <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 12px;'>
+                         <div>
+                             <span style='color: #6b7280 !important; font-size: 14px;'>Obesity Risk:</span>
+                             <span style='color: {obesity_color} !important; font-weight: 600; margin-left: 8px;'>{interp.get('obesity_risk', 'N/A').upper()}</span>
+                         </div>
+                         <div>
+                             <span style='color: #6b7280 !important; font-size: 14px;'>Visceral Risk:</span>
+                             <span style='color: {visceral_color} !important; font-weight: 600; margin-left: 8px;'>{interp.get('visceral_risk', 'N/A').upper()}</span>
+                         </div>
+                     </div>
+                 \"\"\"
+ 
+                 if interp.get('recommendations'):
+                     html += \"\"\"
+                     <div style='margin-top: 12px; padding-top: 12px; border-top: 1px solid #e5e7eb;'>
+                         <h5 style='color: #374151 !important; font-size: 14px; font-weight: 600; margin-bottom: 8px;'>πŸ’‘ Recommendations</h5>
+                         <ul style='margin: 0; padding-left: 20px; color: #4b5563 !important;'>
+                     \"\"\"
+                     for rec in interp['recommendations']:
+                         html += f"<li style='margin: 4px 0;'>{rec}</li>"
+                     html += "</ul></div>"
+ 
+                 html += "</div>"
+             html += "</div>"
+ 
+     # Quality Assessment
+     if 'quality_metrics' in results:
+         quality = results['quality_metrics']
+         quality_colors = {
+             'excellent': '#16a34a',
+             'good': '#16a34a',
+             'fair': '#d97706',
+             'poor': '#dc2626',
+             'unknown': '#6b7280'
+         }
+         q_color = quality_colors.get(quality.get('overall_quality', 'unknown'), '#6b7280')
+ 
+         html += f\"\"\"
+         <div style='background: #f0f9ff;
+                     padding: 20px;
+                     margin: 16px 0;
+                     border-radius: 8px;
+                     box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
+             <h3 style='color: #1e3a8a !important;
+                        font-size: 18px;
+                        font-weight: 600;
+                        margin-bottom: 12px;'>
+                 πŸ“Š Image Quality Assessment
+             </h3>
+             <div style='display: flex; align-items: center; gap: 16px;'>
+                 <div>
+                     <span style='color: #4b5563 !important; font-size: 14px;'>Overall Quality:</span>
+                     <span style='color: {q_color} !important;
+                                  font-size: 18px;
+                                  font-weight: 700;
+                                  margin-left: 8px;'>
+                         {quality.get('overall_quality', 'unknown').upper()}
+                     </span>
+                 </div>
+             </div>
+         \"\"\"
+ 
+         if quality.get('issues'):
+             html += f\"\"\"
+             <div style='margin-top: 12px;
+                         padding: 12px;
+                         background: #fef3c7;
+                         border-left: 3px solid #f59e0b;
+                         border-radius: 4px;'>
+                 <strong style='color: #92400e !important;'>Issues Detected:</strong>
+                 <ul style='margin: 4px 0 0 0; padding-left: 20px; color: #92400e !important;'>
+             \"\"\"
+             for issue in quality['issues']:
+                 html += f"<li style='margin: 2px 0;'>{issue}</li>"
+             html += "</ul></div>"
+ 
+         html += "</div>"
+ 
+     html += "</div>"
+     return html
+ 
+ def create_demo():
+     with gr.Blocks(
+         title="Medical Image Analyzer - Enhanced Demo",
+         theme=gr.themes.Soft(
+             primary_hue="blue",
+             secondary_hue="blue",
+             neutral_hue="slate",
+             text_size="md",
+             spacing_size="md",
+             radius_size="md",
+         ).set(
+             # Medical blue theme colors
+             body_background_fill="*neutral_950",
+             body_background_fill_dark="*neutral_950",
+             block_background_fill="*neutral_900",
+             block_background_fill_dark="*neutral_900",
+             border_color_primary="*primary_600",
+             border_color_primary_dark="*primary_600",
+             # Text colors for better contrast
+             body_text_color="*neutral_100",
+             body_text_color_dark="*neutral_100",
+             body_text_color_subdued="*neutral_300",
+             body_text_color_subdued_dark="*neutral_300",
+             # Button colors
+             button_primary_background_fill="*primary_600",
+             button_primary_background_fill_dark="*primary_600",
+             button_primary_text_color="white",
+             button_primary_text_color_dark="white",
+         ),
+         css=\"\"\"
+         /* Medical blue theme with high contrast */
+         :root {
+             --medical-blue: #1e40af;
+             --medical-blue-light: #3b82f6;
+             --medical-blue-dark: #1e3a8a;
+             --text-primary: #f9fafb;
+             --text-secondary: #e5e7eb;
+             --bg-primary: #0f172a;
+             --bg-secondary: #1e293b;
+             --bg-tertiary: #334155;
+         }
+ 
+         /* Override default text colors for medical theme */
+         * {
+             color: var(--text-primary) !important;
+         }
+ 
+         /* Style the file upload area */
+         .file-upload {
+             border: 2px dashed var(--medical-blue-light) !important;
+             border-radius: 8px !important;
+             padding: 20px !important;
+             text-align: center !important;
+             background: var(--bg-secondary) !important;
+             transition: all 0.3s ease !important;
+             color: var(--text-primary) !important;
+         }
+ 
+         .file-upload:hover {
+             border-color: var(--medical-blue) !important;
+             background: var(--bg-tertiary) !important;
+             box-shadow: 0 0 20px rgba(59, 130, 246, 0.2) !important;
+         }
+ 
+         /* Ensure report text is readable with white background */
+         .medical-report {
+             background: #ffffff !important;
+             border: 2px solid var(--medical-blue-light) !important;
+             border-radius: 8px !important;
+             padding: 16px !important;
+             color: #1a1a1a !important;
+         }
+ 
+         .medical-report * {
+             color: #1f2937 !important; /* Dark gray text */
+         }
+ 
+         .medical-report h2 {
+             color: #1e40af !important; /* Medical blue for main heading */
+         }
+ 
+         .medical-report h3, .medical-report h4 {
+             color: #1e3a8a !important; /* Darker medical blue for subheadings */
+         }
+ 
+         .medical-report strong {
+             color: #374151 !important; /* Darker gray for labels */
+         }
+ 
+         .medical-report td {
+             color: #1f2937 !important; /* Ensure table text is dark */
+         }
+ 
+         /* Report sections with light blue background */
+         .medical-report > div {
+             background: #f0f9ff !important;
+             color: #1f2937 !important;
+         }
+ 
+         /* Medical blue accents for UI elements */
+         .gr-button-primary {
+             background: var(--medical-blue) !important;
+             border-color: var(--medical-blue) !important;
+         }
+ 
+         .gr-button-primary:hover {
+             background: var(--medical-blue-dark) !important;
+             border-color: var(--medical-blue-dark) !important;
+         }
+ 
+         /* Tab styling */
+         .gr-tab-item {
+             border-color: var(--medical-blue-light) !important;
+         }
+ 
+         .gr-tab-item.selected {
+             background: var(--medical-blue) !important;
+             color: white !important;
+         }
+ 
+         /* Accordion styling */
+         .gr-accordion {
+             border-color: var(--medical-blue-light) !important;
+         }
+ 
+         /* Slider track in medical blue */
+         input[type="range"]::-webkit-slider-track {
+             background: var(--bg-tertiary) !important;
+         }
+ 
+         input[type="range"]::-webkit-slider-thumb {
+             background: var(--medical-blue) !important;
+         }
+         \"\"\"
+     ) as demo:
+         gr.Markdown(\"\"\"
+         # πŸ₯ Medical Image Analyzer
+ 
+         Supports **DICOM** (.dcm) and all image formats with automatic modality detection!
+         \"\"\")
+ 
+         with gr.Row():
+             with gr.Column(scale=1):
+                 # File upload - no file type restrictions
+                 with gr.Group():
+                     gr.Markdown("### πŸ“€ Upload Medical Image")
+                     file_input = gr.File(
+                         label="Select Medical Image File (.dcm, .dicom, IM_*, .png, .jpg, etc.)",
+                         file_count="single",
+                         type="filepath",
+                         elem_classes="file-upload"
+                         # Note: NO file_types parameter = accepts ALL files
+                     )
+                     gr.Markdown(\"\"\"
+                     <small style='color: #666;'>
+                     Accepts: DICOM (.dcm, .dicom), Images (.png, .jpg, .jpeg, .tiff, .bmp),
+                     and files without extensions (e.g., IM_0001, IM_0002, etc.)
+                     </small>
+                     \"\"\")
+ 
+                 # Modality selection
+                 modality = gr.Radio(
+                     choices=["CT", "CR", "DX", "RX", "DR"],
+                     value="CT",
+                     label="Modality",
+                     info="Will be auto-detected for DICOM files"
+                 )
+ 
+                 # Task selection
+                 task = gr.Dropdown(
+                     choices=[
+                         ("🎯 Point Analysis", "analyze_point"),
+                         ("πŸ”¬ Fat Segmentation (CT only)", "segment_fat"),
+                         ("πŸ“Š Full Analysis", "full_analysis")
+                     ],
+                     value="full_analysis",
+                     label="Analysis Task"
+                 )
+ 
+                 # ROI settings
+                 with gr.Accordion("🎯 Region of Interest (ROI)", open=True):
+                     roi_x = gr.Slider(0, 512, 256, label="X Position", step=1)
+                     roi_y = gr.Slider(0, 512, 256, label="Y Position", step=1)
+                     roi_radius = gr.Slider(5, 50, 10, label="Radius", step=1)
+ 
+                 # Clinical context
+                 with gr.Accordion("πŸ₯ Clinical Context", open=False):
+                     symptoms = gr.CheckboxGroup(
+                         choices=[
+                             "dyspnea", "chest_pain", "abdominal_pain",
+                             "trauma", "obesity_screening", "routine_check"
+                         ],
+                         label="Symptoms/Indication"
+                     )
+ 
+                 # Visualization options
+                 with gr.Accordion("🎨 Visualization Options", open=True):
+                     show_overlay = gr.Checkbox(
+                         label="Show ROI/Segmentation Overlay",
+                         value=True,
+                         info="Display ROI circle or fat segmentation info on the image"
+                     )
+ 
+                 analyze_btn = gr.Button("πŸ”¬ Analyze", variant="primary", size="lg")
+ 
+             with gr.Column(scale=2):
+                 # Results with tabs for different views
+                 with gr.Tab("πŸ–ΌοΈ Original Image"):
+                     image_display = gr.Image(label="Medical Image", type="numpy")
+ 
+                 with gr.Tab("🎯 Overlay View"):
+                     overlay_display = gr.Image(label="Image with Overlay", type="numpy")
+ 
+                 file_info = gr.Textbox(label="File Information", lines=1)
+ 
+                 with gr.Tab("πŸ“Š Visual Report"):
+                     report_html = gr.HTML()
+ 
+                 with gr.Tab("πŸ”§ JSON Output"):
+                     json_output = gr.JSON(label="Structured Data for AI Agents")
+ 
+         # Examples and help
+         with gr.Row():
+             gr.Markdown(\"\"\"
+             ### πŸ“ Supported Formats
+             - **DICOM**: Automatic HU value extraction and modality detection
+             - **PNG/JPG**: Interpreted based on selected modality
+             - **All Formats**: Automatic grayscale conversion
+             - **Files without extension**: Supported (e.g., IM_0001) - will try DICOM first
+ 
+             ### 🎯 Usage
+             1. Upload a medical image file
+             2. Select modality (auto-detected for DICOM)
+             3. Choose analysis task
+             4. Adjust ROI position for point analysis
+             5. Click "Analyze"
+ 
+             ### πŸ’‘ Features
+             - **ROI Visualization**: See the exact area being analyzed
+             - **Fat Segmentation**: Visual percentages for CT scans
+             - **Multi-format Support**: Works with any medical image format
+             - **AI Agent Ready**: Structured JSON output for integration
+             \"\"\")
+ 
+         # Connect the interface
+         analyze_btn.click(
+             fn=process_and_analyze,
+             inputs=[file_input, modality, task, roi_x, roi_y, roi_radius, symptoms, show_overlay],
+             outputs=[image_display, file_info, report_html, json_output, overlay_display]
+         )
+ 
+         # Auto-update ROI limits when image is loaded
+         def update_roi_on_upload(file_obj):
+             if file_obj is None:
+                 return gr.update(), gr.update()
+ 
+             try:
+                 analyzer = MedicalImageAnalyzer()
+                 _, _, metadata = analyzer.process_file(file_obj.name if hasattr(file_obj, 'name') else str(file_obj))
+ 
+                 if 'shape' in metadata:
+                     h, w = metadata['shape']
+                     return gr.update(maximum=w-1, value=w//2), gr.update(maximum=h-1, value=h//2)
+             except Exception:
+                 pass
+ 
+             return gr.update(), gr.update()
+ 
+         file_input.change(
+             fn=update_roi_on_upload,
+             inputs=[file_input],
+             outputs=[roi_x, roi_y]
+         )
+ 
+     return demo
+ 
+ if __name__ == "__main__":
+     demo = create_demo()
+     demo.launch()
+ ```
+ """, elem_classes=["md-custom"], header_links=True)
+ 
+ 
+     gr.Markdown("""
+ ## `MedicalImageAnalyzer`
+ 
+ ### Initialization
+ """, elem_classes=["md-custom"], header_links=True)
+ 
+     gr.ParamViewer(value=_docs["MedicalImageAnalyzer"]["members"]["__init__"], linkify=[])
+ 
+ 
+     gr.Markdown("### Events")
+     gr.ParamViewer(value=_docs["MedicalImageAnalyzer"]["events"], linkify=['Event'])
+ 
+ 
+     gr.Markdown("""
+ 
+ ### User function
+ 
+ The impact on the user's predict function varies depending on whether the component is used as an input or output for an event (or both).
+ 
+ - When used as an input, the component only impacts the input signature of the user function.
+ - When used as an output, the component only impacts the return signature of the user function.
+ 
+ The code snippet below is accurate in cases where the component is used as both an input and an output.
+ 
+ ```python
+ def predict(
+     value: typing.Dict[str, typing.Any][str, typing.Any]
+ ) -> typing.Dict[str, typing.Any][str, typing.Any]:
+     return value
+ ```
+ """, elem_classes=["md-custom", "MedicalImageAnalyzer-user-fn"], header_links=True)
+ 
+ 
+     demo.load(None, js=r"""function() {
+     const refs = {};
+     const user_fn_refs = {
+           MedicalImageAnalyzer: [], };
+     requestAnimationFrame(() => {
+ 
+         Object.entries(user_fn_refs).forEach(([key, refs]) => {
+             if (refs.length > 0) {
+                 const el = document.querySelector(`.${key}-user-fn`);
+                 if (!el) return;
+                 refs.forEach(ref => {
+                     el.innerHTML = el.innerHTML.replace(
+                         new RegExp("\\b"+ref+"\\b", "g"),
+                         `<a href="#h-${ref.toLowerCase()}">${ref}</a>`
+                     );
+                 })
+             }
+         })
+ 
+         Object.entries(refs).forEach(([key, refs]) => {
+             if (refs.length > 0) {
+                 const el = document.querySelector(`.${key}`);
+                 if (!el) return;
+                 refs.forEach(ref => {
+                     el.innerHTML = el.innerHTML.replace(
+                         new RegExp("\\b"+ref+"\\b", "g"),
+                         `<a href="#h-${ref.toLowerCase()}">${ref}</a>`
+                     );
+                 })
+             }
+         })
+     })
+ }
+ 
+ """)
+ 
+ demo.launch()
src/.gitignore ADDED
@@ -0,0 +1,57 @@
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+ 
+ # Virtual environments
+ venv/
+ ENV/
+ env/
+ .venv
+ 
+ # IDE
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+ *~
+ 
+ # OS
+ .DS_Store
+ Thumbs.db
+ 
+ # Node
+ node_modules/
+ npm-debug.log*
+ yarn-debug.log*
+ yarn-error.log*
+ 
+ # Gradio
+ flagged/
+ gradio_cached_examples/
+ 
+ # Project specific
+ *.dcm
+ *.dicom
+ IM_*
+ test_images/
+ temp/
+ .gradio/
src/FILE_UPLOAD_IMPLEMENTATION.md ADDED
@@ -0,0 +1,99 @@
+ # File Upload Implementation for Medical Image Analyzer
+ 
+ ## Overview
+ The medical_image_analyzer now fully supports uploading files without extensions (like IM_0001), matching the vetdicomviewer implementation.
+ 
+ ## Key Implementation Details
+ 
+ ### 1. Backend (medical_image_analyzer.py)
+ ```python
+ # Line 674 in process_file method
+ ds = pydicom.dcmread(file_path, force=True)
+ ```
+ - Uses the `force=True` parameter to read any file as DICOM first
+ - Falls back to regular image processing if DICOM reading fails
+ - No filename filtering - accepts all files
+
17
+ ### 2. Frontend File Upload (wrapper_test.py & app.py)
18
+ ```python
19
+ file_input = gr.File(
20
+ label="Select Medical Image File (.dcm, .dicom, IM_*, .png, .jpg, etc.)",
21
+ file_count="single",
22
+ type="filepath",
23
+ elem_classes="file-upload"
24
+ # Note: NO file_types parameter = accepts ALL files
25
+ )
26
+ ```
27
+ - No `file_types` parameter means ALL files are accepted
28
+ - Clear labeling mentions "IM_*" files
29
+ - Custom CSS styling for better UX
30
+
31
+ ### 3. Svelte Component (Index.svelte)
32
+ ```javascript
33
+ // Always try DICOM first for files without extensions
34
+ if (!file_ext || file_ext === 'dcm' || file_ext === 'dicom' ||
35
+ file.type === 'application/dicom' || file.name.startsWith('IM_')) {
36
+ // Process as DICOM
37
+ }
38
+ ```
39
+ - Prioritizes DICOM processing for files without extensions
40
+ - Specifically checks for files starting with "IM_"
41
+
42
+ ## Features Added
43
+
44
+ ### 1. ROI Visualization
45
+ - Draw ROI circle on the image
46
+ - Visual feedback for point analysis location
47
+ - Toggle to show/hide overlay
48
+
49
+ ### 2. Fat Segmentation Overlay
50
+ - Display fat percentages on CT images
51
+ - Color-coded visualization (when masks available)
52
+ - Legend for subcutaneous vs visceral fat
53
+
54
+ ### 3. Enhanced UI
55
+ - Two-tab view: Original Image | Overlay View
56
+ - File information display with metadata
57
+ - Improved text contrast and styling
58
+ - English interface (no German text)
59
+
60
+ ## Testing
61
+ Run the test script to verify IM_0001 support:
62
+ ```bash
63
+ python test_im_files.py
64
+ ```
65
+
66
+ Output confirms:
67
+ - βœ… IM_0001 files load successfully
68
+ - βœ… DICOM metadata extracted properly
69
+ - βœ… Analysis functions work correctly
70
+ - βœ… HU values calculated for CT images
71
+
72
+ ## File Type Support
73
+ 1. **DICOM files**: .dcm, .dicom
74
+ 2. **Files without extensions**: IM_0001, IM_0002, etc.
75
+ 3. **Regular images**: .png, .jpg, .jpeg, .tiff, .bmp
76
+ 4. **Any other file**: Will attempt DICOM first, then image
77
+
78
+ ## Usage Example
79
+ ```python
80
+ # Both wrapper_test.py and app.py now support:
81
+ # 1. Upload any medical image file
82
+ # 2. Automatic modality detection for DICOM
83
+ # 3. ROI visualization on demand
84
+ # 4. Fat segmentation info overlay
85
+
86
+ # The file upload is unrestricted:
87
+ # - Accepts ALL file types
88
+ # - Uses force=True for DICOM reading
89
+ # - Graceful fallback to image processing
90
+ ```
91
+
92
+ ## Summary
93
+ The medical_image_analyzer now matches vetdicomviewer's file handling capabilities:
94
+ - βœ… Supports files without extensions (IM_0001)
95
+ - βœ… ROI visualization on images
96
+ - βœ… Fat segmentation overlay (text-based currently)
97
+ - βœ… Enhanced UI with better contrast
98
+ - βœ… English-only interface
99
+ - βœ… Synchronized app.py with wrapper_test.py features
src/README.md ADDED
@@ -0,0 +1,1026 @@
1
+
2
+ ---
3
+ title: Medical Image Analyzer Component
4
+ emoji: πŸ₯
5
+ colorFrom: blue
6
+ colorTo: green
7
+ sdk: gradio
8
+ sdk_version: 5.33.0
9
+ app_file: demo/app.py
10
+ pinned: false
11
+ license: apache-2.0
12
+ tags:
13
+ - custom-component-track
14
+ - medical-imaging
15
+ - gradio-custom-component
16
+ - hackathon-2025
17
+ - ai-agents
18
+ ---
19
+
20
+ # `gradio_medical_image_analyzer`
21
+ <img alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.1%20-%20orange"> <a href="https://github.com/markusclauss/gradio-medical-image-analyzer/issues" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/Issues-white?logo=github&logoColor=black"></a>
22
+
23
+ AI-agent optimized medical image analysis component for Gradio
24
+
25
+ ## ⚠️ IMPORTANT MEDICAL DISCLAIMER ⚠️
26
+
27
+ **THIS SOFTWARE IS FOR RESEARCH AND EDUCATIONAL PURPOSES ONLY**
28
+
29
+ 🚨 **DO NOT USE FOR CLINICAL DIAGNOSIS OR MEDICAL DECISION MAKING** 🚨
30
+
31
+ This component is in **EARLY DEVELOPMENT** and is intended as a **proof of concept** for medical image analysis integration with Gradio. The results produced by this software:
32
+
33
+ - **ARE NOT** validated for clinical use
34
+ - **ARE NOT** FDA approved or CE marked
35
+ - **SHOULD NOT** be used for patient diagnosis or treatment decisions
36
+ - **SHOULD NOT** replace professional medical judgment
37
+ - **MAY CONTAIN** significant errors or inaccuracies
38
+ - **ARE PROVIDED** without any warranty of accuracy or fitness for medical purposes
39
+
40
+ **ALWAYS CONSULT QUALIFIED HEALTHCARE PROFESSIONALS** for medical image interpretation and clinical decisions. This software is intended solely for:
41
+ - Research and development purposes
42
+ - Educational demonstrations
43
+ - Technical integration testing
44
+ - Non-clinical experimental use
45
+
46
+ By using this software, you acknowledge that you understand these limitations and agree not to use it for any clinical or medical diagnostic purposes.
47
+
48
+ ## Installation
49
+
50
+ ```bash
51
+ pip install gradio_medical_image_analyzer
52
+ ```
53
+
54
+ ## Usage
55
+
56
+ ```python
57
+ #!/usr/bin/env python3
58
+ """
59
+ Demo for MedicalImageAnalyzer - Enhanced with file upload and overlay visualization
60
+ """
61
+
62
+ import gradio as gr
63
+ import numpy as np
64
+ import sys
65
+ import os
66
+ import cv2
67
+ from pathlib import Path
68
+
69
+ # Add backend to path
70
+ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'backend'))
71
+
72
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
73
+
74
+ def draw_roi_on_image(image, roi_x, roi_y, roi_radius):
75
+ """Draw ROI circle on the image"""
76
+ # Convert to RGB if grayscale
77
+ if len(image.shape) == 2:
78
+ image_rgb = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
79
+ else:
80
+ image_rgb = image.copy()
81
+
82
+ # Draw ROI circle
83
+ center = (int(roi_x), int(roi_y))
84
+ radius = int(roi_radius)
85
+
86
+ # Draw outer circle (white)
87
+ cv2.circle(image_rgb, center, radius, (255, 255, 255), 2)
88
+ # Draw inner circle (red)
89
+ cv2.circle(image_rgb, center, radius-1, (255, 0, 0), 2)
90
+ # Draw center cross
91
+ cv2.line(image_rgb, (center[0]-5, center[1]), (center[0]+5, center[1]), (255, 0, 0), 2)
92
+ cv2.line(image_rgb, (center[0], center[1]-5), (center[0], center[1]+5), (255, 0, 0), 2)
93
+
94
+ return image_rgb
95
+
96
+ def create_fat_overlay(base_image, segmentation_results):
97
+ """Create overlay image with fat segmentation highlighted"""
98
+ # Convert to RGB
99
+ if len(base_image.shape) == 2:
100
+ overlay_img = cv2.cvtColor(base_image, cv2.COLOR_GRAY2RGB)
101
+ else:
102
+ overlay_img = base_image.copy()
103
+
104
+ # Check if we have segmentation masks
105
+ if not segmentation_results or 'segments' not in segmentation_results:
106
+ return overlay_img
107
+
108
+ segments = segmentation_results.get('segments', {})
109
+
110
+ # Apply subcutaneous fat overlay (yellow)
111
+ if 'subcutaneous' in segments and segments['subcutaneous'].get('mask') is not None:
112
+ mask = segments['subcutaneous']['mask']
113
+ yellow_overlay = np.zeros_like(overlay_img)
114
+ yellow_overlay[mask > 0] = [255, 255, 0] # Yellow
115
+ overlay_img = cv2.addWeighted(overlay_img, 0.7, yellow_overlay, 0.3, 0)
116
+
117
+ # Apply visceral fat overlay (red)
118
+ if 'visceral' in segments and segments['visceral'].get('mask') is not None:
119
+ mask = segments['visceral']['mask']
120
+ red_overlay = np.zeros_like(overlay_img)
121
+ red_overlay[mask > 0] = [255, 0, 0] # Red
122
+ overlay_img = cv2.addWeighted(overlay_img, 0.7, red_overlay, 0.3, 0)
123
+
124
+ # Add legend
125
+ cv2.putText(overlay_img, "Yellow: Subcutaneous Fat", (10, 30),
126
+ cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
127
+ cv2.putText(overlay_img, "Red: Visceral Fat", (10, 60),
128
+ cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
129
+
130
+ return overlay_img
131
+
132
+ def process_and_analyze(file_obj, modality, task, roi_x, roi_y, roi_radius, symptoms, show_overlay=False):
133
+ """
134
+ Processes uploaded file and performs analysis
135
+ """
136
+ if file_obj is None:
137
+ return None, "No file selected", None, {}, None
138
+
139
+ # Create analyzer instance
140
+ analyzer = MedicalImageAnalyzer(
141
+ analysis_mode="structured",
142
+ include_confidence=True,
143
+ include_reasoning=True
144
+ )
145
+
146
+ try:
147
+ # Process the file (DICOM or image)
148
+ file_path = file_obj.name if hasattr(file_obj, 'name') else str(file_obj)
149
+ pixel_array, display_array, metadata = analyzer.process_file(file_path)
150
+
151
+ # Update modality from file metadata if it's a DICOM
152
+ if metadata.get('file_type') == 'DICOM' and 'modality' in metadata:
153
+ modality = metadata['modality']
154
+
155
+ # Prepare analysis parameters
156
+ analysis_params = {
157
+ "image": pixel_array,
158
+ "modality": modality,
159
+ "task": task
160
+ }
161
+
162
+ # Add ROI if applicable
163
+ if task in ["analyze_point", "full_analysis"]:
164
+ # Scale ROI coordinates to image size
165
+ h, w = pixel_array.shape
166
+ roi_x_scaled = int(roi_x * w / 512) # Assuming slider max is 512
167
+ roi_y_scaled = int(roi_y * h / 512)
168
+
169
+ analysis_params["roi"] = {
170
+ "x": roi_x_scaled,
171
+ "y": roi_y_scaled,
172
+ "radius": roi_radius
173
+ }
174
+
175
+ # Add clinical context
176
+ if symptoms:
177
+ analysis_params["clinical_context"] = {"symptoms": symptoms}
178
+
179
+ # Perform analysis
180
+ results = analyzer.analyze_image(**analysis_params)
181
+
182
+ # Create visual report
183
+ visual_report = create_visual_report(results, metadata)
184
+
185
+ # Add metadata info
186
+ info = f"πŸ“„ {metadata.get('file_type', 'Unknown')} | "
187
+ info += f"πŸ₯ {modality} | "
188
+ info += f"πŸ“ {metadata.get('shape', 'Unknown')}"
189
+
190
+ if metadata.get('window_center'):
191
+ info += f" | Window C:{metadata['window_center']:.0f} W:{metadata['window_width']:.0f}"
192
+
193
+ # Create overlay image if requested
194
+ overlay_image = None
195
+ if show_overlay:
196
+ # For ROI visualization
197
+ if task in ["analyze_point", "full_analysis"] and roi_x and roi_y:
198
+ overlay_image = draw_roi_on_image(display_array.copy(), roi_x_scaled, roi_y_scaled, roi_radius)
199
+
200
+ # For fat segmentation overlay (simplified version since we don't have masks in current implementation)
201
+ elif task == "segment_fat" and 'segmentation' in results and modality == 'CT':
202
+ # For now, just draw ROI since we don't have actual masks
203
+ overlay_image = display_array.copy()
204
+ if len(overlay_image.shape) == 2:
205
+ overlay_image = cv2.cvtColor(overlay_image, cv2.COLOR_GRAY2RGB)
206
+ # Add text overlay about fat percentages
207
+ if 'statistics' in results['segmentation']:
208
+ stats = results['segmentation']['statistics']
209
+ cv2.putText(overlay_image, f"Total Fat: {stats.get('total_fat_percentage', 0):.1f}%",
210
+ (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
211
+ cv2.putText(overlay_image, f"Subcutaneous: {stats.get('subcutaneous_fat_percentage', 0):.1f}%",
212
+ (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
213
+ cv2.putText(overlay_image, f"Visceral: {stats.get('visceral_fat_percentage', 0):.1f}%",
214
+ (10, 90), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
215
+
216
+ return display_array, info, visual_report, results, overlay_image
217
+
218
+ except Exception as e:
219
+ error_msg = f"Error: {str(e)}"
220
+ return None, error_msg, f"<div style='color: red;'>❌ {error_msg}</div>", {"error": error_msg}, None
221
+
222
+ def create_visual_report(results, metadata):
223
+ """Creates a visual HTML report with improved styling"""
224
+ html = f"""
225
+ <div class='medical-report' style='font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
226
+ padding: 24px;
227
+ background: #ffffff;
228
+ border-radius: 12px;
229
+ max-width: 100%;
230
+ box-shadow: 0 2px 8px rgba(0,0,0,0.1);
231
+ color: #1a1a1a !important;'>
232
+
233
+ <h2 style='color: #1e40af !important;
234
+ border-bottom: 3px solid #3b82f6;
235
+ padding-bottom: 12px;
236
+ margin-bottom: 20px;
237
+ font-size: 24px;
238
+ font-weight: 600;'>
239
+ πŸ₯ Medical Image Analysis Report
240
+ </h2>
241
+
242
+ <div style='background: #f0f9ff;
243
+ padding: 20px;
244
+ margin: 16px 0;
245
+ border-radius: 8px;
246
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
247
+ <h3 style='color: #1e3a8a !important;
248
+ font-size: 18px;
249
+ font-weight: 600;
250
+ margin-bottom: 12px;'>
251
+ πŸ“‹ Metadata
252
+ </h3>
253
+ <table style='width: 100%; border-collapse: collapse;'>
254
+ <tr>
255
+ <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>File Type:</strong></td>
256
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('file_type', 'Unknown')}</td>
257
+ </tr>
258
+ <tr>
259
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Modality:</strong></td>
260
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('modality', 'Unknown')}</td>
261
+ </tr>
262
+ <tr>
263
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Image Size:</strong></td>
264
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('shape', 'Unknown')}</td>
265
+ </tr>
266
+ <tr>
267
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Timestamp:</strong></td>
268
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('timestamp', 'N/A')}</td>
269
+ </tr>
270
+ </table>
271
+ </div>
272
+ """
273
+
274
+ # Point Analysis
275
+ if 'point_analysis' in results:
276
+ pa = results['point_analysis']
277
+ tissue = pa.get('tissue_type', {})
278
+
279
+ html += f"""
280
+ <div style='background: #f0f9ff;
281
+ padding: 20px;
282
+ margin: 16px 0;
283
+ border-radius: 8px;
284
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
285
+ <h3 style='color: #1e3a8a !important;
286
+ font-size: 18px;
287
+ font-weight: 600;
288
+ margin-bottom: 12px;'>
289
+ 🎯 Point Analysis
290
+ </h3>
291
+ <table style='width: 100%; border-collapse: collapse;'>
292
+ <tr>
293
+ <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>Position:</strong></td>
294
+ <td style='padding: 8px 0; color: #1f2937 !important;'>({pa.get('location', {}).get('x', 'N/A')}, {pa.get('location', {}).get('y', 'N/A')})</td>
295
+ </tr>
296
+ """
297
+
298
+ if results.get('modality') == 'CT':
299
+ html += f"""
300
+ <tr>
301
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>HU Value:</strong></td>
302
+ <td style='padding: 8px 0; color: #1f2937 !important; font-weight: 500;'>{pa.get('hu_value', 'N/A'):.1f}</td>
303
+ </tr>
304
+ """
305
+ else:
306
+ html += f"""
307
+ <tr>
308
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Intensity:</strong></td>
309
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('intensity', 'N/A'):.3f}</td>
310
+ </tr>
311
+ """
312
+
313
+ html += f"""
314
+ <tr>
315
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Tissue Type:</strong></td>
316
+ <td style='padding: 8px 0; color: #1f2937 !important;'>
317
+ <span style='font-size: 1.3em; vertical-align: middle;'>{tissue.get('icon', '')}</span>
318
+ <span style='font-weight: 500; text-transform: capitalize;'>{tissue.get('type', 'Unknown').replace('_', ' ')}</span>
319
+ </td>
320
+ </tr>
321
+ <tr>
322
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Confidence:</strong></td>
323
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('confidence', 'N/A')}</td>
324
+ </tr>
325
+ </table>
326
+ """
327
+
328
+ if 'reasoning' in pa:
329
+ html += f"""
330
+ <div style='margin-top: 12px;
331
+ padding: 12px;
332
+ background: #dbeafe;
333
+ border-left: 3px solid #3b82f6;
334
+ border-radius: 4px;'>
335
+ <p style='margin: 0; color: #1e40af !important; font-style: italic;'>
336
+ πŸ’­ {pa['reasoning']}
337
+ </p>
338
+ </div>
339
+ """
340
+
341
+ html += "</div>"
342
+
343
+ # Segmentation Results
344
+ if 'segmentation' in results and results['segmentation']:
345
+ seg = results['segmentation']
346
+
347
+ if 'statistics' in seg:
348
+ # Fat segmentation for CT
349
+ stats = seg['statistics']
350
+ html += f"""
351
+ <div style='background: #f0f9ff;
352
+ padding: 20px;
353
+ margin: 16px 0;
354
+ border-radius: 8px;
355
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
356
+ <h3 style='color: #1e3a8a !important;
357
+ font-size: 18px;
358
+ font-weight: 600;
359
+ margin-bottom: 12px;'>
360
+ πŸ”¬ Fat Segmentation Analysis
361
+ </h3>
362
+ <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 16px;'>
363
+ <div style='padding: 16px; background: #ffffff; border-radius: 6px; border: 1px solid #e5e7eb;'>
364
+ <h4 style='color: #6b7280 !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Total Fat</h4>
365
+ <p style='color: #1f2937 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('total_fat_percentage', 0):.1f}%</p>
366
+ </div>
367
+ <div style='padding: 16px; background: #fffbeb; border-radius: 6px; border: 1px solid #fbbf24;'>
368
+ <h4 style='color: #92400e !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Subcutaneous</h4>
369
+ <p style='color: #d97706 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('subcutaneous_fat_percentage', 0):.1f}%</p>
370
+ </div>
371
+ <div style='padding: 16px; background: #fef2f2; border-radius: 6px; border: 1px solid #fca5a5;'>
372
+ <h4 style='color: #991b1b !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Visceral</h4>
373
+ <p style='color: #dc2626 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_fat_percentage', 0):.1f}%</p>
374
+ </div>
375
+ <div style='padding: 16px; background: #eff6ff; border-radius: 6px; border: 1px solid #93c5fd;'>
376
+ <h4 style='color: #1e3a8a !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>V/S Ratio</h4>
377
+ <p style='color: #1e40af !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_subcutaneous_ratio', 0):.2f}</p>
378
+ </div>
379
+ </div>
380
+ """
381
+
382
+ if 'interpretation' in seg:
383
+ interp = seg['interpretation']
384
+ obesity_color = "#16a34a" if interp.get("obesity_risk") == "normal" else "#d97706" if interp.get("obesity_risk") == "moderate" else "#dc2626"
385
+ visceral_color = "#16a34a" if interp.get("visceral_risk") == "normal" else "#d97706" if interp.get("visceral_risk") == "moderate" else "#dc2626"
386
+
387
+ html += f"""
388
+ <div style='margin-top: 16px; padding: 16px; background: #f3f4f6; border-radius: 6px;'>
389
+ <h4 style='color: #374151 !important; font-size: 16px; font-weight: 600; margin-bottom: 8px;'>Risk Assessment</h4>
390
+ <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 12px;'>
391
+ <div>
392
+ <span style='color: #6b7280 !important; font-size: 14px;'>Obesity Risk:</span>
393
+ <span style='color: {obesity_color} !important; font-weight: 600; margin-left: 8px;'>{interp.get('obesity_risk', 'N/A').upper()}</span>
394
+ </div>
395
+ <div>
396
+ <span style='color: #6b7280 !important; font-size: 14px;'>Visceral Risk:</span>
397
+ <span style='color: {visceral_color} !important; font-weight: 600; margin-left: 8px;'>{interp.get('visceral_risk', 'N/A').upper()}</span>
398
+ </div>
399
+ </div>
400
+ """
401
+
402
+ if interp.get('recommendations'):
403
+ html += """
404
+ <div style='margin-top: 12px; padding-top: 12px; border-top: 1px solid #e5e7eb;'>
405
+ <h5 style='color: #374151 !important; font-size: 14px; font-weight: 600; margin-bottom: 8px;'>πŸ’‘ Recommendations</h5>
406
+ <ul style='margin: 0; padding-left: 20px; color: #4b5563 !important;'>
407
+ """
408
+ for rec in interp['recommendations']:
409
+ html += f"<li style='margin: 4px 0;'>{rec}</li>"
410
+ html += "</ul></div>"
411
+
412
+ html += "</div>"
413
+ html += "</div>"
414
+
415
+ # Quality Assessment
416
+ if 'quality_metrics' in results:
417
+ quality = results['quality_metrics']
418
+ quality_colors = {
419
+ 'excellent': '#16a34a',
420
+ 'good': '#16a34a',
421
+ 'fair': '#d97706',
422
+ 'poor': '#dc2626',
423
+ 'unknown': '#6b7280'
424
+ }
425
+ q_color = quality_colors.get(quality.get('overall_quality', 'unknown'), '#6b7280')
426
+
427
+ html += f"""
428
+ <div style='background: #f0f9ff;
429
+ padding: 20px;
430
+ margin: 16px 0;
431
+ border-radius: 8px;
432
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
433
+ <h3 style='color: #1e3a8a !important;
434
+ font-size: 18px;
435
+ font-weight: 600;
436
+ margin-bottom: 12px;'>
437
+ πŸ“Š Image Quality Assessment
438
+ </h3>
439
+ <div style='display: flex; align-items: center; gap: 16px;'>
440
+ <div>
441
+ <span style='color: #4b5563 !important; font-size: 14px;'>Overall Quality:</span>
442
+ <span style='color: {q_color} !important;
443
+ font-size: 18px;
444
+ font-weight: 700;
445
+ margin-left: 8px;'>
446
+ {quality.get('overall_quality', 'unknown').upper()}
447
+ </span>
448
+ </div>
449
+ </div>
450
+ """
451
+
452
+ if quality.get('issues'):
453
+ html += f"""
454
+ <div style='margin-top: 12px;
455
+ padding: 12px;
456
+ background: #fef3c7;
457
+ border-left: 3px solid #f59e0b;
458
+ border-radius: 4px;'>
459
+ <strong style='color: #92400e !important;'>Issues Detected:</strong>
460
+ <ul style='margin: 4px 0 0 0; padding-left: 20px; color: #92400e !important;'>
461
+ """
462
+ for issue in quality['issues']:
463
+ html += f"<li style='margin: 2px 0;'>{issue}</li>"
464
+ html += "</ul></div>"
465
+
466
+ html += "</div>"
467
+
468
+ html += "</div>"
469
+ return html
470
+
471
+ def create_demo():
472
+ with gr.Blocks(
473
+ title="Medical Image Analyzer - Enhanced Demo",
474
+ theme=gr.themes.Soft(
475
+ primary_hue="blue",
476
+ secondary_hue="blue",
477
+ neutral_hue="slate",
478
+ text_size="md",
479
+ spacing_size="md",
480
+ radius_size="md",
481
+ ).set(
482
+ # Medical blue theme colors
483
+ body_background_fill="*neutral_950",
484
+ body_background_fill_dark="*neutral_950",
485
+ block_background_fill="*neutral_900",
486
+ block_background_fill_dark="*neutral_900",
487
+ border_color_primary="*primary_600",
488
+ border_color_primary_dark="*primary_600",
489
+ # Text colors for better contrast
490
+ body_text_color="*neutral_100",
491
+ body_text_color_dark="*neutral_100",
492
+ body_text_color_subdued="*neutral_300",
493
+ body_text_color_subdued_dark="*neutral_300",
494
+ # Button colors
495
+ button_primary_background_fill="*primary_600",
496
+ button_primary_background_fill_dark="*primary_600",
497
+ button_primary_text_color="white",
498
+ button_primary_text_color_dark="white",
499
+ ),
500
+ css="""
501
+ /* Medical blue theme with high contrast */
502
+ :root {
503
+ --medical-blue: #1e40af;
504
+ --medical-blue-light: #3b82f6;
505
+ --medical-blue-dark: #1e3a8a;
506
+ --text-primary: #f9fafb;
507
+ --text-secondary: #e5e7eb;
508
+ --bg-primary: #0f172a;
509
+ --bg-secondary: #1e293b;
510
+ --bg-tertiary: #334155;
511
+ }
512
+
513
+ /* Override default text colors for medical theme */
514
+ * {
515
+ color: var(--text-primary) !important;
516
+ }
517
+
518
+ /* Style the file upload area */
519
+ .file-upload {
520
+ border: 2px dashed var(--medical-blue-light) !important;
521
+ border-radius: 8px !important;
522
+ padding: 20px !important;
523
+ text-align: center !important;
524
+ background: var(--bg-secondary) !important;
525
+ transition: all 0.3s ease !important;
526
+ color: var(--text-primary) !important;
527
+ }
528
+
529
+ .file-upload:hover {
530
+ border-color: var(--medical-blue) !important;
531
+ background: var(--bg-tertiary) !important;
532
+ box-shadow: 0 0 20px rgba(59, 130, 246, 0.2) !important;
533
+ }
534
+
535
+ /* Ensure report text is readable with white background */
536
+ .medical-report {
537
+ background: #ffffff !important;
538
+ border: 2px solid var(--medical-blue-light) !important;
539
+ border-radius: 8px !important;
540
+ padding: 16px !important;
541
+ color: #1a1a1a !important;
542
+ }
543
+
544
+ .medical-report * {
545
+ color: #1f2937 !important; /* Dark gray text */
546
+ }
547
+
548
+ .medical-report h2 {
549
+ color: #1e40af !important; /* Medical blue for main heading */
550
+ }
551
+
552
+ .medical-report h3, .medical-report h4 {
553
+ color: #1e3a8a !important; /* Darker medical blue for subheadings */
554
+ }
555
+
556
+ .medical-report strong {
557
+ color: #374151 !important; /* Darker gray for labels */
558
+ }
559
+
560
+ .medical-report td {
561
+ color: #1f2937 !important; /* Ensure table text is dark */
562
+ }
563
+
564
+ /* Report sections with light blue background */
565
+ .medical-report > div {
566
+ background: #f0f9ff !important;
567
+ color: #1f2937 !important;
568
+ }
569
+
570
+ /* Medical blue accents for UI elements */
571
+ .gr-button-primary {
572
+ background: var(--medical-blue) !important;
573
+ border-color: var(--medical-blue) !important;
574
+ }
575
+
576
+ .gr-button-primary:hover {
577
+ background: var(--medical-blue-dark) !important;
578
+ border-color: var(--medical-blue-dark) !important;
579
+ }
580
+
581
+ /* Tab styling */
582
+ .gr-tab-item {
583
+ border-color: var(--medical-blue-light) !important;
584
+ }
585
+
586
+ .gr-tab-item.selected {
587
+ background: var(--medical-blue) !important;
588
+ color: white !important;
589
+ }
590
+
591
+ /* Accordion styling */
592
+ .gr-accordion {
593
+ border-color: var(--medical-blue-light) !important;
594
+ }
595
+
596
+ /* Slider track in medical blue */
597
+ input[type="range"]::-webkit-slider-track {
598
+ background: var(--bg-tertiary) !important;
599
+ }
600
+
601
+ input[type="range"]::-webkit-slider-thumb {
602
+ background: var(--medical-blue) !important;
603
+ }
604
+ """
605
+ ) as demo:
606
+ gr.Markdown("""
607
+ # πŸ₯ Medical Image Analyzer
608
+
609
+ Supports **DICOM** (.dcm) and all image formats with automatic modality detection!
610
+ """)
611
+
612
+ with gr.Row():
613
+ with gr.Column(scale=1):
614
+ # File upload - no file type restrictions
615
+ with gr.Group():
616
+ gr.Markdown("### πŸ“€ Upload Medical Image")
617
+ file_input = gr.File(
618
+ label="Select Medical Image File (.dcm, .dicom, IM_*, .png, .jpg, etc.)",
619
+ file_count="single",
620
+ type="filepath",
621
+ elem_classes="file-upload"
622
+ # Note: NO file_types parameter = accepts ALL files
623
+ )
624
+ gr.Markdown("""
625
+ <small style='color: #666;'>
626
+ Accepts: DICOM (.dcm, .dicom), Images (.png, .jpg, .jpeg, .tiff, .bmp),
627
+ and files without extensions (e.g., IM_0001, IM_0002, etc.)
628
+ </small>
629
+ """)
630
+
631
+ # Modality selection
632
+ modality = gr.Radio(
633
+ choices=["CT", "CR", "DX", "RX", "DR"],
634
+ value="CT",
635
+ label="Modality",
636
+ info="Will be auto-detected for DICOM files"
637
+ )
638
+
639
+ # Task selection
640
+ task = gr.Dropdown(
641
+ choices=[
642
+ ("🎯 Point Analysis", "analyze_point"),
643
+ ("πŸ”¬ Fat Segmentation (CT only)", "segment_fat"),
644
+ ("πŸ“Š Full Analysis", "full_analysis")
645
+ ],
646
+ value="full_analysis",
647
+ label="Analysis Task"
648
+ )
649
+
650
+ # ROI settings
651
+ with gr.Accordion("🎯 Region of Interest (ROI)", open=True):
652
+ roi_x = gr.Slider(0, 512, 256, label="X Position", step=1)
653
+ roi_y = gr.Slider(0, 512, 256, label="Y Position", step=1)
654
+ roi_radius = gr.Slider(5, 50, 10, label="Radius", step=1)
655
+
656
+ # Clinical context
657
+ with gr.Accordion("πŸ₯ Clinical Context", open=False):
658
+ symptoms = gr.CheckboxGroup(
659
+ choices=[
660
+ "dyspnea", "chest_pain", "abdominal_pain",
661
+ "trauma", "obesity_screening", "routine_check"
662
+ ],
663
+ label="Symptoms/Indication"
664
+ )
665
+
666
+ # Visualization options
667
+ with gr.Accordion("🎨 Visualization Options", open=True):
668
+ show_overlay = gr.Checkbox(
669
+ label="Show ROI/Segmentation Overlay",
670
+ value=True,
671
+ info="Display ROI circle or fat segmentation info on the image"
672
+ )
673
+
674
+ analyze_btn = gr.Button("πŸ”¬ Analyze", variant="primary", size="lg")
675
+
676
+ with gr.Column(scale=2):
677
+ # Results with tabs for different views
678
+ with gr.Tab("πŸ–ΌοΈ Original Image"):
679
+ image_display = gr.Image(label="Medical Image", type="numpy")
680
+
681
+ with gr.Tab("🎯 Overlay View"):
682
+ overlay_display = gr.Image(label="Image with Overlay", type="numpy")
683
+
684
+ file_info = gr.Textbox(label="File Information", lines=1)
685
+
686
+ with gr.Tab("πŸ“Š Visual Report"):
687
+ report_html = gr.HTML()
688
+
689
+ with gr.Tab("πŸ”§ JSON Output"):
690
+ json_output = gr.JSON(label="Structured Data for AI Agents")
691
+
692
+ # Examples and help
693
+ with gr.Row():
694
+ gr.Markdown("""
695
+ ### πŸ“ Supported Formats
696
+ - **DICOM**: Automatic HU value extraction and modality detection
697
+ - **PNG/JPG**: Interpreted based on selected modality
698
+ - **All Formats**: Automatic grayscale conversion
699
+ - **Files without extension**: Supported (e.g., IM_0001) - will try DICOM first
700
+
701
+ ### 🎯 Usage
702
+ 1. Upload a medical image file
703
+ 2. Select modality (auto-detected for DICOM)
704
+ 3. Choose analysis task
705
+ 4. Adjust ROI position for point analysis
706
+ 5. Click "Analyze"
707
+
708
+ ### πŸ’‘ Features
709
+ - **ROI Visualization**: See the exact area being analyzed
710
+ - **Fat Segmentation**: Visual percentages for CT scans
711
+ - **Multi-format Support**: Works with any medical image format
712
+ - **AI Agent Ready**: Structured JSON output for integration
713
+ """)
714
+
715
+ # Connect the interface
716
+ analyze_btn.click(
717
+ fn=process_and_analyze,
718
+ inputs=[file_input, modality, task, roi_x, roi_y, roi_radius, symptoms, show_overlay],
719
+ outputs=[image_display, file_info, report_html, json_output, overlay_display]
720
+ )
721
+
722
+ # Auto-update ROI limits when image is loaded
723
+ def update_roi_on_upload(file_obj):
724
+ if file_obj is None:
725
+ return gr.update(), gr.update()
726
+
727
+ try:
728
+ analyzer = MedicalImageAnalyzer()
729
+ _, _, metadata = analyzer.process_file(file_obj.name if hasattr(file_obj, 'name') else str(file_obj))
730
+
731
+ if 'shape' in metadata:
732
+ h, w = metadata['shape']
733
+ return gr.update(maximum=w-1, value=w//2), gr.update(maximum=h-1, value=h//2)
734
+ except:
735
+ pass
736
+
737
+ return gr.update(), gr.update()
738
+
739
+ file_input.change(
740
+ fn=update_roi_on_upload,
741
+ inputs=[file_input],
742
+ outputs=[roi_x, roi_y]
743
+ )
744
+
745
+ return demo
746
+
747
+ if __name__ == "__main__":
748
+ demo = create_demo()
749
+ demo.launch()
750
+ ```
751
+
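+ The same analysis can also be driven headlessly, e.g. from an AI agent. A minimal sketch, assuming a local file `test.dcm` exists; the `process_file` and `analyze_image` signatures are the ones used in the demo above:
+
+ ```python
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
+
+ analyzer = MedicalImageAnalyzer(analysis_mode="structured")
+
+ # Load pixel data, a display-ready array, and file metadata (DICOM or regular image)
+ pixel_array, display_array, metadata = analyzer.process_file("test.dcm")
+
+ # Run a full analysis and inspect the structured result
+ results = analyzer.analyze_image(
+     image=pixel_array,
+     modality=metadata.get("modality", "CT"),
+     task="full_analysis",
+ )
+ print(results.get("point_analysis"))
+ print((results.get("segmentation") or {}).get("statistics"))
+ ```
+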
752
+ ## `MedicalImageAnalyzer`
753
+
754
+ ### Initialization
755
+
756
+ <table>
757
+ <thead>
758
+ <tr>
759
+ <th align="left">name</th>
760
+ <th align="left" style="width: 25%;">type</th>
761
+ <th align="left">default</th>
762
+ <th align="left">description</th>
763
+ </tr>
764
+ </thead>
765
+ <tbody>
766
+ <tr>
767
+ <td align="left"><code>value</code></td>
768
+ <td align="left" style="width: 25%;">
769
+
770
+ ```python
771
+ typing.Optional[typing.Dict[str, typing.Any]]
774
+ ```
775
+
776
+ </td>
777
+ <td align="left"><code>None</code></td>
778
+ <td align="left">None</td>
779
+ </tr>
780
+
781
+ <tr>
782
+ <td align="left"><code>label</code></td>
783
+ <td align="left" style="width: 25%;">
784
+
785
+ ```python
786
+ typing.Optional[str]
787
+ ```
788
+
789
+ </td>
790
+ <td align="left"><code>None</code></td>
791
+ <td align="left">None</td>
792
+ </tr>
793
+
794
+ <tr>
795
+ <td align="left"><code>info</code></td>
796
+ <td align="left" style="width: 25%;">
797
+
798
+ ```python
799
+ typing.Optional[str]
800
+ ```
801
+
802
+ </td>
803
+ <td align="left"><code>None</code></td>
804
+ <td align="left">None</td>
805
+ </tr>
806
+
807
+ <tr>
808
+ <td align="left"><code>every</code></td>
809
+ <td align="left" style="width: 25%;">
810
+
811
+ ```python
812
+ typing.Optional[float]
813
+ ```
814
+
815
+ </td>
816
+ <td align="left"><code>None</code></td>
817
+ <td align="left">None</td>
818
+ </tr>
819
+
820
+ <tr>
821
+ <td align="left"><code>show_label</code></td>
822
+ <td align="left" style="width: 25%;">
823
+
824
+ ```python
825
+ typing.Optional[bool]
826
+ ```
827
+
828
+ </td>
829
+ <td align="left"><code>None</code></td>
830
+ <td align="left">None</td>
831
+ </tr>
832
+
833
+ <tr>
834
+ <td align="left"><code>container</code></td>
835
+ <td align="left" style="width: 25%;">
836
+
837
+ ```python
838
+ typing.Optional[bool]
839
+ ```
840
+
841
+ </td>
842
+ <td align="left"><code>None</code></td>
843
+ <td align="left">None</td>
844
+ </tr>
845
+
846
+ <tr>
847
+ <td align="left"><code>scale</code></td>
848
+ <td align="left" style="width: 25%;">
849
+
850
+ ```python
851
+ typing.Optional[int]
852
+ ```
853
+
854
+ </td>
855
+ <td align="left"><code>None</code></td>
856
+ <td align="left">None</td>
857
+ </tr>
858
+
859
+ <tr>
860
+ <td align="left"><code>min_width</code></td>
861
+ <td align="left" style="width: 25%;">
862
+
863
+ ```python
864
+ typing.Optional[int]
865
+ ```
866
+
867
+ </td>
868
+ <td align="left"><code>None</code></td>
869
+ <td align="left">None</td>
870
+ </tr>
871
+
872
+ <tr>
873
+ <td align="left"><code>visible</code></td>
874
+ <td align="left" style="width: 25%;">
875
+
876
+ ```python
877
+ typing.Optional[bool]
878
+ ```
879
+
880
+ </td>
881
+ <td align="left"><code>None</code></td>
882
+ <td align="left">None</td>
883
+ </tr>
884
+
885
+ <tr>
886
+ <td align="left"><code>elem_id</code></td>
887
+ <td align="left" style="width: 25%;">
888
+
889
+ ```python
890
+ typing.Optional[str]
891
+ ```
892
+
893
+ </td>
894
+ <td align="left"><code>None</code></td>
895
+ <td align="left">None</td>
896
+ </tr>
897
+
898
+ <tr>
899
+ <td align="left"><code>elem_classes</code></td>
900
+ <td align="left" style="width: 25%;">
901
+
902
+ ```python
903
+ typing.Union[typing.List[str], str, NoneType]
906
+ ```
907
+
908
+ </td>
909
+ <td align="left"><code>None</code></td>
910
+ <td align="left">None</td>
911
+ </tr>
912
+
913
+ <tr>
914
+ <td align="left"><code>render</code></td>
915
+ <td align="left" style="width: 25%;">
916
+
917
+ ```python
918
+ typing.Optional[bool]
919
+ ```
920
+
921
+ </td>
922
+ <td align="left"><code>None</code></td>
923
+ <td align="left">None</td>
924
+ </tr>
925
+
926
+ <tr>
927
+ <td align="left"><code>key</code></td>
928
+ <td align="left" style="width: 25%;">
929
+
930
+ ```python
931
+ typing.Union[int, str, NoneType]
932
+ ```
933
+
934
+ </td>
935
+ <td align="left"><code>None</code></td>
936
+ <td align="left">None</td>
937
+ </tr>
938
+
939
+ <tr>
940
+ <td align="left"><code>analysis_mode</code></td>
941
+ <td align="left" style="width: 25%;">
942
+
943
+ ```python
944
+ str
945
+ ```
946
+
947
+ </td>
948
+ <td align="left"><code>"structured"</code></td>
949
+ <td align="left">"structured" for AI agents, "visual" for human interpretation</td>
950
+ </tr>
951
+
952
+ <tr>
953
+ <td align="left"><code>include_confidence</code></td>
954
+ <td align="left" style="width: 25%;">
955
+
956
+ ```python
957
+ bool
958
+ ```
959
+
960
+ </td>
961
+ <td align="left"><code>True</code></td>
962
+ <td align="left">Include confidence scores in results</td>
963
+ </tr>
964
+
965
+ <tr>
966
+ <td align="left"><code>include_reasoning</code></td>
967
+ <td align="left" style="width: 25%;">
968
+
969
+ ```python
970
+ bool
971
+ ```
972
+
973
+ </td>
974
+ <td align="left"><code>True</code></td>
975
+ <td align="left">Include reasoning/explanation for findings</td>
976
+ </tr>
977
+
978
+ <tr>
979
+ <td align="left"><code>segmentation_types</code></td>
980
+ <td align="left" style="width: 25%;">
981
+
982
+ ```python
983
+ typing.List[str]
984
+ ```
985
+
986
+ </td>
987
+ <td align="left"><code>None</code></td>
988
+ <td align="left">List of segmentation types to perform</td>
989
+ </tr>
990
+ </tbody></table>
991
+
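+ A typical instantiation, combining the standard component arguments with the analyzer-specific ones above (the values shown are the documented defaults; `segmentation_types` falls back to this fat-related list when left as `None`):
+
+ ```python
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
+
+ analyzer = MedicalImageAnalyzer(
+     label="Medical Image Analyzer",
+     analysis_mode="structured",  # "structured" for AI agents, "visual" for humans
+     include_confidence=True,
+     include_reasoning=True,
+     segmentation_types=["fat_total", "fat_subcutaneous", "fat_visceral"],
+ )
+ ```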
992
+
993
+ ### Events
994
+
995
+ | name | description |
996
+ |:-----|:------------|
997
+ | `change` | Triggered when the value of the MedicalImageAnalyzer changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input. |
998
+ | `select` | Event listener for when the user selects or deselects the MedicalImageAnalyzer. Uses event data gradio.SelectData to carry `value` referring to the label of the MedicalImageAnalyzer, and `selected` to refer to the state of the MedicalImageAnalyzer. See the EventData documentation for how to use this event data. |
999
+ | `upload` | This listener is triggered when the user uploads a file into the MedicalImageAnalyzer. |
1000
+ | `clear` | This listener is triggered when the user clears the MedicalImageAnalyzer using the clear button for the component. |
1001
+
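+ Listeners are attached in the usual Gradio way. A short sketch (the handler is illustrative):
+
+ ```python
+ import gradio as gr
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
+
+ with gr.Blocks() as demo:
+     analyzer = MedicalImageAnalyzer(label="Analyzer")
+
+     def on_result_change(value):
+         # `value` is the component's structured result dictionary
+         print("analysis updated:", value)
+
+     analyzer.change(fn=on_result_change, inputs=analyzer, outputs=None)
+     analyzer.upload(fn=on_result_change, inputs=analyzer, outputs=None)
+ ```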
1002
+
1003
+
1004
+ ### User function
1005
+
1006
+ The impact on the user's predict function varies depending on whether the component is used as an input or output for an event (or both).
1007
+
1008
+ - When used as an input, the component only impacts the input signature of the user function.
1009
+ - When used as an output, the component only impacts the return signature of the user function.
1010
+
1011
+ The code snippet below is accurate in cases where the component is used as both an input and an output.
1012
+
1013
+
1014
+
1015
+ ```python
1016
+ def predict(
1017
+ value: typing.Dict[str, typing.Any]
1018
+ ) -> typing.Dict[str, typing.Any]:
1019
+ return value
1020
+ ```
1021
+
1022
+ ---
1023
+
1024
+ Developed for veterinary medicine with ❀️ and cutting-edge web technology
1025
+
1026
+ **Gradio Agents & MCP Hackathon 2025 - Track 2 Submission**
src/backend/gradio_medical_image_analyzer/__init__.py ADDED
@@ -0,0 +1,3 @@
1
+ from .medical_image_analyzer import MedicalImageAnalyzer
2
+
3
+ __all__ = ['MedicalImageAnalyzer']
src/backend/gradio_medical_image_analyzer/fat_segmentation.py ADDED
@@ -0,0 +1,377 @@
1
+ #!/usr/bin/env python3
2
+ # -*- coding: utf-8 -*-
3
+ """
4
+ Fat Segmentation Module for Medical Image Analyzer
5
+ Implements subcutaneous and visceral fat detection using HU values
6
+ Adapted for multi-species veterinary and human medical imaging
7
+ """
8
+
9
+ import numpy as np
10
+ from scipy import ndimage
11
+ from skimage import morphology, measure
12
+ from typing import Tuple, Dict, Any, Optional
13
+ import cv2
14
+
15
+
16
+ class FatSegmentationEngine:
17
+ """
18
+ Engine for detecting and segmenting fat tissue in medical CT images
19
+ Supports both human and veterinary applications
20
+ """
21
+
22
+ # HU ranges for different tissues
23
+ FAT_HU_RANGE = (-190, -30) # Fat tissue Hounsfield Units
24
+ MUSCLE_HU_RANGE = (30, 80) # Muscle tissue for body outline
25
+
26
+ def __init__(self):
27
+ self.fat_mask = None
28
+ self.subcutaneous_mask = None
29
+ self.visceral_mask = None
30
+ self.body_outline = None
31
+
32
+ def segment_fat_tissue(self, hu_array: np.ndarray, species: str = "human") -> Dict[str, Any]:
33
+ """
34
+ Segment fat tissue into subcutaneous and visceral components
35
+
36
+ Args:
37
+ hu_array: 2D array of Hounsfield Unit values
38
+ species: Species type for specific adjustments
39
+
40
+ Returns:
41
+ Dictionary containing segmentation results and statistics
42
+ """
43
+ # Step 1: Detect all fat tissue based on HU values
44
+ self.fat_mask = self._detect_fat_tissue(hu_array)
45
+
46
+ # Step 2: Detect body outline for subcutaneous/visceral separation
47
+ self.body_outline = self._detect_body_outline(hu_array)
48
+
49
+ # Step 3: Separate subcutaneous and visceral fat
50
+ self.subcutaneous_mask, self.visceral_mask = self._separate_fat_types(
51
+ self.fat_mask, self.body_outline, species
52
+ )
53
+
54
+ # Step 4: Calculate statistics
55
+ stats = self._calculate_fat_statistics(hu_array)
56
+
57
+ # Step 5: Add clinical assessment
58
+ assessment = self.assess_obesity_risk(stats, species)
59
+
60
+ return {
61
+ 'fat_mask': self.fat_mask,
62
+ 'subcutaneous_mask': self.subcutaneous_mask,
63
+ 'visceral_mask': self.visceral_mask,
64
+ 'body_outline': self.body_outline,
65
+ 'statistics': stats,
66
+ 'assessment': assessment,
67
+ 'overlay_colors': self._generate_overlay_colors()
68
+ }
69
+
70
+ def _detect_fat_tissue(self, hu_array: np.ndarray) -> np.ndarray:
71
+ """Detect fat tissue based on HU range"""
72
+ fat_mask = (hu_array >= self.FAT_HU_RANGE[0]) & (hu_array <= self.FAT_HU_RANGE[1])
73
+
74
+ # Clean up noise with morphological operations
75
+ fat_mask = morphology.binary_opening(fat_mask, morphology.disk(2))
76
+ fat_mask = morphology.binary_closing(fat_mask, morphology.disk(3))
77
+
78
+ return fat_mask.astype(np.uint8)
79
+
80
+ def _detect_body_outline(self, hu_array: np.ndarray) -> np.ndarray:
81
+ """
82
+ Detect body outline to separate subcutaneous from visceral fat
83
+ Uses muscle tissue and air/tissue boundaries
84
+ """
85
+ # Threshold for air/tissue boundary (around -500 HU)
86
+ tissue_mask = hu_array > -500
87
+
88
+ # Fill holes and smooth the outline
89
+ tissue_mask = ndimage.binary_fill_holes(tissue_mask)
90
+ tissue_mask = morphology.binary_closing(tissue_mask, morphology.disk(5))
91
+
92
+ # Find the largest connected component (main body)
93
+ labeled = measure.label(tissue_mask)
94
+ props = measure.regionprops(labeled)
95
+
96
+ if props:
97
+ largest_region = max(props, key=lambda x: x.area)
98
+ body_mask = (labeled == largest_region.label)
99
+
100
+ # Get the outline/contour
101
+ outline = morphology.binary_erosion(body_mask, morphology.disk(10))
102
+ outline = body_mask & ~outline
103
+
104
+ return outline.astype(np.uint8)
105
+
106
+ return np.zeros_like(hu_array, dtype=np.uint8)
107
+
108
+ def _separate_fat_types(self, fat_mask: np.ndarray, body_outline: np.ndarray,
109
+ species: str = "human") -> Tuple[np.ndarray, np.ndarray]:
110
+ """
111
+ Separate subcutaneous and visceral fat based on body outline
112
+ """
113
+ # Distance transform from body outline
114
+ distance_from_outline = ndimage.distance_transform_edt(~body_outline.astype(bool))
115
+
116
+ # Adjust threshold based on species
117
+ distance_thresholds = {
118
+ "human": 25,
119
+ "dog": 20,
120
+ "cat": 15,
121
+ "horse": 30,
122
+ "cattle": 35
123
+ }
124
+
125
+ threshold = distance_thresholds.get(species.lower(), 20)
126
+
127
+ # Subcutaneous fat: fat tissue close to body surface
128
+ subcutaneous_mask = fat_mask & (distance_from_outline <= threshold)
129
+
130
+ # Visceral fat: remaining fat tissue inside the body
131
+ visceral_mask = fat_mask & ~subcutaneous_mask
132
+
133
+ # Additional filtering for visceral fat (inside body cavity)
134
+ body_center = self._find_body_center(body_outline)
135
+ visceral_mask = self._filter_visceral_fat(visceral_mask, body_center, species)
136
+
137
+ return subcutaneous_mask.astype(np.uint8), visceral_mask.astype(np.uint8)
138
+
139
+ def _find_body_center(self, body_outline: np.ndarray) -> Tuple[int, int]:
140
+ """Find the center of the body for visceral fat filtering"""
141
+ y_coords, x_coords = np.where(body_outline > 0)
142
+ if len(y_coords) > 0:
143
+ center_y = int(np.mean(y_coords))
144
+ center_x = int(np.mean(x_coords))
145
+ return (center_y, center_x)
146
+ return (body_outline.shape[0] // 2, body_outline.shape[1] // 2)
147
+
148
+ def _filter_visceral_fat(self, visceral_mask: np.ndarray, body_center: Tuple[int, int],
149
+ species: str = "human") -> np.ndarray:
150
+ """
151
+ Filter visceral fat to ensure it's inside the body cavity
152
+ """
153
+ # Create a mask for the central body region
154
+ center_y, center_x = body_center
155
+ h, w = visceral_mask.shape
156
+
157
+ # Adjust ellipse size based on species
158
+ ellipse_factors = {
159
+ "human": (0.35, 0.35),
160
+ "dog": (0.3, 0.3),
161
+ "cat": (0.25, 0.25),
162
+ "horse": (0.4, 0.35),
163
+ "cattle": (0.4, 0.4)
164
+ }
165
+
166
+ x_factor, y_factor = ellipse_factors.get(species.lower(), (0.3, 0.3))
167
+
168
+ # Create elliptical region around body center for visceral fat
169
+ y, x = np.ogrid[:h, :w]
170
+ ellipse_mask = ((x - center_x) / (w * x_factor))**2 + ((y - center_y) / (h * y_factor))**2 <= 1
171
+
172
+ # Keep only visceral fat within the central body region
173
+ filtered_visceral = visceral_mask & ellipse_mask
174
+
175
+ return filtered_visceral.astype(np.uint8)
176
+
177
+ def _calculate_fat_statistics(self, hu_array: np.ndarray) -> Dict[str, float]:
178
+ """Calculate fat tissue statistics"""
179
+ total_pixels = hu_array.size
180
+ fat_pixels = np.sum(self.fat_mask > 0)
181
+ subcutaneous_pixels = np.sum(self.subcutaneous_mask > 0)
182
+ visceral_pixels = np.sum(self.visceral_mask > 0)
183
+
184
+ # Calculate percentages
185
+ fat_percentage = (fat_pixels / total_pixels) * 100
186
+ subcutaneous_percentage = (subcutaneous_pixels / total_pixels) * 100
187
+ visceral_percentage = (visceral_pixels / total_pixels) * 100
188
+
189
+ # Calculate ratio
190
+ visceral_subcutaneous_ratio = (
191
+ visceral_pixels / subcutaneous_pixels
192
+ if subcutaneous_pixels > 0 else 0
193
+ )
194
+
195
+ # Calculate mean HU values for each tissue type
196
+ fat_hu_mean = np.mean(hu_array[self.fat_mask > 0]) if fat_pixels > 0 else 0
197
+
198
+ return {
199
+ 'total_fat_percentage': round(fat_percentage, 2),
200
+ 'subcutaneous_fat_percentage': round(subcutaneous_percentage, 2),
201
+ 'visceral_fat_percentage': round(visceral_percentage, 2),
202
+ 'visceral_subcutaneous_ratio': round(visceral_subcutaneous_ratio, 3),
203
+ 'total_fat_pixels': int(fat_pixels),
204
+ 'subcutaneous_fat_pixels': int(subcutaneous_pixels),
205
+ 'visceral_fat_pixels': int(visceral_pixels),
206
+ 'fat_mean_hu': round(float(fat_hu_mean), 1)
207
+ }
208
+
209
+ def _generate_overlay_colors(self) -> Dict[str, Tuple[int, int, int]]:
210
+ """Generate color overlays for visualization"""
211
+ return {
212
+ 'subcutaneous': (255, 255, 0, 128), # Yellow with transparency
213
+ 'visceral': (255, 0, 0, 128), # Red with transparency
214
+ 'body_outline': (0, 255, 0, 255) # Green outline
215
+ }
216
+
217
+ def create_overlay_image(self, base_image: np.ndarray) -> np.ndarray:
218
+ """
219
+ Create overlay image with fat segmentation highlighted
220
+
221
+ Args:
222
+ base_image: Original grayscale DICOM image (0-255)
223
+
224
+ Returns:
225
+ RGB image with fat overlays
226
+ """
227
+ # Convert to RGB
228
+ overlay_img = cv2.cvtColor(base_image, cv2.COLOR_GRAY2RGB)
229
+
230
+ colors = self._generate_overlay_colors()
231
+
232
+ # Apply subcutaneous fat overlay (yellow)
233
+ if self.subcutaneous_mask is not None:
234
+ yellow_overlay = np.zeros_like(overlay_img)
235
+ yellow_overlay[self.subcutaneous_mask > 0] = colors['subcutaneous'][:3]
236
+ overlay_img = cv2.addWeighted(overlay_img, 0.7, yellow_overlay, 0.3, 0)
237
+
238
+ # Apply visceral fat overlay (red)
239
+ if self.visceral_mask is not None:
240
+ red_overlay = np.zeros_like(overlay_img)
241
+ red_overlay[self.visceral_mask > 0] = colors['visceral'][:3]
242
+ overlay_img = cv2.addWeighted(overlay_img, 0.7, red_overlay, 0.3, 0)
243
+
244
+ # Add body outline (green)
245
+ if self.body_outline is not None:
246
+ overlay_img[self.body_outline > 0] = colors['body_outline'][:3]
247
+
248
+ return overlay_img
249
+
250
+ def assess_obesity_risk(self, fat_stats: Dict[str, float], species: str = "human") -> Dict[str, Any]:
251
+ """
252
+ Assess obesity based on fat percentage and species-specific thresholds
253
+
254
+ Args:
255
+ fat_stats: Fat statistics from segmentation
256
+ species: Species type (human, dog, cat, etc.)
257
+
258
+ Returns:
259
+ Obesity assessment with recommendations
260
+ """
261
+ # Species-specific fat percentage thresholds
262
+ thresholds = {
263
+ 'human': {
264
+ 'normal': (10, 20),
265
+ 'overweight': (20, 30),
266
+ 'obese': (30, float('inf'))
267
+ },
268
+ 'dog': {
269
+ 'normal': (5, 15),
270
+ 'overweight': (15, 25),
271
+ 'obese': (25, float('inf'))
272
+ },
273
+ 'cat': {
274
+ 'normal': (10, 20),
275
+ 'overweight': (20, 30),
276
+ 'obese': (30, float('inf'))
277
+ },
278
+ 'horse': {
279
+ 'normal': (8, 18),
280
+ 'overweight': (18, 28),
281
+ 'obese': (28, float('inf'))
282
+ },
283
+ 'cattle': {
284
+ 'normal': (10, 25),
285
+ 'overweight': (25, 35),
286
+ 'obese': (35, float('inf'))
287
+ }
288
+ }
289
+
290
+ fat_percentage = fat_stats['total_fat_percentage']
291
+ species_thresholds = thresholds.get(species.lower(), thresholds['human'])
292
+
293
+ # Determine weight category
294
+ if fat_percentage <= species_thresholds['normal'][1]:
295
+ category = "Normal Weight"
296
+ color = "green"
297
+ recommendation = "Maintain current diet and exercise routine."
298
+ elif fat_percentage <= species_thresholds['overweight'][1]:
299
+ category = "Overweight"
300
+ color = "orange"
301
+ recommendation = "Consider dietary adjustments and increased exercise."
302
+ else:
303
+ category = "Obese"
304
+ color = "red"
305
+ recommendation = "Medical consultation recommended for weight management plan."
306
+
307
+ # Risk assessment based on visceral fat ratio
308
+ vs_ratio = fat_stats['visceral_subcutaneous_ratio']
309
+ if vs_ratio < 0.5:
310
+ risk_level = "Low Risk"
311
+ risk_color = "green"
312
+ elif vs_ratio < 1.0:
313
+ risk_level = "Moderate Risk"
314
+ risk_color = "orange"
315
+ else:
316
+ risk_level = "High Risk - Excess visceral fat"
317
+ risk_color = "red"
318
+
319
+ return {
320
+ 'category': category,
321
+ 'color': color,
322
+ 'recommendation': recommendation,
323
+ 'fat_percentage': fat_percentage,
324
+ 'visceral_subcutaneous_ratio': vs_ratio,
325
+ 'risk_level': risk_level,
326
+ 'risk_color': risk_color,
327
+ 'species': species
328
+ }
329
+
330
+
331
+ # Convenience functions for integration
332
+ def segment_fat(hu_array: np.ndarray, species: str = "human") -> Dict[str, Any]:
333
+ """
334
+ Convenience function for fat segmentation
335
+
336
+ Args:
337
+ hu_array: 2D array of Hounsfield Unit values
338
+ species: Species type for specific adjustments
339
+
340
+ Returns:
341
+ Segmentation results dictionary
342
+ """
343
+ engine = FatSegmentationEngine()
344
+ return engine.segment_fat_tissue(hu_array, species)
345
+
346
+
347
+ def create_fat_overlay(base_image: np.ndarray, segmentation_results: Dict[str, Any]) -> np.ndarray:
348
+ """
349
+ Create overlay visualization from segmentation results
350
+
351
+ Args:
352
+ base_image: Original grayscale image
353
+ segmentation_results: Results from segment_fat function
354
+
355
+ Returns:
356
+ RGB overlay image
357
+ """
358
+ engine = FatSegmentationEngine()
359
+ engine.fat_mask = segmentation_results.get('fat_mask')
360
+ engine.subcutaneous_mask = segmentation_results.get('subcutaneous_mask')
361
+ engine.visceral_mask = segmentation_results.get('visceral_mask')
362
+ engine.body_outline = segmentation_results.get('body_outline')
363
+
364
+ return engine.create_overlay_image(base_image)
365
+
366
+
367
+ # Example usage
368
+ if __name__ == "__main__":
369
+ print("πŸ”¬ Fat Segmentation Engine for Medical Image Analyzer")
370
+ print("Features:")
371
+ print(" - Multi-species support (human, dog, cat, horse, cattle)")
372
+ print(" - HU-based fat tissue detection")
373
+ print(" - Subcutaneous vs visceral fat separation")
374
+ print(" - Species-specific obesity assessment")
375
+ print(" - Clinical risk evaluation")
376
+ print(" - Visual overlay generation")
377
+ print(" - Comprehensive fat statistics")
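+
+ # Illustrative smoke test on synthetic data (an assumption for demo purposes,
+ # not a real scan): air background, a soft-tissue "body", and a fat band near its edge.
+ phantom = np.full((128, 128), -1000.0) # air
+ phantom[32:96, 32:96] = 40.0 # soft tissue
+ phantom[36:92, 36:44] = -100.0 # fat band inside the body, near the surface
+ demo = segment_fat(phantom, species="dog")
+ print("Demo fat statistics:", demo['statistics'])
+ print("Demo assessment:", demo['assessment']['category'])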
src/backend/gradio_medical_image_analyzer/medical_image_analyzer.py ADDED
@@ -0,0 +1,788 @@
1
+ #!/usr/bin/env python3
2
+ # -*- coding: utf-8 -*-
3
+ """
4
+ Gradio Custom Component: Medical Image Analyzer
5
+ AI-Agent optimized component for medical image analysis
6
+ """
7
+
8
+ from __future__ import annotations
9
+ from typing import Any, Dict, List, Optional, Tuple, Union
10
+ import gradio as gr
11
+ from gradio.components.base import Component
12
+ from gradio.events import Events
13
+ import numpy as np
14
+ import json
15
+ from pathlib import Path
16
+ from PIL import Image
17
+ try:
18
+ import pydicom
19
+ PYDICOM_AVAILABLE = True
20
+ except ImportError:
21
+ PYDICOM_AVAILABLE = False
22
+
23
+ # Import our existing analyzers
24
+ try:
25
+ from .fat_segmentation import FatSegmentationEngine, segment_fat, create_fat_overlay
26
+ FAT_SEGMENTATION_AVAILABLE = True
27
+ except ImportError:
28
+ FAT_SEGMENTATION_AVAILABLE = False
29
+ FatSegmentationEngine = None
30
+ segment_fat = None
31
+ create_fat_overlay = None
32
+
33
+ try:
34
+ from .xray_analyzer import XRayAnalyzer, analyze_xray, classify_xray_tissue
35
+ XRAY_ANALYZER_AVAILABLE = True
36
+ except ImportError:
37
+ XRAY_ANALYZER_AVAILABLE = False
38
+ XRayAnalyzer = None
39
+ analyze_xray = None
40
+ classify_xray_tissue = None
41
+
42
+
43
+ class MedicalImageAnalyzer(Component):
44
+ """
45
+ A Gradio component for AI-agent compatible medical image analysis.
46
+
47
+ Provides structured output for:
48
+ - HU value analysis (CT only)
49
+ - Tissue classification
50
+ - Fat segmentation (subcutaneous, visceral)
51
+ - Confidence scores and reasoning
52
+ """
53
+
54
+ EVENTS = [
55
+ Events.change,
56
+ Events.select,
57
+ Events.upload,
58
+ Events.clear,
59
+ ]
60
+
61
+ # HU ranges for tissue classification (CT only)
62
+ HU_RANGES = {
63
+ 'air': {'min': -1000, 'max': -500, 'icon': '🌫️'},
64
+ 'fat': {'min': -100, 'max': -50, 'icon': '🟑'},
65
+ 'water': {'min': -10, 'max': 10, 'icon': 'πŸ’§'},
66
+ 'soft_tissue': {'min': 30, 'max': 80, 'icon': 'πŸ”΄'},
67
+ 'bone': {'min': 200, 'max': 3000, 'icon': '🦴'}
68
+ }
69
+
70
+ def __init__(
71
+ self,
72
+ value: Optional[Dict[str, Any]] = None,
73
+ *,
74
+ label: Optional[str] = None,
75
+ info: Optional[str] = None,
76
+ every: Optional[float] = None,
77
+ show_label: Optional[bool] = None,
78
+ container: Optional[bool] = None,
79
+ scale: Optional[int] = None,
80
+ min_width: Optional[int] = None,
81
+ visible: Optional[bool] = None,
82
+ elem_id: Optional[str] = None,
83
+ elem_classes: Optional[List[str] | str] = None,
84
+ render: Optional[bool] = None,
85
+ key: Optional[int | str] = None,
86
+ # Custom parameters
87
+ analysis_mode: str = "structured", # "structured" for agents, "visual" for humans
88
+ include_confidence: bool = True,
89
+ include_reasoning: bool = True,
90
+ segmentation_types: Optional[List[str]] = None,
91
+ **kwargs,
92
+ ):
93
+ """
94
+ Initialize the Medical Image Analyzer component.
95
+
96
+ Args:
97
+ analysis_mode: "structured" for AI agents, "visual" for human interpretation
98
+ include_confidence: Include confidence scores in results
99
+ include_reasoning: Include reasoning/explanation for findings
100
+ segmentation_types: List of segmentation types to perform
101
+ """
102
+ self.analysis_mode = analysis_mode
103
+ self.include_confidence = include_confidence
104
+ self.include_reasoning = include_reasoning
105
+ self.segmentation_types = segmentation_types or ["fat_total", "fat_subcutaneous", "fat_visceral"]
106
+
107
+ if FAT_SEGMENTATION_AVAILABLE:
108
+ self.fat_engine = FatSegmentationEngine()
109
+ else:
110
+ self.fat_engine = None
111
+
112
+ super().__init__(
113
+ label=label,
114
+ info=info,
115
+ every=every,
116
+ show_label=show_label,
117
+ container=container,
118
+ scale=scale,
119
+ min_width=min_width,
120
+ visible=visible,
121
+ elem_id=elem_id,
122
+ elem_classes=elem_classes,
123
+ render=render,
124
+ key=key,
125
+ value=value,
126
+ **kwargs,
127
+ )
128
+
129
+ def preprocess(self, payload: Dict[str, Any]) -> Dict[str, Any]:
130
+ """
131
+ Preprocess input from frontend.
132
+ Expected format:
133
+ {
134
+ "image": numpy array or base64,
135
+ "modality": "CT" or "XR",
136
+ "pixel_spacing": [x, y] (optional),
137
+ "roi": {"x": int, "y": int, "radius": int} (optional),
138
+ "task": "analyze_point" | "segment_fat" | "full_analysis"
139
+ }
140
+ """
141
+ if payload is None:
142
+ return None
143
+
144
+ # Validate required fields
145
+ if "image" not in payload:
146
+ return {"error": "No image provided"}
147
+
148
+ return payload
149
+
150
+ def postprocess(self, value: Dict[str, Any]) -> Dict[str, Any]:
151
+ """
152
+ Postprocess output for frontend.
153
+ Returns structured data for AI agents or formatted HTML for visual mode.
154
+ """
155
+ if value is None:
156
+ return None
157
+
158
+ if "error" in value:
159
+ return value
160
+
161
+ # For visual mode, convert to HTML
162
+ if self.analysis_mode == "visual":
163
+ value["html_report"] = self._create_html_report(value)
164
+
165
+ return value
166
+
167
+ def analyze_image(
168
+ self,
169
+ image: np.ndarray,
170
+ modality: str = "CT",
171
+ pixel_spacing: Optional[List[float]] = None,
172
+ roi: Optional[Dict[str, int]] = None,
173
+ task: str = "full_analysis",
174
+ clinical_context: Optional[Dict[str, Any]] = None
175
+ ) -> Dict[str, Any]:
176
+ """
177
+ Main analysis function for medical images.
178
+
179
+ Args:
180
+ image: 2D numpy array of pixel values
181
+ modality: "CT" or "XR" (X-Ray)
182
+ pixel_spacing: [x, y] spacing in mm
183
+ roi: Region of interest {"x": int, "y": int, "radius": int}
184
+ task: Analysis task
185
+ clinical_context: Additional context for guided analysis
186
+
187
+ Returns:
188
+ Structured analysis results
189
+ """
190
+ # Handle None or invalid image
191
+ if image is None:
192
+ return {"error": "No image provided", "modality": modality}
193
+
194
+ results = {
195
+ "modality": modality,
196
+ "timestamp": self._get_timestamp(),
197
+ "measurements": {},
198
+ "findings": [],
199
+ "segmentation": {},
200
+ "quality_metrics": self._assess_image_quality(image)
201
+ }
202
+
203
+ if task == "analyze_point" and roi:
204
+ # Analyze specific point/region
205
+ results["point_analysis"] = self._analyze_roi(image, modality, roi)
206
+
207
+ elif task == "segment_fat" and modality == "CT":
208
+ # Fat segmentation
209
+ results["segmentation"] = self._perform_fat_segmentation(image, pixel_spacing)
210
+
211
+ elif task == "full_analysis":
212
+ # Complete analysis
213
+ if roi:
214
+ results["point_analysis"] = self._analyze_roi(image, modality, roi)
215
+ if modality == "CT":
216
+ results["segmentation"] = self._perform_fat_segmentation(image, pixel_spacing)
217
+ elif modality in ["CR", "DX", "RX", "DR", "X-Ray"]:
218
+ results["segmentation"] = self._perform_xray_analysis(image)
219
+ results["statistics"] = self._calculate_image_statistics(image)
220
+
221
+ # Add clinical interpretation if context provided
222
+ if clinical_context:
223
+ results["clinical_correlation"] = self._correlate_with_clinical(
224
+ results, clinical_context
225
+ )
226
+
227
+ return results
228
+
229
+ def _analyze_roi(self, image: np.ndarray, modality: str, roi: Dict[str, int]) -> Dict[str, Any]:
230
+ """Analyze a specific region of interest"""
231
+ x, y, radius = roi.get("x", 0), roi.get("y", 0), roi.get("radius", 5)
+ # Clamp the center into the image so image[y, x] below cannot raise IndexError
+ x = int(np.clip(x, 0, image.shape[1] - 1))
+ y = int(np.clip(y, 0, image.shape[0] - 1))
232
+
233
+ # Extract ROI
234
+ y_min = max(0, y - radius)
235
+ y_max = min(image.shape[0], y + radius)
236
+ x_min = max(0, x - radius)
237
+ x_max = min(image.shape[1], x + radius)
238
+
239
+ roi_pixels = image[y_min:y_max, x_min:x_max]
240
+
241
+ analysis = {
242
+ "location": {"x": x, "y": y},
243
+ "roi_size": {"width": x_max - x_min, "height": y_max - y_min},
244
+ "statistics": {
245
+ "mean": float(np.mean(roi_pixels)),
246
+ "std": float(np.std(roi_pixels)),
247
+ "min": float(np.min(roi_pixels)),
248
+ "max": float(np.max(roi_pixels))
249
+ }
250
+ }
251
+
252
+ if modality == "CT":
253
+ # HU-based analysis
254
+ center_value = float(image[y, x])
255
+ analysis["hu_value"] = center_value
256
+ analysis["tissue_type"] = self._classify_tissue_by_hu(center_value)
257
+
258
+ if self.include_confidence:
259
+ analysis["confidence"] = self._calculate_confidence(roi_pixels, center_value)
260
+
261
+ if self.include_reasoning:
262
+ analysis["reasoning"] = self._generate_reasoning(
263
+ center_value, analysis["tissue_type"], roi_pixels
264
+ )
265
+ else:
266
+ # Intensity-based analysis for X-Ray
267
+ analysis["intensity"] = float(image[y, x])
268
+ analysis["tissue_type"] = self._classify_xray_intensity(
269
+ image[y, x], x, y, image
270
+ )
271
+
272
+ return analysis
273
+
274
+ def _classify_tissue_by_hu(self, hu_value: float) -> Dict[str, str]:
275
+ """Classify tissue type based on HU value"""
276
+ for tissue_type, ranges in self.HU_RANGES.items():
277
+ if ranges['min'] <= hu_value <= ranges['max']:
278
+ return {
279
+ 'type': tissue_type,
280
+ 'icon': ranges['icon'],
281
+ 'hu_range': f"{ranges['min']} to {ranges['max']}"
282
+ }
283
+
284
+ # Edge cases
285
+ if hu_value < -1000:
286
+ return {'type': 'air', 'icon': '🌫️', 'hu_range': '< -1000'}
287
+ elif hu_value > 3000:
+ return {'type': 'metal/artifact', 'icon': 'βš™οΈ', 'hu_range': '> 3000'}
+ else:
+ # Values in the gaps between the defined ranges (e.g. 80-200 HU) are transitional
+ return {'type': 'indeterminate', 'icon': '❓', 'hu_range': 'between defined ranges'}
289
+
290
+ def _classify_xray_intensity(self, intensity: float, x: int, y: int, image: np.ndarray) -> Dict[str, str]:
291
+ """Classify tissue in X-Ray based on intensity"""
292
+ if XRAY_ANALYZER_AVAILABLE:
293
+ # Use the advanced XRay analyzer
294
+ analyzer = XRayAnalyzer()
295
+ # Normalize the intensity
296
+ stats = self._calculate_image_statistics(image)
297
+ normalized = (intensity - stats['min']) / (stats['max'] - stats['min'])
298
+ result = analyzer.classify_pixel(normalized, x, y, image)
299
+ return {
300
+ 'type': result['type'],
301
+ 'icon': result['icon'],
302
+ 'confidence': result['confidence'],
303
+ 'color': result['color']
304
+ }
305
+ else:
306
+ # Fallback to simple classification; stats must be computed here as well,
+ # since the XRayAnalyzer branch above did not run (fixes a NameError)
+ stats = self._calculate_image_statistics(image)
307
+ normalized = (intensity - stats['min']) / (stats['max'] - stats['min'])
308
+
309
+ if normalized > 0.9:
310
+ return {'type': 'bone/metal', 'icon': '🦴', 'intensity_range': 'very high'}
311
+ elif normalized > 0.6:
312
+ return {'type': 'bone', 'icon': '🦴', 'intensity_range': 'high'}
313
+ elif normalized > 0.3:
314
+ return {'type': 'soft_tissue', 'icon': 'πŸ”΄', 'intensity_range': 'medium'}
315
+ else:
316
+ return {'type': 'air/lung', 'icon': '🌫️', 'intensity_range': 'low'}
317
+
318
+ def _perform_fat_segmentation(
319
+ self,
320
+ image: np.ndarray,
321
+ pixel_spacing: Optional[List[float]] = None
322
+ ) -> Dict[str, Any]:
323
+ """Perform fat segmentation using our existing engine"""
324
+ if not self.fat_engine:
325
+ return {"error": "Fat segmentation not available"}
326
+
327
+ # Use existing fat segmentation
328
+ segmentation_result = self.fat_engine.segment_fat_tissue(image)
329
+
330
+ results = {
331
+ "statistics": segmentation_result.get("statistics", {}),
332
+ "segments": {}
333
+ }
334
+
335
+ # Calculate areas if pixel spacing provided
336
+ if pixel_spacing:
337
+ pixel_area_mm2 = pixel_spacing[0] * pixel_spacing[1]
338
+ for segment_type in ["total", "subcutaneous", "visceral"]:
339
+ pixel_key = f"{segment_type}_fat_pixels"
340
+ if pixel_key in results["statistics"]:
341
+ area_mm2 = results["statistics"][pixel_key] * pixel_area_mm2
342
+ results["segments"][segment_type] = {
343
+ "pixels": results["statistics"][pixel_key],
344
+ "area_mm2": area_mm2,
345
+ "area_cm2": area_mm2 / 100
346
+ }
347
+
348
+ # Add interpretation
349
+ if "total_fat_percentage" in results["statistics"]:
350
+ results["interpretation"] = self._interpret_fat_results(
351
+ results["statistics"]
352
+ )
353
+
354
+ return results
355
+
356
+ def _interpret_fat_results(self, stats: Dict[str, float]) -> Dict[str, Any]:
357
+ """Interpret fat segmentation results"""
358
+ interpretation = {
359
+ "obesity_risk": "normal",
360
+ "visceral_risk": "normal",
361
+ "recommendations": []
362
+ }
363
+
364
+ total_fat = stats.get("total_fat_percentage", 0)
365
+ visceral_ratio = stats.get("visceral_subcutaneous_ratio", 0)
366
+
367
+ # Obesity assessment
368
+ if total_fat > 40:
369
+ interpretation["obesity_risk"] = "severe"
370
+ interpretation["recommendations"].append("Immediate weight management required")
371
+ elif total_fat > 30:
372
+ interpretation["obesity_risk"] = "moderate"
373
+ interpretation["recommendations"].append("Weight reduction recommended")
374
+ elif total_fat > 25:
375
+ interpretation["obesity_risk"] = "mild"
376
+ interpretation["recommendations"].append("Monitor weight trend")
377
+
378
+ # Visceral fat assessment
379
+ if visceral_ratio > 0.5:
380
+ interpretation["visceral_risk"] = "high"
381
+ interpretation["recommendations"].append("High visceral fat - metabolic risk")
382
+ elif visceral_ratio > 0.3:
383
+ interpretation["visceral_risk"] = "moderate"
384
+
385
+ return interpretation
386
+
387
+ def _perform_xray_analysis(self, image: np.ndarray) -> Dict[str, Any]:
388
+ """Perform comprehensive X-ray analysis using XRayAnalyzer"""
389
+ if not XRAY_ANALYZER_AVAILABLE:
390
+ return {"error": "X-ray analysis not available"}
391
+
392
+ # Use the XRay analyzer
393
+ analysis_results = analyze_xray(image)
394
+
395
+ results = {
396
+ "segments": {},
397
+ "tissue_distribution": {},
398
+ "clinical_findings": []
399
+ }
400
+
401
+ # Process segmentation results
402
+ if "segments" in analysis_results:
403
+ for tissue_type, mask in analysis_results["segments"].items():
404
+ if np.any(mask):
405
+ pixel_count = np.sum(mask)
406
+ percentage = analysis_results["percentages"].get(tissue_type, 0)
407
+ results["segments"][tissue_type] = {
408
+ "pixels": int(pixel_count),
409
+ "percentage": round(percentage, 2),
410
+ "present": True
411
+ }
412
+
413
+ # Add tissue distribution
414
+ if "percentages" in analysis_results:
415
+ results["tissue_distribution"] = analysis_results["percentages"]
416
+
417
+ # Add clinical analysis
418
+ if "clinical_analysis" in analysis_results:
419
+ clinical = analysis_results["clinical_analysis"]
420
+
421
+ # Quality assessment
422
+ if "quality_assessment" in clinical:
423
+ results["quality"] = clinical["quality_assessment"]
424
+
425
+ # Abnormality detection
426
+ if "abnormality_detection" in clinical:
427
+ abnorm = clinical["abnormality_detection"]
428
+ if abnorm.get("detected", False):
429
+ for finding in abnorm.get("findings", []):
430
+ results["clinical_findings"].append({
431
+ "type": finding.get("type", "unknown"),
432
+ "description": finding.get("description", ""),
433
+ "confidence": finding.get("confidence", "low")
434
+ })
435
+
436
+ # Tissue distribution analysis
437
+ if "tissue_distribution" in clinical:
438
+ dist = clinical["tissue_distribution"]
439
+ if "bone_to_soft_ratio" in dist:
440
+ results["bone_soft_ratio"] = dist["bone_to_soft_ratio"]
441
+
442
+ # Add interpretation
443
+ results["interpretation"] = self._interpret_xray_results(results)
444
+
445
+ return results
446
+
447
+ def _interpret_xray_results(self, results: Dict[str, Any]) -> Dict[str, Any]:
448
+ """Interpret X-ray analysis results"""
449
+ interpretation = {
450
+ "summary": "Normal X-ray appearance",
451
+ "findings": [],
452
+ "recommendations": []
453
+ }
454
+
455
+ # Check for abnormal findings
456
+ if results.get("clinical_findings"):
457
+ interpretation["summary"] = "Abnormal findings detected"
458
+ for finding in results["clinical_findings"]:
459
+ interpretation["findings"].append(finding["description"])
460
+
461
+ # Check tissue distribution
462
+ tissue_dist = results.get("tissue_distribution", {})
463
+ if tissue_dist.get("metal", 0) > 0.5:
464
+ interpretation["findings"].append("Metal artifact/implant present")
465
+
466
+ if tissue_dist.get("fluid", 0) > 5:
467
+ interpretation["findings"].append("Possible fluid accumulation")
468
+ interpretation["recommendations"].append("Clinical correlation recommended")
469
+
470
+ # Check quality
471
+ quality = results.get("quality", {})
472
+ if quality.get("overall") in ["poor", "fair"]:
473
+ interpretation["recommendations"].append("Consider repeat imaging for better quality")
474
+
475
+ return interpretation
476
+
477
+ def _calculate_confidence(self, roi_pixels: np.ndarray, center_value: float) -> float:
478
+ """Calculate confidence score based on ROI homogeneity"""
479
+ if roi_pixels.size == 0:
480
+ return 0.0
481
+
482
+ # Handle single pixel or uniform regions
483
+ if roi_pixels.size == 1 or np.all(roi_pixels == roi_pixels.flat[0]):
484
+ return 1.0 # Perfect confidence for uniform regions
485
+
486
+ # Check how consistent the ROI is
487
+ std = np.std(roi_pixels)
488
+ mean = np.mean(roi_pixels)
489
+
490
+ # Handle zero std (uniform region)
491
+ if std == 0:
492
+ return 1.0
493
+
494
+ # How close is center value to mean
495
+ center_deviation = abs(center_value - mean) / std
496
+
497
+ # Coefficient of variation
498
+ cv = std / (abs(mean) + 1e-6)
499
+
500
+ # Lower CV = more homogeneous = higher confidence
501
+ # Also consider if center value is close to mean
502
+ confidence = max(0.0, min(1.0, 1.0 - cv))
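+ # Worked example: a fat ROI with mean -70 HU and std 5 gives cv ~ 0.07, hence confidence ~ 0.93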
503
+
504
+ # Reduce confidence if center is far from mean
505
+ if center_deviation > 2: # More than 2 standard deviations
506
+ confidence *= 0.8
507
+
508
+ return round(confidence, 2)
509
+
510
+ def _generate_reasoning(
511
+ self,
512
+ hu_value: float,
513
+ tissue_type: Dict[str, str],
514
+ roi_pixels: np.ndarray
515
+ ) -> str:
516
+ """Generate reasoning for the classification"""
517
+ reasoning_parts = []
518
+
519
+ # HU value interpretation
520
+ reasoning_parts.append(f"HU value of {hu_value:.1f} falls within {tissue_type['type']} range")
521
+
522
+ # Homogeneity assessment
523
+ std = np.std(roi_pixels)
524
+ if std < 10:
525
+ reasoning_parts.append("Homogeneous region suggests uniform tissue")
526
+ elif std < 30:
527
+ reasoning_parts.append("Moderate heterogeneity observed")
528
+ else:
529
+ reasoning_parts.append("High heterogeneity - possible mixed tissues or pathology")
530
+
531
+ return ". ".join(reasoning_parts)
532
+
533
+ def _calculate_image_statistics(self, image: np.ndarray) -> Dict[str, float]:
534
+ """Calculate comprehensive image statistics"""
535
+ return {
536
+ "min": float(np.min(image)),
537
+ "max": float(np.max(image)),
538
+ "mean": float(np.mean(image)),
539
+ "std": float(np.std(image)),
540
+ "median": float(np.median(image)),
541
+ "p5": float(np.percentile(image, 5)),
542
+ "p95": float(np.percentile(image, 95))
543
+ }
544
+
545
+ def _assess_image_quality(self, image: np.ndarray) -> Dict[str, Any]:
546
+ """Assess image quality metrics"""
547
+ # Simple quality metrics
548
+ dynamic_range = np.max(image) - np.min(image)
549
+ snr = np.mean(image) / (np.std(image) + 1e-6)
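+ # Note: global mean/std is only a rough SNR proxy; on HU data the mean (and
+ # hence snr) can be negative, which the snr > 10 check below rates as poor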
550
+
551
+ quality = {
552
+ "dynamic_range": float(dynamic_range),
553
+ "snr": float(snr),
554
+ "assessment": "good" if snr > 10 and dynamic_range > 100 else "poor"
555
+ }
556
+
557
+ return quality
558
+
559
+ def _correlate_with_clinical(
560
+ self,
561
+ analysis_results: Dict[str, Any],
562
+ clinical_context: Dict[str, Any]
563
+ ) -> Dict[str, Any]:
564
+ """Correlate findings with clinical context"""
565
+ correlation = {
566
+ "relevant_findings": [],
567
+ "clinical_significance": "unknown"
568
+ }
569
+
570
+ # Example correlation logic
571
+ if "symptoms" in clinical_context:
572
+ symptoms = clinical_context["symptoms"]
573
+
574
+ if "dyspnea" in symptoms and analysis_results.get("modality") == "CT":
575
+ # Check for lung pathology indicators
576
+ if "segmentation" in analysis_results:
577
+ fat_percent = analysis_results["segmentation"].get("statistics", {}).get(
578
+ "total_fat_percentage", 0
579
+ )
580
+ if fat_percent > 35:
581
+ correlation["relevant_findings"].append(
582
+ "High body fat may contribute to dyspnea"
583
+ )
584
+
585
+ return correlation
586
+
587
+ def _create_html_report(self, results: Dict[str, Any]) -> str:
588
+ """Create HTML report for visual mode"""
589
+ html_parts = ['<div class="medical-analysis-report">']
590
+
591
+ # Header
592
+ html_parts.append(f'<h3>Medical Image Analysis Report</h3>')
593
+ html_parts.append(f'<p><strong>Modality:</strong> {results.get("modality", "Unknown")}</p>')
594
+
595
+ # Point analysis
596
+ if "point_analysis" in results:
597
+ pa = results["point_analysis"]
598
+ html_parts.append('<div class="point-analysis">')
599
+ html_parts.append('<h4>Point Analysis</h4>')
600
+
601
+ if "hu_value" in pa:
602
+ html_parts.append(f'<p>HU Value: {pa["hu_value"]:.1f}</p>')
603
+
604
+ tissue = pa.get("tissue_type", {})
605
+ html_parts.append(
606
+ f'<p>Tissue Type: {tissue.get("icon", "")} {tissue.get("type", "Unknown")}</p>'
607
+ )
608
+
609
+ if "confidence" in pa:
610
+ html_parts.append(f'<p>Confidence: {pa["confidence"]*100:.0f}%</p>')
611
+
612
+ if "reasoning" in pa:
613
+ html_parts.append(f'<p><em>{pa["reasoning"]}</em></p>')
614
+
615
+ html_parts.append('</div>')
616
+
617
+ # Segmentation results
618
+ if "segmentation" in results and "statistics" in results["segmentation"]:
619
+ stats = results["segmentation"]["statistics"]
620
+ html_parts.append('<div class="segmentation-results">')
621
+ html_parts.append('<h4>Fat Segmentation Analysis</h4>')
622
+ html_parts.append(f'<p>Total Fat: {stats.get("total_fat_percentage", 0):.1f}%</p>')
623
+ html_parts.append(f'<p>Subcutaneous: {stats.get("subcutaneous_fat_percentage", 0):.1f}%</p>')
624
+ html_parts.append(f'<p>Visceral: {stats.get("visceral_fat_percentage", 0):.1f}%</p>')
625
+
626
+ if "interpretation" in results["segmentation"]:
627
+ interp = results["segmentation"]["interpretation"]
628
+ html_parts.append(f'<p><strong>Risk:</strong> {interp.get("obesity_risk", "normal")}</p>')
629
+
630
+ html_parts.append('</div>')
631
+
632
+ html_parts.append('</div>')
633
+
634
+ # Add CSS
635
+ html_parts.insert(0, '''<style>
636
+ .medical-analysis-report {
637
+ font-family: Arial, sans-serif;
638
+ padding: 15px;
639
+ background: #f5f5f5;
640
+ border-radius: 8px;
641
+ }
642
+ .medical-analysis-report h3, .medical-analysis-report h4 {
643
+ color: #2c3e50;
644
+ margin-top: 10px;
645
+ }
646
+ .point-analysis, .segmentation-results {
647
+ background: white;
648
+ padding: 10px;
649
+ margin: 10px 0;
650
+ border-radius: 5px;
651
+ box-shadow: 0 2px 4px rgba(0,0,0,0.1);
652
+ }
653
+ </style>''')
654
+
655
+ return ''.join(html_parts)
656
+
657
+ def process_file(self, file_path: str) -> Tuple[np.ndarray, np.ndarray, Dict[str, Any]]:
658
+ """
659
+ Process uploaded file (DICOM or regular image)
660
+
661
+ Returns:
662
+ pixel_array: numpy array of pixel values
663
+ display_array: normalized array for display (0-255)
664
+ metadata: file metadata including modality
665
+ """
666
+ if not file_path:
667
+ raise ValueError("No file provided")
668
+
669
+ file_ext = Path(file_path).suffix.lower()
670
+
671
+ # Try DICOM first - always try to read as DICOM regardless of extension
672
+ if PYDICOM_AVAILABLE:
673
+ try:
674
+ ds = pydicom.dcmread(file_path, force=True)
675
+
676
+ # Extract pixel array
677
+ pixel_array = ds.pixel_array.astype(float)
678
+
679
+ # Get modality
680
+ modality = ds.get('Modality', 'CT')
681
+
682
+ # Apply DICOM transformations
683
+ if 'RescaleSlope' in ds and 'RescaleIntercept' in ds:
684
+ pixel_array = pixel_array * ds.RescaleSlope + ds.RescaleIntercept
685
+
686
+ # Normalize for display
687
+ if modality == 'CT':
688
+ # CT: typically -1000 to 3000 HU
689
+ display_array = np.clip((pixel_array + 1000) / 4000 * 255, 0, 255).astype(np.uint8)
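+ # (hu + 1000) / 4000 maps the -1000..3000 HU window linearly onto 0..255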
690
+ else:
691
+ # X-ray: normalize to full range
692
+ pmin, pmax = np.percentile(pixel_array, [1, 99])
693
+ display_array = np.clip((pixel_array - pmin) / (pmax - pmin) * 255, 0, 255).astype(np.uint8)
694
+
695
+ metadata = {
696
+ 'modality': modality,
697
+ 'shape': pixel_array.shape,
698
+ 'patient_name': str(ds.get('PatientName', 'Anonymous')),
699
+ 'study_date': str(ds.get('StudyDate', '')),
700
+ 'file_type': 'DICOM'
701
+ }
702
+
703
+ if 'WindowCenter' in ds and 'WindowWidth' in ds:
704
+ metadata['window_center'] = float(ds.WindowCenter if isinstance(ds.WindowCenter, (int, float)) else ds.WindowCenter[0])
705
+ metadata['window_width'] = float(ds.WindowWidth if isinstance(ds.WindowWidth, (int, float)) else ds.WindowWidth[0])
706
+
707
+ return pixel_array, display_array, metadata
708
+
709
+ except Exception:
710
+ # If DICOM reading fails, try as regular image
711
+ pass
712
+
713
+ # Handle regular images
714
+ try:
715
+ img = Image.open(file_path)
716
+
717
+ # Convert to grayscale if needed
718
+ if img.mode != 'L':
719
+ img = img.convert('L')
720
+
721
+ pixel_array = np.array(img).astype(float)
722
+ display_array = pixel_array.astype(np.uint8)
723
+
724
+ # Guess modality from filename
725
+ filename_lower = Path(file_path).name.lower()
726
+ if 'ct' in filename_lower:
727
+ modality = 'CT'
728
+ else:
729
+ modality = 'CR' # Default to X-ray
730
+
731
+ metadata = {
732
+ 'modality': modality,
733
+ 'shape': pixel_array.shape,
734
+ 'file_type': 'Image',
735
+ 'format': img.format
736
+ }
737
+
738
+ return pixel_array, display_array, metadata
739
+
740
+ except Exception as e:
741
+ raise ValueError(f"Could not load file: {str(e)}")
742
+
743
+ def _get_timestamp(self) -> str:
744
+ """Get current timestamp"""
745
+ from datetime import datetime
746
+ return datetime.now().isoformat()
747
+
748
+ def api_info(self) -> Dict[str, Any]:
749
+ """Return API information for the component"""
750
+ return {
751
+ "info": {
752
+ "type": "object",
753
+ "description": "Medical image analysis results",
754
+ "properties": {
755
+ "modality": {"type": "string"},
756
+ "measurements": {"type": "object"},
757
+ "findings": {"type": "array"},
758
+ "segmentation": {"type": "object"},
759
+ "quality_metrics": {"type": "object"}
760
+ }
761
+ },
762
+ "serialized_info": True
763
+ }
764
+
765
+ def example_inputs(self) -> List[Any]:
766
+ """Provide example inputs"""
767
+ return [
768
+ {
769
+ "image": np.zeros((512, 512)),
770
+ "modality": "CT",
771
+ "task": "analyze_point",
772
+ "roi": {"x": 256, "y": 256, "radius": 10}
773
+ }
774
+ ]
775
+
776
+ def example_outputs(self) -> List[Any]:
777
+ """Provide example outputs"""
778
+ return [
779
+ {
780
+ "modality": "CT",
781
+ "point_analysis": {
782
+ "hu_value": -50.0,
783
+ "tissue_type": {"type": "fat", "icon": "🟑"},
784
+ "confidence": 0.95,
785
+ "reasoning": "HU value of -50.0 falls within fat range. Homogeneous region suggests uniform tissue"
786
+ }
787
+ }
788
+ ]
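
For reference, a minimal usage sketch of the component above. Illustrative only: it assumes the package is importable as gradio_medical_image_analyzer (matching the source path in this commit); if the optional fat_segmentation module is absent, the "segmentation" key simply carries an error entry.

    import numpy as np
    from gradio_medical_image_analyzer import MedicalImageAnalyzer

    analyzer = MedicalImageAnalyzer(analysis_mode="structured")

    # Synthetic CT slice: air background (-1000 HU) with a fat patch (-75 HU)
    image = np.full((512, 512), -1000.0)
    image[200:300, 200:300] = -75.0

    results = analyzer.analyze_image(
        image,
        modality="CT",
        pixel_spacing=[0.7, 0.7],  # mm per pixel, used for fat area estimates
        roi={"x": 250, "y": 250, "radius": 10},
        task="full_analysis",
    )
    print(results["point_analysis"]["tissue_type"])  # {'type': 'fat', ...}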
src/backend/gradio_medical_image_analyzer/medical_image_analyzer.pyi ADDED
@@ -0,0 +1,984 @@
1
+ #!/usr/bin/env python3
2
+ # -*- coding: utf-8 -*-
3
+ """
4
+ Gradio Custom Component: Medical Image Analyzer
5
+ AI-Agent optimized component for medical image analysis
6
+ """
7
+
8
+ from __future__ import annotations
9
+ from typing import Any, Dict, List, Optional, Tuple, Union
10
+ import gradio as gr
11
+ from gradio.components.base import Component
12
+ from gradio.events import Events
13
+ import numpy as np
14
+ import json
+ # Path, Image and the optional pydicom import are required by process_file below
+ from pathlib import Path
+ from PIL import Image
+ try:
+ import pydicom
+ PYDICOM_AVAILABLE = True
+ except ImportError:
+ PYDICOM_AVAILABLE = False
15
+
16
+ # Import our existing analyzers
17
+ try:
18
+ from .fat_segmentation import FatSegmentationEngine, segment_fat, create_fat_overlay
19
+ FAT_SEGMENTATION_AVAILABLE = True
20
+ except ImportError:
21
+ FAT_SEGMENTATION_AVAILABLE = False
22
+ FatSegmentationEngine = None
23
+ segment_fat = None
24
+ create_fat_overlay = None
25
+
26
+ try:
27
+ from .xray_analyzer import XRayAnalyzer, analyze_xray, classify_xray_tissue
28
+ XRAY_ANALYZER_AVAILABLE = True
29
+ except ImportError:
30
+ XRAY_ANALYZER_AVAILABLE = False
31
+ XRayAnalyzer = None
32
+ analyze_xray = None
33
+ classify_xray_tissue = None
34
+
35
+ from gradio.events import Dependency
36
+
37
+ class MedicalImageAnalyzer(Component):
38
+ """
39
+ A Gradio component for AI-agent compatible medical image analysis.
40
+
41
+ Provides structured output for:
42
+ - HU value analysis (CT only)
43
+ - Tissue classification
44
+ - Fat segmentation (subcutaneous, visceral)
45
+ - Confidence scores and reasoning
46
+ """
47
+
48
+ EVENTS = [
49
+ Events.change,
50
+ Events.select,
51
+ Events.upload,
52
+ Events.clear,
53
+ ]
54
+
55
+ # HU ranges for tissue classification (CT only)
56
+ HU_RANGES = {
57
+ 'air': {'min': -1000, 'max': -500, 'icon': '🌫️'},
58
+ 'fat': {'min': -100, 'max': -50, 'icon': '🟑'},
59
+ 'water': {'min': -10, 'max': 10, 'icon': 'πŸ’§'},
60
+ 'soft_tissue': {'min': 30, 'max': 80, 'icon': 'πŸ”΄'},
61
+ 'bone': {'min': 200, 'max': 3000, 'icon': '🦴'}
62
+ }
63
+
64
+ def __init__(
65
+ self,
66
+ value: Optional[Dict[str, Any]] = None,
67
+ *,
68
+ label: Optional[str] = None,
69
+ info: Optional[str] = None,
70
+ every: Optional[float] = None,
71
+ show_label: Optional[bool] = None,
72
+ container: Optional[bool] = None,
73
+ scale: Optional[int] = None,
74
+ min_width: Optional[int] = None,
75
+ visible: Optional[bool] = None,
76
+ elem_id: Optional[str] = None,
77
+ elem_classes: Optional[List[str] | str] = None,
78
+ render: Optional[bool] = None,
79
+ key: Optional[int | str] = None,
80
+ # Custom parameters
81
+ analysis_mode: str = "structured", # "structured" for agents, "visual" for humans
82
+ include_confidence: bool = True,
83
+ include_reasoning: bool = True,
84
+ segmentation_types: Optional[List[str]] = None,
85
+ **kwargs,
86
+ ):
87
+ """
88
+ Initialize the Medical Image Analyzer component.
89
+
90
+ Args:
91
+ analysis_mode: "structured" for AI agents, "visual" for human interpretation
92
+ include_confidence: Include confidence scores in results
93
+ include_reasoning: Include reasoning/explanation for findings
94
+ segmentation_types: List of segmentation types to perform
95
+ """
96
+ self.analysis_mode = analysis_mode
97
+ self.include_confidence = include_confidence
98
+ self.include_reasoning = include_reasoning
99
+ self.segmentation_types = segmentation_types or ["fat_total", "fat_subcutaneous", "fat_visceral"]
100
+
101
+ if FAT_SEGMENTATION_AVAILABLE:
102
+ self.fat_engine = FatSegmentationEngine()
103
+ else:
104
+ self.fat_engine = None
105
+
106
+ super().__init__(
107
+ label=label,
108
+ info=info,
109
+ every=every,
110
+ show_label=show_label,
111
+ container=container,
112
+ scale=scale,
113
+ min_width=min_width,
114
+ visible=visible,
115
+ elem_id=elem_id,
116
+ elem_classes=elem_classes,
117
+ render=render,
118
+ key=key,
119
+ value=value,
120
+ **kwargs,
121
+ )
122
+
123
+ def preprocess(self, payload: Dict[str, Any]) -> Dict[str, Any]:
124
+ """
125
+ Preprocess input from frontend.
126
+ Expected format:
127
+ {
128
+ "image": numpy array or base64,
129
+ "modality": "CT" or "XR",
130
+ "pixel_spacing": [x, y] (optional),
131
+ "roi": {"x": int, "y": int, "radius": int} (optional),
132
+ "task": "analyze_point" | "segment_fat" | "full_analysis"
133
+ }
134
+ """
135
+ if payload is None:
136
+ return None
137
+
138
+ # Validate required fields
139
+ if "image" not in payload:
140
+ return {"error": "No image provided"}
141
+
142
+ return payload
143
+
144
+ def postprocess(self, value: Dict[str, Any]) -> Dict[str, Any]:
145
+ """
146
+ Postprocess output for frontend.
147
+ Returns structured data for AI agents or formatted HTML for visual mode.
148
+ """
149
+ if value is None:
150
+ return None
151
+
152
+ if "error" in value:
153
+ return value
154
+
155
+ # For visual mode, convert to HTML
156
+ if self.analysis_mode == "visual":
157
+ value["html_report"] = self._create_html_report(value)
158
+
159
+ return value
160
+
161
+ def analyze_image(
162
+ self,
163
+ image: np.ndarray,
164
+ modality: str = "CT",
165
+ pixel_spacing: Optional[List[float]] = None,
166
+ roi: Optional[Dict[str, int]] = None,
167
+ task: str = "full_analysis",
168
+ clinical_context: Optional[Dict[str, Any]] = None
169
+ ) -> Dict[str, Any]:
170
+ """
171
+ Main analysis function for medical images.
172
+
173
+ Args:
174
+ image: 2D numpy array of pixel values
175
+ modality: "CT" or "XR" (X-Ray)
176
+ pixel_spacing: [x, y] spacing in mm
177
+ roi: Region of interest {"x": int, "y": int, "radius": int}
178
+ task: Analysis task
179
+ clinical_context: Additional context for guided analysis
180
+
181
+ Returns:
182
+ Structured analysis results
183
+ """
184
+ # Handle None or invalid image
185
+ if image is None:
186
+ return {"error": "No image provided", "modality": modality}
187
+
188
+ results = {
189
+ "modality": modality,
190
+ "timestamp": self._get_timestamp(),
191
+ "measurements": {},
192
+ "findings": [],
193
+ "segmentation": {},
194
+ "quality_metrics": self._assess_image_quality(image)
195
+ }
196
+
197
+ if task == "analyze_point" and roi:
198
+ # Analyze specific point/region
199
+ results["point_analysis"] = self._analyze_roi(image, modality, roi)
200
+
201
+ elif task == "segment_fat" and modality == "CT":
202
+ # Fat segmentation
203
+ results["segmentation"] = self._perform_fat_segmentation(image, pixel_spacing)
204
+
205
+ elif task == "full_analysis":
206
+ # Complete analysis
207
+ if roi:
208
+ results["point_analysis"] = self._analyze_roi(image, modality, roi)
209
+ if modality == "CT":
210
+ results["segmentation"] = self._perform_fat_segmentation(image, pixel_spacing)
211
+ elif modality in ["CR", "DX", "RX", "DR", "X-Ray"]:
212
+ results["segmentation"] = self._perform_xray_analysis(image)
213
+ results["statistics"] = self._calculate_image_statistics(image)
214
+
215
+ # Add clinical interpretation if context provided
216
+ if clinical_context:
217
+ results["clinical_correlation"] = self._correlate_with_clinical(
218
+ results, clinical_context
219
+ )
220
+
221
+ return results
222
+
223
+ def _analyze_roi(self, image: np.ndarray, modality: str, roi: Dict[str, int]) -> Dict[str, Any]:
224
+ """Analyze a specific region of interest"""
225
+ x, y, radius = roi.get("x", 0), roi.get("y", 0), roi.get("radius", 5)
+ # Clamp the center into the image so image[y, x] below cannot raise IndexError
+ x = int(np.clip(x, 0, image.shape[1] - 1))
+ y = int(np.clip(y, 0, image.shape[0] - 1))
226
+
227
+ # Extract ROI
228
+ y_min = max(0, y - radius)
229
+ y_max = min(image.shape[0], y + radius)
230
+ x_min = max(0, x - radius)
231
+ x_max = min(image.shape[1], x + radius)
232
+
233
+ roi_pixels = image[y_min:y_max, x_min:x_max]
234
+
235
+ analysis = {
236
+ "location": {"x": x, "y": y},
237
+ "roi_size": {"width": x_max - x_min, "height": y_max - y_min},
238
+ "statistics": {
239
+ "mean": float(np.mean(roi_pixels)),
240
+ "std": float(np.std(roi_pixels)),
241
+ "min": float(np.min(roi_pixels)),
242
+ "max": float(np.max(roi_pixels))
243
+ }
244
+ }
245
+
246
+ if modality == "CT":
247
+ # HU-based analysis
248
+ center_value = float(image[y, x])
249
+ analysis["hu_value"] = center_value
250
+ analysis["tissue_type"] = self._classify_tissue_by_hu(center_value)
251
+
252
+ if self.include_confidence:
253
+ analysis["confidence"] = self._calculate_confidence(roi_pixels, center_value)
254
+
255
+ if self.include_reasoning:
256
+ analysis["reasoning"] = self._generate_reasoning(
257
+ center_value, analysis["tissue_type"], roi_pixels
258
+ )
259
+ else:
260
+ # Intensity-based analysis for X-Ray
261
+ analysis["intensity"] = float(image[y, x])
262
+ analysis["tissue_type"] = self._classify_xray_intensity(
263
+ image[y, x], x, y, image
264
+ )
265
+
266
+ return analysis
267
+
268
+ def _classify_tissue_by_hu(self, hu_value: float) -> Dict[str, str]:
269
+ """Classify tissue type based on HU value"""
270
+ for tissue_type, ranges in self.HU_RANGES.items():
271
+ if ranges['min'] <= hu_value <= ranges['max']:
272
+ return {
273
+ 'type': tissue_type,
274
+ 'icon': ranges['icon'],
275
+ 'hu_range': f"{ranges['min']} to {ranges['max']}"
276
+ }
277
+
278
+ # Edge cases
279
+ if hu_value < -1000:
280
+ return {'type': 'air', 'icon': '🌫️', 'hu_range': '< -1000'}
281
+ elif hu_value > 3000:
+ return {'type': 'metal/artifact', 'icon': 'βš™οΈ', 'hu_range': '> 3000'}
+ else:
+ # Values in the gaps between the defined ranges (e.g. 80-200 HU) are transitional
+ return {'type': 'indeterminate', 'icon': '❓', 'hu_range': 'between defined ranges'}
283
+
284
+ def _classify_xray_intensity(self, intensity: float, x: int, y: int, image: np.ndarray) -> Dict[str, str]:
285
+ """Classify tissue in X-Ray based on intensity"""
286
+ if XRAY_ANALYZER_AVAILABLE:
287
+ # Use the advanced XRay analyzer
288
+ analyzer = XRayAnalyzer()
289
+ # Normalize the intensity
290
+ stats = self._calculate_image_statistics(image)
291
+ normalized = (intensity - stats['min']) / (stats['max'] - stats['min'])
292
+ result = analyzer.classify_pixel(normalized, x, y, image)
293
+ return {
294
+ 'type': result['type'],
295
+ 'icon': result['icon'],
296
+ 'confidence': result['confidence'],
297
+ 'color': result['color']
298
+ }
299
+ else:
300
+ # Fallback to simple classification; stats must be computed here as well,
+ # since the XRayAnalyzer branch above did not run (fixes a NameError)
+ stats = self._calculate_image_statistics(image)
301
+ normalized = (intensity - stats['min']) / (stats['max'] - stats['min'])
302
+
303
+ if normalized > 0.9:
304
+ return {'type': 'bone/metal', 'icon': '🦴', 'intensity_range': 'very high'}
305
+ elif normalized > 0.6:
306
+ return {'type': 'bone', 'icon': '🦴', 'intensity_range': 'high'}
307
+ elif normalized > 0.3:
308
+ return {'type': 'soft_tissue', 'icon': 'πŸ”΄', 'intensity_range': 'medium'}
309
+ else:
310
+ return {'type': 'air/lung', 'icon': '🌫️', 'intensity_range': 'low'}
311
+
312
+ def _perform_fat_segmentation(
313
+ self,
314
+ image: np.ndarray,
315
+ pixel_spacing: Optional[List[float]] = None
316
+ ) -> Dict[str, Any]:
317
+ """Perform fat segmentation using our existing engine"""
318
+ if not self.fat_engine:
319
+ return {"error": "Fat segmentation not available"}
320
+
321
+ # Use existing fat segmentation
322
+ segmentation_result = self.fat_engine.segment_fat_tissue(image)
323
+
324
+ results = {
325
+ "statistics": segmentation_result.get("statistics", {}),
326
+ "segments": {}
327
+ }
328
+
329
+ # Calculate areas if pixel spacing provided
330
+ if pixel_spacing:
331
+ pixel_area_mm2 = pixel_spacing[0] * pixel_spacing[1]
332
+ for segment_type in ["total", "subcutaneous", "visceral"]:
333
+ pixel_key = f"{segment_type}_fat_pixels"
334
+ if pixel_key in results["statistics"]:
335
+ area_mm2 = results["statistics"][pixel_key] * pixel_area_mm2
336
+ results["segments"][segment_type] = {
337
+ "pixels": results["statistics"][pixel_key],
338
+ "area_mm2": area_mm2,
339
+ "area_cm2": area_mm2 / 100
340
+ }
341
+
342
+ # Add interpretation
343
+ if "total_fat_percentage" in results["statistics"]:
344
+ results["interpretation"] = self._interpret_fat_results(
345
+ results["statistics"]
346
+ )
347
+
348
+ return results
349
+
350
+ def _interpret_fat_results(self, stats: Dict[str, float]) -> Dict[str, Any]:
351
+ """Interpret fat segmentation results"""
352
+ interpretation = {
353
+ "obesity_risk": "normal",
354
+ "visceral_risk": "normal",
355
+ "recommendations": []
356
+ }
357
+
358
+ total_fat = stats.get("total_fat_percentage", 0)
359
+ visceral_ratio = stats.get("visceral_subcutaneous_ratio", 0)
360
+
361
+ # Obesity assessment
362
+ if total_fat > 40:
363
+ interpretation["obesity_risk"] = "severe"
364
+ interpretation["recommendations"].append("Immediate weight management required")
365
+ elif total_fat > 30:
366
+ interpretation["obesity_risk"] = "moderate"
367
+ interpretation["recommendations"].append("Weight reduction recommended")
368
+ elif total_fat > 25:
369
+ interpretation["obesity_risk"] = "mild"
370
+ interpretation["recommendations"].append("Monitor weight trend")
371
+
372
+ # Visceral fat assessment
373
+ if visceral_ratio > 0.5:
374
+ interpretation["visceral_risk"] = "high"
375
+ interpretation["recommendations"].append("High visceral fat - metabolic risk")
376
+ elif visceral_ratio > 0.3:
377
+ interpretation["visceral_risk"] = "moderate"
378
+
379
+ return interpretation
380
+
381
+ def _perform_xray_analysis(self, image: np.ndarray) -> Dict[str, Any]:
382
+ """Perform comprehensive X-ray analysis using XRayAnalyzer"""
383
+ if not XRAY_ANALYZER_AVAILABLE:
384
+ return {"error": "X-ray analysis not available"}
385
+
386
+ # Use the XRay analyzer
387
+ analysis_results = analyze_xray(image)
388
+
389
+ results = {
390
+ "segments": {},
391
+ "tissue_distribution": {},
392
+ "clinical_findings": []
393
+ }
394
+
395
+ # Process segmentation results
396
+ if "segments" in analysis_results:
397
+ for tissue_type, mask in analysis_results["segments"].items():
398
+ if np.any(mask):
399
+ pixel_count = np.sum(mask)
400
+ percentage = analysis_results["percentages"].get(tissue_type, 0)
401
+ results["segments"][tissue_type] = {
402
+ "pixels": int(pixel_count),
403
+ "percentage": round(percentage, 2),
404
+ "present": True
405
+ }
406
+
407
+ # Add tissue distribution
408
+ if "percentages" in analysis_results:
409
+ results["tissue_distribution"] = analysis_results["percentages"]
410
+
411
+ # Add clinical analysis
412
+ if "clinical_analysis" in analysis_results:
413
+ clinical = analysis_results["clinical_analysis"]
414
+
415
+ # Quality assessment
416
+ if "quality_assessment" in clinical:
417
+ results["quality"] = clinical["quality_assessment"]
418
+
419
+ # Abnormality detection
420
+ if "abnormality_detection" in clinical:
421
+ abnorm = clinical["abnormality_detection"]
422
+ if abnorm.get("detected", False):
423
+ for finding in abnorm.get("findings", []):
424
+ results["clinical_findings"].append({
425
+ "type": finding.get("type", "unknown"),
426
+ "description": finding.get("description", ""),
427
+ "confidence": finding.get("confidence", "low")
428
+ })
429
+
430
+ # Tissue distribution analysis
431
+ if "tissue_distribution" in clinical:
432
+ dist = clinical["tissue_distribution"]
433
+ if "bone_to_soft_ratio" in dist:
434
+ results["bone_soft_ratio"] = dist["bone_to_soft_ratio"]
435
+
436
+ # Add interpretation
437
+ results["interpretation"] = self._interpret_xray_results(results)
438
+
439
+ return results
440
+
441
+ def _interpret_xray_results(self, results: Dict[str, Any]) -> Dict[str, Any]:
442
+ """Interpret X-ray analysis results"""
443
+ interpretation = {
444
+ "summary": "Normal X-ray appearance",
445
+ "findings": [],
446
+ "recommendations": []
447
+ }
448
+
449
+ # Check for abnormal findings
450
+ if results.get("clinical_findings"):
451
+ interpretation["summary"] = "Abnormal findings detected"
452
+ for finding in results["clinical_findings"]:
453
+ interpretation["findings"].append(finding["description"])
454
+
455
+ # Check tissue distribution
456
+ tissue_dist = results.get("tissue_distribution", {})
457
+ if tissue_dist.get("metal", 0) > 0.5:
458
+ interpretation["findings"].append("Metal artifact/implant present")
459
+
460
+ if tissue_dist.get("fluid", 0) > 5:
461
+ interpretation["findings"].append("Possible fluid accumulation")
462
+ interpretation["recommendations"].append("Clinical correlation recommended")
463
+
464
+ # Check quality
465
+ quality = results.get("quality", {})
466
+ if quality.get("overall") in ["poor", "fair"]:
467
+ interpretation["recommendations"].append("Consider repeat imaging for better quality")
468
+
469
+ return interpretation
470
+
471
+ def _calculate_confidence(self, roi_pixels: np.ndarray, center_value: float) -> float:
472
+ """Calculate confidence score based on ROI homogeneity"""
473
+ if roi_pixels.size == 0:
474
+ return 0.0
475
+
476
+ # Handle single pixel or uniform regions
477
+ if roi_pixels.size == 1 or np.all(roi_pixels == roi_pixels.flat[0]):
478
+ return 1.0 # Perfect confidence for uniform regions
479
+
480
+ # Check how consistent the ROI is
481
+ std = np.std(roi_pixels)
482
+ mean = np.mean(roi_pixels)
483
+
484
+ # Handle zero std (uniform region)
485
+ if std == 0:
486
+ return 1.0
487
+
488
+ # How close is center value to mean
489
+ center_deviation = abs(center_value - mean) / std
490
+
491
+ # Coefficient of variation
492
+ cv = std / (abs(mean) + 1e-6)
493
+
494
+ # Lower CV = more homogeneous = higher confidence
495
+ # Also consider if center value is close to mean
496
+ confidence = max(0.0, min(1.0, 1.0 - cv))
497
+
498
+ # Reduce confidence if center is far from mean
499
+ if center_deviation > 2: # More than 2 standard deviations
500
+ confidence *= 0.8
501
+
502
+ return round(confidence, 2)
503
+
504
+ def _generate_reasoning(
505
+ self,
506
+ hu_value: float,
507
+ tissue_type: Dict[str, str],
508
+ roi_pixels: np.ndarray
509
+ ) -> str:
510
+ """Generate reasoning for the classification"""
511
+ reasoning_parts = []
512
+
513
+ # HU value interpretation
514
+ reasoning_parts.append(f"HU value of {hu_value:.1f} falls within {tissue_type['type']} range")
515
+
516
+ # Homogeneity assessment
517
+ std = np.std(roi_pixels)
518
+ if std < 10:
519
+ reasoning_parts.append("Homogeneous region suggests uniform tissue")
520
+ elif std < 30:
521
+ reasoning_parts.append("Moderate heterogeneity observed")
522
+ else:
523
+ reasoning_parts.append("High heterogeneity - possible mixed tissues or pathology")
524
+
525
+ return ". ".join(reasoning_parts)
526
+
527
+ def _calculate_image_statistics(self, image: np.ndarray) -> Dict[str, float]:
528
+ """Calculate comprehensive image statistics"""
529
+ return {
530
+ "min": float(np.min(image)),
531
+ "max": float(np.max(image)),
532
+ "mean": float(np.mean(image)),
533
+ "std": float(np.std(image)),
534
+ "median": float(np.median(image)),
535
+ "p5": float(np.percentile(image, 5)),
536
+ "p95": float(np.percentile(image, 95))
537
+ }
538
+
539
+ def _assess_image_quality(self, image: np.ndarray) -> Dict[str, Any]:
540
+ """Assess image quality metrics"""
541
+ # Simple quality metrics
542
+ dynamic_range = np.max(image) - np.min(image)
543
+ snr = np.mean(image) / (np.std(image) + 1e-6)
544
+
545
+ quality = {
546
+ "dynamic_range": float(dynamic_range),
547
+ "snr": float(snr),
548
+ "assessment": "good" if snr > 10 and dynamic_range > 100 else "poor"
549
+ }
550
+
551
+ return quality
552
+
553
+ def _correlate_with_clinical(
554
+ self,
555
+ analysis_results: Dict[str, Any],
556
+ clinical_context: Dict[str, Any]
557
+ ) -> Dict[str, Any]:
558
+ """Correlate findings with clinical context"""
559
+ correlation = {
560
+ "relevant_findings": [],
561
+ "clinical_significance": "unknown"
562
+ }
563
+
564
+ # Example correlation logic
565
+ if "symptoms" in clinical_context:
566
+ symptoms = clinical_context["symptoms"]
567
+
568
+ if "dyspnea" in symptoms and analysis_results.get("modality") == "CT":
569
+ # Check for lung pathology indicators
570
+ if "segmentation" in analysis_results:
571
+ fat_percent = analysis_results["segmentation"].get("statistics", {}).get(
572
+ "total_fat_percentage", 0
573
+ )
574
+ if fat_percent > 35:
575
+ correlation["relevant_findings"].append(
576
+ "High body fat may contribute to dyspnea"
577
+ )
578
+
579
+ return correlation
580
+
581
+ def _create_html_report(self, results: Dict[str, Any]) -> str:
582
+ """Create HTML report for visual mode"""
583
+ html_parts = ['<div class="medical-analysis-report">']
584
+
585
+ # Header
586
+ html_parts.append(f'<h3>Medical Image Analysis Report</h3>')
587
+ html_parts.append(f'<p><strong>Modality:</strong> {results.get("modality", "Unknown")}</p>')
588
+
589
+ # Point analysis
590
+ if "point_analysis" in results:
591
+ pa = results["point_analysis"]
592
+ html_parts.append('<div class="point-analysis">')
593
+ html_parts.append('<h4>Point Analysis</h4>')
594
+
595
+ if "hu_value" in pa:
596
+ html_parts.append(f'<p>HU Value: {pa["hu_value"]:.1f}</p>')
597
+
598
+ tissue = pa.get("tissue_type", {})
599
+ html_parts.append(
600
+ f'<p>Tissue Type: {tissue.get("icon", "")} {tissue.get("type", "Unknown")}</p>'
601
+ )
602
+
603
+ if "confidence" in pa:
604
+ html_parts.append(f'<p>Confidence: {pa["confidence"]*100:.0f}%</p>')
605
+
606
+ if "reasoning" in pa:
607
+ html_parts.append(f'<p><em>{pa["reasoning"]}</em></p>')
608
+
609
+ html_parts.append('</div>')
610
+
611
+ # Segmentation results
612
+ if "segmentation" in results and "statistics" in results["segmentation"]:
613
+ stats = results["segmentation"]["statistics"]
614
+ html_parts.append('<div class="segmentation-results">')
615
+ html_parts.append('<h4>Fat Segmentation Analysis</h4>')
616
+ html_parts.append(f'<p>Total Fat: {stats.get("total_fat_percentage", 0):.1f}%</p>')
617
+ html_parts.append(f'<p>Subcutaneous: {stats.get("subcutaneous_fat_percentage", 0):.1f}%</p>')
618
+ html_parts.append(f'<p>Visceral: {stats.get("visceral_fat_percentage", 0):.1f}%</p>')
619
+
620
+ if "interpretation" in results["segmentation"]:
621
+ interp = results["segmentation"]["interpretation"]
622
+ html_parts.append(f'<p><strong>Risk:</strong> {interp.get("obesity_risk", "normal")}</p>')
623
+
624
+ html_parts.append('</div>')
625
+
626
+ html_parts.append('</div>')
627
+
628
+ # Add CSS
629
+ html_parts.insert(0, '''<style>
630
+ .medical-analysis-report {
631
+ font-family: Arial, sans-serif;
632
+ padding: 15px;
633
+ background: #f5f5f5;
634
+ border-radius: 8px;
635
+ }
636
+ .medical-analysis-report h3, .medical-analysis-report h4 {
637
+ color: #2c3e50;
638
+ margin-top: 10px;
639
+ }
640
+ .point-analysis, .segmentation-results {
641
+ background: white;
642
+ padding: 10px;
643
+ margin: 10px 0;
644
+ border-radius: 5px;
645
+ box-shadow: 0 2px 4px rgba(0,0,0,0.1);
646
+ }
647
+ </style>''')
648
+
649
+ return ''.join(html_parts)
650
+
651
+ def process_file(self, file_path: str) -> Tuple[np.ndarray, np.ndarray, Dict[str, Any]]:
652
+ """
653
+ Process uploaded file (DICOM or regular image)
654
+
655
+ Returns:
656
+ pixel_array: numpy array of pixel values
657
+ display_array: normalized array for display (0-255)
658
+ metadata: file metadata including modality
659
+ """
660
+ if not file_path:
661
+ raise ValueError("No file provided")
662
+
663
+ file_ext = Path(file_path).suffix.lower()
664
+
665
+ # Try DICOM first - always try to read as DICOM regardless of extension
666
+ if PYDICOM_AVAILABLE:
667
+ try:
668
+ ds = pydicom.dcmread(file_path, force=True)
669
+
670
+ # Extract pixel array
671
+ pixel_array = ds.pixel_array.astype(float)
672
+
673
+ # Get modality
674
+ modality = ds.get('Modality', 'CT')
675
+
676
+ # Apply DICOM transformations
677
+ if 'RescaleSlope' in ds and 'RescaleIntercept' in ds:
678
+ pixel_array = pixel_array * ds.RescaleSlope + ds.RescaleIntercept
679
+
680
+ # Normalize for display
681
+ if modality == 'CT':
682
+ # CT: typically -1000 to 3000 HU
683
+ display_array = np.clip((pixel_array + 1000) / 4000 * 255, 0, 255).astype(np.uint8)
684
+ else:
685
+ # X-ray: normalize to full range
686
+ pmin, pmax = np.percentile(pixel_array, [1, 99])
687
+ display_array = np.clip((pixel_array - pmin) / (pmax - pmin) * 255, 0, 255).astype(np.uint8)
688
+
689
+ metadata = {
690
+ 'modality': modality,
691
+ 'shape': pixel_array.shape,
692
+ 'patient_name': str(ds.get('PatientName', 'Anonymous')),
693
+ 'study_date': str(ds.get('StudyDate', '')),
694
+ 'file_type': 'DICOM'
695
+ }
696
+
697
+ if 'WindowCenter' in ds and 'WindowWidth' in ds:
698
+ metadata['window_center'] = float(ds.WindowCenter if isinstance(ds.WindowCenter, (int, float)) else ds.WindowCenter[0])
699
+ metadata['window_width'] = float(ds.WindowWidth if isinstance(ds.WindowWidth, (int, float)) else ds.WindowWidth[0])
700
+
701
+ return pixel_array, display_array, metadata
702
+
703
+ except Exception:
704
+ # If DICOM reading fails, try as regular image
705
+ pass
706
+
707
+ # Handle regular images
708
+ try:
709
+ img = Image.open(file_path)
710
+
711
+ # Convert to grayscale if needed
712
+ if img.mode != 'L':
713
+ img = img.convert('L')
714
+
715
+ pixel_array = np.array(img).astype(float)
716
+ display_array = pixel_array.astype(np.uint8)
717
+
718
+ # Guess modality from filename
719
+ filename_lower = Path(file_path).name.lower()
720
+ if 'ct' in filename_lower:
721
+ modality = 'CT'
722
+ else:
723
+ modality = 'CR' # Default to X-ray
724
+
725
+ metadata = {
726
+ 'modality': modality,
727
+ 'shape': pixel_array.shape,
728
+ 'file_type': 'Image',
729
+ 'format': img.format
730
+ }
731
+
732
+ return pixel_array, display_array, metadata
733
+
734
+ except Exception as e:
735
+ raise ValueError(f"Could not load file: {str(e)}")
736
+
737
+ def _get_timestamp(self) -> str:
738
+ """Get current timestamp"""
739
+ from datetime import datetime
740
+ return datetime.now().isoformat()
741
+
742
+ def api_info(self) -> Dict[str, Any]:
743
+ """Return API information for the component"""
744
+ return {
745
+ "info": {
746
+ "type": "object",
747
+ "description": "Medical image analysis results",
748
+ "properties": {
749
+ "modality": {"type": "string"},
750
+ "measurements": {"type": "object"},
751
+ "findings": {"type": "array"},
752
+ "segmentation": {"type": "object"},
753
+ "quality_metrics": {"type": "object"}
754
+ }
755
+ },
756
+ "serialized_info": True
757
+ }
758
+
759
+ def example_inputs(self) -> List[Any]:
760
+ """Provide example inputs"""
761
+ return [
762
+ {
763
+ "image": np.zeros((512, 512)),
764
+ "modality": "CT",
765
+ "task": "analyze_point",
766
+ "roi": {"x": 256, "y": 256, "radius": 10}
767
+ }
768
+ ]
769
+
770
+ def example_outputs(self) -> List[Any]:
771
+ """Provide example outputs"""
772
+ return [
773
+ {
774
+ "modality": "CT",
775
+ "point_analysis": {
776
+ "hu_value": -50.0,
777
+ "tissue_type": {"type": "fat", "icon": "οΏ½οΏ½οΏ½"},
778
+ "confidence": 0.95,
779
+ "reasoning": "HU value of -50.0 falls within fat range. Homogeneous region suggests uniform tissue"
780
+ }
781
+ }
782
+ ]
783
+ from typing import Callable, Literal, Sequence, Any, TYPE_CHECKING
784
+ from gradio.blocks import Block
785
+ if TYPE_CHECKING:
786
+ from gradio.components import Timer
787
+ from gradio.components.base import Component
788
+
789
+
790
+ def change(self,
791
+ fn: Callable[..., Any] | None = None,
792
+ inputs: Block | Sequence[Block] | set[Block] | None = None,
793
+ outputs: Block | Sequence[Block] | None = None,
794
+ api_name: str | None | Literal[False] = None,
795
+ scroll_to_output: bool = False,
796
+ show_progress: Literal["full", "minimal", "hidden"] = "full",
797
+ show_progress_on: Component | Sequence[Component] | None = None,
798
+ queue: bool | None = None,
799
+ batch: bool = False,
800
+ max_batch_size: int = 4,
801
+ preprocess: bool = True,
802
+ postprocess: bool = True,
803
+ cancels: dict[str, Any] | list[dict[str, Any]] | None = None,
804
+ every: Timer | float | None = None,
805
+ trigger_mode: Literal["once", "multiple", "always_last"] | None = None,
806
+ js: str | Literal[True] | None = None,
807
+ concurrency_limit: int | None | Literal["default"] = "default",
808
+ concurrency_id: str | None = None,
809
+ show_api: bool = True,
810
+ key: int | str | tuple[int | str, ...] | None = None,
811
+
812
+ ) -> Dependency:
813
+ """
814
+ Parameters:
815
+ fn: the function to call when this event is triggered. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.
816
+ inputs: list of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.
817
+ outputs: list of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.
818
+ api_name: defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, will use the functions name as the endpoint route. If set to a string, the endpoint will be exposed in the api docs with the given name.
819
+ scroll_to_output: if True, will scroll to output component on completion
820
+ show_progress: how to show the progress animation while event is running: "full" shows a spinner which covers the output component area as well as a runtime display in the upper right corner, "minimal" only shows the runtime display, "hidden" shows no progress animation at all
821
+ show_progress_on: Component or list of components to show the progress animation on. If None, will show the progress animation on all of the output components.
822
+ queue: if True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app.
823
+ batch: if True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.
824
+ max_batch_size: maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)
825
+ preprocess: if False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).
826
+ postprocess: if False, will not run postprocessing of component data before returning 'fn' output to the browser.
827
+ cancels: a list of other events to cancel when this listener is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another components .click method. Functions that have not yet run (or generators that are iterating) will be cancelled, but functions that are currently running will be allowed to finish.
828
+ every: continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise). Can provide a Timer whose tick resets `value`, or a float that provides the regular interval for the reset Timer.
829
+ trigger_mode: if "once" (default for all events except `.change()`), no new submissions are allowed while an event is pending. If set to "multiple", unlimited submissions are allowed while pending, and "always_last" (default for `.change()` and `.key_up()` events) allows a second submission after the pending event is complete.
830
+ js: optional frontend js method to run before running 'fn'. Input arguments for the js method are the values of 'inputs' and 'outputs'; its return value should be a list of values for the output components.
831
+ concurrency_limit: if set, this is the maximum number of this event that can be running simultaneously. Can be set to None to mean no concurrency_limit (any number of this event can be running simultaneously). Set to "default" to use the default concurrency limit (defined by the `default_concurrency_limit` parameter in `Blocks.queue()`, which itself is 1 by default).
832
+ concurrency_id: if set, this is the id of the concurrency group. Events with the same concurrency_id will be limited by the lowest set concurrency_limit.
833
+ show_api: whether to show this event in the "view API" page of the Gradio app, or in the ".view_api()" method of the Gradio clients. Unlike setting api_name to False, setting show_api to False will still allow downstream apps as well as the Clients to use this event. If fn is None, show_api will automatically be set to False.
834
+ key: A unique key for this event listener to be used in @gr.render(). If set, this value identifies an event as identical across re-renders when the key is identical.
835
+
836
+ """
837
+ ...
838
+
839
+ def select(self,
840
+ fn: Callable[..., Any] | None = None,
841
+ inputs: Block | Sequence[Block] | set[Block] | None = None,
842
+ outputs: Block | Sequence[Block] | None = None,
843
+ api_name: str | None | Literal[False] = None,
844
+ scroll_to_output: bool = False,
845
+ show_progress: Literal["full", "minimal", "hidden"] = "full",
846
+ show_progress_on: Component | Sequence[Component] | None = None,
847
+ queue: bool | None = None,
848
+ batch: bool = False,
849
+ max_batch_size: int = 4,
850
+ preprocess: bool = True,
851
+ postprocess: bool = True,
852
+ cancels: dict[str, Any] | list[dict[str, Any]] | None = None,
853
+ every: Timer | float | None = None,
854
+ trigger_mode: Literal["once", "multiple", "always_last"] | None = None,
855
+ js: str | Literal[True] | None = None,
856
+ concurrency_limit: int | None | Literal["default"] = "default",
857
+ concurrency_id: str | None = None,
858
+ show_api: bool = True,
859
+ key: int | str | tuple[int | str, ...] | None = None,
860
+
861
+ ) -> Dependency:
862
+ """
863
+ Parameters:
864
+ fn: the function to call when this event is triggered. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.
865
+ inputs: list of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.
866
+ outputs: list of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.
867
+ api_name: defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, will use the function's name as the endpoint route. If set to a string, the endpoint will be exposed in the api docs with the given name.
868
+ scroll_to_output: if True, will scroll to output component on completion
869
+ show_progress: how to show the progress animation while event is running: "full" shows a spinner which covers the output component area as well as a runtime display in the upper right corner, "minimal" only shows the runtime display, "hidden" shows no progress animation at all
870
+ show_progress_on: Component or list of components to show the progress animation on. If None, will show the progress animation on all of the output components.
871
+ queue: if True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app.
872
+ batch: if True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.
873
+ max_batch_size: maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)
874
+ preprocess: if False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).
875
+ postprocess: if False, will not run postprocessing of component data before returning 'fn' output to the browser.
876
+ cancels: a list of other events to cancel when this listener is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another component's .click method. Functions that have not yet run (or generators that are iterating) will be cancelled, but functions that are currently running will be allowed to finish.
877
+ every: continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise). Can provide a Timer whose tick resets `value`, or a float that provides the regular interval for the reset Timer.
878
+ trigger_mode: if "once" (default for all events except `.change()`), no new submissions are allowed while an event is pending. If set to "multiple", unlimited submissions are allowed while pending, and "always_last" (default for `.change()` and `.key_up()` events) allows a second submission after the pending event is complete.
879
+ js: optional frontend js method to run before running 'fn'. Input arguments for the js method are the values of 'inputs' and 'outputs'; its return value should be a list of values for the output components.
880
+ concurrency_limit: if set, this is the maximum number of this event that can be running simultaneously. Can be set to None to mean no concurrency_limit (any number of this event can be running simultaneously). Set to "default" to use the default concurrency limit (defined by the `default_concurrency_limit` parameter in `Blocks.queue()`, which itself is 1 by default).
881
+ concurrency_id: if set, this is the id of the concurrency group. Events with the same concurrency_id will be limited by the lowest set concurrency_limit.
882
+ show_api: whether to show this event in the "view API" page of the Gradio app, or in the ".view_api()" method of the Gradio clients. Unlike setting api_name to False, setting show_api to False will still allow downstream apps as well as the Clients to use this event. If fn is None, show_api will automatically be set to False.
883
+ key: A unique key for this event listener to be used in @gr.render(). If set, this value identifies an event as identical across re-renders when the key is identical.
884
+
885
+ """
886
+ ...
887
+
888
+ def upload(self,
889
+ fn: Callable[..., Any] | None = None,
890
+ inputs: Block | Sequence[Block] | set[Block] | None = None,
891
+ outputs: Block | Sequence[Block] | None = None,
892
+ api_name: str | None | Literal[False] = None,
893
+ scroll_to_output: bool = False,
894
+ show_progress: Literal["full", "minimal", "hidden"] = "full",
895
+ show_progress_on: Component | Sequence[Component] | None = None,
896
+ queue: bool | None = None,
897
+ batch: bool = False,
898
+ max_batch_size: int = 4,
899
+ preprocess: bool = True,
900
+ postprocess: bool = True,
901
+ cancels: dict[str, Any] | list[dict[str, Any]] | None = None,
902
+ every: Timer | float | None = None,
903
+ trigger_mode: Literal["once", "multiple", "always_last"] | None = None,
904
+ js: str | Literal[True] | None = None,
905
+ concurrency_limit: int | None | Literal["default"] = "default",
906
+ concurrency_id: str | None = None,
907
+ show_api: bool = True,
908
+ key: int | str | tuple[int | str, ...] | None = None,
909
+
910
+ ) -> Dependency:
911
+ """
912
+ Parameters:
913
+ fn: the function to call when this event is triggered. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.
914
+ inputs: list of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.
915
+ outputs: list of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.
916
+ api_name: defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, will use the function's name as the endpoint route. If set to a string, the endpoint will be exposed in the api docs with the given name.
917
+ scroll_to_output: if True, will scroll to output component on completion
918
+ show_progress: how to show the progress animation while event is running: "full" shows a spinner which covers the output component area as well as a runtime display in the upper right corner, "minimal" only shows the runtime display, "hidden" shows no progress animation at all
919
+ show_progress_on: Component or list of components to show the progress animation on. If None, will show the progress animation on all of the output components.
920
+ queue: if True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app.
921
+ batch: if True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.
922
+ max_batch_size: maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)
923
+ preprocess: if False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).
924
+ postprocess: if False, will not run postprocessing of component data before returning 'fn' output to the browser.
925
+ cancels: a list of other events to cancel when this listener is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another component's .click method. Functions that have not yet run (or generators that are iterating) will be cancelled, but functions that are currently running will be allowed to finish.
926
+ every: continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise). Can provide a Timer whose tick resets `value`, or a float that provides the regular interval for the reset Timer.
927
+ trigger_mode: if "once" (default for all events except `.change()`), no new submissions are allowed while an event is pending. If set to "multiple", unlimited submissions are allowed while pending, and "always_last" (default for `.change()` and `.key_up()` events) allows a second submission after the pending event is complete.
928
+ js: optional frontend js method to run before running 'fn'. Input arguments for the js method are the values of 'inputs' and 'outputs'; its return value should be a list of values for the output components.
929
+ concurrency_limit: if set, this is the maximum number of this event that can be running simultaneously. Can be set to None to mean no concurrency_limit (any number of this event can be running simultaneously). Set to "default" to use the default concurrency limit (defined by the `default_concurrency_limit` parameter in `Blocks.queue()`, which itself is 1 by default).
930
+ concurrency_id: if set, this is the id of the concurrency group. Events with the same concurrency_id will be limited by the lowest set concurrency_limit.
931
+ show_api: whether to show this event in the "view API" page of the Gradio app, or in the ".view_api()" method of the Gradio clients. Unlike setting api_name to False, setting show_api to False will still allow downstream apps as well as the Clients to use this event. If fn is None, show_api will automatically be set to False.
932
+ key: A unique key for this event listener to be used in @gr.render(). If set, this value identifies an event as identical across re-renders when the key is identical.
933
+
934
+ """
935
+ ...
936
+
937
+ def clear(self,
938
+ fn: Callable[..., Any] | None = None,
939
+ inputs: Block | Sequence[Block] | set[Block] | None = None,
940
+ outputs: Block | Sequence[Block] | None = None,
941
+ api_name: str | None | Literal[False] = None,
942
+ scroll_to_output: bool = False,
943
+ show_progress: Literal["full", "minimal", "hidden"] = "full",
944
+ show_progress_on: Component | Sequence[Component] | None = None,
945
+ queue: bool | None = None,
946
+ batch: bool = False,
947
+ max_batch_size: int = 4,
948
+ preprocess: bool = True,
949
+ postprocess: bool = True,
950
+ cancels: dict[str, Any] | list[dict[str, Any]] | None = None,
951
+ every: Timer | float | None = None,
952
+ trigger_mode: Literal["once", "multiple", "always_last"] | None = None,
953
+ js: str | Literal[True] | None = None,
954
+ concurrency_limit: int | None | Literal["default"] = "default",
955
+ concurrency_id: str | None = None,
956
+ show_api: bool = True,
957
+ key: int | str | tuple[int | str, ...] | None = None,
958
+
959
+ ) -> Dependency:
960
+ """
961
+ Parameters:
962
+ fn: the function to call when this event is triggered. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.
963
+ inputs: list of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.
964
+ outputs: list of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.
965
+ api_name: defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, will use the function's name as the endpoint route. If set to a string, the endpoint will be exposed in the api docs with the given name.
966
+ scroll_to_output: if True, will scroll to output component on completion
967
+ show_progress: how to show the progress animation while event is running: "full" shows a spinner which covers the output component area as well as a runtime display in the upper right corner, "minimal" only shows the runtime display, "hidden" shows no progress animation at all
968
+ show_progress_on: Component or list of components to show the progress animation on. If None, will show the progress animation on all of the output components.
969
+ queue: if True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app.
970
+ batch: if True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.
971
+ max_batch_size: maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)
972
+ preprocess: if False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).
973
+ postprocess: if False, will not run postprocessing of component data before returning 'fn' output to the browser.
974
+ cancels: a list of other events to cancel when this listener is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another component's .click method. Functions that have not yet run (or generators that are iterating) will be cancelled, but functions that are currently running will be allowed to finish.
975
+ every: continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise). Can provide a Timer whose tick resets `value`, or a float that provides the regular interval for the reset Timer.
976
+ trigger_mode: if "once" (default for all events except `.change()`), no new submissions are allowed while an event is pending. If set to "multiple", unlimited submissions are allowed while pending, and "always_last" (default for `.change()` and `.key_up()` events) allows a second submission after the pending event is complete.
977
+ js: optional frontend js method to run before running 'fn'. Input arguments for the js method are the values of 'inputs' and 'outputs'; its return value should be a list of values for the output components.
978
+ concurrency_limit: if set, this is the maximum number of this event that can be running simultaneously. Can be set to None to mean no concurrency_limit (any number of this event can be running simultaneously). Set to "default" to use the default concurrency limit (defined by the `default_concurrency_limit` parameter in `Blocks.queue()`, which itself is 1 by default).
979
+ concurrency_id: if set, this is the id of the concurrency group. Events with the same concurrency_id will be limited by the lowest set concurrency_limit.
980
+ show_api: whether to show this event in the "view API" page of the Gradio app, or in the ".view_api()" method of the Gradio clients. Unlike setting api_name to False, setting show_api to False will still allow downstream apps as well as the Clients to use this event. If fn is None, show_api will automatically be set to False.
981
+ key: A unique key for this event listener to be used in @gr.render(). If set, this value identifies an event as identical across re-renders when the key is identical.
982
+
983
+ """
984
+ ...
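Taken together, these generated stubs show that the component exposes the standard Gradio listener signature on each of its events. A minimal usage sketch (hedged: the handler bodies and the gr.HTML pairing below are hypothetical illustrations, not part of the generated stubs):

    import gradio as gr
    from gradio_medical_image_analyzer import MedicalImageAnalyzer

    def show_report(value):
        # Hypothetical handler: the component's value is a dict with
        # "image", "analysis", and "report" keys (see the frontend code below).
        return (value or {}).get("report", "")

    with gr.Blocks() as demo:
        analyzer = MedicalImageAnalyzer(label="Analyzer", modality="CT", task="full_analysis")
        report = gr.HTML()
        # .upload()/.select()/.clear() all accept the listener kwargs documented above
        analyzer.upload(fn=show_report, inputs=analyzer, outputs=report, show_progress="minimal")
        analyzer.clear(fn=lambda: "", outputs=report, queue=False)

    demo.launch()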
src/backend/gradio_medical_image_analyzer/templates/component.py ADDED
@@ -0,0 +1,5 @@
1
+ # This file is required for Gradio custom components
2
+ # It exports the component class
3
+ from ..medical_image_analyzer import MedicalImageAnalyzer
4
+
5
+ __all__ = ["MedicalImageAnalyzer"]
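A quick sketch of what this re-export provides: the class resolves identically from either import path (assuming `templates` is importable as a subpackage of the installed `gradio_medical_image_analyzer` distribution):

    from gradio_medical_image_analyzer import MedicalImageAnalyzer
    from gradio_medical_image_analyzer.templates.component import MedicalImageAnalyzer as ReExported

    # Same class object; the templates module only re-exports it for Gradio's loader.
    assert MedicalImageAnalyzer is ReExported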
src/backend/gradio_medical_image_analyzer/templates/component/index.js ADDED
@@ -0,0 +1,775 @@
1
+ const {
2
+ HtmlTagHydration: Te,
3
+ SvelteComponent: Ee,
4
+ add_render_callback: ye,
5
+ append_hydration: g,
6
+ assign: Ne,
7
+ attr: R,
8
+ check_outros: Ue,
9
+ children: V,
10
+ claim_component: te,
11
+ claim_element: p,
12
+ claim_html_tag: Se,
13
+ claim_space: F,
14
+ claim_text: he,
15
+ create_component: le,
16
+ destroy_component: ne,
17
+ detach: h,
18
+ element: v,
19
+ empty: me,
20
+ get_spread_object: De,
21
+ get_spread_update: Ae,
22
+ get_svelte_dataset: j,
23
+ group_outros: Fe,
24
+ init: Me,
25
+ insert_hydration: U,
26
+ listen: fe,
27
+ mount_component: ie,
28
+ noop: Le,
29
+ run_all: Pe,
30
+ safe_not_equal: je,
31
+ select_option: de,
32
+ select_value: ke,
33
+ set_data: qe,
34
+ set_input_value: Q,
35
+ set_style: q,
36
+ space: M,
37
+ src_url_equal: Ce,
38
+ text: pe,
39
+ transition_in: W,
40
+ transition_out: Y
41
+ } = window.__gradio__svelte__internal;
42
+ function Ve(o) {
43
+ let e, t, l, a, s, r, u = "CT", m, I = "CR (X-Ray)", C, z = "DX (X-Ray)", k, B = "RX (X-Ray)", c, b = "DR (X-Ray)", S, w, O, Z = "Point Analysis", T, G = "Fat Segmentation (CT)", E, X = "Full Analysis", H, D, A, se, ae, L, oe, x, $, re, _e;
44
+ l = new IconButton({ props: { Icon: UploadIcon } }), l.$on(
45
+ "click",
46
+ /*handle_clear*/
47
+ o[21]
48
+ );
49
+ let n = (
50
+ /*uploaded_file*/
51
+ o[15] && we(o)
52
+ ), i = (
53
+ /*visual_report*/
54
+ o[17] && Oe(o)
55
+ ), f = (
56
+ /*analysis_mode*/
57
+ o[14] === "structured" && /*analysis_results*/
58
+ o[16] && Re(o)
59
+ );
60
+ return {
61
+ c() {
62
+ e = v("div"), t = v("div"), le(l.$$.fragment), a = M(), s = v("select"), r = v("option"), r.textContent = u, m = v("option"), m.textContent = I, C = v("option"), C.textContent = z, k = v("option"), k.textContent = B, c = v("option"), c.textContent = b, S = M(), w = v("select"), O = v("option"), O.textContent = Z, T = v("option"), T.textContent = G, E = v("option"), E.textContent = X, H = M(), D = v("label"), A = v("input"), se = pe(`
63
+ Show ROI`), ae = M(), L = v("div"), n && n.c(), oe = M(), i && i.c(), x = M(), f && f.c(), this.h();
64
+ },
65
+ l(_) {
66
+ e = p(_, "DIV", { class: !0 });
67
+ var d = V(e);
68
+ t = p(d, "DIV", { class: !0 });
69
+ var N = V(t);
70
+ te(l.$$.fragment, N), a = F(N), s = p(N, "SELECT", { class: !0 });
71
+ var P = V(s);
72
+ r = p(P, "OPTION", { "data-svelte-h": !0 }), j(r) !== "svelte-1uwvdsi" && (r.textContent = u), m = p(P, "OPTION", { "data-svelte-h": !0 }), j(m) !== "svelte-iiiy1a" && (m.textContent = I), C = p(P, "OPTION", { "data-svelte-h": !0 }), j(C) !== "svelte-16a5ymm" && (C.textContent = z), k = p(P, "OPTION", { "data-svelte-h": !0 }), j(k) !== "svelte-bjfw5q" && (k.textContent = B), c = p(P, "OPTION", { "data-svelte-h": !0 }), j(c) !== "svelte-121hs3y" && (c.textContent = b), P.forEach(h), S = F(N), w = p(N, "SELECT", { class: !0 });
73
+ var K = V(w);
74
+ O = p(K, "OPTION", { "data-svelte-h": !0 }), j(O) !== "svelte-17yivkd" && (O.textContent = Z), T = p(K, "OPTION", { "data-svelte-h": !0 }), j(T) !== "svelte-cf7bpu" && (T.textContent = G), E = p(K, "OPTION", { "data-svelte-h": !0 }), j(E) !== "svelte-d3m60d" && (E.textContent = X), K.forEach(h), H = F(N), D = p(N, "LABEL", { class: !0 });
75
+ var ee = V(D);
76
+ A = p(ee, "INPUT", { type: !0 }), se = he(ee, `
77
+ Show ROI`), ee.forEach(h), N.forEach(h), ae = F(d), L = p(d, "DIV", { class: !0 });
78
+ var ce = V(L);
79
+ n && n.l(ce), ce.forEach(h), oe = F(d), i && i.l(d), x = F(d), f && f.l(d), d.forEach(h), this.h();
80
+ },
81
+ h() {
82
+ r.__value = "CT", Q(r, r.__value), m.__value = "CR", Q(m, m.__value), C.__value = "DX", Q(C, C.__value), k.__value = "RX", Q(k, k.__value), c.__value = "DR", Q(c, c.__value), R(s, "class", "modality-select svelte-197pbtm"), /*modality*/
83
+ o[1] === void 0 && ye(() => (
84
+ /*select0_change_handler*/
85
+ o[27].call(s)
86
+ )), O.__value = "analyze_point", Q(O, O.__value), T.__value = "segment_fat", Q(T, T.__value), E.__value = "full_analysis", Q(E, E.__value), R(w, "class", "task-select svelte-197pbtm"), /*task*/
87
+ o[2] === void 0 && ye(() => (
88
+ /*select1_change_handler*/
89
+ o[28].call(w)
90
+ )), R(A, "type", "checkbox"), R(D, "class", "roi-toggle svelte-197pbtm"), R(t, "class", "controls svelte-197pbtm"), R(L, "class", "image-container svelte-197pbtm"), R(e, "class", "analyzer-container svelte-197pbtm");
91
+ },
92
+ m(_, d) {
93
+ U(_, e, d), g(e, t), ie(l, t, null), g(t, a), g(t, s), g(s, r), g(s, m), g(s, C), g(s, k), g(s, c), de(
94
+ s,
95
+ /*modality*/
96
+ o[1],
97
+ !0
98
+ ), g(t, S), g(t, w), g(w, O), g(w, T), g(w, E), de(
99
+ w,
100
+ /*task*/
101
+ o[2],
102
+ !0
103
+ ), g(t, H), g(t, D), g(D, A), A.checked = /*show_roi*/
104
+ o[19], g(D, se), g(e, ae), g(e, L), n && n.m(L, null), g(e, oe), i && i.m(e, null), g(e, x), f && f.m(e, null), $ = !0, re || (_e = [
105
+ fe(
106
+ s,
107
+ "change",
108
+ /*select0_change_handler*/
109
+ o[27]
110
+ ),
111
+ fe(
112
+ w,
113
+ "change",
114
+ /*select1_change_handler*/
115
+ o[28]
116
+ ),
117
+ fe(
118
+ A,
119
+ "change",
120
+ /*input_change_handler*/
121
+ o[29]
122
+ ),
123
+ fe(
124
+ L,
125
+ "click",
126
+ /*handle_roi_click*/
127
+ o[22]
128
+ )
129
+ ], re = !0);
130
+ },
131
+ p(_, d) {
132
+ d[0] & /*modality*/
133
+ 2 && de(
134
+ s,
135
+ /*modality*/
136
+ _[1]
137
+ ), d[0] & /*task*/
138
+ 4 && de(
139
+ w,
140
+ /*task*/
141
+ _[2]
142
+ ), d[0] & /*show_roi*/
143
+ 524288 && (A.checked = /*show_roi*/
144
+ _[19]), /*uploaded_file*/
145
+ _[15] ? n ? n.p(_, d) : (n = we(_), n.c(), n.m(L, null)) : n && (n.d(1), n = null), /*visual_report*/
146
+ _[17] ? i ? i.p(_, d) : (i = Oe(_), i.c(), i.m(e, x)) : i && (i.d(1), i = null), /*analysis_mode*/
147
+ _[14] === "structured" && /*analysis_results*/
148
+ _[16] ? f ? f.p(_, d) : (f = Re(_), f.c(), f.m(e, null)) : f && (f.d(1), f = null);
149
+ },
150
+ i(_) {
151
+ $ || (W(l.$$.fragment, _), $ = !0);
152
+ },
153
+ o(_) {
154
+ Y(l.$$.fragment, _), $ = !1;
155
+ },
156
+ d(_) {
157
+ _ && h(e), ne(l), n && n.d(), i && i.d(), f && f.d(), re = !1, Pe(_e);
158
+ }
159
+ };
160
+ }
161
+ function Xe(o) {
162
+ let e, t;
163
+ return e = new Upload({
164
+ props: {
165
+ filetype: "*",
166
+ root: (
167
+ /*root*/
168
+ o[8]
169
+ ),
170
+ dragging: Qe,
171
+ $$slots: { default: [Be] },
172
+ $$scope: { ctx: o }
173
+ }
174
+ }), e.$on(
175
+ "load",
176
+ /*handle_upload*/
177
+ o[20]
178
+ ), {
179
+ c() {
180
+ le(e.$$.fragment);
181
+ },
182
+ l(l) {
183
+ te(e.$$.fragment, l);
184
+ },
185
+ m(l, a) {
186
+ ie(e, l, a), t = !0;
187
+ },
188
+ p(l, a) {
189
+ const s = {};
190
+ a[0] & /*root*/
191
+ 256 && (s.root = /*root*/
192
+ l[8]), a[0] & /*gradio*/
193
+ 8192 | a[1] & /*$$scope*/
194
+ 4 && (s.$$scope = { dirty: a, ctx: l }), e.$set(s);
195
+ },
196
+ i(l) {
197
+ t || (W(e.$$.fragment, l), t = !0);
198
+ },
199
+ o(l) {
200
+ Y(e.$$.fragment, l), t = !1;
201
+ },
202
+ d(l) {
203
+ ne(e, l);
204
+ }
205
+ };
206
+ }
207
+ function we(o) {
208
+ let e, t, l, a, s = (
209
+ /*show_roi*/
210
+ o[19] && Ie(o)
211
+ );
212
+ return {
213
+ c() {
214
+ e = v("img"), l = M(), s && s.c(), a = me(), this.h();
215
+ },
216
+ l(r) {
217
+ e = p(r, "IMG", { src: !0, alt: !0, class: !0 }), l = F(r), s && s.l(r), a = me(), this.h();
218
+ },
219
+ h() {
220
+ Ce(e.src, t = URL.createObjectURL(
221
+ /*uploaded_file*/
222
+ o[15]
223
+ )) || R(e, "src", t), R(e, "alt", "Medical scan"), R(e, "class", "svelte-197pbtm");
224
+ },
225
+ m(r, u) {
226
+ U(r, e, u), U(r, l, u), s && s.m(r, u), U(r, a, u);
227
+ },
228
+ p(r, u) {
229
+ u[0] & /*uploaded_file*/
230
+ 32768 && !Ce(e.src, t = URL.createObjectURL(
231
+ /*uploaded_file*/
232
+ r[15]
233
+ )) && R(e, "src", t), /*show_roi*/
234
+ r[19] ? s ? s.p(r, u) : (s = Ie(r), s.c(), s.m(a.parentNode, a)) : s && (s.d(1), s = null);
235
+ },
236
+ d(r) {
237
+ r && (h(e), h(l), h(a)), s && s.d(r);
238
+ }
239
+ };
240
+ }
241
+ function Ie(o) {
242
+ let e;
243
+ return {
244
+ c() {
245
+ e = v("div"), this.h();
246
+ },
247
+ l(t) {
248
+ e = p(t, "DIV", { class: !0, style: !0 }), V(e).forEach(h), this.h();
249
+ },
250
+ h() {
251
+ R(e, "class", "roi-marker svelte-197pbtm"), q(
252
+ e,
253
+ "left",
254
+ /*roi*/
255
+ o[18].x + "px"
256
+ ), q(
257
+ e,
258
+ "top",
259
+ /*roi*/
260
+ o[18].y + "px"
261
+ ), q(
262
+ e,
263
+ "width",
264
+ /*roi*/
265
+ o[18].radius * 2 + "px"
266
+ ), q(
267
+ e,
268
+ "height",
269
+ /*roi*/
270
+ o[18].radius * 2 + "px"
271
+ );
272
+ },
273
+ m(t, l) {
274
+ U(t, e, l);
275
+ },
276
+ p(t, l) {
277
+ l[0] & /*roi*/
278
+ 262144 && q(
279
+ e,
280
+ "left",
281
+ /*roi*/
282
+ t[18].x + "px"
283
+ ), l[0] & /*roi*/
284
+ 262144 && q(
285
+ e,
286
+ "top",
287
+ /*roi*/
288
+ t[18].y + "px"
289
+ ), l[0] & /*roi*/
290
+ 262144 && q(
291
+ e,
292
+ "width",
293
+ /*roi*/
294
+ t[18].radius * 2 + "px"
295
+ ), l[0] & /*roi*/
296
+ 262144 && q(
297
+ e,
298
+ "height",
299
+ /*roi*/
300
+ t[18].radius * 2 + "px"
301
+ );
302
+ },
303
+ d(t) {
304
+ t && h(e);
305
+ }
306
+ };
307
+ }
308
+ function Oe(o) {
309
+ let e, t;
310
+ return {
311
+ c() {
312
+ e = v("div"), t = new Te(!1), this.h();
313
+ },
314
+ l(l) {
315
+ e = p(l, "DIV", { class: !0 });
316
+ var a = V(e);
317
+ t = Se(a, !1), a.forEach(h), this.h();
318
+ },
319
+ h() {
320
+ t.a = null, R(e, "class", "report-container svelte-197pbtm");
321
+ },
322
+ m(l, a) {
323
+ U(l, e, a), t.m(
324
+ /*visual_report*/
325
+ o[17],
326
+ e
327
+ );
328
+ },
329
+ p(l, a) {
330
+ a[0] & /*visual_report*/
331
+ 131072 && t.p(
332
+ /*visual_report*/
333
+ l[17]
334
+ );
335
+ },
336
+ d(l) {
337
+ l && h(e);
338
+ }
339
+ };
340
+ }
341
+ function Re(o) {
342
+ let e, t, l = "JSON Output (for AI Agents)", a, s, r = JSON.stringify(
343
+ /*analysis_results*/
344
+ o[16],
345
+ null,
346
+ 2
347
+ ) + "", u;
348
+ return {
349
+ c() {
350
+ e = v("details"), t = v("summary"), t.textContent = l, a = M(), s = v("pre"), u = pe(r), this.h();
351
+ },
352
+ l(m) {
353
+ e = p(m, "DETAILS", { class: !0 });
354
+ var I = V(e);
355
+ t = p(I, "SUMMARY", { class: !0, "data-svelte-h": !0 }), j(t) !== "svelte-16bwjzd" && (t.textContent = l), a = F(I), s = p(I, "PRE", { class: !0 });
356
+ var C = V(s);
357
+ u = he(C, r), C.forEach(h), I.forEach(h), this.h();
358
+ },
359
+ h() {
360
+ R(t, "class", "svelte-197pbtm"), R(s, "class", "svelte-197pbtm"), R(e, "class", "json-output svelte-197pbtm");
361
+ },
362
+ m(m, I) {
363
+ U(m, e, I), g(e, t), g(e, a), g(e, s), g(s, u);
364
+ },
365
+ p(m, I) {
366
+ I[0] & /*analysis_results*/
367
+ 65536 && r !== (r = JSON.stringify(
368
+ /*analysis_results*/
369
+ m[16],
370
+ null,
371
+ 2
372
+ ) + "") && qe(u, r);
373
+ },
374
+ d(m) {
375
+ m && h(e);
376
+ }
377
+ };
378
+ }
379
+ function ze(o) {
380
+ let e, t, l, a, s = "Supports: DICOM (.dcm), Images (.png, .jpg), and files without extensions (IM_0001, etc.)";
381
+ return {
382
+ c() {
383
+ e = pe("Drop Medical Image File Here - or - Click to Upload"), t = v("br"), l = M(), a = v("span"), a.textContent = s, this.h();
384
+ },
385
+ l(r) {
386
+ e = he(r, "Drop Medical Image File Here - or - Click to Upload"), t = p(r, "BR", {}), l = F(r), a = p(r, "SPAN", { style: !0, "data-svelte-h": !0 }), j(a) !== "svelte-l91joy" && (a.textContent = s), this.h();
387
+ },
388
+ h() {
389
+ q(a, "font-size", "0.9em"), q(a, "color", "var(--body-text-color-subdued)");
390
+ },
391
+ m(r, u) {
392
+ U(r, e, u), U(r, t, u), U(r, l, u), U(r, a, u);
393
+ },
394
+ p: Le,
395
+ d(r) {
396
+ r && (h(e), h(t), h(l), h(a));
397
+ }
398
+ };
399
+ }
400
+ function Be(o) {
401
+ let e, t;
402
+ return e = new UploadText({
403
+ props: {
404
+ i18n: (
405
+ /*gradio*/
406
+ o[13].i18n
407
+ ),
408
+ type: "file",
409
+ $$slots: { default: [ze] },
410
+ $$scope: { ctx: o }
411
+ }
412
+ }), {
413
+ c() {
414
+ le(e.$$.fragment);
415
+ },
416
+ l(l) {
417
+ te(e.$$.fragment, l);
418
+ },
419
+ m(l, a) {
420
+ ie(e, l, a), t = !0;
421
+ },
422
+ p(l, a) {
423
+ const s = {};
424
+ a[0] & /*gradio*/
425
+ 8192 && (s.i18n = /*gradio*/
426
+ l[13].i18n), a[1] & /*$$scope*/
427
+ 4 && (s.$$scope = { dirty: a, ctx: l }), e.$set(s);
428
+ },
429
+ i(l) {
430
+ t || (W(e.$$.fragment, l), t = !0);
431
+ },
432
+ o(l) {
433
+ Y(e.$$.fragment, l), t = !1;
434
+ },
435
+ d(l) {
436
+ ne(e, l);
437
+ }
438
+ };
439
+ }
440
+ function He(o) {
441
+ let e, t, l, a, s, r, u, m;
442
+ const I = [
443
+ {
444
+ autoscroll: (
445
+ /*gradio*/
446
+ o[13].autoscroll
447
+ )
448
+ },
449
+ { i18n: (
450
+ /*gradio*/
451
+ o[13].i18n
452
+ ) },
453
+ /*loading_status*/
454
+ o[9]
455
+ ];
456
+ let C = {};
457
+ for (let c = 0; c < I.length; c += 1)
458
+ C = Ne(C, I[c]);
459
+ e = new StatusTracker({ props: C }), l = new BlockLabel({
460
+ props: {
461
+ show_label: (
462
+ /*show_label*/
463
+ o[7]
464
+ ),
465
+ Icon: Image,
466
+ label: (
467
+ /*label*/
468
+ o[6] || "Medical Image Analyzer"
469
+ )
470
+ }
471
+ });
472
+ const z = [Xe, Ve], k = [];
473
+ function B(c, b) {
474
+ return (
475
+ /*value*/
476
+ c[0] === null || !/*uploaded_file*/
477
+ c[15] ? 0 : 1
478
+ );
479
+ }
480
+ return s = B(o), r = k[s] = z[s](o), {
481
+ c() {
482
+ le(e.$$.fragment), t = M(), le(l.$$.fragment), a = M(), r.c(), u = me();
483
+ },
484
+ l(c) {
485
+ te(e.$$.fragment, c), t = F(c), te(l.$$.fragment, c), a = F(c), r.l(c), u = me();
486
+ },
487
+ m(c, b) {
488
+ ie(e, c, b), U(c, t, b), ie(l, c, b), U(c, a, b), k[s].m(c, b), U(c, u, b), m = !0;
489
+ },
490
+ p(c, b) {
491
+ const S = b[0] & /*gradio, loading_status*/
492
+ 8704 ? Ae(I, [
493
+ b[0] & /*gradio*/
494
+ 8192 && {
495
+ autoscroll: (
496
+ /*gradio*/
497
+ c[13].autoscroll
498
+ )
499
+ },
500
+ b[0] & /*gradio*/
501
+ 8192 && { i18n: (
502
+ /*gradio*/
503
+ c[13].i18n
504
+ ) },
505
+ b[0] & /*loading_status*/
506
+ 512 && De(
507
+ /*loading_status*/
508
+ c[9]
509
+ )
510
+ ]) : {};
511
+ e.$set(S);
512
+ const w = {};
513
+ b[0] & /*show_label*/
514
+ 128 && (w.show_label = /*show_label*/
515
+ c[7]), b[0] & /*label*/
516
+ 64 && (w.label = /*label*/
517
+ c[6] || "Medical Image Analyzer"), l.$set(w);
518
+ let O = s;
519
+ s = B(c), s === O ? k[s].p(c, b) : (Fe(), Y(k[O], 1, 1, () => {
520
+ k[O] = null;
521
+ }), Ue(), r = k[s], r ? r.p(c, b) : (r = k[s] = z[s](c), r.c()), W(r, 1), r.m(u.parentNode, u));
522
+ },
523
+ i(c) {
524
+ m || (W(e.$$.fragment, c), W(l.$$.fragment, c), W(r), m = !0);
525
+ },
526
+ o(c) {
527
+ Y(e.$$.fragment, c), Y(l.$$.fragment, c), Y(r), m = !1;
528
+ },
529
+ d(c) {
530
+ c && (h(t), h(a), h(u)), ne(e, c), ne(l, c), k[s].d(c);
531
+ }
532
+ };
533
+ }
534
+ function Je(o) {
535
+ let e, t;
536
+ return e = new Block({
537
+ props: {
538
+ visible: (
539
+ /*visible*/
540
+ o[5]
541
+ ),
542
+ elem_id: (
543
+ /*elem_id*/
544
+ o[3]
545
+ ),
546
+ elem_classes: (
547
+ /*elem_classes*/
548
+ o[4]
549
+ ),
550
+ container: (
551
+ /*container*/
552
+ o[10]
553
+ ),
554
+ scale: (
555
+ /*scale*/
556
+ o[11]
557
+ ),
558
+ min_width: (
559
+ /*min_width*/
560
+ o[12]
561
+ ),
562
+ allow_overflow: !1,
563
+ padding: !0,
564
+ $$slots: { default: [He] },
565
+ $$scope: { ctx: o }
566
+ }
567
+ }), {
568
+ c() {
569
+ le(e.$$.fragment);
570
+ },
571
+ l(l) {
572
+ te(e.$$.fragment, l);
573
+ },
574
+ m(l, a) {
575
+ ie(e, l, a), t = !0;
576
+ },
577
+ p(l, a) {
578
+ const s = {};
579
+ a[0] & /*visible*/
580
+ 32 && (s.visible = /*visible*/
581
+ l[5]), a[0] & /*elem_id*/
582
+ 8 && (s.elem_id = /*elem_id*/
583
+ l[3]), a[0] & /*elem_classes*/
584
+ 16 && (s.elem_classes = /*elem_classes*/
585
+ l[4]), a[0] & /*container*/
586
+ 1024 && (s.container = /*container*/
587
+ l[10]), a[0] & /*scale*/
588
+ 2048 && (s.scale = /*scale*/
589
+ l[11]), a[0] & /*min_width*/
590
+ 4096 && (s.min_width = /*min_width*/
591
+ l[12]), a[0] & /*root, gradio, value, uploaded_file, analysis_results, analysis_mode, visual_report, roi, show_roi, task, modality, show_label, label, loading_status*/
592
+ 1041351 | a[1] & /*$$scope*/
593
+ 4 && (s.$$scope = { dirty: a, ctx: l }), e.$set(s);
594
+ },
595
+ i(l) {
596
+ t || (W(e.$$.fragment, l), t = !0);
597
+ },
598
+ o(l) {
599
+ Y(e.$$.fragment, l), t = !1;
600
+ },
601
+ d(l) {
602
+ ne(e, l);
603
+ }
604
+ };
605
+ }
606
+ let Qe = !1;
607
+ function We(o, e, t) {
608
+ let { elem_id: l = "" } = e, { elem_classes: a = [] } = e, { visible: s = !0 } = e, { value: r = null } = e, { label: u } = e, { show_label: m } = e, { show_download_button: I } = e, { root: C } = e, { proxy_url: z } = e, { loading_status: k } = e, { container: B = !0 } = e, { scale: c = null } = e, { min_width: b = void 0 } = e, { gradio: S } = e, { analysis_mode: w = "structured" } = e, { include_confidence: O = !0 } = e, { include_reasoning: Z = !0 } = e, { modality: T = "CT" } = e, { task: G = "full_analysis" } = e, E = null, X = { x: 256, y: 256, radius: 10 }, H = !1, D = null, A = "";
609
+ async function se(n) {
610
+ var _;
611
+ const i = URL.createObjectURL(n), f = ((_ = n.name.split(".").pop()) == null ? void 0 : _.toLowerCase()) || "";
612
+ try {
613
+ if (!f || f === "dcm" || f === "dicom" || n.type === "application/dicom" || n.name.startsWith("IM_")) {
614
+ const d = new FormData();
615
+ d.append("file", n);
616
+ const N = await fetch(`${C}/process_dicom`, { method: "POST", body: d });
617
+ if (N.ok)
618
+ return await N.json();
619
+ }
620
+ return {
621
+ url: i,
622
+ name: n.name,
623
+ size: n.size,
624
+ type: n.type || "application/octet-stream"
625
+ };
626
+ } catch (d) {
627
+ throw console.error("Error loading file:", d), d;
628
+ }
629
+ }
630
+ function ae({ detail: n }) {
631
+ const i = n;
632
+ se(i).then((f) => {
633
+ t(15, E = i), S.dispatch && S.dispatch("upload", { file: i, data: f });
634
+ }).catch((f) => {
635
+ console.error("Upload error:", f);
636
+ });
637
+ }
638
+ function L() {
639
+ t(0, r = null), t(15, E = null), t(16, D = null), t(17, A = ""), S.dispatch("clear");
640
+ }
641
+ function oe(n) {
642
+ if (!H) return;
643
+ const i = n.target.getBoundingClientRect();
644
+ t(18, X.x = Math.round(n.clientX - i.left), X), t(18, X.y = Math.round(n.clientY - i.top), X), S.dispatch && S.dispatch("change", { roi: X });
645
+ }
646
+ function x(n) {
647
+ var f, _, d, N, P, K, ee, ce, ve;
648
+ if (!n) return "";
649
+ let i = '<div class="medical-report">';
650
+ if (i += "<h3>πŸ₯ Medical Image Analysis Report</h3>", i += '<div class="report-section">', i += "<h4>πŸ“‹ Basic Information</h4>", i += `<p><strong>Modality:</strong> ${n.modality || "Unknown"}</p>`, i += `<p><strong>Timestamp:</strong> ${n.timestamp || "N/A"}</p>`, i += "</div>", n.point_analysis) {
651
+ const y = n.point_analysis;
652
+ i += '<div class="report-section">', i += "<h4>🎯 Point Analysis</h4>", i += `<p><strong>Location:</strong> (${(f = y.location) == null ? void 0 : f.x}, ${(_ = y.location) == null ? void 0 : _.y})</p>`, n.modality === "CT" ? i += `<p><strong>HU Value:</strong> ${((d = y.hu_value) == null ? void 0 : d.toFixed(1)) || "N/A"}</p>` : i += `<p><strong>Intensity:</strong> ${((N = y.intensity) == null ? void 0 : N.toFixed(3)) || "N/A"}</p>`, y.tissue_type && (i += `<p><strong>Tissue Type:</strong> ${y.tissue_type.icon || ""} ${y.tissue_type.type || "Unknown"}</p>`), O && y.confidence !== void 0 && (i += `<p><strong>Confidence:</strong> ${y.confidence}</p>`), Z && y.reasoning && (i += `<p class="reasoning">πŸ’­ ${y.reasoning}</p>`), i += "</div>";
653
+ }
654
+ if ((P = n.segmentation) != null && P.statistics) {
655
+ const y = n.segmentation.statistics;
656
+ if (n.modality === "CT" && y.total_fat_percentage !== void 0) {
657
+ if (i += '<div class="report-section">', i += "<h4>πŸ”¬ Fat Segmentation</h4>", i += '<div class="stats-grid">', i += `<div><strong>Total Fat:</strong> ${y.total_fat_percentage.toFixed(1)}%</div>`, i += `<div><strong>Subcutaneous:</strong> ${y.subcutaneous_fat_percentage.toFixed(1)}%</div>`, i += `<div><strong>Visceral:</strong> ${y.visceral_fat_percentage.toFixed(1)}%</div>`, i += `<div><strong>V/S Ratio:</strong> ${y.visceral_subcutaneous_ratio.toFixed(2)}</div>`, i += "</div>", n.segmentation.interpretation) {
658
+ const J = n.segmentation.interpretation;
659
+ i += '<div class="interpretation">', i += `<p><strong>Obesity Risk:</strong> <span class="risk-${J.obesity_risk}">${J.obesity_risk.toUpperCase()}</span></p>`, i += `<p><strong>Visceral Risk:</strong> <span class="risk-${J.visceral_risk}">${J.visceral_risk.toUpperCase()}</span></p>`, ((K = J.recommendations) == null ? void 0 : K.length) > 0 && (i += "<p><strong>Recommendations:</strong></p>", i += "<ul>", J.recommendations.forEach((ge) => {
660
+ i += `<li>${ge}</li>`;
661
+ }), i += "</ul>"), i += "</div>";
662
+ }
663
+ i += "</div>";
664
+ } else if (n.segmentation.tissue_distribution) {
665
+ i += '<div class="report-section">', i += "<h4>🦴 Tissue Distribution</h4>", i += '<div class="tissue-grid">';
666
+ const J = n.segmentation.tissue_distribution, ge = {
667
+ bone: "🦴",
668
+ soft_tissue: "πŸ”΄",
669
+ air: "🌫️",
670
+ metal: "βš™οΈ",
671
+ fat: "🟑",
672
+ fluid: "πŸ’§"
673
+ };
674
+ Object.entries(J).forEach(([ue, be]) => {
675
+ be > 0 && (i += '<div class="tissue-item">', i += `<div class="tissue-icon">${ge[ue] || "πŸ“"}</div>`, i += `<div class="tissue-name">${ue.replace("_", " ")}</div>`, i += `<div class="tissue-percentage">${be.toFixed(1)}%</div>`, i += "</div>");
676
+ }), i += "</div>", ((ee = n.segmentation.clinical_findings) == null ? void 0 : ee.length) > 0 && (i += '<div class="clinical-findings">', i += "<p><strong>⚠️ Clinical Findings:</strong></p>", i += "<ul>", n.segmentation.clinical_findings.forEach((ue) => {
677
+ i += `<li>${ue.description} (Confidence: ${ue.confidence})</li>`;
678
+ }), i += "</ul>", i += "</div>"), i += "</div>";
679
+ }
680
+ }
681
+ if (n.quality_metrics) {
682
+ const y = n.quality_metrics;
683
+ i += '<div class="report-section">', i += "<h4>πŸ“Š Image Quality</h4>", i += `<p><strong>Overall Quality:</strong> <span class="quality-${y.overall_quality}">${((ce = y.overall_quality) == null ? void 0 : ce.toUpperCase()) || "UNKNOWN"}</span></p>`, ((ve = y.issues) == null ? void 0 : ve.length) > 0 && (i += `<p><strong>Issues:</strong> ${y.issues.join(", ")}</p>`), i += "</div>";
684
+ }
685
+ return i += "</div>", i;
686
+ }
687
+ function $() {
688
+ T = ke(this), t(1, T);
689
+ }
690
+ function re() {
691
+ G = ke(this), t(2, G);
692
+ }
693
+ function _e() {
694
+ H = this.checked, t(19, H);
695
+ }
696
+ return o.$$set = (n) => {
697
+ "elem_id" in n && t(3, l = n.elem_id), "elem_classes" in n && t(4, a = n.elem_classes), "visible" in n && t(5, s = n.visible), "value" in n && t(0, r = n.value), "label" in n && t(6, u = n.label), "show_label" in n && t(7, m = n.show_label), "show_download_button" in n && t(23, I = n.show_download_button), "root" in n && t(8, C = n.root), "proxy_url" in n && t(24, z = n.proxy_url), "loading_status" in n && t(9, k = n.loading_status), "container" in n && t(10, B = n.container), "scale" in n && t(11, c = n.scale), "min_width" in n && t(12, b = n.min_width), "gradio" in n && t(13, S = n.gradio), "analysis_mode" in n && t(14, w = n.analysis_mode), "include_confidence" in n && t(25, O = n.include_confidence), "include_reasoning" in n && t(26, Z = n.include_reasoning), "modality" in n && t(1, T = n.modality), "task" in n && t(2, G = n.task);
698
+ }, o.$$.update = () => {
699
+ o.$$.dirty[0] & /*analysis_results*/
700
+ 65536 && D && t(17, A = x(D)), o.$$.dirty[0] & /*uploaded_file, analysis_results, visual_report*/
701
+ 229376 && t(0, r = {
702
+ image: E,
703
+ analysis: D,
704
+ report: A
705
+ });
706
+ }, [
707
+ r,
708
+ T,
709
+ G,
710
+ l,
711
+ a,
712
+ s,
713
+ u,
714
+ m,
715
+ C,
716
+ k,
717
+ B,
718
+ c,
719
+ b,
720
+ S,
721
+ w,
722
+ E,
723
+ D,
724
+ A,
725
+ X,
726
+ H,
727
+ ae,
728
+ L,
729
+ oe,
730
+ I,
731
+ z,
732
+ O,
733
+ Z,
734
+ $,
735
+ re,
736
+ _e
737
+ ];
738
+ }
739
+ class Ye extends Ee {
740
+ constructor(e) {
741
+ super(), Me(
742
+ this,
743
+ e,
744
+ We,
745
+ Je,
746
+ je,
747
+ {
748
+ elem_id: 3,
749
+ elem_classes: 4,
750
+ visible: 5,
751
+ value: 0,
752
+ label: 6,
753
+ show_label: 7,
754
+ show_download_button: 23,
755
+ root: 8,
756
+ proxy_url: 24,
757
+ loading_status: 9,
758
+ container: 10,
759
+ scale: 11,
760
+ min_width: 12,
761
+ gradio: 13,
762
+ analysis_mode: 14,
763
+ include_confidence: 25,
764
+ include_reasoning: 26,
765
+ modality: 1,
766
+ task: 2
767
+ },
768
+ null,
769
+ [-1, -1]
770
+ );
771
+ }
772
+ }
773
+ export {
774
+ Ye as default
775
+ };
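Note the upload path in `se(n)` above: files with no extension, `.dcm`/`.dicom` suffixes, an `application/dicom` MIME type, or names starting with `IM_` are POSTed as multipart form data (field name "file") to `${root}/process_dicom`, and the JSON response is used as the processed file payload; anything else falls back to a plain object URL. A hedged sketch of a server route matching that contract (FastAPI-style; the body is hypothetical and how the route is mounted depends on the host app):

    from fastapi import FastAPI, File, UploadFile

    app = FastAPI()

    @app.post("/process_dicom")
    async def process_dicom(file: UploadFile = File(...)):
        raw = await file.read()
        # Hypothetical: decode the DICOM here and return whatever the frontend
        # should treat as the processed image payload.
        return {
            "name": file.filename,
            "size": len(raw),
            "type": file.content_type or "application/dicom",
        }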
src/backend/gradio_medical_image_analyzer/templates/component/style.css ADDED
@@ -0,0 +1 @@
1
+ .analyzer-container.svelte-197pbtm.svelte-197pbtm{display:flex;flex-direction:column;gap:1rem}.controls.svelte-197pbtm.svelte-197pbtm{display:flex;gap:.5rem;align-items:center;flex-wrap:wrap}.modality-select.svelte-197pbtm.svelte-197pbtm,.task-select.svelte-197pbtm.svelte-197pbtm{padding:.5rem;border:1px solid var(--border-color-primary);border-radius:var(--radius-sm);background:var(--background-fill-primary)}.roi-toggle.svelte-197pbtm.svelte-197pbtm{display:flex;align-items:center;gap:.5rem;cursor:pointer}.image-container.svelte-197pbtm.svelte-197pbtm{position:relative;overflow:hidden;border:1px solid var(--border-color-primary);border-radius:var(--radius-sm);cursor:crosshair}.image-container.svelte-197pbtm img.svelte-197pbtm{width:100%;height:auto;display:block}.roi-marker.svelte-197pbtm.svelte-197pbtm{position:absolute;border:2px solid #ff0000;border-radius:50%;pointer-events:none;transform:translate(-50%,-50%);box-shadow:0 0 0 1px #ffffff80}.report-container.svelte-197pbtm.svelte-197pbtm{background:var(--background-fill-secondary);border:1px solid var(--border-color-primary);border-radius:var(--radius-sm);padding:1rem;overflow-x:auto}.medical-report{font-family:var(--font);color:var(--body-text-color)}.medical-report h3{color:var(--body-text-color);border-bottom:2px solid var(--color-accent);padding-bottom:.5rem;margin-bottom:1rem}.medical-report h4{color:var(--body-text-color);margin-top:1rem;margin-bottom:.5rem}.report-section{background:var(--background-fill-primary);padding:1rem;border-radius:var(--radius-sm);margin-bottom:1rem}.stats-grid,.tissue-grid{display:grid;grid-template-columns:repeat(auto-fit,minmax(150px,1fr));gap:.5rem;margin-top:.5rem}.tissue-item{text-align:center;padding:.5rem;background:var(--background-fill-secondary);border-radius:var(--radius-sm)}.tissue-icon{font-size:2rem;margin-bottom:.25rem}.tissue-name{font-weight:700;text-transform:capitalize}.tissue-percentage{color:var(--color-accent);font-size:1.2rem;font-weight:700}.reasoning{font-style:italic;color:var(--body-text-color-subdued);margin-top:.5rem}.interpretation{margin-top:1rem;padding:.5rem;background:var(--background-fill-secondary);border-radius:var(--radius-sm)}.risk-normal{color:#27ae60}.risk-moderate{color:#f39c12}.risk-high,.risk-severe{color:#e74c3c}.quality-excellent,.quality-good{color:#27ae60}.quality-fair{color:#f39c12}.quality-poor{color:#e74c3c}.clinical-findings{margin-top:1rem;padding:.5rem;background:#fff3cd;border-left:4px solid #ffc107;border-radius:var(--radius-sm)}.json-output.svelte-197pbtm.svelte-197pbtm{margin-top:1rem;background:var(--background-fill-secondary);border:1px solid var(--border-color-primary);border-radius:var(--radius-sm);padding:1rem}.json-output.svelte-197pbtm summary.svelte-197pbtm{cursor:pointer;font-weight:700;margin-bottom:.5rem}.json-output.svelte-197pbtm pre.svelte-197pbtm{margin:0;overflow-x:auto;font-size:.875rem;background:var(--background-fill-primary);padding:.5rem;border-radius:var(--radius-sm)}
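The `risk-*` and `quality-*` rules in this stylesheet are keyed directly off lower-cased analysis fields (the report builder above emits `class="risk-${interpretation.obesity_risk}"` and `class="quality-${quality_metrics.overall_quality}"`), so backend values must stay inside that vocabulary to be styled. A hedged fragment illustrating the constraint (keys taken from the report builder; the concrete values are hypothetical):

    analysis_results = {
        "segmentation": {"interpretation": {"obesity_risk": "high", "visceral_risk": "moderate"}},
        "quality_metrics": {"overall_quality": "good", "issues": []},
        # Styled values per the CSS: risk-{normal,moderate,high,severe},
        # quality-{excellent,good,fair,poor}
    }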
src/backend/gradio_medical_image_analyzer/templates/example/index.js ADDED
@@ -0,0 +1,399 @@
1
+ const {
2
+ SvelteComponent: U,
3
+ append_hydration: m,
4
+ attr: f,
5
+ children: y,
6
+ claim_element: p,
7
+ claim_space: k,
8
+ claim_text: b,
9
+ detach: c,
10
+ element: h,
11
+ get_svelte_dataset: R,
12
+ init: j,
13
+ insert_hydration: d,
14
+ noop: I,
15
+ safe_not_equal: z,
16
+ set_data: E,
17
+ space: C,
18
+ src_url_equal: w,
19
+ text: g,
20
+ toggle_class: v
21
+ } = window.__gradio__svelte__internal;
22
+ function B(n) {
23
+ let e, a = "No example";
24
+ return {
25
+ c() {
26
+ e = h("div"), e.textContent = a, this.h();
27
+ },
28
+ l(l) {
29
+ e = p(l, "DIV", { class: !0, "data-svelte-h": !0 }), R(e) !== "svelte-1xsigcs" && (e.textContent = a), this.h();
30
+ },
31
+ h() {
32
+ f(e, "class", "empty-example svelte-16yp9bf");
33
+ },
34
+ m(l, t) {
35
+ d(l, e, t);
36
+ },
37
+ p: I,
38
+ d(l) {
39
+ l && c(e);
40
+ }
41
+ };
42
+ }
43
+ function F(n) {
44
+ let e, a, l = (
45
+ /*value*/
46
+ n[0].image && S(n)
47
+ ), t = (
48
+ /*value*/
49
+ n[0].analysis && A(n)
50
+ );
51
+ return {
52
+ c() {
53
+ e = h("div"), l && l.c(), a = C(), t && t.c(), this.h();
54
+ },
55
+ l(i) {
56
+ e = p(i, "DIV", { class: !0 });
57
+ var s = y(e);
58
+ l && l.l(s), a = k(s), t && t.l(s), s.forEach(c), this.h();
59
+ },
60
+ h() {
61
+ f(e, "class", "example-content svelte-16yp9bf");
62
+ },
63
+ m(i, s) {
64
+ d(i, e, s), l && l.m(e, null), m(e, a), t && t.m(e, null);
65
+ },
66
+ p(i, s) {
67
+ /*value*/
68
+ i[0].image ? l ? l.p(i, s) : (l = S(i), l.c(), l.m(e, a)) : l && (l.d(1), l = null), /*value*/
69
+ i[0].analysis ? t ? t.p(i, s) : (t = A(i), t.c(), t.m(e, null)) : t && (t.d(1), t = null);
70
+ },
71
+ d(i) {
72
+ i && c(e), l && l.d(), t && t.d();
73
+ }
74
+ };
75
+ }
76
+ function S(n) {
77
+ let e;
78
+ function a(i, s) {
79
+ return typeof /*value*/
80
+ i[0].image == "string" ? K : (
81
+ /*value*/
82
+ i[0].image.url ? J : H
83
+ );
84
+ }
85
+ let l = a(n), t = l(n);
86
+ return {
87
+ c() {
88
+ e = h("div"), t.c(), this.h();
89
+ },
90
+ l(i) {
91
+ e = p(i, "DIV", { class: !0 });
92
+ var s = y(e);
93
+ t.l(s), s.forEach(c), this.h();
94
+ },
95
+ h() {
96
+ f(e, "class", "image-preview svelte-16yp9bf");
97
+ },
98
+ m(i, s) {
99
+ d(i, e, s), t.m(e, null);
100
+ },
101
+ p(i, s) {
102
+ l === (l = a(i)) && t ? t.p(i, s) : (t.d(1), t = l(i), t && (t.c(), t.m(e, null)));
103
+ },
104
+ d(i) {
105
+ i && c(e), t.d();
106
+ }
107
+ };
108
+ }
109
+ function H(n) {
110
+ let e, a = "πŸ“· Image";
111
+ return {
112
+ c() {
113
+ e = h("div"), e.textContent = a, this.h();
114
+ },
115
+ l(l) {
116
+ e = p(l, "DIV", { class: !0, "data-svelte-h": !0 }), R(e) !== "svelte-1hvroc5" && (e.textContent = a), this.h();
117
+ },
118
+ h() {
119
+ f(e, "class", "placeholder svelte-16yp9bf");
120
+ },
121
+ m(l, t) {
122
+ d(l, e, t);
123
+ },
124
+ p: I,
125
+ d(l) {
126
+ l && c(e);
127
+ }
128
+ };
129
+ }
130
+ function J(n) {
131
+ let e, a;
132
+ return {
133
+ c() {
134
+ e = h("img"), this.h();
135
+ },
136
+ l(l) {
137
+ e = p(l, "IMG", { src: !0, alt: !0, class: !0 }), this.h();
138
+ },
139
+ h() {
140
+ w(e.src, a = /*value*/
141
+ n[0].image.url) || f(e, "src", a), f(e, "alt", "Medical scan example"), f(e, "class", "svelte-16yp9bf");
142
+ },
143
+ m(l, t) {
144
+ d(l, e, t);
145
+ },
146
+ p(l, t) {
147
+ t & /*value*/
148
+ 1 && !w(e.src, a = /*value*/
149
+ l[0].image.url) && f(e, "src", a);
150
+ },
151
+ d(l) {
152
+ l && c(e);
153
+ }
154
+ };
155
+ }
156
+ function K(n) {
157
+ let e, a;
158
+ return {
159
+ c() {
160
+ e = h("img"), this.h();
161
+ },
162
+ l(l) {
163
+ e = p(l, "IMG", { src: !0, alt: !0, class: !0 }), this.h();
164
+ },
165
+ h() {
166
+ w(e.src, a = /*value*/
167
+ n[0].image) || f(e, "src", a), f(e, "alt", "Medical scan example"), f(e, "class", "svelte-16yp9bf");
168
+ },
169
+ m(l, t) {
170
+ d(l, e, t);
171
+ },
172
+ p(l, t) {
173
+ t & /*value*/
174
+ 1 && !w(e.src, a = /*value*/
175
+ l[0].image) && f(e, "src", a);
176
+ },
177
+ d(l) {
178
+ l && c(e);
179
+ }
180
+ };
181
+ }
182
+ function A(n) {
183
+ var r, _, D;
184
+ let e, a, l, t = (
185
+ /*value*/
186
+ n[0].analysis.modality && P(n)
187
+ ), i = (
188
+ /*value*/
189
+ ((r = n[0].analysis.point_analysis) == null ? void 0 : r.tissue_type) && q(n)
190
+ ), s = (
191
+ /*value*/
192
+ ((D = (_ = n[0].analysis.segmentation) == null ? void 0 : _.interpretation) == null ? void 0 : D.obesity_risk) && G(n)
193
+ );
194
+ return {
195
+ c() {
196
+ e = h("div"), t && t.c(), a = C(), i && i.c(), l = C(), s && s.c(), this.h();
197
+ },
198
+ l(o) {
199
+ e = p(o, "DIV", { class: !0 });
200
+ var u = y(e);
201
+ t && t.l(u), a = k(u), i && i.l(u), l = k(u), s && s.l(u), u.forEach(c), this.h();
202
+ },
203
+ h() {
204
+ f(e, "class", "analysis-preview svelte-16yp9bf");
205
+ },
206
+ m(o, u) {
207
+ d(o, e, u), t && t.m(e, null), m(e, a), i && i.m(e, null), m(e, l), s && s.m(e, null);
208
+ },
209
+ p(o, u) {
210
+ var V, M, N;
211
+ /*value*/
212
+ o[0].analysis.modality ? t ? t.p(o, u) : (t = P(o), t.c(), t.m(e, a)) : t && (t.d(1), t = null), /*value*/
213
+ (V = o[0].analysis.point_analysis) != null && V.tissue_type ? i ? i.p(o, u) : (i = q(o), i.c(), i.m(e, l)) : i && (i.d(1), i = null), /*value*/
214
+ (N = (M = o[0].analysis.segmentation) == null ? void 0 : M.interpretation) != null && N.obesity_risk ? s ? s.p(o, u) : (s = G(o), s.c(), s.m(e, null)) : s && (s.d(1), s = null);
215
+ },
216
+ d(o) {
217
+ o && c(e), t && t.d(), i && i.d(), s && s.d();
218
+ }
219
+ };
220
+ }
221
+ function P(n) {
222
+ let e, a = (
223
+ /*value*/
224
+ n[0].analysis.modality + ""
225
+ ), l;
226
+ return {
227
+ c() {
228
+ e = h("span"), l = g(a), this.h();
229
+ },
230
+ l(t) {
231
+ e = p(t, "SPAN", { class: !0 });
232
+ var i = y(e);
233
+ l = b(i, a), i.forEach(c), this.h();
234
+ },
235
+ h() {
236
+ f(e, "class", "modality-badge svelte-16yp9bf");
237
+ },
238
+ m(t, i) {
239
+ d(t, e, i), m(e, l);
240
+ },
241
+ p(t, i) {
242
+ i & /*value*/
243
+ 1 && a !== (a = /*value*/
244
+ t[0].analysis.modality + "") && E(l, a);
245
+ },
246
+ d(t) {
247
+ t && c(e);
248
+ }
249
+ };
250
+ }
251
+ function q(n) {
252
+ let e, a = (
253
+ /*value*/
254
+ (n[0].analysis.point_analysis.tissue_type.icon || "") + ""
255
+ ), l, t, i = (
256
+ /*value*/
257
+ (n[0].analysis.point_analysis.tissue_type.type || "Unknown") + ""
258
+ ), s;
259
+ return {
260
+ c() {
261
+ e = h("span"), l = g(a), t = C(), s = g(i), this.h();
262
+ },
263
+ l(r) {
264
+ e = p(r, "SPAN", { class: !0 });
265
+ var _ = y(e);
266
+ l = b(_, a), t = k(_), s = b(_, i), _.forEach(c), this.h();
267
+ },
268
+ h() {
269
+ f(e, "class", "tissue-type svelte-16yp9bf");
270
+ },
271
+ m(r, _) {
272
+ d(r, e, _), m(e, l), m(e, t), m(e, s);
273
+ },
274
+ p(r, _) {
275
+ _ & /*value*/
276
+ 1 && a !== (a = /*value*/
277
+ (r[0].analysis.point_analysis.tissue_type.icon || "") + "") && E(l, a), _ & /*value*/
278
+ 1 && i !== (i = /*value*/
279
+ (r[0].analysis.point_analysis.tissue_type.type || "Unknown") + "") && E(s, i);
280
+ },
281
+ d(r) {
282
+ r && c(e);
283
+ }
284
+ };
285
+ }
286
+ function G(n) {
287
+ let e, a, l = (
288
+ /*value*/
289
+ n[0].analysis.segmentation.interpretation.obesity_risk + ""
290
+ ), t, i;
291
+ return {
292
+ c() {
293
+ e = h("span"), a = g("Risk: "), t = g(l), this.h();
294
+ },
295
+ l(s) {
296
+ e = p(s, "SPAN", { class: !0 });
297
+ var r = y(e);
298
+ a = b(r, "Risk: "), t = b(r, l), r.forEach(c), this.h();
299
+ },
300
+ h() {
301
+ f(e, "class", i = "risk-badge risk-" + /*value*/
302
+ n[0].analysis.segmentation.interpretation.obesity_risk + " svelte-16yp9bf");
303
+ },
304
+ m(s, r) {
305
+ d(s, e, r), m(e, a), m(e, t);
306
+ },
307
+ p(s, r) {
308
+ r & /*value*/
309
+ 1 && l !== (l = /*value*/
310
+ s[0].analysis.segmentation.interpretation.obesity_risk + "") && E(t, l), r & /*value*/
311
+ 1 && i !== (i = "risk-badge risk-" + /*value*/
312
+ s[0].analysis.segmentation.interpretation.obesity_risk + " svelte-16yp9bf") && f(e, "class", i);
313
+ },
314
+ d(s) {
315
+ s && c(e);
316
+ }
317
+ };
318
+ }
319
+ function L(n) {
320
+ let e;
321
+ function a(i, s) {
322
+ return (
323
+ /*value*/
324
+ i[0] ? F : B
325
+ );
326
+ }
327
+ let l = a(n), t = l(n);
328
+ return {
329
+ c() {
330
+ e = h("div"), t.c(), this.h();
331
+ },
332
+ l(i) {
333
+ e = p(i, "DIV", { class: !0 });
334
+ var s = y(e);
335
+ t.l(s), s.forEach(c), this.h();
336
+ },
337
+ h() {
338
+ f(e, "class", "example-container svelte-16yp9bf"), v(
339
+ e,
340
+ "table",
341
+ /*type*/
342
+ n[1] === "table"
343
+ ), v(
344
+ e,
345
+ "gallery",
346
+ /*type*/
347
+ n[1] === "gallery"
348
+ ), v(
349
+ e,
350
+ "selected",
351
+ /*selected*/
352
+ n[2]
353
+ );
354
+ },
355
+ m(i, s) {
356
+ d(i, e, s), t.m(e, null);
357
+ },
358
+ p(i, [s]) {
359
+ l === (l = a(i)) && t ? t.p(i, s) : (t.d(1), t = l(i), t && (t.c(), t.m(e, null))), s & /*type*/
360
+ 2 && v(
361
+ e,
362
+ "table",
363
+ /*type*/
364
+ i[1] === "table"
365
+ ), s & /*type*/
366
+ 2 && v(
367
+ e,
368
+ "gallery",
369
+ /*type*/
370
+ i[1] === "gallery"
371
+ ), s & /*selected*/
372
+ 4 && v(
373
+ e,
374
+ "selected",
375
+ /*selected*/
376
+ i[2]
377
+ );
378
+ },
379
+ i: I,
380
+ o: I,
381
+ d(i) {
382
+ i && c(e), t.d();
383
+ }
384
+ };
385
+ }
386
+ function O(n, e, a) {
387
+ let { value: l } = e, { type: t } = e, { selected: i = !1 } = e;
388
+ return n.$$set = (s) => {
389
+ "value" in s && a(0, l = s.value), "type" in s && a(1, t = s.type), "selected" in s && a(2, i = s.selected);
390
+ }, [l, t, i];
391
+ }
392
+ class Q extends U {
393
+ constructor(e) {
394
+ super(), j(this, e, O, L, z, { value: 0, type: 1, selected: 2 });
395
+ }
396
+ }
397
+ export {
398
+ Q as default
399
+ };
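
For orientation, the compiled Example component above reads only a handful of fields from its `value` prop: `image` (thumbnail source), `analysis.modality`, `analysis.point_analysis.tissue_type.{icon,type}`, and `analysis.segmentation.interpretation.obesity_risk`. A minimal sketch of a payload that exercises all three badges (the key names come from the component code; the concrete values are illustrative):

```python
# Hypothetical example value; only the keys read by the compiled
# Svelte component above carry meaning.
example_value = {
    "image": "examples/sample_ct.png",  # rendered as the <img> thumbnail
    "analysis": {
        "modality": "CT",  # shown as the modality badge
        "point_analysis": {
            "tissue_type": {"icon": "🦴", "type": "bone"}  # tissue badge
        },
        "segmentation": {
            # styled via the risk-badge / risk-moderate CSS classes below
            "interpretation": {"obesity_risk": "moderate"}
        },
    },
}
```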
src/backend/gradio_medical_image_analyzer/templates/example/style.css ADDED
@@ -0,0 +1 @@
1
+ .example-container.svelte-16yp9bf.svelte-16yp9bf{overflow:hidden;border-radius:var(--radius-sm);background:var(--background-fill-secondary);position:relative;transition:all .2s ease}.example-container.svelte-16yp9bf.svelte-16yp9bf:hover{transform:translateY(-2px);box-shadow:0 4px 12px #0000001a}.example-container.selected.svelte-16yp9bf.svelte-16yp9bf{border:2px solid var(--color-accent)}.example-container.table.svelte-16yp9bf.svelte-16yp9bf{display:flex;align-items:center;padding:.5rem;gap:.5rem}.example-container.gallery.svelte-16yp9bf.svelte-16yp9bf{aspect-ratio:1}.example-content.svelte-16yp9bf.svelte-16yp9bf{display:flex;flex-direction:column;height:100%}.table.svelte-16yp9bf .example-content.svelte-16yp9bf{flex-direction:row;align-items:center;gap:.5rem}.image-preview.svelte-16yp9bf.svelte-16yp9bf{flex:1;overflow:hidden;display:flex;align-items:center;justify-content:center;background:var(--background-fill-primary)}.gallery.svelte-16yp9bf .image-preview.svelte-16yp9bf{height:70%}.table.svelte-16yp9bf .image-preview.svelte-16yp9bf{width:60px;height:60px;flex:0 0 60px;border-radius:var(--radius-sm)}.image-preview.svelte-16yp9bf img.svelte-16yp9bf{width:100%;height:100%;object-fit:cover}.placeholder.svelte-16yp9bf.svelte-16yp9bf{color:var(--body-text-color-subdued);font-size:2rem;opacity:.5}.analysis-preview.svelte-16yp9bf.svelte-16yp9bf{padding:.5rem;display:flex;flex-wrap:wrap;gap:.25rem;align-items:center;font-size:.875rem}.gallery.svelte-16yp9bf .analysis-preview.svelte-16yp9bf{background:var(--background-fill-primary);border-top:1px solid var(--border-color-primary)}.modality-badge.svelte-16yp9bf.svelte-16yp9bf{background:var(--color-accent);color:#fff;padding:.125rem .5rem;border-radius:var(--radius-sm);font-weight:700;font-size:.75rem}.tissue-type.svelte-16yp9bf.svelte-16yp9bf{background:var(--background-fill-secondary);padding:.125rem .5rem;border-radius:var(--radius-sm);border:1px solid var(--border-color-primary)}.risk-badge.svelte-16yp9bf.svelte-16yp9bf{padding:.125rem .5rem;border-radius:var(--radius-sm);font-weight:700;font-size:.75rem}.risk-normal.svelte-16yp9bf.svelte-16yp9bf{background:#d4edda;color:#155724}.risk-moderate.svelte-16yp9bf.svelte-16yp9bf{background:#fff3cd;color:#856404}.risk-high.svelte-16yp9bf.svelte-16yp9bf,.risk-severe.svelte-16yp9bf.svelte-16yp9bf{background:#f8d7da;color:#721c24}.empty-example.svelte-16yp9bf.svelte-16yp9bf{display:flex;align-items:center;justify-content:center;height:100%;color:var(--body-text-color-subdued);font-style:italic}
src/backend/gradio_medical_image_analyzer/xray_analyzer.py ADDED
@@ -0,0 +1,770 @@
1
+ #!/usr/bin/env python3
2
+ # -*- coding: utf-8 -*-
3
+ """
4
+ X-Ray Image Analyzer for bone and tissue segmentation
5
+ Extended for multi-tissue analysis in medical imaging
6
+ """
7
+
8
+ import numpy as np
9
+ from scipy import ndimage
10
+ from skimage import filters, morphology, measure, segmentation
11
+ from skimage.exposure import equalize_adapthist
12
+ from typing import Dict, Any, Optional, List, Tuple
13
+ import cv2
14
+
15
+
16
+ class XRayAnalyzer:
17
+ """Analyze X-Ray images for comprehensive tissue segmentation"""
18
+
19
+ def __init__(self):
20
+ self.modality_types = ['CR', 'DX', 'RX', 'DR'] # Computed/Digital/Direct Radiography, X-Ray
21
+ self.tissue_types = ['bone', 'soft_tissue', 'air', 'metal', 'fat', 'fluid']
22
+
23
+ def analyze_xray_image(self, pixel_array: np.ndarray, metadata: Optional[dict] = None) -> dict:
24
+ """
25
+ Analyze X-Ray image and segment different structures
26
+
27
+ Args:
28
+ pixel_array: 2D numpy array of pixel values
29
+ metadata: DICOM metadata (optional)
30
+
31
+ Returns:
32
+ Dictionary with segmentation results
33
+ """
34
+ # Normalize image to 0-1 range
35
+ normalized = self._normalize_image(pixel_array)
36
+
37
+ # Calculate image statistics
38
+ stats = self._calculate_statistics(normalized)
39
+
40
+ # Segment different structures with enhanced tissue detection
41
+ segments = {
42
+ 'bone': self._segment_bone(normalized, stats),
43
+ 'soft_tissue': self._segment_soft_tissue(normalized, stats),
44
+ 'air': self._segment_air(normalized, stats),
45
+ 'metal': self._detect_metal(normalized, stats),
46
+ 'fat': self._segment_fat_xray(normalized, stats),
47
+ 'fluid': self._detect_fluid(normalized, stats)
48
+ }
49
+
50
+ # Calculate percentages
51
+ total_pixels = pixel_array.size
52
+ percentages = {
53
+ name: (np.sum(mask) / total_pixels * 100)
54
+ for name, mask in segments.items()
55
+ }
56
+
57
+ # Perform clinical analysis
58
+ clinical_analysis = self._perform_clinical_analysis(segments, stats, metadata)
59
+
60
+ return {
61
+ 'segments': segments,
62
+ 'percentages': percentages,
63
+ 'statistics': stats,
64
+ 'clinical_analysis': clinical_analysis,
65
+ 'overlay': self._create_overlay(normalized, segments),
66
+ 'tissue_map': self._create_tissue_map(segments)
67
+ }
68
+
69
+ def _normalize_image(self, image: np.ndarray) -> np.ndarray:
70
+ """Normalize image to 0-1 range"""
71
+ img_min, img_max = image.min(), image.max()
72
+ if img_max - img_min == 0:
73
+ return np.zeros_like(image, dtype=np.float32)
74
+ return (image - img_min) / (img_max - img_min)
75
+
76
+ def _calculate_statistics(self, image: np.ndarray) -> dict:
77
+ """Calculate comprehensive image statistics for adaptive processing"""
78
+ return {
79
+ 'mean': np.mean(image),
80
+ 'std': np.std(image),
81
+ 'median': np.median(image),
82
+ 'skewness': float(np.mean(((image - np.mean(image)) / max(np.std(image), 1e-8)) ** 3)),  # guard against zero std on flat images
83
+ 'kurtosis': float(np.mean(((image - np.mean(image)) / max(np.std(image), 1e-8)) ** 4) - 3),
84
+ 'percentiles': {
85
+ f'p{p}': np.percentile(image, p)
86
+ for p in [1, 5, 10, 15, 20, 25, 30, 40, 50, 60, 70, 75, 80, 85, 90, 95, 99]
87
+ },
88
+ 'histogram': np.histogram(image, bins=256)[0]
89
+ }
90
+
91
+ def _segment_bone(self, image: np.ndarray, stats: dict) -> np.ndarray:
92
+ """
93
+ Enhanced bone segmentation using multiple techniques
94
+ """
95
+ percentiles = stats['percentiles']
96
+
97
+ # Method 1: Percentile-based thresholding
98
+ bone_threshold = percentiles['p80']
99
+ bone_mask = image > bone_threshold
100
+
101
+ # Method 2: Otsu's method on high-intensity regions
102
+ high_intensity = image > percentiles['p60']
103
+ if np.any(high_intensity):
104
+ otsu_thresh = filters.threshold_otsu(image[high_intensity])
105
+ bone_mask_otsu = image > otsu_thresh
106
+ bone_mask = bone_mask | bone_mask_otsu
107
+
108
+ # Method 3: Gradient-based edge detection for cortical bone
109
+ gradient_magnitude = filters.sobel(image)
110
+ high_gradient = gradient_magnitude > np.percentile(gradient_magnitude, 90)
111
+ bone_edges = high_gradient & (image > percentiles['p70'])
112
+
113
+ # Combine methods
114
+ bone_mask = bone_mask | bone_edges
115
+
116
+ # Clean up using morphological operations
117
+ bone_mask = morphology.remove_small_objects(bone_mask, min_size=100)
118
+ bone_mask = morphology.binary_closing(bone_mask, morphology.disk(3))
119
+ bone_mask = morphology.binary_dilation(bone_mask, morphology.disk(1))
120
+
121
+ return bone_mask.astype(np.uint8)
122
+
123
+ def _segment_soft_tissue(self, image: np.ndarray, stats: dict) -> np.ndarray:
124
+ """
125
+ Enhanced soft tissue segmentation with better discrimination
126
+ """
127
+ percentiles = stats['percentiles']
128
+
129
+ # Soft tissue is between air and bone
130
+ soft_lower = percentiles['p20']
131
+ soft_upper = percentiles['p75']
132
+
133
+ # Initial mask
134
+ soft_mask = (image > soft_lower) & (image < soft_upper)
135
+
136
+ # Use adaptive thresholding for better edge detection
137
+ img_uint8 = (image * 255).astype(np.uint8)
138
+
139
+ # Multiple adaptive threshold scales
140
+ adaptive_masks = []
141
+ for block_size in [31, 51, 71]:
142
+ adaptive_thresh = cv2.adaptiveThreshold(
143
+ img_uint8, 255,
144
+ cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
145
+ cv2.THRESH_BINARY,
146
+ blockSize=block_size,
147
+ C=2
148
+ )
149
+ adaptive_masks.append(adaptive_thresh > 0)
150
+
151
+ # Combine adaptive masks
152
+ combined_adaptive = np.logical_and.reduce(adaptive_masks)
153
+ soft_mask = soft_mask & combined_adaptive
154
+
155
+ # Remove bone and air regions
156
+ bone_mask = self._segment_bone(image, stats)
157
+ air_mask = self._segment_air(image, stats)
158
+ soft_mask = soft_mask & ~bone_mask.astype(bool) & ~air_mask.astype(bool)  # cast uint8 masks to bool so ~ is logical NOT, not bitwise
159
+
160
+ # Clean up
161
+ soft_mask = morphology.remove_small_objects(soft_mask, min_size=300)
162
+ soft_mask = morphology.binary_closing(soft_mask, morphology.disk(2))
163
+
164
+ return soft_mask.astype(np.uint8)
165
+
166
+ def _segment_air(self, image: np.ndarray, stats: dict) -> np.ndarray:
167
+ """
168
+ Enhanced air/lung segmentation with better boundary detection
169
+ """
170
+ percentiles = stats['percentiles']
171
+
172
+ # Air/lung regions are typically very dark
173
+ air_threshold = percentiles['p15']
174
+ air_mask = image < air_threshold
175
+
176
+ # For chest X-rays, lungs are large dark regions
177
+ # Remove small dark spots (could be noise)
178
+ air_mask = morphology.remove_small_objects(air_mask, min_size=1000)
179
+
180
+ # Fill holes to get complete lung fields
181
+ air_mask = ndimage.binary_fill_holes(air_mask)
182
+
183
+ # Refine boundaries using watershed
184
+ distance = ndimage.distance_transform_edt(air_mask)
185
+ local_maxima = morphology.local_maxima(distance)
186
+ markers = measure.label(local_maxima)
187
+
188
+ if np.any(markers):
189
+ air_mask = segmentation.watershed(-distance, markers, mask=air_mask)  # watershed lives in skimage.segmentation (removed from morphology in skimage 0.19)
190
+ air_mask = air_mask > 0
191
+
192
+ return air_mask.astype(np.uint8)
193
+
194
+ def _detect_metal(self, image: np.ndarray, stats: dict) -> np.ndarray:
195
+ """
196
+ Enhanced metal detection including surgical implants
197
+ """
198
+ percentiles = stats['percentiles']
199
+
200
+ # Metal often saturates the detector
201
+ metal_threshold = percentiles['p99']
202
+ metal_mask = image > metal_threshold
203
+
204
+ # Check for high local contrast (characteristic of metal)
205
+ local_std = ndimage.generic_filter(image, np.std, size=5)
206
+ high_contrast = local_std > stats['std'] * 2
207
+
208
+ # Saturation detection - completely white areas
209
+ saturated = image >= 0.99
210
+
211
+ # Combine criteria
212
+ metal_mask = (metal_mask & high_contrast) | saturated
213
+
214
+ # Metal objects often have sharp edges
215
+ edges = filters.sobel(image)
216
+ sharp_edges = edges > np.percentile(edges, 95)
217
+ metal_mask = metal_mask | (sharp_edges & (image > percentiles['p95']))
218
+
219
+ return metal_mask.astype(np.uint8)
220
+
221
+ def _segment_fat_xray(self, image: np.ndarray, stats: dict) -> np.ndarray:
222
+ """
223
+ Detect fat tissue in X-ray (appears darker than muscle but lighter than air)
224
+ """
225
+ percentiles = stats['percentiles']
226
+
227
+ # Fat appears darker than soft tissue but lighter than air
228
+ fat_lower = percentiles['p15']
229
+ fat_upper = percentiles['p40']
230
+
231
+ fat_mask = (image > fat_lower) & (image < fat_upper)
232
+
233
+ # Fat has relatively uniform texture
234
+ texture = ndimage.generic_filter(image, np.std, size=7)
235
+ low_texture = texture < stats['std'] * 0.5
236
+
237
+ fat_mask = fat_mask & low_texture
238
+
239
+ # Remove air regions
240
+ air_mask = self._segment_air(image, stats)
241
+ fat_mask = fat_mask & ~air_mask.astype(bool)  # cast uint8 mask to bool before logical negation
242
+
243
+ # Clean up
244
+ fat_mask = morphology.remove_small_objects(fat_mask, min_size=200)
245
+ fat_mask = morphology.binary_closing(fat_mask, morphology.disk(2))
246
+
247
+ return fat_mask.astype(np.uint8)
248
+
249
+ def _detect_fluid(self, image: np.ndarray, stats: dict) -> np.ndarray:
250
+ """
251
+ Detect fluid accumulation (pleural effusion, ascites, etc.)
252
+ """
253
+ percentiles = stats['percentiles']
254
+
255
+ # Fluid has intermediate density between air and soft tissue
256
+ fluid_lower = percentiles['p25']
257
+ fluid_upper = percentiles['p60']
258
+
259
+ fluid_mask = (image > fluid_lower) & (image < fluid_upper)
260
+
261
+ # Fluid tends to accumulate in dependent regions and has smooth boundaries
262
+ # Use gradient to find smooth regions
263
+ gradient = filters.sobel(image)
264
+ smooth_regions = gradient < np.percentile(gradient, 30)
265
+
266
+ fluid_mask = fluid_mask & smooth_regions
267
+
268
+ # Fluid collections are usually larger areas
269
+ fluid_mask = morphology.remove_small_objects(fluid_mask, min_size=500)
270
+
271
+ # Apply closing to fill gaps
272
+ fluid_mask = morphology.binary_closing(fluid_mask, morphology.disk(5))
273
+
274
+ return fluid_mask.astype(np.uint8)
275
+
276
+ def _create_tissue_map(self, segments: dict) -> np.ndarray:
277
+ """
278
+ Create a labeled tissue map where each pixel has a tissue type ID
279
+ """
280
+ tissue_map = np.zeros(list(segments.values())[0].shape, dtype=np.uint8)
281
+
282
+ # Assign tissue IDs (higher priority overwrites lower)
283
+ tissue_priorities = [
284
+ ('air', 1),
285
+ ('fat', 2),
286
+ ('fluid', 3),
287
+ ('soft_tissue', 4),
288
+ ('bone', 5),
289
+ ('metal', 6)
290
+ ]
291
+
292
+ for tissue_name, tissue_id in tissue_priorities:
293
+ if tissue_name in segments:
294
+ tissue_map[segments[tissue_name] > 0] = tissue_id
295
+
296
+ return tissue_map
297
+
298
+ def _create_overlay(self, image: np.ndarray, segments: dict) -> np.ndarray:
299
+ """Create enhanced color overlay visualization"""
300
+ # Convert to RGB
301
+ rgb_image = np.stack([image, image, image], axis=2)
302
+
303
+ # Enhanced color mappings
304
+ colors = {
305
+ 'bone': [1.0, 1.0, 0.8], # Light yellow
306
+ 'soft_tissue': [1.0, 0.7, 0.7], # Light red
307
+ 'air': [0.7, 0.7, 1.0], # Light blue
308
+ 'metal': [1.0, 0.5, 0.0], # Orange
309
+ 'fat': [0.9, 0.9, 0.5], # Pale yellow
310
+ 'fluid': [0.5, 0.8, 1.0] # Cyan
311
+ }
312
+
313
+ # Apply colors with transparency
314
+ alpha = 0.3
315
+ for name, mask in segments.items():
316
+ if name in colors and np.any(mask):
317
+ for i in range(3):
318
+ rgb_image[:, :, i] = np.where(
319
+ mask,
320
+ rgb_image[:, :, i] * (1 - alpha) + colors[name][i] * alpha,
321
+ rgb_image[:, :, i]
322
+ )
323
+
324
+ return rgb_image
325
+
326
+ def _perform_clinical_analysis(self, segments: dict, stats: dict,
327
+ metadata: Optional[dict]) -> dict:
328
+ """
329
+ Perform clinical analysis based on segmentation results
330
+ """
331
+ analysis = {
332
+ 'tissue_distribution': self._analyze_tissue_distribution(segments),
333
+ 'abnormality_detection': self._detect_abnormalities(segments, stats),
334
+ 'quality_assessment': self._assess_image_quality(stats)
335
+ }
336
+
337
+ # Add body-part specific analysis if metadata available
338
+ if metadata and 'BodyPartExamined' in metadata:
339
+ body_part = metadata['BodyPartExamined'].lower()
340
+ analysis['body_part_analysis'] = self._analyze_body_part(
341
+ segments, stats, body_part
342
+ )
343
+
344
+ return analysis
345
+
346
+ def _analyze_tissue_distribution(self, segments: dict) -> dict:
347
+ """Analyze the distribution of different tissues"""
348
+ total_pixels = list(segments.values())[0].size
349
+
350
+ distribution = {}
351
+ for tissue, mask in segments.items():
352
+ pixels = np.sum(mask)
353
+ percentage = (pixels / total_pixels) * 100
354
+ distribution[tissue] = {
355
+ 'pixels': int(pixels),
356
+ 'percentage': round(percentage, 2),
357
+ 'present': pixels > 100 # Minimum threshold
358
+ }
359
+
360
+ # Calculate ratios
361
+ if distribution['soft_tissue']['pixels'] > 0:
362
+ distribution['bone_to_soft_ratio'] = round(
363
+ distribution['bone']['pixels'] / distribution['soft_tissue']['pixels'],
364
+ 3
365
+ )
366
+
367
+ return distribution
368
+
369
+ def _detect_abnormalities(self, segments: dict, stats: dict) -> dict:
370
+ """Detect potential abnormalities in the image"""
371
+ abnormalities = {
372
+ 'detected': False,
373
+ 'findings': []
374
+ }
375
+
376
+ # Check for unusual tissue distributions
377
+ tissue_dist = self._analyze_tissue_distribution(segments)
378
+
379
+ # High metal content might indicate implants
380
+ if tissue_dist['metal']['percentage'] > 0.5:
381
+ abnormalities['detected'] = True
382
+ abnormalities['findings'].append({
383
+ 'type': 'metal_implant',
384
+ 'confidence': 'high',
385
+ 'description': 'Metal implant or foreign body detected'
386
+ })
387
+
388
+ # Fluid accumulation
389
+ if tissue_dist['fluid']['percentage'] > 5:
390
+ abnormalities['detected'] = True
391
+ abnormalities['findings'].append({
392
+ 'type': 'fluid_accumulation',
393
+ 'confidence': 'medium',
394
+ 'description': 'Possible fluid accumulation detected'
395
+ })
396
+
397
+ # Asymmetry detection for bilateral structures
398
+ if 'air' in segments:
399
+ asymmetry = self._check_bilateral_symmetry(segments['air'])
400
+ if asymmetry > 0.3: # 30% asymmetry threshold
401
+ abnormalities['detected'] = True
402
+ abnormalities['findings'].append({
403
+ 'type': 'asymmetry',
404
+ 'confidence': 'medium',
405
+ 'description': f'Bilateral asymmetry detected ({asymmetry:.1%})'
406
+ })
407
+
408
+ return abnormalities
409
+
410
+ def _check_bilateral_symmetry(self, mask: np.ndarray) -> float:
411
+ """Check symmetry of bilateral structures"""
412
+ height, width = mask.shape
413
+ left_half = mask[:, :width//2]
414
+ right_half = mask[:, width//2:]
415
+
416
+ # Flip right half for comparison
417
+ right_half_flipped = np.fliplr(right_half)
418
+
419
+ # Calculate difference
420
+ if right_half_flipped.shape[1] != left_half.shape[1]:
421
+ # Handle odd width
422
+ min_width = min(left_half.shape[1], right_half_flipped.shape[1])
423
+ left_half = left_half[:, :min_width]
424
+ right_half_flipped = right_half_flipped[:, :min_width]
425
+
426
+ difference = np.sum(np.abs(left_half.astype(float) - right_half_flipped.astype(float)))
427
+ total = np.sum(left_half) + np.sum(right_half_flipped)
428
+
429
+ if total == 0:
430
+ return 0.0
431
+
432
+ return difference / total
433
+
434
+ def _assess_image_quality(self, stats: dict) -> dict:
435
+ """Assess the quality of the X-ray image"""
436
+ quality = {
437
+ 'overall': 'good',
438
+ 'issues': []
439
+ }
440
+
441
+ # Check contrast
442
+ if stats['std'] < 0.1:
443
+ quality['overall'] = 'poor'
444
+ quality['issues'].append('Low contrast')
445
+
446
+ # Check if image is too dark or too bright
447
+ if stats['mean'] < 0.2:
448
+ quality['overall'] = 'fair' if quality['overall'] == 'good' else 'poor'
449
+ quality['issues'].append('Underexposed')
450
+ elif stats['mean'] > 0.8:
451
+ quality['overall'] = 'fair' if quality['overall'] == 'good' else 'poor'
452
+ quality['issues'].append('Overexposed')
453
+
454
+ # Check histogram distribution
455
+ hist = stats['histogram']
456
+ if np.max(hist) > 0.5 * np.sum(hist):
457
+ quality['overall'] = 'fair' if quality['overall'] == 'good' else 'poor'
458
+ quality['issues'].append('Poor histogram distribution')
459
+
460
+ return quality
461
+
462
+ def _analyze_body_part(self, segments: dict, stats: dict, body_part: str) -> dict:
463
+ """Perform body-part specific analysis"""
464
+ if 'chest' in body_part or 'thorax' in body_part:
465
+ return self._analyze_chest_xray(segments, stats)
466
+ elif 'abdom' in body_part:
467
+ return self._analyze_abdominal_xray(segments, stats)
468
+ elif 'extrem' in body_part or 'limb' in body_part:
469
+ return self._analyze_extremity_xray(segments, stats)
470
+ else:
471
+ return {'body_part': body_part, 'analysis': 'Generic analysis performed'}
472
+
473
+ def _analyze_chest_xray(self, segments: dict, stats: dict) -> dict:
474
+ """Specific analysis for chest X-rays"""
475
+ analysis = {
476
+ 'lung_fields': 'not_assessed',
477
+ 'cardiac_size': 'not_assessed',
478
+ 'mediastinum': 'not_assessed'
479
+ }
480
+
481
+ # Analyze lung fields
482
+ if 'air' in segments:
483
+ air_mask = segments['air']
484
+ labeled = measure.label(air_mask)
485
+ regions = measure.regionprops(labeled)
486
+
487
+ large_regions = [r for r in regions if r.area > 1000]
488
+ if len(large_regions) >= 2:
489
+ analysis['lung_fields'] = 'bilateral_present'
490
+ # Check symmetry
491
+ symmetry = self._check_bilateral_symmetry(air_mask)
492
+ if symmetry < 0.2:
493
+ analysis['lung_symmetry'] = 'symmetric'
494
+ else:
495
+ analysis['lung_symmetry'] = 'asymmetric'
496
+ elif len(large_regions) == 1:
497
+ analysis['lung_fields'] = 'unilateral_present'
498
+ else:
499
+ analysis['lung_fields'] = 'not_visualized'
500
+
501
+ # Analyze cardiac silhouette
502
+ if 'soft_tissue' in segments:
503
+ soft_mask = segments['soft_tissue']
504
+ height, width = soft_mask.shape
505
+ central_region = soft_mask[height//3:2*height//3, width//3:2*width//3]
506
+
507
+ if np.any(central_region):
508
+ analysis['cardiac_size'] = 'present'
509
+ # Simple cardiothoracic ratio estimation
510
+ cardiac_width = np.sum(np.any(central_region, axis=0))
511
+ thoracic_width = width
512
+ ctr = cardiac_width / thoracic_width
513
+ analysis['cardiothoracic_ratio'] = round(ctr, 2)
514
+
515
+ if ctr > 0.5:
516
+ analysis['cardiac_assessment'] = 'enlarged'
517
+ else:
518
+ analysis['cardiac_assessment'] = 'normal'
519
+
520
+ return analysis
521
+
522
+ def _analyze_abdominal_xray(self, segments: dict, stats: dict) -> dict:
523
+ """Specific analysis for abdominal X-rays"""
524
+ analysis = {
525
+ 'gas_pattern': 'not_assessed',
526
+ 'soft_tissue_masses': 'not_assessed',
527
+ 'calcifications': 'not_assessed'
528
+ }
529
+
530
+ # Analyze gas patterns
531
+ if 'air' in segments:
532
+ air_mask = segments['air']
533
+ labeled = measure.label(air_mask)
534
+ regions = measure.regionprops(labeled)
535
+
536
+ small_gas = [r for r in regions if 50 < r.area < 500]
537
+ large_gas = [r for r in regions if r.area >= 500]
538
+
539
+ if len(small_gas) > 10:
540
+ analysis['gas_pattern'] = 'increased_small_bowel_gas'
541
+ elif len(large_gas) > 3:
542
+ analysis['gas_pattern'] = 'distended_bowel_loops'
543
+ else:
544
+ analysis['gas_pattern'] = 'normal'
545
+
546
+ # Check for calcifications (bright spots)
547
+ if 'bone' in segments:
548
+ bone_mask = segments['bone']
549
+ labeled = measure.label(bone_mask)
550
+ regions = measure.regionprops(labeled)
551
+
552
+ small_bright = [r for r in regions if r.area < 50]
553
+ if len(small_bright) > 5:
554
+ analysis['calcifications'] = 'present'
555
+ else:
556
+ analysis['calcifications'] = 'none_detected'
557
+
558
+ return analysis
559
+
560
+ def _analyze_extremity_xray(self, segments: dict, stats: dict) -> dict:
561
+ """Specific analysis for extremity X-rays"""
562
+ analysis = {
563
+ 'bone_integrity': 'not_assessed',
564
+ 'joint_spaces': 'not_assessed',
565
+ 'soft_tissue_swelling': 'not_assessed'
566
+ }
567
+
568
+ # Analyze bone continuity
569
+ if 'bone' in segments:
570
+ bone_mask = segments['bone']
571
+
572
+ # Check for discontinuities (potential fractures)
573
+ skeleton = morphology.skeletonize(bone_mask)
574
+ endpoints = self._find_endpoints(skeleton)
575
+
576
+ if len(endpoints) > 4: # More than expected endpoints
577
+ analysis['bone_integrity'] = 'possible_discontinuity'
578
+ else:
579
+ analysis['bone_integrity'] = 'continuous'
580
+
581
+ # Analyze bone density
582
+ analysis['bone_density'] = self._assess_bone_density_pattern(bone_mask)
583
+
584
+ # Check for soft tissue swelling
585
+ if 'soft_tissue' in segments:
586
+ soft_mask = segments['soft_tissue']
587
+ soft_area = np.sum(soft_mask)
588
+ total_area = soft_mask.size
589
+
590
+ soft_percentage = (soft_area / total_area) * 100
591
+ if soft_percentage > 40:
592
+ analysis['soft_tissue_swelling'] = 'increased'
593
+ else:
594
+ analysis['soft_tissue_swelling'] = 'normal'
595
+
596
+ return analysis
597
+
598
+ def _find_endpoints(self, skeleton: np.ndarray) -> List[Tuple[int, int]]:
599
+ """Find endpoints in a skeletonized image"""
600
+ endpoints = []
601
+
602
+ # Define 8-connectivity kernel
603
+ kernel = np.array([[1, 1, 1],
604
+ [1, 0, 1],
605
+ [1, 1, 1]])
606
+
607
+ # Find points with only one neighbor
608
+ for i in range(1, skeleton.shape[0] - 1):
609
+ for j in range(1, skeleton.shape[1] - 1):
610
+ if skeleton[i, j]:
611
+ neighbors = np.sum(kernel * skeleton[i-1:i+2, j-1:j+2])
612
+ if neighbors == 1:
613
+ endpoints.append((i, j))
614
+
615
+ return endpoints
616
+
617
+ def _assess_bone_density_pattern(self, bone_mask: np.ndarray) -> str:
618
+ """Assess bone density patterns"""
619
+ # Simple assessment based on coverage
620
+ bone_pixels = np.sum(bone_mask)
621
+ total_pixels = bone_mask.size
622
+ coverage = bone_pixels / total_pixels
623
+
624
+ if coverage > 0.15:
625
+ return 'normal'
626
+ elif coverage > 0.10:
627
+ return 'mildly_decreased'
628
+ else:
629
+ return 'significantly_decreased'
630
+
631
+ def classify_pixel(self, pixel_value: float, x: int, y: int,
632
+ image_array: np.ndarray) -> dict:
633
+ """
634
+ Enhanced pixel classification with spatial context
635
+
636
+ Args:
637
+ pixel_value: Normalized pixel value (0-1)
638
+ x, y: Pixel coordinates
639
+ image_array: Full image array for context
640
+
641
+ Returns:
642
+ Classification result with confidence
643
+ """
644
+ # Get local statistics
645
+ window_size = 5
646
+ half_window = window_size // 2
647
+
648
+ # Ensure we don't go out of bounds
649
+ x_start = max(0, x - half_window)
650
+ x_end = min(image_array.shape[1], x + half_window + 1)
651
+ y_start = max(0, y - half_window)
652
+ y_end = min(image_array.shape[0], y + half_window + 1)
653
+
654
+ local_region = image_array[y_start:y_end, x_start:x_end]
655
+ local_mean = np.mean(local_region)
656
+ local_std = np.std(local_region)
657
+
658
+ # Calculate image statistics if not provided
659
+ image_stats = self._calculate_statistics(image_array)
660
+ percentiles = image_stats['percentiles']
661
+
662
+ # Enhanced classification with confidence
663
+ if pixel_value > percentiles['p95']:
664
+ tissue_type = 'Metal/Implant'
665
+ icon = 'βš™οΈ'
666
+ color = '#FFA500'
667
+ confidence = 'high' if pixel_value > percentiles['p99'] else 'medium'
668
+ elif pixel_value > percentiles['p80']:
669
+ tissue_type = 'Bone'
670
+ icon = '🦴'
671
+ color = '#FFFACD'
672
+ # Check local texture for bone confidence
673
+ if local_std > image_stats['std'] * 0.8:
674
+ confidence = 'high'
675
+ else:
676
+ confidence = 'medium'
677
+ elif pixel_value > percentiles['p60']:
678
+ tissue_type = 'Dense Soft Tissue'
679
+ icon = 'πŸ’ͺ'
680
+ color = '#FFB6C1'
681
+ confidence = 'medium'
682
+ elif pixel_value > percentiles['p40']:
683
+ tissue_type = 'Soft Tissue'
684
+ icon = 'πŸ”΄'
685
+ color = '#FFC0CB'
686
+ confidence = 'high' if local_std < image_stats['std'] * 0.5 else 'medium'
687
+ elif pixel_value > percentiles['p20']:
688
+ # Could be fat or fluid
689
+ if local_std < image_stats['std'] * 0.3:
690
+ tissue_type = 'Fat'
691
+ icon = '🟑'
692
+ color = '#FFFFE0'
693
+ else:
694
+ tissue_type = 'Fluid'
695
+ icon = 'πŸ’§'
696
+ color = '#87CEEB'
697
+ confidence = 'medium'
698
+ else:
699
+ tissue_type = 'Air/Lung'
700
+ icon = '🌫️'
701
+ color = '#ADD8E6'
702
+ confidence = 'high' if pixel_value < percentiles['p10'] else 'medium'
703
+
704
+ return {
705
+ 'type': tissue_type,
706
+ 'icon': icon,
707
+ 'color': color,
708
+ 'confidence': confidence,
709
+ 'pixel_value': round(pixel_value, 3),
710
+ 'local_context': {
711
+ 'mean': round(local_mean, 3),
712
+ 'std': round(local_std, 3)
713
+ }
714
+ }
715
+
716
+
717
+ # Convenience functions for integration
718
+ def analyze_xray(pixel_array: np.ndarray, body_part: Optional[str] = None,
719
+ metadata: Optional[dict] = None) -> dict:
720
+ """
721
+ Convenience function for X-ray analysis
722
+
723
+ Args:
724
+ pixel_array: X-ray image array
725
+ body_part: Optional body part specification
726
+ metadata: Optional DICOM metadata
727
+
728
+ Returns:
729
+ Comprehensive analysis results
730
+ """
731
+ analyzer = XRayAnalyzer()
732
+ results = analyzer.analyze_xray_image(pixel_array, metadata)
733
+
734
+ # Add body part specific analysis if specified
735
+ if body_part and 'clinical_analysis' in results:
736
+ results['clinical_analysis']['body_part_analysis'] = analyzer._analyze_body_part(
737
+ results['segments'], results['statistics'], body_part
738
+ )
739
+
740
+ return results
741
+
742
+
743
+ def classify_xray_tissue(pixel_value: float, x: int, y: int,
744
+ image_array: np.ndarray) -> dict:
745
+ """
746
+ Convenience function for tissue classification at a specific pixel
747
+
748
+ Args:
749
+ pixel_value: Normalized pixel value
750
+ x, y: Pixel coordinates
751
+ image_array: Full image for context
752
+
753
+ Returns:
754
+ Tissue classification result
755
+ """
756
+ analyzer = XRayAnalyzer()
757
+ return analyzer.classify_pixel(pixel_value, x, y, image_array)
758
+
759
+
760
+ # Example usage
761
+ if __name__ == "__main__":
762
+ print("πŸ”¬ Advanced X-Ray Analyzer for Medical Image Analysis")
763
+ print("Features:")
764
+ print(" - Multi-tissue segmentation (bone, soft tissue, air, metal, fat, fluid)")
765
+ print(" - Clinical abnormality detection")
766
+ print(" - Body-part specific analysis (chest, abdomen, extremity)")
767
+ print(" - Image quality assessment")
768
+ print(" - Spatial context-aware tissue classification")
769
+ print(" - Symmetry and structural analysis")
770
+ print(" - Comprehensive statistical analysis")
src/build_frontend.sh ADDED
@@ -0,0 +1,22 @@
1
+ #!/bin/bash
2
+ # Build script for Medical Image Analyzer frontend
3
+
4
+ echo "πŸ₯ Building Medical Image Analyzer frontend..."
5
+
6
+ # Navigate to frontend directory
7
+ cd frontend || exit 1  # abort if run from the wrong directory
8
+
9
+ # Install dependencies
10
+ echo "πŸ“¦ Installing dependencies..."
11
+ npm install
12
+
13
+ # Build the component
14
+ echo "πŸ”¨ Building component..."
15
+ npm run build
16
+
17
+ # Copy built files to templates
18
+ echo "πŸ“‹ Copying built files to templates..."
19
+ mkdir -p ../backend/gradio_medical_image_analyzer/templates
20
+ cp -r dist/* ../backend/gradio_medical_image_analyzer/templates/
21
+
22
+ echo "βœ… Frontend build complete!"
src/demo/app.py ADDED
@@ -0,0 +1,693 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ Demo for MedicalImageAnalyzer - Enhanced with file upload and overlay visualization
4
+ """
5
+
6
+ import gradio as gr
7
+ import numpy as np
8
+ import sys
9
+ import os
10
+ import cv2
11
+ from pathlib import Path
12
+
13
+ # Add backend to path
14
+ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'backend'))
15
+
16
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
17
+
18
+ def draw_roi_on_image(image, roi_x, roi_y, roi_radius):
19
+ """Draw ROI circle on the image"""
20
+ # Convert to RGB if grayscale
21
+ if len(image.shape) == 2:
22
+ image_rgb = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
23
+ else:
24
+ image_rgb = image.copy()
25
+
26
+ # Draw ROI circle
27
+ center = (int(roi_x), int(roi_y))
28
+ radius = int(roi_radius)
29
+
30
+ # Draw outer circle (white)
31
+ cv2.circle(image_rgb, center, radius, (255, 255, 255), 2)
32
+ # Draw inner circle (red)
33
+ cv2.circle(image_rgb, center, radius-1, (255, 0, 0), 2)
34
+ # Draw center cross
35
+ cv2.line(image_rgb, (center[0]-5, center[1]), (center[0]+5, center[1]), (255, 0, 0), 2)
36
+ cv2.line(image_rgb, (center[0], center[1]-5), (center[0], center[1]+5), (255, 0, 0), 2)
37
+
38
+ return image_rgb
39
+
40
+ def create_fat_overlay(base_image, segmentation_results):
41
+ """Create overlay image with fat segmentation highlighted"""
42
+ # Convert to RGB
43
+ if len(base_image.shape) == 2:
44
+ overlay_img = cv2.cvtColor(base_image, cv2.COLOR_GRAY2RGB)
45
+ else:
46
+ overlay_img = base_image.copy()
47
+
48
+ # Check if we have segmentation masks
49
+ if not segmentation_results or 'segments' not in segmentation_results:
50
+ return overlay_img
51
+
52
+ segments = segmentation_results.get('segments', {})
53
+
54
+ # Apply subcutaneous fat overlay (yellow)
55
+ if 'subcutaneous' in segments and segments['subcutaneous'].get('mask') is not None:
56
+ mask = segments['subcutaneous']['mask']
57
+ yellow_overlay = np.zeros_like(overlay_img)
58
+ yellow_overlay[mask > 0] = [255, 255, 0] # Yellow
59
+ overlay_img = cv2.addWeighted(overlay_img, 0.7, yellow_overlay, 0.3, 0)
60
+
61
+ # Apply visceral fat overlay (red)
62
+ if 'visceral' in segments and segments['visceral'].get('mask') is not None:
63
+ mask = segments['visceral']['mask']
64
+ red_overlay = np.zeros_like(overlay_img)
65
+ red_overlay[mask > 0] = [255, 0, 0] # Red
66
+ overlay_img = cv2.addWeighted(overlay_img, 0.7, red_overlay, 0.3, 0)
67
+
68
+ # Add legend
69
+ cv2.putText(overlay_img, "Yellow: Subcutaneous Fat", (10, 30),
70
+ cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
71
+ cv2.putText(overlay_img, "Red: Visceral Fat", (10, 60),
72
+ cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
73
+
74
+ return overlay_img
75
+
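
`create_fat_overlay` only consults `segments -> {subcutaneous, visceral} -> mask`; a sketch of the minimal input it expects. Note that the current analyzer does not yet return pixel masks (see the comment inside `process_and_analyze` below), so this structure is hypothetical:

```python
import numpy as np

h, w = 512, 512
# Hypothetical segmentation_results; masks are uint8/boolean arrays
# matching the base image shape.
segmentation_results = {
    "segments": {
        "subcutaneous": {"mask": np.zeros((h, w), dtype=np.uint8)},  # blended in yellow
        "visceral": {"mask": np.zeros((h, w), dtype=np.uint8)},      # blended in red
    }
}
```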
76
+ def process_and_analyze(file_obj, modality, task, roi_x, roi_y, roi_radius, symptoms, show_overlay=False):
77
+ """
78
+ Processes uploaded file and performs analysis
79
+ """
80
+ if file_obj is None:
81
+ return None, "No file selected", None, {}, None
82
+
83
+ # Create analyzer instance
84
+ analyzer = MedicalImageAnalyzer(
85
+ analysis_mode="structured",
86
+ include_confidence=True,
87
+ include_reasoning=True
88
+ )
89
+
90
+ try:
91
+ # Process the file (DICOM or image)
92
+ file_path = file_obj.name if hasattr(file_obj, 'name') else str(file_obj)
93
+ pixel_array, display_array, metadata = analyzer.process_file(file_path)
94
+
95
+ # Update modality from file metadata if it's a DICOM
96
+ if metadata.get('file_type') == 'DICOM' and 'modality' in metadata:
97
+ modality = metadata['modality']
98
+
99
+ # Prepare analysis parameters
100
+ analysis_params = {
101
+ "image": pixel_array,
102
+ "modality": modality,
103
+ "task": task
104
+ }
105
+
106
+ # Add ROI if applicable
107
+ if task in ["analyze_point", "full_analysis"]:
108
+ # Scale ROI coordinates to image size
109
+ h, w = pixel_array.shape
110
+ roi_x_scaled = int(roi_x * w / 512) # Assuming slider max is 512
111
+ roi_y_scaled = int(roi_y * h / 512)
112
+
113
+ analysis_params["roi"] = {
114
+ "x": roi_x_scaled,
115
+ "y": roi_y_scaled,
116
+ "radius": roi_radius
117
+ }
118
+
119
+ # Add clinical context
120
+ if symptoms:
121
+ analysis_params["clinical_context"] = {"symptoms": symptoms}
122
+
123
+ # Perform analysis
124
+ results = analyzer.analyze_image(**analysis_params)
125
+
126
+ # Create visual report
127
+ visual_report = create_visual_report(results, metadata)
128
+
129
+ # Add metadata info
130
+ info = f"πŸ“„ {metadata.get('file_type', 'Unknown')} | "
131
+ info += f"πŸ₯ {modality} | "
132
+ info += f"πŸ“ {metadata.get('shape', 'Unknown')}"
133
+
134
+ if metadata.get('window_center'):
135
+ info += f" | Window C:{metadata['window_center']:.0f} W:{metadata['window_width']:.0f}"
136
+
137
+ # Create overlay image if requested
138
+ overlay_image = None
139
+ if show_overlay:
140
+ # For ROI visualization
141
+ if task in ["analyze_point", "full_analysis"] and roi_x and roi_y:
142
+ overlay_image = draw_roi_on_image(display_array.copy(), roi_x_scaled, roi_y_scaled, roi_radius)
143
+
144
+ # For fat segmentation overlay (simplified version since we don't have masks in current implementation)
145
+ elif task == "segment_fat" and 'segmentation' in results and modality == 'CT':
146
+ # For now, just draw ROI since we don't have actual masks
147
+ overlay_image = display_array.copy()
148
+ if len(overlay_image.shape) == 2:
149
+ overlay_image = cv2.cvtColor(overlay_image, cv2.COLOR_GRAY2RGB)
150
+ # Add text overlay about fat percentages
151
+ if 'statistics' in results['segmentation']:
152
+ stats = results['segmentation']['statistics']
153
+ cv2.putText(overlay_image, f"Total Fat: {stats.get('total_fat_percentage', 0):.1f}%",
154
+ (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
155
+ cv2.putText(overlay_image, f"Subcutaneous: {stats.get('subcutaneous_fat_percentage', 0):.1f}%",
156
+ (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
157
+ cv2.putText(overlay_image, f"Visceral: {stats.get('visceral_fat_percentage', 0):.1f}%",
158
+ (10, 90), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
159
+
160
+ return display_array, info, visual_report, results, overlay_image
161
+
162
+ except Exception as e:
163
+ error_msg = f"Error: {str(e)}"
164
+ return None, error_msg, f"<div style='color: red;'>❌ {error_msg}</div>", {"error": error_msg}, None
165
+
166
+ def create_visual_report(results, metadata):
167
+ """Creates a visual HTML report with improved styling"""
168
+ html = f"""
169
+ <div class='medical-report' style='font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
170
+ padding: 24px;
171
+ background: #ffffff;
172
+ border-radius: 12px;
173
+ max-width: 100%;
174
+ box-shadow: 0 2px 8px rgba(0,0,0,0.1);
175
+ color: #1a1a1a !important;'>
176
+
177
+ <h2 style='color: #1e40af !important;
178
+ border-bottom: 3px solid #3b82f6;
179
+ padding-bottom: 12px;
180
+ margin-bottom: 20px;
181
+ font-size: 24px;
182
+ font-weight: 600;'>
183
+ πŸ₯ Medical Image Analysis Report
184
+ </h2>
185
+
186
+ <div style='background: #f0f9ff;
187
+ padding: 20px;
188
+ margin: 16px 0;
189
+ border-radius: 8px;
190
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
191
+ <h3 style='color: #1e3a8a !important;
192
+ font-size: 18px;
193
+ font-weight: 600;
194
+ margin-bottom: 12px;'>
195
+ πŸ“‹ Metadata
196
+ </h3>
197
+ <table style='width: 100%; border-collapse: collapse;'>
198
+ <tr>
199
+ <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>File Type:</strong></td>
200
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('file_type', 'Unknown')}</td>
201
+ </tr>
202
+ <tr>
203
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Modality:</strong></td>
204
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('modality', 'Unknown')}</td>
205
+ </tr>
206
+ <tr>
207
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Image Size:</strong></td>
208
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('shape', 'Unknown')}</td>
209
+ </tr>
210
+ <tr>
211
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Timestamp:</strong></td>
212
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('timestamp', 'N/A')}</td>
213
+ </tr>
214
+ </table>
215
+ </div>
216
+ """
217
+
218
+ # Point Analysis
219
+ if 'point_analysis' in results:
220
+ pa = results['point_analysis']
221
+ tissue = pa.get('tissue_type', {})
222
+
223
+ html += f"""
224
+ <div style='background: #f0f9ff;
225
+ padding: 20px;
226
+ margin: 16px 0;
227
+ border-radius: 8px;
228
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
229
+ <h3 style='color: #1e3a8a !important;
230
+ font-size: 18px;
231
+ font-weight: 600;
232
+ margin-bottom: 12px;'>
233
+ 🎯 Point Analysis
234
+ </h3>
235
+ <table style='width: 100%; border-collapse: collapse;'>
236
+ <tr>
237
+ <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>Position:</strong></td>
238
+ <td style='padding: 8px 0; color: #1f2937 !important;'>({pa.get('location', {}).get('x', 'N/A')}, {pa.get('location', {}).get('y', 'N/A')})</td>
239
+ </tr>
240
+ """
241
+
242
+ if results.get('modality') == 'CT':
243
+ html += f"""
244
+ <tr>
245
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>HU Value:</strong></td>
246
+ <td style='padding: 8px 0; color: #1f2937 !important; font-weight: 500;'>{pa.get('hu_value', float('nan')):.1f}</td>
247
+ </tr>
248
+ """
249
+ else:
250
+ html += f"""
251
+ <tr>
252
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Intensity:</strong></td>
253
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('intensity', float('nan')):.3f}</td>
254
+ </tr>
255
+ """
256
+
257
+ html += f"""
258
+ <tr>
259
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Tissue Type:</strong></td>
260
+ <td style='padding: 8px 0; color: #1f2937 !important;'>
261
+ <span style='font-size: 1.3em; vertical-align: middle;'>{tissue.get('icon', '')}</span>
262
+ <span style='font-weight: 500; text-transform: capitalize;'>{tissue.get('type', 'Unknown').replace('_', ' ')}</span>
263
+ </td>
264
+ </tr>
265
+ <tr>
266
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Confidence:</strong></td>
267
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('confidence', 'N/A')}</td>
268
+ </tr>
269
+ </table>
270
+ """
271
+
272
+ if 'reasoning' in pa:
273
+ html += f"""
274
+ <div style='margin-top: 12px;
275
+ padding: 12px;
276
+ background: #dbeafe;
277
+ border-left: 3px solid #3b82f6;
278
+ border-radius: 4px;'>
279
+ <p style='margin: 0; color: #1e40af !important; font-style: italic;'>
280
+ πŸ’­ {pa['reasoning']}
281
+ </p>
282
+ </div>
283
+ """
284
+
285
+ html += "</div>"
286
+
287
+ # Segmentation Results
288
+ if 'segmentation' in results and results['segmentation']:
289
+ seg = results['segmentation']
290
+
291
+ if 'statistics' in seg:
292
+ # Fat segmentation for CT
293
+ stats = seg['statistics']
294
+ html += f"""
295
+ <div style='background: #f0f9ff;
296
+ padding: 20px;
297
+ margin: 16px 0;
298
+ border-radius: 8px;
299
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
300
+ <h3 style='color: #1e3a8a !important;
301
+ font-size: 18px;
302
+ font-weight: 600;
303
+ margin-bottom: 12px;'>
304
+ πŸ”¬ Fat Segmentation Analysis
305
+ </h3>
306
+ <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 16px;'>
307
+ <div style='padding: 16px; background: #ffffff; border-radius: 6px; border: 1px solid #e5e7eb;'>
308
+ <h4 style='color: #6b7280 !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Total Fat</h4>
309
+ <p style='color: #1f2937 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('total_fat_percentage', 0):.1f}%</p>
310
+ </div>
311
+ <div style='padding: 16px; background: #fffbeb; border-radius: 6px; border: 1px solid #fbbf24;'>
312
+ <h4 style='color: #92400e !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Subcutaneous</h4>
313
+ <p style='color: #d97706 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('subcutaneous_fat_percentage', 0):.1f}%</p>
314
+ </div>
315
+ <div style='padding: 16px; background: #fef2f2; border-radius: 6px; border: 1px solid #fca5a5;'>
316
+ <h4 style='color: #991b1b !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Visceral</h4>
317
+ <p style='color: #dc2626 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_fat_percentage', 0):.1f}%</p>
318
+ </div>
319
+ <div style='padding: 16px; background: #eff6ff; border-radius: 6px; border: 1px solid #93c5fd;'>
320
+ <h4 style='color: #1e3a8a !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>V/S Ratio</h4>
321
+ <p style='color: #1e40af !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_subcutaneous_ratio', 0):.2f}</p>
322
+ </div>
323
+ </div>
324
+ """
325
+
326
+ if 'interpretation' in seg:
327
+ interp = seg['interpretation']
328
+ obesity_color = "#16a34a" if interp.get("obesity_risk") == "normal" else "#d97706" if interp.get("obesity_risk") == "moderate" else "#dc2626"
329
+ visceral_color = "#16a34a" if interp.get("visceral_risk") == "normal" else "#d97706" if interp.get("visceral_risk") == "moderate" else "#dc2626"
330
+
331
+ html += f"""
332
+ <div style='margin-top: 16px; padding: 16px; background: #f3f4f6; border-radius: 6px;'>
333
+ <h4 style='color: #374151 !important; font-size: 16px; font-weight: 600; margin-bottom: 8px;'>Risk Assessment</h4>
334
+ <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 12px;'>
335
+ <div>
336
+ <span style='color: #6b7280 !important; font-size: 14px;'>Obesity Risk:</span>
337
+ <span style='color: {obesity_color} !important; font-weight: 600; margin-left: 8px;'>{interp.get('obesity_risk', 'N/A').upper()}</span>
338
+ </div>
339
+ <div>
340
+ <span style='color: #6b7280 !important; font-size: 14px;'>Visceral Risk:</span>
341
+ <span style='color: {visceral_color} !important; font-weight: 600; margin-left: 8px;'>{interp.get('visceral_risk', 'N/A').upper()}</span>
342
+ </div>
343
+ </div>
344
+ """
345
+
346
+ if interp.get('recommendations'):
347
+ html += """
348
+ <div style='margin-top: 12px; padding-top: 12px; border-top: 1px solid #e5e7eb;'>
349
+ <h5 style='color: #374151 !important; font-size: 14px; font-weight: 600; margin-bottom: 8px;'>πŸ’‘ Recommendations</h5>
350
+ <ul style='margin: 0; padding-left: 20px; color: #4b5563 !important;'>
351
+ """
352
+ for rec in interp['recommendations']:
353
+ html += f"<li style='margin: 4px 0;'>{rec}</li>"
354
+ html += "</ul></div>"
355
+
356
+ html += "</div>"
357
+ html += "</div>"
358
+
359
+ # Quality Assessment
360
+ if 'quality_metrics' in results:
361
+ quality = results['quality_metrics']
362
+ quality_colors = {
363
+ 'excellent': '#16a34a',
364
+ 'good': '#16a34a',
365
+ 'fair': '#d97706',
366
+ 'poor': '#dc2626',
367
+ 'unknown': '#6b7280'
368
+ }
369
+ q_color = quality_colors.get(quality.get('overall_quality', 'unknown'), '#6b7280')
370
+
371
+ html += f"""
372
+ <div style='background: #f0f9ff;
373
+ padding: 20px;
374
+ margin: 16px 0;
375
+ border-radius: 8px;
376
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
377
+ <h3 style='color: #1e3a8a !important;
378
+ font-size: 18px;
379
+ font-weight: 600;
380
+ margin-bottom: 12px;'>
381
+ πŸ“Š Image Quality Assessment
382
+ </h3>
383
+ <div style='display: flex; align-items: center; gap: 16px;'>
384
+ <div>
385
+ <span style='color: #4b5563 !important; font-size: 14px;'>Overall Quality:</span>
386
+ <span style='color: {q_color} !important;
387
+ font-size: 18px;
388
+ font-weight: 700;
389
+ margin-left: 8px;'>
390
+ {quality.get('overall_quality', 'unknown').upper()}
391
+ </span>
392
+ </div>
393
+ </div>
394
+ """
395
+
396
+ if quality.get('issues'):
397
+ html += f"""
398
+ <div style='margin-top: 12px;
399
+ padding: 12px;
400
+ background: #fef3c7;
401
+ border-left: 3px solid #f59e0b;
402
+ border-radius: 4px;'>
403
+ <strong style='color: #92400e !important;'>Issues Detected:</strong>
404
+ <ul style='margin: 4px 0 0 0; padding-left: 20px; color: #92400e !important;'>
405
+ """
406
+ for issue in quality['issues']:
407
+ html += f"<li style='margin: 2px 0;'>{issue}</li>"
408
+ html += "</ul></div>"
409
+
410
+ html += "</div>"
411
+
412
+ html += "</div>"
413
+ return html
414
+
415
+ def create_demo():
416
+ with gr.Blocks(
417
+ title="Medical Image Analyzer - Enhanced Demo",
418
+ theme=gr.themes.Soft(
419
+ primary_hue="blue",
420
+ secondary_hue="blue",
421
+ neutral_hue="slate",
422
+ text_size="md",
423
+ spacing_size="md",
424
+ radius_size="md",
425
+ ).set(
426
+ # Medical blue theme colors
427
+ body_background_fill="*neutral_950",
428
+ body_background_fill_dark="*neutral_950",
429
+ block_background_fill="*neutral_900",
430
+ block_background_fill_dark="*neutral_900",
431
+ border_color_primary="*primary_600",
432
+ border_color_primary_dark="*primary_600",
433
+ # Text colors for better contrast
434
+ body_text_color="*neutral_100",
435
+ body_text_color_dark="*neutral_100",
436
+ body_text_color_subdued="*neutral_300",
437
+ body_text_color_subdued_dark="*neutral_300",
438
+ # Button colors
439
+ button_primary_background_fill="*primary_600",
440
+ button_primary_background_fill_dark="*primary_600",
441
+ button_primary_text_color="white",
442
+ button_primary_text_color_dark="white",
443
+ ),
444
+ css="""
445
+ /* Medical blue theme with high contrast */
446
+ :root {
447
+ --medical-blue: #1e40af;
448
+ --medical-blue-light: #3b82f6;
449
+ --medical-blue-dark: #1e3a8a;
450
+ --text-primary: #f9fafb;
451
+ --text-secondary: #e5e7eb;
452
+ --bg-primary: #0f172a;
453
+ --bg-secondary: #1e293b;
454
+ --bg-tertiary: #334155;
455
+ }
456
+
457
+ /* Override default text colors for medical theme */
458
+ * {
459
+ color: var(--text-primary) !important;
460
+ }
461
+
462
+ /* Style the file upload area */
463
+ .file-upload {
464
+ border: 2px dashed var(--medical-blue-light) !important;
465
+ border-radius: 8px !important;
466
+ padding: 20px !important;
467
+ text-align: center !important;
468
+ background: var(--bg-secondary) !important;
469
+ transition: all 0.3s ease !important;
470
+ color: var(--text-primary) !important;
471
+ }
472
+
473
+ .file-upload:hover {
474
+ border-color: var(--medical-blue) !important;
475
+ background: var(--bg-tertiary) !important;
476
+ box-shadow: 0 0 20px rgba(59, 130, 246, 0.2) !important;
477
+ }
478
+
479
+ /* Ensure report text is readable with white background */
480
+ .medical-report {
481
+ background: #ffffff !important;
482
+ border: 2px solid var(--medical-blue-light) !important;
483
+ border-radius: 8px !important;
484
+ padding: 16px !important;
485
+ color: #1a1a1a !important;
486
+ }
487
+
488
+ .medical-report * {
489
+ color: #1f2937 !important; /* Dark gray text */
490
+ }
491
+
492
+ .medical-report h2 {
493
+ color: #1e40af !important; /* Medical blue for main heading */
494
+ }
495
+
496
+ .medical-report h3, .medical-report h4 {
497
+ color: #1e3a8a !important; /* Darker medical blue for subheadings */
498
+ }
499
+
500
+ .medical-report strong {
501
+ color: #374151 !important; /* Darker gray for labels */
502
+ }
503
+
504
+ .medical-report td {
505
+ color: #1f2937 !important; /* Ensure table text is dark */
506
+ }
507
+
508
+ /* Report sections with light blue background */
509
+ .medical-report > div {
510
+ background: #f0f9ff !important;
511
+ color: #1f2937 !important;
512
+ }
513
+
514
+ /* Medical blue accents for UI elements */
515
+ .gr-button-primary {
516
+ background: var(--medical-blue) !important;
517
+ border-color: var(--medical-blue) !important;
518
+ }
519
+
520
+ .gr-button-primary:hover {
521
+ background: var(--medical-blue-dark) !important;
522
+ border-color: var(--medical-blue-dark) !important;
523
+ }
524
+
525
+ /* Tab styling */
526
+ .gr-tab-item {
527
+ border-color: var(--medical-blue-light) !important;
528
+ }
529
+
530
+ .gr-tab-item.selected {
531
+ background: var(--medical-blue) !important;
532
+ color: white !important;
533
+ }
534
+
535
+ /* Accordion styling */
536
+ .gr-accordion {
537
+ border-color: var(--medical-blue-light) !important;
538
+ }
539
+
540
+ /* Slider track in medical blue */
541
+ input[type="range"]::-webkit-slider-track {
542
+ background: var(--bg-tertiary) !important;
543
+ }
544
+
545
+ input[type="range"]::-webkit-slider-thumb {
546
+ background: var(--medical-blue) !important;
547
+ }
548
+ """
549
+ ) as demo:
550
+ gr.Markdown("""
551
+ # πŸ₯ Medical Image Analyzer
552
+
553
+ Supports **DICOM** (.dcm) and all image formats with automatic modality detection!
554
+ """)
555
+
556
+ with gr.Row():
557
+ with gr.Column(scale=1):
558
+ # File upload - no file type restrictions
559
+ with gr.Group():
560
+ gr.Markdown("### πŸ“€ Upload Medical Image")
561
+ file_input = gr.File(
562
+ label="Select Medical Image File (.dcm, .dicom, IM_*, .png, .jpg, etc.)",
563
+ file_count="single",
564
+ type="filepath",
565
+ elem_classes="file-upload"
566
+ # Note: NO file_types parameter = accepts ALL files
567
+ )
568
+ gr.Markdown("""
569
+ <small style='color: #666;'>
570
+ Accepts: DICOM (.dcm, .dicom), Images (.png, .jpg, .jpeg, .tiff, .bmp),
571
+ and files without extensions (e.g., IM_0001, IM_0002, etc.)
572
+ </small>
573
+ """)
574
+
575
+ # Modality selection
576
+ modality = gr.Radio(
577
+ choices=["CT", "CR", "DX", "RX", "DR"],
578
+ value="CT",
579
+ label="Modality",
580
+ info="Will be auto-detected for DICOM files"
581
+ )
582
+
583
+ # Task selection
584
+ task = gr.Dropdown(
585
+ choices=[
586
+ ("🎯 Point Analysis", "analyze_point"),
587
+ ("πŸ”¬ Fat Segmentation (CT only)", "segment_fat"),
588
+ ("πŸ“Š Full Analysis", "full_analysis")
589
+ ],
590
+ value="full_analysis",
591
+ label="Analysis Task"
592
+ )
593
+
594
+ # ROI settings
595
+ with gr.Accordion("🎯 Region of Interest (ROI)", open=True):
596
+ roi_x = gr.Slider(0, 512, 256, label="X Position", step=1)
597
+ roi_y = gr.Slider(0, 512, 256, label="Y Position", step=1)
598
+ roi_radius = gr.Slider(5, 50, 10, label="Radius", step=1)
599
+
600
+ # Clinical context
601
+ with gr.Accordion("πŸ₯ Clinical Context", open=False):
602
+ symptoms = gr.CheckboxGroup(
603
+ choices=[
604
+ "dyspnea", "chest_pain", "abdominal_pain",
605
+ "trauma", "obesity_screening", "routine_check"
606
+ ],
607
+ label="Symptoms/Indication"
608
+ )
609
+
610
+ # Visualization options
611
+ with gr.Accordion("🎨 Visualization Options", open=True):
612
+ show_overlay = gr.Checkbox(
613
+ label="Show ROI/Segmentation Overlay",
614
+ value=True,
615
+ info="Display ROI circle or fat segmentation info on the image"
616
+ )
617
+
618
+ analyze_btn = gr.Button("πŸ”¬ Analyze", variant="primary", size="lg")
619
+
620
+ with gr.Column(scale=2):
621
+ # Results with tabs for different views
622
+ with gr.Tab("πŸ–ΌοΈ Original Image"):
623
+ image_display = gr.Image(label="Medical Image", type="numpy")
624
+
625
+ with gr.Tab("🎯 Overlay View"):
626
+ overlay_display = gr.Image(label="Image with Overlay", type="numpy")
627
+
628
+ file_info = gr.Textbox(label="File Information", lines=1)
629
+
630
+ with gr.Tab("πŸ“Š Visual Report"):
631
+ report_html = gr.HTML()
632
+
633
+ with gr.Tab("πŸ”§ JSON Output"):
634
+ json_output = gr.JSON(label="Structured Data for AI Agents")
635
+
636
+ # Examples and help
637
+ with gr.Row():
638
+ gr.Markdown("""
639
+ ### πŸ“ Supported Formats
640
+ - **DICOM**: Automatic HU value extraction and modality detection
641
+ - **PNG/JPG**: Interpreted based on selected modality
642
+ - **All Formats**: Automatic grayscale conversion
643
+ - **Files without extension**: Supported (e.g., IM_0001) - will try DICOM first
644
+
645
+ ### 🎯 Usage
646
+ 1. Upload a medical image file
647
+ 2. Select modality (auto-detected for DICOM)
648
+ 3. Choose analysis task
649
+ 4. Adjust ROI position for point analysis
650
+ 5. Click "Analyze"
651
+
652
+ ### 💡 Features
653
+ - **ROI Visualization**: See the exact area being analyzed
654
+ - **Fat Segmentation**: Visual percentages for CT scans
655
+ - **Multi-format Support**: Works with any medical image format
656
+ - **AI Agent Ready**: Structured JSON output for integration
657
+ """)
658
+
659
+ # Connect the interface
660
+ analyze_btn.click(
661
+ fn=process_and_analyze,
662
+ inputs=[file_input, modality, task, roi_x, roi_y, roi_radius, symptoms, show_overlay],
663
+ outputs=[image_display, file_info, report_html, json_output, overlay_display]
664
+ )
665
+
666
+ # Auto-update ROI limits when image is loaded
667
+ def update_roi_on_upload(file_obj):
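+ """Rescale the ROI sliders so their maxima match the uploaded image's dimensions and recenter them."""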
668
+ if file_obj is None:
669
+ return gr.update(), gr.update()
670
+
671
+ try:
672
+ analyzer = MedicalImageAnalyzer()
673
+ _, _, metadata = analyzer.process_file(file_obj.name if hasattr(file_obj, 'name') else str(file_obj))
674
+
675
+ if 'shape' in metadata:
676
+ h, w = metadata['shape']
677
+ return gr.update(maximum=w-1, value=w//2), gr.update(maximum=h-1, value=h//2)
678
+ except Exception:
679
+ pass
680
+
681
+ return gr.update(), gr.update()
682
+
683
+ file_input.change(
684
+ fn=update_roi_on_upload,
685
+ inputs=[file_input],
686
+ outputs=[roi_x, roi_y]
687
+ )
688
+
689
+ return demo
690
+
691
+ if __name__ == "__main__":
692
+ demo = create_demo()
693
+ demo.launch()
src/demo/app_with_frontend.py ADDED
@@ -0,0 +1,197 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ Demo for MedicalImageAnalyzer with frontend component
4
+ Shows how to use the complete Gradio custom component
5
+ """
6
+
7
+ import gradio as gr
8
+ import numpy as np
9
+ import sys
10
+ import os
11
+ from pathlib import Path
12
+
13
+ # Add backend to path
14
+ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'backend'))
15
+
16
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
17
+
18
+ # Example data for demos
19
+ EXAMPLE_DATA = [
20
+ {
21
+ "image": {"url": "examples/ct_chest.png"},
22
+ "analysis": {
23
+ "modality": "CT",
24
+ "point_analysis": {
25
+ "tissue_type": {"icon": "🟑", "type": "fat"},
26
+ "hu_value": -75.0
27
+ },
28
+ "segmentation": {
29
+ "interpretation": {
30
+ "obesity_risk": "moderate"
31
+ }
32
+ }
33
+ }
34
+ },
35
+ {
36
+ "image": {"url": "examples/xray_chest.png"},
37
+ "analysis": {
38
+ "modality": "CR",
39
+ "point_analysis": {
40
+ "tissue_type": {"icon": "🦴", "type": "bone"}
41
+ }
42
+ }
43
+ }
44
+ ]
45
+
46
+ def create_demo():
47
+ with gr.Blocks(title="Medical Image Analyzer - Component Demo") as demo:
48
+ gr.Markdown("""
49
+ # πŸ₯ Medical Image Analyzer - Frontend Component Demo
50
+
51
+ This demo shows the complete Gradio custom component with frontend integration.
52
+ Supports DICOM files and all common image formats.
53
+ """)
54
+
55
+ with gr.Row():
56
+ with gr.Column():
57
+ # Configuration
58
+ gr.Markdown("### βš™οΈ Konfiguration")
59
+
60
+ analysis_mode = gr.Radio(
61
+ choices=["structured", "visual"],
62
+ value="structured",
63
+ label="Analyse-Modus",
64
+ info="structured: fΓΌr AI Agents, visual: fΓΌr Menschen"
65
+ )
66
+
67
+ include_confidence = gr.Checkbox(
68
+ value=True,
69
+ label="Konfidenzwerte einschließen"
70
+ )
71
+
72
+ include_reasoning = gr.Checkbox(
73
+ value=True,
74
+ label="Reasoning einschließen"
75
+ )
76
+
77
+ with gr.Column(scale=2):
78
+ # The custom component
79
+ analyzer = MedicalImageAnalyzer(
80
+ label="Medical Image Analyzer",
81
+ analysis_mode="structured",
82
+ include_confidence=True,
83
+ include_reasoning=True,
84
+ elem_id="medical-analyzer"
85
+ )
86
+
87
+ # Examples section
88
+ gr.Markdown("### πŸ“ Beispiele")
89
+
90
+ examples = gr.Examples(
91
+ examples=EXAMPLE_DATA,
92
+ inputs=analyzer,
93
+ label="Beispiel-Analysen"
94
+ )
95
+
96
+ # Info section
97
+ gr.Markdown("""
98
+ ### πŸ“ Verwendung
99
+
100
+ 1. **Upload a file**: Drag a DICOM or image file into the upload area
101
+ 2. **Select the modality**: CT, CR, DX, RX, or DR
102
+ 3. **Choose an analysis task**: point analysis, fat segmentation, or full analysis
103
+ 4. **Set the ROI**: Click on the image to choose an analysis point
104
+
105
+ ### 🔧 Features
106
+
107
+ - **DICOM support**: Automatic detection of modality and HU values
108
+ - **Multi-tissue segmentation**: Detects bone, soft tissue, air, metal, fat, and fluid
109
+ - **Clinical assessment**: Obesity risk, tissue distribution, anomaly detection
110
+ - **AI-agent ready**: Structured JSON output for integration
111
+
112
+ ### 🔗 Integration
113
+
114
+ ```python
115
+ import gradio as gr
116
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
117
+
118
+ analyzer = MedicalImageAnalyzer(
119
+ analysis_mode="structured",
120
+ include_confidence=True
121
+ )
122
+
123
+ # Use in your Gradio app
124
+ with gr.Blocks() as app:
125
+ analyzer_component = analyzer
126
+ # ... rest of your app
127
+ ```
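+ 
+ The component's value is a plain dict, so results can be read in a `change` handler. A minimal sketch (the payload shape with an `analysis` key is an assumption, mirroring EXAMPLE_DATA above; `analyzer_component` is the instance from the snippet):
+ 
+ ```python
+ def on_change(value):
+     # value: the component's dict payload; the "analysis" key is assumed
+     analysis = (value or {}).get("analysis", {})
+     print(analysis.get("modality"), analysis.get("point_analysis", {}))
+     return value
+ 
+ analyzer_component.change(on_change, inputs=analyzer_component, outputs=analyzer_component)
+ ```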
128
+ """)
129
+
130
+ # Event handlers
131
+ def update_config(mode, conf, reason):
132
+ # This would update the component configuration
133
+ # In real implementation, this would be handled by the component
134
+ return gr.update(
135
+ analysis_mode=mode,
136
+ include_confidence=conf,
137
+ include_reasoning=reason
138
+ )
139
+
140
+ # Connect configuration changes
141
+ for config in [analysis_mode, include_confidence, include_reasoning]:
142
+ config.change(
143
+ fn=update_config,
144
+ inputs=[analysis_mode, include_confidence, include_reasoning],
145
+ outputs=analyzer
146
+ )
147
+
148
+ # Handle analysis results
149
+ def handle_analysis_complete(data):
150
+ if data and "analysis" in data:
151
+ analysis = data["analysis"]
152
+ report = data.get("report", "")
153
+
154
+ # Log to console for debugging
155
+ print("Analysis completed:")
156
+ print(f"Modality: {analysis.get('modality', 'Unknown')}")
157
+ if "point_analysis" in analysis:
158
+ print(f"Tissue: {analysis['point_analysis'].get('tissue_type', {}).get('type', 'Unknown')}")
159
+
160
+ return data
161
+ return data
162
+
163
+ analyzer.change(
164
+ fn=handle_analysis_complete,
165
+ inputs=analyzer,
166
+ outputs=analyzer
167
+ )
168
+
169
+ return demo
170
+
171
+
172
+ def create_simple_demo():
173
+ """Einfache Demo ohne viel Konfiguration"""
174
+ with gr.Blocks(title="Medical Image Analyzer - Simple Demo") as demo:
175
+ gr.Markdown("# πŸ₯ Medical Image Analyzer")
176
+
177
+ analyzer = MedicalImageAnalyzer(
178
+ label="Laden Sie ein medizinisches Bild hoch (DICOM, PNG, JPG)",
179
+ analysis_mode="visual", # Visual mode for human-readable output
180
+ elem_id="analyzer"
181
+ )
182
+
183
+ # Auto-analyze on upload
184
+ @analyzer.upload
185
+ def auto_analyze(file_data):
186
+ # The component handles the analysis internally
187
+ return file_data
188
+
189
+ return demo
190
+
191
+
192
+ if __name__ == "__main__":
193
+ # You can switch between demos
194
+ # demo = create_demo() # Full demo with configuration
195
+ demo = create_simple_demo() # Simple demo
196
+
197
+ demo.launch()
src/demo/css.css ADDED
@@ -0,0 +1,157 @@
1
+ html {
2
+ font-family: Inter;
3
+ font-size: 16px;
4
+ font-weight: 400;
5
+ line-height: 1.5;
6
+ -webkit-text-size-adjust: 100%;
7
+ background: #fff;
8
+ color: #323232;
9
+ -webkit-font-smoothing: antialiased;
10
+ -moz-osx-font-smoothing: grayscale;
11
+ text-rendering: optimizeLegibility;
12
+ }
13
+
14
+ :root {
15
+ --space: 1;
16
+ --vspace: calc(var(--space) * 1rem);
17
+ --vspace-0: calc(3 * var(--space) * 1rem);
18
+ --vspace-1: calc(2 * var(--space) * 1rem);
19
+ --vspace-2: calc(1.5 * var(--space) * 1rem);
20
+ --vspace-3: calc(0.5 * var(--space) * 1rem);
21
+ }
22
+
23
+ .app {
24
+ max-width: 748px !important;
25
+ }
26
+
27
+ .prose p {
28
+ margin: var(--vspace) 0;
29
+ line-height: calc(var(--vspace) * 2);
30
+ font-size: 1rem;
31
+ }
32
+
33
+ code {
34
+ font-family: "Inconsolata", sans-serif;
35
+ font-size: 16px;
36
+ }
37
+
38
+ h1,
39
+ h1 code {
40
+ font-weight: 400;
41
+ line-height: calc(2.5 / var(--space) * var(--vspace));
42
+ }
43
+
44
+ h1 code {
45
+ background: none;
46
+ border: none;
47
+ letter-spacing: 0.05em;
48
+ padding-bottom: 5px;
49
+ position: relative;
50
+ padding: 0;
51
+ }
52
+
53
+ h2 {
54
+ margin: var(--vspace-1) 0 var(--vspace-2) 0;
55
+ line-height: 1em;
56
+ }
57
+
58
+ h3,
59
+ h3 code {
60
+ margin: var(--vspace-1) 0 var(--vspace-2) 0;
61
+ line-height: 1em;
62
+ }
63
+
64
+ h4,
65
+ h5,
66
+ h6 {
67
+ margin: var(--vspace-3) 0 var(--vspace-3) 0;
68
+ line-height: var(--vspace);
69
+ }
70
+
71
+ .bigtitle,
72
+ h1,
73
+ h1 code {
74
+ font-size: calc(8px * 4.5);
75
+ word-break: break-word;
76
+ }
77
+
78
+ .title,
79
+ h2,
80
+ h2 code {
81
+ font-size: calc(8px * 3.375);
82
+ font-weight: lighter;
83
+ word-break: break-word;
84
+ border: none;
85
+ background: none;
86
+ }
87
+
88
+ .subheading1,
89
+ h3,
90
+ h3 code {
91
+ font-size: calc(8px * 1.8);
92
+ font-weight: 600;
93
+ border: none;
94
+ background: none;
95
+ letter-spacing: 0.1em;
96
+ text-transform: uppercase;
97
+ }
98
+
99
+ h2 code {
100
+ padding: 0;
101
+ position: relative;
102
+ letter-spacing: 0.05em;
103
+ }
104
+
105
+ blockquote {
106
+ font-size: calc(8px * 1.1667);
107
+ font-style: italic;
108
+ line-height: calc(1.1667 * var(--vspace));
109
+ margin: var(--vspace-2) var(--vspace-2);
110
+ }
111
+
112
+ .subheading2,
113
+ h4 {
114
+ font-size: calc(8px * 1.4292);
115
+ text-transform: uppercase;
116
+ font-weight: 600;
117
+ }
118
+
119
+ .subheading3,
120
+ h5 {
121
+ font-size: calc(8px * 1.2917);
122
+ line-height: calc(1.2917 * var(--vspace));
123
+
124
+ font-weight: lighter;
125
+ text-transform: uppercase;
126
+ letter-spacing: 0.15em;
127
+ }
128
+
129
+ h6 {
130
+ font-size: calc(8px * 1.1667);
131
+ font-size: 1.1667em;
132
+ font-weight: normal;
133
+ font-style: italic;
134
+ font-family: "le-monde-livre-classic-byol", serif !important;
135
+ letter-spacing: 0px !important;
136
+ }
137
+
138
+ #start .md > *:first-child {
139
+ margin-top: 0;
140
+ }
141
+
142
+ h2 + h3 {
143
+ margin-top: 0;
144
+ }
145
+
146
+ .md hr {
147
+ border: none;
148
+ border-top: 1px solid var(--block-border-color);
149
+ margin: var(--vspace-2) 0 var(--vspace-2) 0;
150
+ }
151
+ .prose ul {
152
+ margin: var(--vspace-2) 0 var(--vspace-1) 0;
153
+ }
154
+
155
+ .gap {
156
+ gap: 0;
157
+ }
src/demo/space.py ADDED
@@ -0,0 +1,813 @@
1
+
2
+ import gradio as gr
3
+ from app import demo as app
4
+ import os
5
+
6
+ _docs = {'MedicalImageAnalyzer': {'description': 'A Gradio component for AI-agent compatible medical image analysis.\n\nProvides structured output for:\n- HU value analysis (CT only)\n- Tissue classification\n- Fat segmentation (subcutaneous, visceral)\n- Confidence scores and reasoning', 'members': {'__init__': {'value': {'type': 'typing.Optional[typing.Dict[str, typing.Any]][\n typing.Dict[str, typing.Any][str, typing.Any], None\n]', 'default': 'None', 'description': None}, 'label': {'type': 'typing.Optional[str][str, None]', 'default': 'None', 'description': None}, 'info': {'type': 'typing.Optional[str][str, None]', 'default': 'None', 'description': None}, 'every': {'type': 'typing.Optional[float][float, None]', 'default': 'None', 'description': None}, 'show_label': {'type': 'typing.Optional[bool][bool, None]', 'default': 'None', 'description': None}, 'container': {'type': 'typing.Optional[bool][bool, None]', 'default': 'None', 'description': None}, 'scale': {'type': 'typing.Optional[int][int, None]', 'default': 'None', 'description': None}, 'min_width': {'type': 'typing.Optional[int][int, None]', 'default': 'None', 'description': None}, 'visible': {'type': 'typing.Optional[bool][bool, None]', 'default': 'None', 'description': None}, 'elem_id': {'type': 'typing.Optional[str][str, None]', 'default': 'None', 'description': None}, 'elem_classes': {'type': 'typing.Union[typing.List[str], str, NoneType][\n typing.List[str][str], str, None\n]', 'default': 'None', 'description': None}, 'render': {'type': 'typing.Optional[bool][bool, None]', 'default': 'None', 'description': None}, 'key': {'type': 'typing.Union[int, str, NoneType][int, str, None]', 'default': 'None', 'description': None}, 'analysis_mode': {'type': 'str', 'default': '"structured"', 'description': '"structured" for AI agents, "visual" for human interpretation'}, 'include_confidence': {'type': 'bool', 'default': 'True', 'description': 'Include confidence scores in results'}, 'include_reasoning': {'type': 'bool', 'default': 'True', 'description': 'Include reasoning/explanation for findings'}, 'segmentation_types': {'type': 'typing.List[str][str]', 'default': 'None', 'description': 'List of segmentation types to perform'}}, 'postprocess': {'value': {'type': 'typing.Dict[str, typing.Any][str, typing.Any]', 'description': None}}, 'preprocess': {'return': {'type': 'typing.Dict[str, typing.Any][str, typing.Any]', 'description': None}, 'value': None}}, 'events': {'change': {'type': None, 'default': None, 'description': 'Triggered when the value of the MedicalImageAnalyzer changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input.'}, 'select': {'type': None, 'default': None, 'description': 'Event listener for when the user selects or deselects the MedicalImageAnalyzer. Uses event data gradio.SelectData to carry `value` referring to the label of the MedicalImageAnalyzer, and `selected` to refer to state of the MedicalImageAnalyzer. 
See EventData documentation on how to use this event data'}, 'upload': {'type': None, 'default': None, 'description': 'This listener is triggered when the user uploads a file into the MedicalImageAnalyzer.'}, 'clear': {'type': None, 'default': None, 'description': 'This listener is triggered when the user clears the MedicalImageAnalyzer using the clear button for the component.'}}}, '__meta__': {'additional_interfaces': {}, 'user_fn_refs': {'MedicalImageAnalyzer': []}}}
7
+
8
+ abs_path = os.path.join(os.path.dirname(__file__), "css.css")
9
+
10
+ with gr.Blocks(
11
+ css=abs_path,
12
+ theme=gr.themes.Default(
13
+ font_mono=[
14
+ gr.themes.GoogleFont("Inconsolata"),
15
+ "monospace",
16
+ ],
17
+ ),
18
+ ) as demo:
19
+ gr.Markdown(
20
+ """
21
+ # `gradio_medical_image_analyzer`
22
+
23
+ <div style="display: flex; gap: 7px;">
24
+ <img alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.1%20-%20orange"> <a href="https://github.com/yourusername/gradio-medical-image-analyzer/issues" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/Issues-white?logo=github&logoColor=black"></a>
25
+ </div>
26
+
27
+ AI-agent optimized medical image analysis component for Gradio
28
+ """, elem_classes=["md-custom"], header_links=True)
29
+ app.render()
30
+ gr.Markdown(
31
+ """
32
+ ## Installation
33
+
34
+ ```bash
35
+ pip install gradio_medical_image_analyzer
36
+ ```
37
+
38
+ ## Usage
39
+
40
+ ```python
41
+ #!/usr/bin/env python3
42
+ \"\"\"
43
+ Demo for MedicalImageAnalyzer - Enhanced with file upload and overlay visualization
44
+ \"\"\"
45
+
46
+ import gradio as gr
47
+ import numpy as np
48
+ import sys
49
+ import os
50
+ import cv2
51
+ from pathlib import Path
52
+
53
+ # Add backend to path
54
+ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'backend'))
55
+
56
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
57
+
58
+ def draw_roi_on_image(image, roi_x, roi_y, roi_radius):
59
+ \"\"\"Draw ROI circle on the image\"\"\"
60
+ # Convert to RGB if grayscale
61
+ if len(image.shape) == 2:
62
+ image_rgb = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
63
+ else:
64
+ image_rgb = image.copy()
65
+
66
+ # Draw ROI circle
67
+ center = (int(roi_x), int(roi_y))
68
+ radius = int(roi_radius)
69
+
70
+ # Draw outer circle (white)
71
+ cv2.circle(image_rgb, center, radius, (255, 255, 255), 2)
72
+ # Draw inner circle (red)
73
+ cv2.circle(image_rgb, center, radius-1, (255, 0, 0), 2)
74
+ # Draw center cross
75
+ cv2.line(image_rgb, (center[0]-5, center[1]), (center[0]+5, center[1]), (255, 0, 0), 2)
76
+ cv2.line(image_rgb, (center[0], center[1]-5), (center[0], center[1]+5), (255, 0, 0), 2)
77
+
78
+ return image_rgb
79
+
80
+ def create_fat_overlay(base_image, segmentation_results):
81
+ \"\"\"Create overlay image with fat segmentation highlighted\"\"\"
82
+ # Convert to RGB
83
+ if len(base_image.shape) == 2:
84
+ overlay_img = cv2.cvtColor(base_image, cv2.COLOR_GRAY2RGB)
85
+ else:
86
+ overlay_img = base_image.copy()
87
+
88
+ # Check if we have segmentation masks
89
+ if not segmentation_results or 'segments' not in segmentation_results:
90
+ return overlay_img
91
+
92
+ segments = segmentation_results.get('segments', {})
93
+
94
+ # Apply subcutaneous fat overlay (yellow)
95
+ if 'subcutaneous' in segments and segments['subcutaneous'].get('mask') is not None:
96
+ mask = segments['subcutaneous']['mask']
97
+ yellow_overlay = np.zeros_like(overlay_img)
98
+ yellow_overlay[mask > 0] = [255, 255, 0] # Yellow
99
+ overlay_img = cv2.addWeighted(overlay_img, 0.7, yellow_overlay, 0.3, 0)
100
+
101
+ # Apply visceral fat overlay (red)
102
+ if 'visceral' in segments and segments['visceral'].get('mask') is not None:
103
+ mask = segments['visceral']['mask']
104
+ red_overlay = np.zeros_like(overlay_img)
105
+ red_overlay[mask > 0] = [255, 0, 0] # Red
106
+ overlay_img = cv2.addWeighted(overlay_img, 0.7, red_overlay, 0.3, 0)
107
+
108
+ # Add legend
109
+ cv2.putText(overlay_img, "Yellow: Subcutaneous Fat", (10, 30),
110
+ cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
111
+ cv2.putText(overlay_img, "Red: Visceral Fat", (10, 60),
112
+ cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
113
+
114
+ return overlay_img
115
+
116
+ def process_and_analyze(file_obj, modality, task, roi_x, roi_y, roi_radius, symptoms, show_overlay=False):
117
+ \"\"\"
118
+ Processes uploaded file and performs analysis
119
+ \"\"\"
120
+ if file_obj is None:
121
+ return None, "No file selected", None, {}, None
122
+
123
+ # Create analyzer instance
124
+ analyzer = MedicalImageAnalyzer(
125
+ analysis_mode="structured",
126
+ include_confidence=True,
127
+ include_reasoning=True
128
+ )
129
+
130
+ try:
131
+ # Process the file (DICOM or image)
132
+ file_path = file_obj.name if hasattr(file_obj, 'name') else str(file_obj)
133
+ pixel_array, display_array, metadata = analyzer.process_file(file_path)
134
+
135
+ # Update modality from file metadata if it's a DICOM
136
+ if metadata.get('file_type') == 'DICOM' and 'modality' in metadata:
137
+ modality = metadata['modality']
138
+
139
+ # Prepare analysis parameters
140
+ analysis_params = {
141
+ "image": pixel_array,
142
+ "modality": modality,
143
+ "task": task
144
+ }
145
+
146
+ # Add ROI if applicable
147
+ if task in ["analyze_point", "full_analysis"]:
148
+ # Scale ROI coordinates to image size
149
+ h, w = pixel_array.shape
150
+ roi_x_scaled = int(roi_x * w / 512) # Assuming slider max is 512
151
+ roi_y_scaled = int(roi_y * h / 512)
152
+
153
+ analysis_params["roi"] = {
154
+ "x": roi_x_scaled,
155
+ "y": roi_y_scaled,
156
+ "radius": roi_radius
157
+ }
158
+
159
+ # Add clinical context
160
+ if symptoms:
161
+ analysis_params["clinical_context"] = {"symptoms": symptoms}
162
+
163
+ # Perform analysis
164
+ results = analyzer.analyze_image(**analysis_params)
165
+
166
+ # Create visual report
167
+ visual_report = create_visual_report(results, metadata)
168
+
169
+ # Add metadata info
170
+ info = f"πŸ“„ {metadata.get('file_type', 'Unknown')} | "
171
+ info += f"πŸ₯ {modality} | "
172
+ info += f"πŸ“ {metadata.get('shape', 'Unknown')}"
173
+
174
+ if metadata.get('window_center'):
175
+ info += f" | Window C:{metadata['window_center']:.0f} W:{metadata['window_width']:.0f}"
176
+
177
+ # Create overlay image if requested
178
+ overlay_image = None
179
+ if show_overlay:
180
+ # For ROI visualization
181
+ if task in ["analyze_point", "full_analysis"] and roi_x and roi_y:
182
+ overlay_image = draw_roi_on_image(display_array.copy(), roi_x_scaled, roi_y_scaled, roi_radius)
183
+
184
+ # For fat segmentation overlay (simplified version since we don't have masks in current implementation)
185
+ elif task == "segment_fat" and 'segmentation' in results and modality == 'CT':
186
+ # For now, just draw ROI since we don't have actual masks
187
+ overlay_image = display_array.copy()
188
+ if len(overlay_image.shape) == 2:
189
+ overlay_image = cv2.cvtColor(overlay_image, cv2.COLOR_GRAY2RGB)
190
+ # Add text overlay about fat percentages
191
+ if 'statistics' in results['segmentation']:
192
+ stats = results['segmentation']['statistics']
193
+ cv2.putText(overlay_image, f"Total Fat: {stats.get('total_fat_percentage', 0):.1f}%",
194
+ (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
195
+ cv2.putText(overlay_image, f"Subcutaneous: {stats.get('subcutaneous_fat_percentage', 0):.1f}%",
196
+ (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
197
+ cv2.putText(overlay_image, f"Visceral: {stats.get('visceral_fat_percentage', 0):.1f}%",
198
+ (10, 90), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
199
+
200
+ return display_array, info, visual_report, results, overlay_image
201
+
202
+ except Exception as e:
203
+ error_msg = f"Error: {str(e)}"
204
+ return None, error_msg, f"<div style='color: red;'>❌ {error_msg}</div>", {"error": error_msg}, None
205
+
206
+ def create_visual_report(results, metadata):
207
+ \"\"\"Creates a visual HTML report with improved styling\"\"\"
208
+ html = f\"\"\"
209
+ <div class='medical-report' style='font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
210
+ padding: 24px;
211
+ background: #ffffff;
212
+ border-radius: 12px;
213
+ max-width: 100%;
214
+ box-shadow: 0 2px 8px rgba(0,0,0,0.1);
215
+ color: #1a1a1a !important;'>
216
+
217
+ <h2 style='color: #1e40af !important;
218
+ border-bottom: 3px solid #3b82f6;
219
+ padding-bottom: 12px;
220
+ margin-bottom: 20px;
221
+ font-size: 24px;
222
+ font-weight: 600;'>
223
+ πŸ₯ Medical Image Analysis Report
224
+ </h2>
225
+
226
+ <div style='background: #f0f9ff;
227
+ padding: 20px;
228
+ margin: 16px 0;
229
+ border-radius: 8px;
230
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
231
+ <h3 style='color: #1e3a8a !important;
232
+ font-size: 18px;
233
+ font-weight: 600;
234
+ margin-bottom: 12px;'>
235
+ 📋 Metadata
236
+ </h3>
237
+ <table style='width: 100%; border-collapse: collapse;'>
238
+ <tr>
239
+ <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>File Type:</strong></td>
240
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('file_type', 'Unknown')}</td>
241
+ </tr>
242
+ <tr>
243
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Modality:</strong></td>
244
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('modality', 'Unknown')}</td>
245
+ </tr>
246
+ <tr>
247
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Image Size:</strong></td>
248
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('shape', 'Unknown')}</td>
249
+ </tr>
250
+ <tr>
251
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Timestamp:</strong></td>
252
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('timestamp', 'N/A')}</td>
253
+ </tr>
254
+ </table>
255
+ </div>
256
+ \"\"\"
257
+
258
+ # Point Analysis
259
+ if 'point_analysis' in results:
260
+ pa = results['point_analysis']
261
+ tissue = pa.get('tissue_type', {})
262
+
263
+ html += f\"\"\"
264
+ <div style='background: #f0f9ff;
265
+ padding: 20px;
266
+ margin: 16px 0;
267
+ border-radius: 8px;
268
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
269
+ <h3 style='color: #1e3a8a !important;
270
+ font-size: 18px;
271
+ font-weight: 600;
272
+ margin-bottom: 12px;'>
273
+ 🎯 Point Analysis
274
+ </h3>
275
+ <table style='width: 100%; border-collapse: collapse;'>
276
+ <tr>
277
+ <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>Position:</strong></td>
278
+ <td style='padding: 8px 0; color: #1f2937 !important;'>({pa.get('location', {}).get('x', 'N/A')}, {pa.get('location', {}).get('y', 'N/A')})</td>
279
+ </tr>
280
+ \"\"\"
281
+
282
+ if results.get('modality') == 'CT':
283
+ html += f\"\"\"
284
+ <tr>
285
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>HU Value:</strong></td>
286
+ <td style='padding: 8px 0; color: #1f2937 !important; font-weight: 500;'>{pa.get('hu_value', 'N/A'):.1f}</td>
287
+ </tr>
288
+ \"\"\"
289
+ else:
290
+ html += f\"\"\"
291
+ <tr>
292
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Intensity:</strong></td>
293
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('intensity', 'N/A'):.3f}</td>
294
+ </tr>
295
+ \"\"\"
296
+
297
+ html += f\"\"\"
298
+ <tr>
299
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Tissue Type:</strong></td>
300
+ <td style='padding: 8px 0; color: #1f2937 !important;'>
301
+ <span style='font-size: 1.3em; vertical-align: middle;'>{tissue.get('icon', '')}</span>
302
+ <span style='font-weight: 500; text-transform: capitalize;'>{tissue.get('type', 'Unknown').replace('_', ' ')}</span>
303
+ </td>
304
+ </tr>
305
+ <tr>
306
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Confidence:</strong></td>
307
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('confidence', 'N/A')}</td>
308
+ </tr>
309
+ </table>
310
+ \"\"\"
311
+
312
+ if 'reasoning' in pa:
313
+ html += f\"\"\"
314
+ <div style='margin-top: 12px;
315
+ padding: 12px;
316
+ background: #dbeafe;
317
+ border-left: 3px solid #3b82f6;
318
+ border-radius: 4px;'>
319
+ <p style='margin: 0; color: #1e40af !important; font-style: italic;'>
320
+ 💭 {pa['reasoning']}
321
+ </p>
322
+ </div>
323
+ \"\"\"
324
+
325
+ html += "</div>"
326
+
327
+ # Segmentation Results
328
+ if 'segmentation' in results and results['segmentation']:
329
+ seg = results['segmentation']
330
+
331
+ if 'statistics' in seg:
332
+ # Fat segmentation for CT
333
+ stats = seg['statistics']
334
+ html += f\"\"\"
335
+ <div style='background: #f0f9ff;
336
+ padding: 20px;
337
+ margin: 16px 0;
338
+ border-radius: 8px;
339
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
340
+ <h3 style='color: #1e3a8a !important;
341
+ font-size: 18px;
342
+ font-weight: 600;
343
+ margin-bottom: 12px;'>
344
+ 🔬 Fat Segmentation Analysis
345
+ </h3>
346
+ <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 16px;'>
347
+ <div style='padding: 16px; background: #ffffff; border-radius: 6px; border: 1px solid #e5e7eb;'>
348
+ <h4 style='color: #6b7280 !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Total Fat</h4>
349
+ <p style='color: #1f2937 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('total_fat_percentage', 0):.1f}%</p>
350
+ </div>
351
+ <div style='padding: 16px; background: #fffbeb; border-radius: 6px; border: 1px solid #fbbf24;'>
352
+ <h4 style='color: #92400e !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Subcutaneous</h4>
353
+ <p style='color: #d97706 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('subcutaneous_fat_percentage', 0):.1f}%</p>
354
+ </div>
355
+ <div style='padding: 16px; background: #fef2f2; border-radius: 6px; border: 1px solid #fca5a5;'>
356
+ <h4 style='color: #991b1b !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Visceral</h4>
357
+ <p style='color: #dc2626 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_fat_percentage', 0):.1f}%</p>
358
+ </div>
359
+ <div style='padding: 16px; background: #eff6ff; border-radius: 6px; border: 1px solid #93c5fd;'>
360
+ <h4 style='color: #1e3a8a !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>V/S Ratio</h4>
361
+ <p style='color: #1e40af !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_subcutaneous_ratio', 0):.2f}</p>
362
+ </div>
363
+ </div>
364
+ \"\"\"
365
+
366
+ if 'interpretation' in seg:
367
+ interp = seg['interpretation']
368
+ obesity_color = "#16a34a" if interp.get("obesity_risk") == "normal" else "#d97706" if interp.get("obesity_risk") == "moderate" else "#dc2626"
369
+ visceral_color = "#16a34a" if interp.get("visceral_risk") == "normal" else "#d97706" if interp.get("visceral_risk") == "moderate" else "#dc2626"
370
+
371
+ html += f\"\"\"
372
+ <div style='margin-top: 16px; padding: 16px; background: #f3f4f6; border-radius: 6px;'>
373
+ <h4 style='color: #374151 !important; font-size: 16px; font-weight: 600; margin-bottom: 8px;'>Risk Assessment</h4>
374
+ <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 12px;'>
375
+ <div>
376
+ <span style='color: #6b7280 !important; font-size: 14px;'>Obesity Risk:</span>
377
+ <span style='color: {obesity_color} !important; font-weight: 600; margin-left: 8px;'>{interp.get('obesity_risk', 'N/A').upper()}</span>
378
+ </div>
379
+ <div>
380
+ <span style='color: #6b7280 !important; font-size: 14px;'>Visceral Risk:</span>
381
+ <span style='color: {visceral_color} !important; font-weight: 600; margin-left: 8px;'>{interp.get('visceral_risk', 'N/A').upper()}</span>
382
+ </div>
383
+ </div>
384
+ \"\"\"
385
+
386
+ if interp.get('recommendations'):
387
+ html += \"\"\"
388
+ <div style='margin-top: 12px; padding-top: 12px; border-top: 1px solid #e5e7eb;'>
389
+ <h5 style='color: #374151 !important; font-size: 14px; font-weight: 600; margin-bottom: 8px;'>πŸ’‘ Recommendations</h5>
390
+ <ul style='margin: 0; padding-left: 20px; color: #4b5563 !important;'>
391
+ \"\"\"
392
+ for rec in interp['recommendations']:
393
+ html += f"<li style='margin: 4px 0;'>{rec}</li>"
394
+ html += "</ul></div>"
395
+
396
+ html += "</div>"
397
+ html += "</div>"
398
+
399
+ # Quality Assessment
400
+ if 'quality_metrics' in results:
401
+ quality = results['quality_metrics']
402
+ quality_colors = {
403
+ 'excellent': '#16a34a',
404
+ 'good': '#16a34a',
405
+ 'fair': '#d97706',
406
+ 'poor': '#dc2626',
407
+ 'unknown': '#6b7280'
408
+ }
409
+ q_color = quality_colors.get(quality.get('overall_quality', 'unknown'), '#6b7280')
410
+
411
+ html += f\"\"\"
412
+ <div style='background: #f0f9ff;
413
+ padding: 20px;
414
+ margin: 16px 0;
415
+ border-radius: 8px;
416
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
417
+ <h3 style='color: #1e3a8a !important;
418
+ font-size: 18px;
419
+ font-weight: 600;
420
+ margin-bottom: 12px;'>
421
+ 📊 Image Quality Assessment
422
+ </h3>
423
+ <div style='display: flex; align-items: center; gap: 16px;'>
424
+ <div>
425
+ <span style='color: #4b5563 !important; font-size: 14px;'>Overall Quality:</span>
426
+ <span style='color: {q_color} !important;
427
+ font-size: 18px;
428
+ font-weight: 700;
429
+ margin-left: 8px;'>
430
+ {quality.get('overall_quality', 'unknown').upper()}
431
+ </span>
432
+ </div>
433
+ </div>
434
+ \"\"\"
435
+
436
+ if quality.get('issues'):
437
+ html += f\"\"\"
438
+ <div style='margin-top: 12px;
439
+ padding: 12px;
440
+ background: #fef3c7;
441
+ border-left: 3px solid #f59e0b;
442
+ border-radius: 4px;'>
443
+ <strong style='color: #92400e !important;'>Issues Detected:</strong>
444
+ <ul style='margin: 4px 0 0 0; padding-left: 20px; color: #92400e !important;'>
445
+ \"\"\"
446
+ for issue in quality['issues']:
447
+ html += f"<li style='margin: 2px 0;'>{issue}</li>"
448
+ html += "</ul></div>"
449
+
450
+ html += "</div>"
451
+
452
+ html += "</div>"
453
+ return html
454
+
455
+ def create_demo():
456
+ with gr.Blocks(
457
+ title="Medical Image Analyzer - Enhanced Demo",
458
+ theme=gr.themes.Soft(
459
+ primary_hue="blue",
460
+ secondary_hue="blue",
461
+ neutral_hue="slate",
462
+ text_size="md",
463
+ spacing_size="md",
464
+ radius_size="md",
465
+ ).set(
466
+ # Medical blue theme colors
467
+ body_background_fill="*neutral_950",
468
+ body_background_fill_dark="*neutral_950",
469
+ block_background_fill="*neutral_900",
470
+ block_background_fill_dark="*neutral_900",
471
+ border_color_primary="*primary_600",
472
+ border_color_primary_dark="*primary_600",
473
+ # Text colors for better contrast
474
+ body_text_color="*neutral_100",
475
+ body_text_color_dark="*neutral_100",
476
+ body_text_color_subdued="*neutral_300",
477
+ body_text_color_subdued_dark="*neutral_300",
478
+ # Button colors
479
+ button_primary_background_fill="*primary_600",
480
+ button_primary_background_fill_dark="*primary_600",
481
+ button_primary_text_color="white",
482
+ button_primary_text_color_dark="white",
483
+ ),
484
+ css=\"\"\"
485
+ /* Medical blue theme with high contrast */
486
+ :root {
487
+ --medical-blue: #1e40af;
488
+ --medical-blue-light: #3b82f6;
489
+ --medical-blue-dark: #1e3a8a;
490
+ --text-primary: #f9fafb;
491
+ --text-secondary: #e5e7eb;
492
+ --bg-primary: #0f172a;
493
+ --bg-secondary: #1e293b;
494
+ --bg-tertiary: #334155;
495
+ }
496
+
497
+ /* Override default text colors for medical theme */
498
+ * {
499
+ color: var(--text-primary) !important;
500
+ }
501
+
502
+ /* Style the file upload area */
503
+ .file-upload {
504
+ border: 2px dashed var(--medical-blue-light) !important;
505
+ border-radius: 8px !important;
506
+ padding: 20px !important;
507
+ text-align: center !important;
508
+ background: var(--bg-secondary) !important;
509
+ transition: all 0.3s ease !important;
510
+ color: var(--text-primary) !important;
511
+ }
512
+
513
+ .file-upload:hover {
514
+ border-color: var(--medical-blue) !important;
515
+ background: var(--bg-tertiary) !important;
516
+ box-shadow: 0 0 20px rgba(59, 130, 246, 0.2) !important;
517
+ }
518
+
519
+ /* Ensure report text is readable with white background */
520
+ .medical-report {
521
+ background: #ffffff !important;
522
+ border: 2px solid var(--medical-blue-light) !important;
523
+ border-radius: 8px !important;
524
+ padding: 16px !important;
525
+ color: #1a1a1a !important;
526
+ }
527
+
528
+ .medical-report * {
529
+ color: #1f2937 !important; /* Dark gray text */
530
+ }
531
+
532
+ .medical-report h2 {
533
+ color: #1e40af !important; /* Medical blue for main heading */
534
+ }
535
+
536
+ .medical-report h3, .medical-report h4 {
537
+ color: #1e3a8a !important; /* Darker medical blue for subheadings */
538
+ }
539
+
540
+ .medical-report strong {
541
+ color: #374151 !important; /* Darker gray for labels */
542
+ }
543
+
544
+ .medical-report td {
545
+ color: #1f2937 !important; /* Ensure table text is dark */
546
+ }
547
+
548
+ /* Report sections with light blue background */
549
+ .medical-report > div {
550
+ background: #f0f9ff !important;
551
+ color: #1f2937 !important;
552
+ }
553
+
554
+ /* Medical blue accents for UI elements */
555
+ .gr-button-primary {
556
+ background: var(--medical-blue) !important;
557
+ border-color: var(--medical-blue) !important;
558
+ }
559
+
560
+ .gr-button-primary:hover {
561
+ background: var(--medical-blue-dark) !important;
562
+ border-color: var(--medical-blue-dark) !important;
563
+ }
564
+
565
+ /* Tab styling */
566
+ .gr-tab-item {
567
+ border-color: var(--medical-blue-light) !important;
568
+ }
569
+
570
+ .gr-tab-item.selected {
571
+ background: var(--medical-blue) !important;
572
+ color: white !important;
573
+ }
574
+
575
+ /* Accordion styling */
576
+ .gr-accordion {
577
+ border-color: var(--medical-blue-light) !important;
578
+ }
579
+
580
+ /* Slider track in medical blue */
581
+ input[type="range"]::-webkit-slider-track {
582
+ background: var(--bg-tertiary) !important;
583
+ }
584
+
585
+ input[type="range"]::-webkit-slider-thumb {
586
+ background: var(--medical-blue) !important;
587
+ }
588
+ \"\"\"
589
+ ) as demo:
590
+ gr.Markdown(\"\"\"
591
+ # πŸ₯ Medical Image Analyzer
592
+
593
+ Supports **DICOM** (.dcm) and all image formats with automatic modality detection!
594
+ \"\"\")
595
+
596
+ with gr.Row():
597
+ with gr.Column(scale=1):
598
+ # File upload - no file type restrictions
599
+ with gr.Group():
600
+ gr.Markdown("### πŸ“€ Upload Medical Image")
601
+ file_input = gr.File(
602
+ label="Select Medical Image File (.dcm, .dicom, IM_*, .png, .jpg, etc.)",
603
+ file_count="single",
604
+ type="filepath",
605
+ elem_classes="file-upload"
606
+ # Note: NO file_types parameter = accepts ALL files
607
+ )
608
+ gr.Markdown(\"\"\"
609
+ <small style='color: #666;'>
610
+ Accepts: DICOM (.dcm, .dicom), Images (.png, .jpg, .jpeg, .tiff, .bmp),
611
+ and files without extensions (e.g., IM_0001, IM_0002, etc.)
612
+ </small>
613
+ \"\"\")
614
+
615
+ # Modality selection
616
+ modality = gr.Radio(
617
+ choices=["CT", "CR", "DX", "RX", "DR"],
618
+ value="CT",
619
+ label="Modality",
620
+ info="Will be auto-detected for DICOM files"
621
+ )
622
+
623
+ # Task selection
624
+ task = gr.Dropdown(
625
+ choices=[
626
+ ("🎯 Point Analysis", "analyze_point"),
627
+ ("πŸ”¬ Fat Segmentation (CT only)", "segment_fat"),
628
+ ("πŸ“Š Full Analysis", "full_analysis")
629
+ ],
630
+ value="full_analysis",
631
+ label="Analysis Task"
632
+ )
633
+
634
+ # ROI settings
635
+ with gr.Accordion("🎯 Region of Interest (ROI)", open=True):
636
+ roi_x = gr.Slider(0, 512, 256, label="X Position", step=1)
637
+ roi_y = gr.Slider(0, 512, 256, label="Y Position", step=1)
638
+ roi_radius = gr.Slider(5, 50, 10, label="Radius", step=1)
639
+
640
+ # Clinical context
641
+ with gr.Accordion("πŸ₯ Clinical Context", open=False):
642
+ symptoms = gr.CheckboxGroup(
643
+ choices=[
644
+ "dyspnea", "chest_pain", "abdominal_pain",
645
+ "trauma", "obesity_screening", "routine_check"
646
+ ],
647
+ label="Symptoms/Indication"
648
+ )
649
+
650
+ # Visualization options
651
+ with gr.Accordion("🎨 Visualization Options", open=True):
652
+ show_overlay = gr.Checkbox(
653
+ label="Show ROI/Segmentation Overlay",
654
+ value=True,
655
+ info="Display ROI circle or fat segmentation info on the image"
656
+ )
657
+
658
+ analyze_btn = gr.Button("πŸ”¬ Analyze", variant="primary", size="lg")
659
+
660
+ with gr.Column(scale=2):
661
+ # Results with tabs for different views
662
+ with gr.Tab("πŸ–ΌοΈ Original Image"):
663
+ image_display = gr.Image(label="Medical Image", type="numpy")
664
+
665
+ with gr.Tab("🎯 Overlay View"):
666
+ overlay_display = gr.Image(label="Image with Overlay", type="numpy")
667
+
668
+ file_info = gr.Textbox(label="File Information", lines=1)
669
+
670
+ with gr.Tab("πŸ“Š Visual Report"):
671
+ report_html = gr.HTML()
672
+
673
+ with gr.Tab("πŸ”§ JSON Output"):
674
+ json_output = gr.JSON(label="Structured Data for AI Agents")
675
+
676
+ # Examples and help
677
+ with gr.Row():
678
+ gr.Markdown(\"\"\"
679
+ ### πŸ“ Supported Formats
680
+ - **DICOM**: Automatic HU value extraction and modality detection
681
+ - **PNG/JPG**: Interpreted based on selected modality
682
+ - **All Formats**: Automatic grayscale conversion
683
+ - **Files without extension**: Supported (e.g., IM_0001) - will try DICOM first
684
+
685
+ ### 🎯 Usage
686
+ 1. Upload a medical image file
687
+ 2. Select modality (auto-detected for DICOM)
688
+ 3. Choose analysis task
689
+ 4. Adjust ROI position for point analysis
690
+ 5. Click "Analyze"
691
+
692
+ ### 💡 Features
693
+ - **ROI Visualization**: See the exact area being analyzed
694
+ - **Fat Segmentation**: Visual percentages for CT scans
695
+ - **Multi-format Support**: Works with any medical image format
696
+ - **AI Agent Ready**: Structured JSON output for integration
697
+ \"\"\")
698
+
699
+ # Connect the interface
700
+ analyze_btn.click(
701
+ fn=process_and_analyze,
702
+ inputs=[file_input, modality, task, roi_x, roi_y, roi_radius, symptoms, show_overlay],
703
+ outputs=[image_display, file_info, report_html, json_output, overlay_display]
704
+ )
705
+
706
+ # Auto-update ROI limits when image is loaded
707
+ def update_roi_on_upload(file_obj):
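+ """Rescale the ROI sliders so their maxima match the uploaded image's dimensions and recenter them."""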
708
+ if file_obj is None:
709
+ return gr.update(), gr.update()
710
+
711
+ try:
712
+ analyzer = MedicalImageAnalyzer()
713
+ _, _, metadata = analyzer.process_file(file_obj.name if hasattr(file_obj, 'name') else str(file_obj))
714
+
715
+ if 'shape' in metadata:
716
+ h, w = metadata['shape']
717
+ return gr.update(maximum=w-1, value=w//2), gr.update(maximum=h-1, value=h//2)
718
+ except Exception:
719
+ pass
720
+
721
+ return gr.update(), gr.update()
722
+
723
+ file_input.change(
724
+ fn=update_roi_on_upload,
725
+ inputs=[file_input],
726
+ outputs=[roi_x, roi_y]
727
+ )
728
+
729
+ return demo
730
+
731
+ if __name__ == "__main__":
732
+ demo = create_demo()
733
+ demo.launch()
734
+ ```
735
+ """, elem_classes=["md-custom"], header_links=True)
736
+
737
+
738
+ gr.Markdown("""
739
+ ## `MedicalImageAnalyzer`
740
+
741
+ ### Initialization
742
+ """, elem_classes=["md-custom"], header_links=True)
743
+
744
+ gr.ParamViewer(value=_docs["MedicalImageAnalyzer"]["members"]["__init__"], linkify=[])
745
+
746
+
747
+ gr.Markdown("### Events")
748
+ gr.ParamViewer(value=_docs["MedicalImageAnalyzer"]["events"], linkify=['Event'])
749
+
750
+
751
+
752
+
753
+ gr.Markdown("""
754
+
755
+ ### User function
756
+
757
+ The impact on the users predict function varies depending on whether the component is used as an input or output for an event (or both).
758
+
759
+ - When used as an Input, the component only impacts the input signature of the user function.
760
+ - When used as an output, the component only impacts the return signature of the user function.
761
+
762
+ The code snippet below is accurate in cases where the component is used as both an input and an output.
763
+
764
+
765
+
766
+ ```python
767
+ def predict(
768
+ value: typing.Dict[str, typing.Any][str, typing.Any]
769
+ ) -> typing.Dict[str, typing.Any][str, typing.Any]:
770
+ return value
771
+ ```
772
+ """, elem_classes=["md-custom", "MedicalImageAnalyzer-user-fn"], header_links=True)
773
+
774
+
775
+
776
+
777
+ demo.load(None, js=r"""function() {
778
+ const refs = {};
779
+ const user_fn_refs = {
780
+ MedicalImageAnalyzer: [], };
781
+ requestAnimationFrame(() => {
782
+
783
+ Object.entries(user_fn_refs).forEach(([key, refs]) => {
784
+ if (refs.length > 0) {
785
+ const el = document.querySelector(`.${key}-user-fn`);
786
+ if (!el) return;
787
+ refs.forEach(ref => {
788
+ el.innerHTML = el.innerHTML.replace(
789
+ new RegExp("\\b"+ref+"\\b", "g"),
790
+ `<a href="#h-${ref.toLowerCase()}">${ref}</a>`
791
+ );
792
+ })
793
+ }
794
+ })
795
+
796
+ Object.entries(refs).forEach(([key, refs]) => {
797
+ if (refs.length > 0) {
798
+ const el = document.querySelector(`.${key}`);
799
+ if (!el) return;
800
+ refs.forEach(ref => {
801
+ el.innerHTML = el.innerHTML.replace(
802
+ new RegExp("\\b"+ref+"\\b", "g"),
803
+ `<a href="#h-${ref.toLowerCase()}">${ref}</a>`
804
+ );
805
+ })
806
+ }
807
+ })
808
+ })
809
+ }
810
+
811
+ """)
812
+
813
+ demo.launch()
src/demo/wrapper_test.py ADDED
@@ -0,0 +1,835 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ Frontend Wrapper Test fΓΌr Medical Image Analyzer
4
+ Nutzt Standard Gradio Komponenten um die Backend-FunktionalitΓ€t zu testen
5
+ """
6
+
7
+ import gradio as gr
8
+ import numpy as np
9
+ import sys
10
+ import os
11
+ from pathlib import Path
12
+
13
+ # Add backend to path
14
+ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'backend'))
15
+
16
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
17
+ import cv2
18
+
19
+ def draw_roi_on_image(image, roi_x, roi_y, roi_radius):
20
+ """Draw ROI circle on the image"""
21
+ # Convert to RGB if grayscale
22
+ if len(image.shape) == 2:
23
+ image_rgb = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
24
+ else:
25
+ image_rgb = image.copy()
26
+
27
+ # Draw ROI circle
28
+ center = (int(roi_x), int(roi_y))
29
+ radius = int(roi_radius)
30
+
31
+ # Draw outer circle (white)
32
+ cv2.circle(image_rgb, center, radius, (255, 255, 255), 2)
33
+ # Draw inner circle (red)
34
+ cv2.circle(image_rgb, center, radius-1, (255, 0, 0), 2)
35
+ # Draw center cross
36
+ cv2.line(image_rgb, (center[0]-5, center[1]), (center[0]+5, center[1]), (255, 0, 0), 2)
37
+ cv2.line(image_rgb, (center[0], center[1]-5), (center[0], center[1]+5), (255, 0, 0), 2)
38
+
39
+ return image_rgb
40
+
41
+ def create_fat_overlay(base_image, segmentation_results):
42
+ """Create overlay image with fat segmentation highlighted"""
43
+ # Convert to RGB
44
+ if len(base_image.shape) == 2:
45
+ overlay_img = cv2.cvtColor(base_image, cv2.COLOR_GRAY2RGB)
46
+ else:
47
+ overlay_img = base_image.copy()
48
+
49
+ # Check if we have segmentation masks
50
+ if not segmentation_results or 'segments' not in segmentation_results:
51
+ return overlay_img
52
+
53
+ segments = segmentation_results.get('segments', {})
54
+
55
+ # Apply subcutaneous fat overlay (yellow)
56
+ if 'subcutaneous' in segments and segments['subcutaneous'].get('mask') is not None:
57
+ mask = segments['subcutaneous']['mask']
58
+ yellow_overlay = np.zeros_like(overlay_img)
59
+ yellow_overlay[mask > 0] = [255, 255, 0] # Yellow
60
+ overlay_img = cv2.addWeighted(overlay_img, 0.7, yellow_overlay, 0.3, 0)
61
+
62
+ # Apply visceral fat overlay (red)
63
+ if 'visceral' in segments and segments['visceral'].get('mask') is not None:
64
+ mask = segments['visceral']['mask']
65
+ red_overlay = np.zeros_like(overlay_img)
66
+ red_overlay[mask > 0] = [255, 0, 0] # Red
67
+ overlay_img = cv2.addWeighted(overlay_img, 0.7, red_overlay, 0.3, 0)
68
+
69
+ # Add legend
70
+ cv2.putText(overlay_img, "Yellow: Subcutaneous Fat", (10, 30),
71
+ cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
72
+ cv2.putText(overlay_img, "Red: Visceral Fat", (10, 60),
73
+ cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
74
+
75
+ return overlay_img
76
+
77
+ def process_and_analyze(file_obj, task, roi_x, roi_y, roi_radius, symptoms, show_overlay=False):
78
+ """
79
+ Processes uploaded file and performs analysis
80
+ """
81
+ if file_obj is None:
82
+ return None, "No file selected", "", {}, None
83
+
84
+ # Create analyzer instance
85
+ analyzer = MedicalImageAnalyzer(
86
+ analysis_mode="structured",
87
+ include_confidence=True,
88
+ include_reasoning=True
89
+ )
90
+
91
+ try:
92
+ # Process the file (DICOM or image)
93
+ file_path = file_obj.name if hasattr(file_obj, 'name') else str(file_obj)
94
+ pixel_array, display_array, metadata = analyzer.process_file(file_path)
95
+
96
+ # Update modality from file metadata
97
+ modality = metadata.get('modality', 'CR')
98
+
99
+ # Prepare analysis parameters
100
+ analysis_params = {
101
+ "image": pixel_array,
102
+ "modality": modality,
103
+ "task": task
104
+ }
105
+
106
+ # Add ROI if applicable
107
+ if task in ["analyze_point", "full_analysis"]:
108
+ # Scale ROI coordinates to image size
109
+ h, w = pixel_array.shape
110
+ roi_x_scaled = int(roi_x * w / 512) # Assuming slider max is 512
111
+ roi_y_scaled = int(roi_y * h / 512)
112
+
113
+ analysis_params["roi"] = {
114
+ "x": roi_x_scaled,
115
+ "y": roi_y_scaled,
116
+ "radius": roi_radius
117
+ }
118
+
119
+ # Add clinical context
120
+ if symptoms:
121
+ analysis_params["clinical_context"] = ", ".join(symptoms)
122
+
123
+ # Perform analysis
124
+ results = analyzer.analyze_image(**analysis_params)
125
+
126
+ # Create visual report
127
+ if analyzer.analysis_mode == "visual":
128
+ visual_report = analyzer._create_html_report(results)
129
+ else:
130
+ # Create our own visual report
131
+ visual_report = create_visual_report(results, metadata)
132
+
133
+ # Add metadata info
134
+ info = f"πŸ“„ {metadata.get('file_type', 'Unknown')} | "
135
+ info += f"πŸ₯ {modality} | "
136
+ info += f"πŸ“ {metadata.get('shape', 'Unknown')}"
137
+
138
+ if metadata.get('window_center'):
139
+ info += f" | Window C:{metadata['window_center']:.0f} W:{metadata['window_width']:.0f}"
140
+
141
+ # Create overlay image if requested
142
+ overlay_image = None
143
+ if show_overlay:
144
+ # For ROI visualization
145
+ if task in ["analyze_point", "full_analysis"] and roi_x and roi_y:
146
+ overlay_image = draw_roi_on_image(display_array.copy(), roi_x, roi_y, roi_radius)
147
+
148
+ # For fat segmentation overlay
149
+ if task == "segment_fat" and 'segmentation' in results and modality == 'CT':
150
+ # Get segmentation masks from results
151
+ seg_results = {
152
+ 'segments': {
153
+ 'subcutaneous': {'mask': None},
154
+ 'visceral': {'mask': None}
155
+ }
156
+ }
157
+
158
+ # Check if we have actual mask data
159
+ if 'segments' in results['segmentation']:
160
+ seg_results = results['segmentation']
161
+
162
+ overlay_image = create_fat_overlay(display_array.copy(), seg_results)
163
+
164
+ return display_array, info, visual_report, results, overlay_image
165
+
166
+ except Exception as e:
167
+ error_msg = f"Error: {str(e)}"
168
+ return None, error_msg, f"<div style='color: red;'>❌ {error_msg}</div>", {"error": error_msg}, None
169
+
170
+
171
+ def create_visual_report(results, metadata):
172
+ """Creates a visual HTML report with improved styling"""
173
+ html = f"""
174
+ <div class='medical-report' style='font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
175
+ padding: 24px;
176
+ background: #ffffff;
177
+ border-radius: 12px;
178
+ max-width: 100%;
179
+ box-shadow: 0 2px 8px rgba(0,0,0,0.1);
180
+ color: #1a1a1a !important;'>
181
+
182
+ <h2 style='color: #1e40af !important;
183
+ border-bottom: 3px solid #3b82f6;
184
+ padding-bottom: 12px;
185
+ margin-bottom: 20px;
186
+ font-size: 24px;
187
+ font-weight: 600;'>
188
+ πŸ₯ Medical Image Analysis Report
189
+ </h2>
190
+
191
+ <div style='background: #f0f9ff;
192
+ padding: 20px;
193
+ margin: 16px 0;
194
+ border-radius: 8px;
195
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
196
+ <h3 style='color: #1e3a8a !important;
197
+ font-size: 18px;
198
+ font-weight: 600;
199
+ margin-bottom: 12px;'>
200
+ πŸ“‹ Metadata
201
+ </h3>
202
+ <table style='width: 100%; border-collapse: collapse;'>
203
+ <tr>
204
+ <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>File Type:</strong></td>
205
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('file_type', 'Unknown')}</td>
206
+ </tr>
207
+ <tr>
208
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Modality:</strong></td>
209
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('modality', 'Unknown')}</td>
210
+ </tr>
211
+ <tr>
212
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Image Size:</strong></td>
213
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('shape', 'Unknown')}</td>
214
+ </tr>
215
+ <tr>
216
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Timestamp:</strong></td>
217
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('timestamp', 'N/A')}</td>
218
+ </tr>
219
+ </table>
220
+ </div>
221
+ """
222
+
223
+ # Point Analysis
224
+ if 'point_analysis' in results:
225
+ pa = results['point_analysis']
226
+ tissue = pa.get('tissue_type', {})
227
+
228
+ html += f"""
229
+ <div style='background: #f0f9ff;
230
+ padding: 20px;
231
+ margin: 16px 0;
232
+ border-radius: 8px;
233
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
234
+ <h3 style='color: #1e3a8a !important;
235
+ font-size: 18px;
236
+ font-weight: 600;
237
+ margin-bottom: 12px;'>
238
+ 🎯 Point Analysis
239
+ </h3>
240
+ <table style='width: 100%; border-collapse: collapse;'>
241
+ <tr>
242
+ <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>Position:</strong></td>
243
+ <td style='padding: 8px 0; color: #1f2937 !important;'>({pa.get('location', {}).get('x', 'N/A')}, {pa.get('location', {}).get('y', 'N/A')})</td>
244
+ </tr>
245
+ """
246
+
247
+ if results.get('modality') == 'CT':
248
+ html += f"""
249
+ <tr>
250
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>HU Value:</strong></td>
251
+ <td style='padding: 8px 0; color: #1f2937 !important; font-weight: 500;'>{f"{pa['hu_value']:.1f}" if pa.get('hu_value') is not None else 'N/A'}</td>
252
+ </tr>
253
+ """
254
+ else:
255
+ html += f"""
256
+ <tr>
257
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Intensity:</strong></td>
258
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{f"{pa['intensity']:.3f}" if pa.get('intensity') is not None else 'N/A'}</td>
259
+ </tr>
260
+ """
261
+
262
+ html += f"""
263
+ <tr>
264
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Tissue Type:</strong></td>
265
+ <td style='padding: 8px 0; color: #1f2937 !important;'>
266
+ <span style='font-size: 1.3em; vertical-align: middle;'>{tissue.get('icon', '')}</span>
267
+ <span style='font-weight: 500; text-transform: capitalize;'>{tissue.get('type', 'Unknown').replace('_', ' ')}</span>
268
+ </td>
269
+ </tr>
270
+ <tr>
271
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Confidence:</strong></td>
272
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('confidence', 'N/A')}</td>
273
+ </tr>
274
+ </table>
275
+ """
276
+
277
+ if 'reasoning' in pa:
278
+ html += f"""
279
+ <div style='margin-top: 12px;
280
+ padding: 12px;
281
+ background: #f0f7ff;
282
+ border-left: 3px solid #0066cc;
283
+ border-radius: 4px;'>
284
+ <p style='margin: 0; color: #4b5563 !important; font-style: italic;'>
285
+ πŸ’­ {pa['reasoning']}
286
+ </p>
287
+ </div>
288
+ """
289
+
290
+ html += "</div>"
291
+
292
+ # Segmentation Results
293
+ if 'segmentation' in results and results['segmentation']:
294
+ seg = results['segmentation']
295
+
296
+ if 'statistics' in seg:
297
+ # Fat segmentation for CT
298
+ stats = seg['statistics']
299
+ html += f"""
300
+ <div style='background: #f0f9ff;
301
+ padding: 20px;
302
+ margin: 16px 0;
303
+ border-radius: 8px;
304
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
305
+ <h3 style='color: #1e3a8a !important;
306
+ font-size: 18px;
307
+ font-weight: 600;
308
+ margin-bottom: 12px;'>
309
+ πŸ”¬ Fat Segmentation Analysis
310
+ </h3>
311
+ <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 16px;'>
312
+ <div style='padding: 16px; background: #ffffff; border-radius: 6px; border: 1px solid #e5e7eb;'>
313
+ <h4 style='color: #6b7280 !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Total Fat</h4>
314
+ <p style='color: #1f2937 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('total_fat_percentage', 0):.1f}%</p>
315
+ </div>
316
+ <div style='padding: 16px; background: #fffbeb; border-radius: 6px; border: 1px solid #fbbf24;'>
317
+ <h4 style='color: #92400e !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Subcutaneous</h4>
318
+ <p style='color: #d97706 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('subcutaneous_fat_percentage', 0):.1f}%</p>
319
+ </div>
320
+ <div style='padding: 16px; background: #fef2f2; border-radius: 6px; border: 1px solid #fca5a5;'>
321
+ <h4 style='color: #991b1b !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Visceral</h4>
322
+ <p style='color: #dc2626 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_fat_percentage', 0):.1f}%</p>
323
+ </div>
324
+ <div style='padding: 16px; background: #eff6ff; border-radius: 6px; border: 1px solid #93c5fd;'>
325
+ <h4 style='color: #1e3a8a !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>V/S Ratio</h4>
326
+ <p style='color: #1e40af !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_subcutaneous_ratio', 0):.2f}</p>
327
+ </div>
328
+ </div>
329
+ """
330
+
331
+ if 'interpretation' in seg:
332
+ interp = seg['interpretation']
333
+ obesity_color = "#27ae60" if interp.get("obesity_risk") == "normal" else "#f39c12" if interp.get("obesity_risk") == "moderate" else "#e74c3c"
334
+ visceral_color = "#27ae60" if interp.get("visceral_risk") == "normal" else "#f39c12" if interp.get("visceral_risk") == "moderate" else "#e74c3c"
335
+
336
+ html += f"""
337
+ <div style='margin-top: 16px; padding: 16px; background: #f8f9fa; border-radius: 6px;'>
338
+ <h4 style='color: #333; font-size: 16px; font-weight: 600; margin-bottom: 8px;'>Risk Assessment</h4>
339
+ <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 12px;'>
340
+ <div>
341
+ <span style='color: #666; font-size: 14px;'>Obesity Risk:</span>
342
+ <span style='color: {obesity_color}; font-weight: 600; margin-left: 8px;'>{interp.get('obesity_risk', 'N/A').upper()}</span>
343
+ </div>
344
+ <div>
345
+ <span style='color: #666; font-size: 14px;'>Visceral Risk:</span>
346
+ <span style='color: {visceral_color}; font-weight: 600; margin-left: 8px;'>{interp.get('visceral_risk', 'N/A').upper()}</span>
347
+ </div>
348
+ </div>
349
+ """
350
+
351
+ if interp.get('recommendations'):
352
+ html += """
353
+ <div style='margin-top: 12px; padding-top: 12px; border-top: 1px solid #e0e0e0;'>
354
+ <h5 style='color: #333; font-size: 14px; font-weight: 600; margin-bottom: 8px;'>πŸ’‘ Recommendations</h5>
355
+ <ul style='margin: 0; padding-left: 20px; color: #555;'>
356
+ """
357
+ for rec in interp['recommendations']:
358
+ html += f"<li style='margin: 4px 0;'>{rec}</li>"
359
+ html += "</ul></div>"
360
+
361
+ html += "</div>"
362
+ html += "</div>"
363
+
364
+ elif 'tissue_distribution' in seg:
365
+ # X-ray tissue distribution
366
+ html += f"""
367
+ <div style='background: white;
368
+ padding: 20px;
369
+ margin: 16px 0;
370
+ border-radius: 8px;
371
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
372
+ <h3 style='color: #333;
373
+ font-size: 18px;
374
+ font-weight: 600;
375
+ margin-bottom: 12px;'>
376
+ 🦴 Tissue Distribution
377
+ </h3>
378
+ <div style='display: grid; grid-template-columns: repeat(auto-fit, minmax(120px, 1fr)); gap: 12px;'>
379
+ """
380
+
381
+ tissue_dist = seg['tissue_distribution']
382
+ tissue_icons = {
383
+ 'bone': '🦴', 'soft_tissue': 'πŸ”΄', 'air': '🌫️',
384
+ 'metal': 'βš™οΈ', 'fat': '🟑', 'fluid': 'πŸ’§'
385
+ }
386
+
387
+ tissue_colors = {
388
+ 'bone': '#fff7e6',
389
+ 'soft_tissue': '#fee',
390
+ 'air': '#e6f3ff',
391
+ 'metal': '#f0f0f0',
392
+ 'fat': '#fffbe6',
393
+ 'fluid': '#e6f7ff'
394
+ }
395
+
396
+ for tissue, percentage in tissue_dist.items():
397
+ if percentage > 0:
398
+ icon = tissue_icons.get(tissue, 'πŸ“')
399
+ bg_color = tissue_colors.get(tissue, '#f8f9fa')
400
+ html += f"""
401
+ <div style='text-align: center;
402
+ padding: 16px;
403
+ background: {bg_color};
404
+ border-radius: 8px;
405
+ transition: transform 0.2s;'>
406
+ <div style='font-size: 2.5em; margin-bottom: 8px;'>{icon}</div>
407
+ <div style='font-weight: 600;
408
+ color: #333;
409
+ font-size: 14px;
410
+ margin-bottom: 4px;'>
411
+ {tissue.replace('_', ' ').title()}
412
+ </div>
413
+ <div style='color: #0066cc;
414
+ font-size: 20px;
415
+ font-weight: 700;'>
416
+ {percentage:.1f}%
417
+ </div>
418
+ </div>
419
+ """
420
+
421
+ html += "</div>"
422
+
423
+ if seg.get('clinical_findings'):
424
+ html += """
425
+ <div style='margin-top: 16px;
426
+ padding: 16px;
427
+ background: #fff3cd;
428
+ border-left: 4px solid #ffc107;
429
+ border-radius: 4px;'>
430
+ <h4 style='color: #856404;
431
+ font-size: 16px;
432
+ font-weight: 600;
433
+ margin: 0 0 8px 0;'>
434
+ ⚠️ Clinical Findings
435
+ </h4>
436
+ <ul style='margin: 0; padding-left: 20px; color: #856404;'>
437
+ """
438
+ for finding in seg['clinical_findings']:
439
+ html += f"<li style='margin: 4px 0;'>{finding.get('description', 'Unknown finding')}</li>"
440
+ html += "</ul></div>"
441
+
442
+ html += "</div>"
443
+
444
+ # Quality Assessment
445
+ if 'quality_metrics' in results:
446
+ quality = results['quality_metrics']
447
+ quality_colors = {
448
+ 'excellent': '#27ae60',
449
+ 'good': '#27ae60',
450
+ 'fair': '#f39c12',
451
+ 'poor': '#e74c3c',
452
+ 'unknown': '#95a5a6'
453
+ }
454
+ q_color = quality_colors.get(quality.get('overall_quality', 'unknown'), '#95a5a6')
455
+
456
+ html += f"""
457
+ <div style='background: #f0f9ff;
458
+ padding: 20px;
459
+ margin: 16px 0;
460
+ border-radius: 8px;
461
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
462
+ <h3 style='color: #1e3a8a !important;
463
+ font-size: 18px;
464
+ font-weight: 600;
465
+ margin-bottom: 12px;'>
466
+ πŸ“Š Image Quality Assessment
467
+ </h3>
468
+ <div style='display: flex; align-items: center; gap: 16px;'>
469
+ <div>
470
+ <span style='color: #4b5563 !important; font-size: 14px;'>Overall Quality:</span>
471
+ <span style='color: {q_color} !important;
472
+ font-size: 18px;
473
+ font-weight: 700;
474
+ margin-left: 8px;'>
475
+ {quality.get('overall_quality', 'unknown').upper()}
476
+ </span>
477
+ </div>
478
+ </div>
479
+ """
480
+
481
+ if quality.get('issues'):
482
+ html += f"""
483
+ <div style='margin-top: 12px;
484
+ padding: 12px;
485
+ background: #fff3cd;
486
+ border-left: 3px solid #ffc107;
487
+ border-radius: 4px;'>
488
+ <strong style='color: #856404;'>Issues Detected:</strong>
489
+ <ul style='margin: 4px 0 0 0; padding-left: 20px; color: #856404;'>
490
+ """
491
+ for issue in quality['issues']:
492
+ html += f"<li style='margin: 2px 0;'>{issue}</li>"
493
+ html += "</ul></div>"
494
+
495
+ html += "</div>"
496
+
497
+ html += "</div>"
498
+ return html
499
+
500
+
501
+ # Create Gradio interface
502
+ with gr.Blocks(
503
+ title="Medical Image Analyzer - Wrapper Test",
504
+ theme=gr.themes.Soft(
505
+ primary_hue="blue",
506
+ secondary_hue="blue",
507
+ neutral_hue="slate",
508
+ text_size="md",
509
+ spacing_size="md",
510
+ radius_size="md",
511
+ ).set(
512
+ # Medical blue theme colors
513
+ body_background_fill="*neutral_950",
514
+ body_background_fill_dark="*neutral_950",
515
+ block_background_fill="*neutral_900",
516
+ block_background_fill_dark="*neutral_900",
517
+ border_color_primary="*primary_600",
518
+ border_color_primary_dark="*primary_600",
519
+ # Text colors for better contrast
520
+ body_text_color="*neutral_100",
521
+ body_text_color_dark="*neutral_100",
522
+ body_text_color_subdued="*neutral_300",
523
+ body_text_color_subdued_dark="*neutral_300",
524
+ # Button colors
525
+ button_primary_background_fill="*primary_600",
526
+ button_primary_background_fill_dark="*primary_600",
527
+ button_primary_text_color="white",
528
+ button_primary_text_color_dark="white",
529
+ ),
530
+ css="""
531
+ /* Medical blue theme with high contrast */
532
+ :root {
533
+ --medical-blue: #1e40af;
534
+ --medical-blue-light: #3b82f6;
535
+ --medical-blue-dark: #1e3a8a;
536
+ --text-primary: #f9fafb;
537
+ --text-secondary: #e5e7eb;
538
+ --bg-primary: #0f172a;
539
+ --bg-secondary: #1e293b;
540
+ --bg-tertiary: #334155;
541
+ }
542
+
543
+ /* Override default text colors for medical theme */
544
+ * {
545
+ color: var(--text-primary) !important;
546
+ }
547
+
548
+ /* Fix text contrast issues */
549
+ .svelte-12ioyct { color: var(--text-primary) !important; }
550
+
551
+ /* Override table cell colors for better readability */
552
+ td {
553
+ color: var(--text-primary) !important;
554
+ padding: 8px 12px !important;
555
+ }
556
+ td strong {
557
+ color: var(--text-primary) !important;
558
+ font-weight: 600 !important;
559
+ }
560
+
561
+ /* Fix upload component text and translate */
562
+ .wrap.svelte-12ioyct::before {
563
+ content: 'Drop file here' !important;
564
+ color: var(--text-primary) !important;
565
+ }
566
+ .wrap.svelte-12ioyct::after {
567
+ content: 'Click to upload' !important;
568
+ color: var(--text-secondary) !important;
569
+ }
570
+ .wrap.svelte-12ioyct span {
571
+ display: none !important; /* Hide German text */
572
+ }
573
+
574
+ /* Style the file upload area */
575
+ .file-upload {
576
+ border: 2px dashed var(--medical-blue-light) !important;
577
+ border-radius: 8px !important;
578
+ padding: 20px !important;
579
+ text-align: center !important;
580
+ background: var(--bg-secondary) !important;
581
+ transition: all 0.3s ease !important;
582
+ color: var(--text-primary) !important;
583
+ }
584
+
585
+ .file-upload:hover {
586
+ border-color: var(--medical-blue) !important;
587
+ background: var(--bg-tertiary) !important;
588
+ box-shadow: 0 0 20px rgba(59, 130, 246, 0.2) !important;
589
+ }
590
+
591
+ /* Make sure all text in tables is readable */
592
+ table {
593
+ width: 100%;
594
+ border-collapse: collapse;
595
+ }
596
+ th {
597
+ font-weight: 600;
598
+ background: var(--bg-tertiary);
599
+ padding: 8px 12px;
600
+ }
601
+
602
+ /* Ensure report text is readable with white background */
603
+ .medical-report {
604
+ background: #ffffff !important;
605
+ border: 2px solid var(--medical-blue-light) !important;
606
+ border-radius: 8px !important;
607
+ padding: 16px !important;
608
+ color: #1a1a1a !important;
609
+ }
610
+
611
+ .medical-report * {
612
+ color: #1f2937 !important; /* Dark gray text */
613
+ }
614
+
615
+ .medical-report h2 {
616
+ color: #1e40af !important; /* Medical blue for main heading */
617
+ }
618
+
619
+ .medical-report h3, .medical-report h4 {
620
+ color: #1e3a8a !important; /* Darker medical blue for subheadings */
621
+ }
622
+
623
+ .medical-report strong {
624
+ color: #374151 !important; /* Darker gray for labels */
625
+ }
626
+
627
+ .medical-report td {
628
+ color: #1f2937 !important; /* Ensure table text is dark */
629
+ }
630
+
631
+ /* Report sections with light blue background */
632
+ .medical-report > div {
633
+ background: #f0f9ff !important;
634
+ color: #1f2937 !important;
635
+ }
636
+
637
+ /* Medical blue accents for UI elements */
638
+ .gr-button-primary {
639
+ background: var(--medical-blue) !important;
640
+ border-color: var(--medical-blue) !important;
641
+ }
642
+
643
+ .gr-button-primary:hover {
644
+ background: var(--medical-blue-dark) !important;
645
+ border-color: var(--medical-blue-dark) !important;
646
+ }
647
+
648
+ /* Accordion styling with medical theme */
649
+ .gr-accordion {
650
+ border-color: var(--medical-blue-light) !important;
651
+ }
652
+
653
+ /* Slider track in medical blue */
654
+ input[type="range"]::-webkit-slider-track {
655
+ background: var(--bg-tertiary) !important;
656
+ }
657
+
658
+ input[type="range"]::-webkit-slider-thumb {
659
+ background: var(--medical-blue) !important;
660
+ }
661
+
662
+ /* Tab styling */
663
+ .gr-tab-item {
664
+ border-color: var(--medical-blue-light) !important;
665
+ }
666
+
667
+ .gr-tab-item.selected {
668
+ background: var(--medical-blue) !important;
669
+ color: white !important;
670
+ }
671
+ """
672
+ ) as demo:
673
+ gr.Markdown("""
674
+ # πŸ₯ Medical Image Analyzer
675
+
676
+ Supports **DICOM** (.dcm) and all image formats with automatic modality detection!
677
+ """)
678
+
679
+ with gr.Row():
680
+ with gr.Column(scale=1):
681
+ # File upload with custom styling - no file type restrictions
682
+ with gr.Group():
683
+ gr.Markdown("### πŸ“€ Upload Medical Image")
684
+ file_input = gr.File(
685
+ label="Select Medical Image File (.dcm, .dicom, IM_*, .png, .jpg, etc.)",
686
+ file_count="single",
687
+ type="filepath",
688
+ elem_classes="file-upload"
689
+ # Note: NO file_types parameter = accepts ALL files
690
+ )
691
+ gr.Markdown("""
692
+ <small style='color: #666;'>
693
+ Accepts: DICOM (.dcm, .dicom), Images (.png, .jpg, .jpeg, .tiff, .bmp),
694
+ and files without extensions (e.g., IM_0001, IM_0002, etc.)
695
+ </small>
696
+ """)
697
+
698
+ # Task selection
699
+ task = gr.Dropdown(
700
+ choices=[
701
+ ("🎯 Point Analysis", "analyze_point"),
702
+ ("πŸ”¬ Fat Segmentation (CT only)", "segment_fat"),
703
+ ("πŸ“Š Full Analysis", "full_analysis")
704
+ ],
705
+ value="full_analysis",
706
+ label="Analysis Task"
707
+ )
708
+
709
+ # ROI settings
710
+ with gr.Accordion("🎯 Region of Interest (ROI)", open=True):
711
+ roi_x = gr.Slider(0, 512, 256, label="X Position", step=1)
712
+ roi_y = gr.Slider(0, 512, 256, label="Y Position", step=1)
713
+ roi_radius = gr.Slider(5, 50, 10, label="Radius", step=1)
714
+
715
+ # Clinical context
716
+ with gr.Accordion("πŸ₯ Clinical Context", open=False):
717
+ symptoms = gr.CheckboxGroup(
718
+ choices=[
719
+ "Dyspnea", "Chest Pain", "Abdominal Pain",
720
+ "Trauma", "Obesity Screening", "Routine Check"
721
+ ],
722
+ label="Symptoms/Indication"
723
+ )
724
+
725
+ # Visualization options
726
+ with gr.Accordion("🎨 Visualization Options", open=True):
727
+ show_overlay = gr.Checkbox(
728
+ label="Show ROI/Segmentation Overlay",
729
+ value=True,
730
+ info="Display ROI circle or fat segmentation overlay on the image"
731
+ )
732
+
733
+ analyze_btn = gr.Button("πŸ”¬ Analyze", variant="primary", size="lg")
734
+
735
+ with gr.Column(scale=2):
736
+ # Results with tabs for different views
737
+ with gr.Tab("πŸ–ΌοΈ Original Image"):
738
+ image_display = gr.Image(label="Medical Image", type="numpy")
739
+
740
+ with gr.Tab("🎯 Overlay View"):
741
+ overlay_display = gr.Image(label="Image with Overlay", type="numpy")
742
+
743
+ file_info = gr.Textbox(label="File Information", lines=1)
744
+
745
+ with gr.Tab("πŸ“Š Visual Report"):
746
+ report_html = gr.HTML()
747
+
748
+ with gr.Tab("πŸ”§ JSON Output"):
749
+ json_output = gr.JSON(label="Structured Data for AI Agents")
750
+
751
+ # Examples and help
752
+ with gr.Row():
753
+ gr.Markdown("""
754
+ ### πŸ“ Supported Formats
755
+ - **DICOM**: Automatic HU value extraction and modality detection
756
+ - **PNG/JPG**: Interpreted as X-ray (CR) unless filename contains 'CT'
757
+ - **All Formats**: Automatic grayscale conversion
758
+ - **Files without extension**: Supported (e.g., IM_0001)
759
+
760
+ ### 🎯 Usage
761
+ 1. Upload a medical image file
762
+ 2. Select the desired analysis task
763
+ 3. For point analysis: Adjust the ROI position
764
+ 4. Click "Analyze"
765
+
766
+ ### πŸ’‘ Tips
767
+ - The ROI sliders will automatically adjust to your image size
768
+ - CT images show HU values, X-rays show intensity values
769
+ - Fat segmentation is only available for CT images
770
+ """)
771
+
772
+ # Connect the interface
773
+ analyze_btn.click(
774
+ fn=process_and_analyze,
775
+ inputs=[file_input, task, roi_x, roi_y, roi_radius, symptoms, show_overlay],
776
+ outputs=[image_display, file_info, report_html, json_output, overlay_display]
777
+ )
778
+
779
+ # Auto-update ROI limits when image is loaded
780
+ def update_roi_on_upload(file_obj):
781
+ if file_obj is None:
782
+ return gr.update(), gr.update()
783
+
784
+ try:
785
+ analyzer = MedicalImageAnalyzer()
786
+ _, _, metadata = analyzer.process_file(file_obj.name if hasattr(file_obj, 'name') else str(file_obj))
787
+
788
+ if 'shape' in metadata:
789
+ h, w = metadata['shape']
790
+ return gr.update(maximum=w-1, value=w//2), gr.update(maximum=h-1, value=h//2)
791
+ except Exception:
792
+ pass
793
+
794
+ return gr.update(), gr.update()
795
+
796
+ file_input.change(
797
+ fn=update_roi_on_upload,
798
+ inputs=[file_input],
799
+ outputs=[roi_x, roi_y]
800
+ )
801
+
802
+ # Add custom JavaScript to translate upload text
803
+ demo.load(
804
+ None,
805
+ None,
806
+ None,
807
+ js="""
808
+ () => {
809
+ // Wait for the page to load
810
+ setTimeout(() => {
811
+ // Find and replace German text in upload component
812
+ const uploadElements = document.querySelectorAll('.wrap.svelte-12ioyct');
813
+ uploadElements.forEach(el => {
814
+ if (el.textContent.includes('Datei hier ablegen')) {
815
+ el.innerHTML = el.innerHTML
816
+ .replace('Datei hier ablegen', 'Drop file here')
817
+ .replace('oder', 'or')
818
+ .replace('Hochladen', 'Click to upload');
819
+ }
820
+ });
821
+
822
+ // Also update the button text if it exists
823
+ const uploadButtons = document.querySelectorAll('button');
824
+ uploadButtons.forEach(btn => {
825
+ if (btn.textContent.includes('Hochladen')) {
826
+ btn.textContent = 'Upload';
827
+ }
828
+ });
829
+ }, 100);
830
+ }
831
+ """
832
+ )
833
+
834
+ if __name__ == "__main__":
835
+ demo.launch()
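
The pipeline the UI drives can also be exercised headlessly. This is a minimal sketch, assuming the `MedicalImageAnalyzer` API used above (`process_file()` returning `(pixel_array, display_array, metadata)` and `analyze_image()` returning a result dict); the file path is hypothetical.

```python
from gradio_medical_image_analyzer import MedicalImageAnalyzer

analyzer = MedicalImageAnalyzer()

# Read any supported file (DICOM, PNG/JPG, or extension-less IM_* files)
pixel_array, display_array, metadata = analyzer.process_file("IM_0001")

# Run the same full analysis the "Analyze" button triggers
results = analyzer.analyze_image(
    image=pixel_array,
    modality=metadata.get("modality", "CT"),
    task="full_analysis",
)
print(results.get("quality_metrics", {}).get("overall_quality", "unknown"))
```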
src/frontend/Example.svelte ADDED
@@ -0,0 +1,190 @@
1
+ <script lang="ts">
2
+ export let value: {
3
+ image?: any;
4
+ analysis?: any;
5
+ report?: string;
6
+ } | null;
7
+ export let type: "gallery" | "table";
8
+ export let selected = false;
9
+ </script>
10
+
11
+ <div
12
+ class="example-container"
13
+ class:table={type === "table"}
14
+ class:gallery={type === "gallery"}
15
+ class:selected
16
+ >
17
+ {#if value}
18
+ <div class="example-content">
19
+ {#if value.image}
20
+ <div class="image-preview">
21
+ {#if typeof value.image === 'string'}
22
+ <img src={value.image} alt="Medical scan example" />
23
+ {:else if value.image.url}
24
+ <img src={value.image.url} alt="Medical scan example" />
25
+ {:else}
26
+ <div class="placeholder">πŸ“· Image</div>
27
+ {/if}
28
+ </div>
29
+ {/if}
30
+
31
+ {#if value.analysis}
32
+ <div class="analysis-preview">
33
+ {#if value.analysis.modality}
34
+ <span class="modality-badge">{value.analysis.modality}</span>
35
+ {/if}
36
+
37
+ {#if value.analysis.point_analysis?.tissue_type}
38
+ <span class="tissue-type">
39
+ {value.analysis.point_analysis.tissue_type.icon || ''}
40
+ {value.analysis.point_analysis.tissue_type.type || 'Unknown'}
41
+ </span>
42
+ {/if}
43
+
44
+ {#if value.analysis.segmentation?.interpretation?.obesity_risk}
45
+ <span class="risk-badge risk-{value.analysis.segmentation.interpretation.obesity_risk}">
46
+ Risk: {value.analysis.segmentation.interpretation.obesity_risk}
47
+ </span>
48
+ {/if}
49
+ </div>
50
+ {/if}
51
+ </div>
52
+ {:else}
53
+ <div class="empty-example">No example</div>
54
+ {/if}
55
+ </div>
56
+
57
+ <style>
58
+ .example-container {
59
+ overflow: hidden;
60
+ border-radius: var(--radius-sm);
61
+ background: var(--background-fill-secondary);
62
+ position: relative;
63
+ transition: all 0.2s ease;
64
+ }
65
+
66
+ .example-container:hover {
67
+ transform: translateY(-2px);
68
+ box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
69
+ }
70
+
71
+ .example-container.selected {
72
+ border: 2px solid var(--color-accent);
73
+ }
74
+
75
+ .example-container.table {
76
+ display: flex;
77
+ align-items: center;
78
+ padding: 0.5rem;
79
+ gap: 0.5rem;
80
+ }
81
+
82
+ .example-container.gallery {
83
+ aspect-ratio: 1;
84
+ }
85
+
86
+ .example-content {
87
+ display: flex;
88
+ flex-direction: column;
89
+ height: 100%;
90
+ }
91
+
92
+ .table .example-content {
93
+ flex-direction: row;
94
+ align-items: center;
95
+ gap: 0.5rem;
96
+ }
97
+
98
+ .image-preview {
99
+ flex: 1;
100
+ overflow: hidden;
101
+ display: flex;
102
+ align-items: center;
103
+ justify-content: center;
104
+ background: var(--background-fill-primary);
105
+ }
106
+
107
+ .gallery .image-preview {
108
+ height: 70%;
109
+ }
110
+
111
+ .table .image-preview {
112
+ width: 60px;
113
+ height: 60px;
114
+ flex: 0 0 60px;
115
+ border-radius: var(--radius-sm);
116
+ }
117
+
118
+ .image-preview img {
119
+ width: 100%;
120
+ height: 100%;
121
+ object-fit: cover;
122
+ }
123
+
124
+ .placeholder {
125
+ color: var(--body-text-color-subdued);
126
+ font-size: 2rem;
127
+ opacity: 0.5;
128
+ }
129
+
130
+ .analysis-preview {
131
+ padding: 0.5rem;
132
+ display: flex;
133
+ flex-wrap: wrap;
134
+ gap: 0.25rem;
135
+ align-items: center;
136
+ font-size: 0.875rem;
137
+ }
138
+
139
+ .gallery .analysis-preview {
140
+ background: var(--background-fill-primary);
141
+ border-top: 1px solid var(--border-color-primary);
142
+ }
143
+
144
+ .modality-badge {
145
+ background: var(--color-accent);
146
+ color: white;
147
+ padding: 0.125rem 0.5rem;
148
+ border-radius: var(--radius-sm);
149
+ font-weight: bold;
150
+ font-size: 0.75rem;
151
+ }
152
+
153
+ .tissue-type {
154
+ background: var(--background-fill-secondary);
155
+ padding: 0.125rem 0.5rem;
156
+ border-radius: var(--radius-sm);
157
+ border: 1px solid var(--border-color-primary);
158
+ }
159
+
160
+ .risk-badge {
161
+ padding: 0.125rem 0.5rem;
162
+ border-radius: var(--radius-sm);
163
+ font-weight: bold;
164
+ font-size: 0.75rem;
165
+ }
166
+
167
+ .risk-normal {
168
+ background: #d4edda;
169
+ color: #155724;
170
+ }
171
+
172
+ .risk-moderate {
173
+ background: #fff3cd;
174
+ color: #856404;
175
+ }
176
+
177
+ .risk-high, .risk-severe {
178
+ background: #f8d7da;
179
+ color: #721c24;
180
+ }
181
+
182
+ .empty-example {
183
+ display: flex;
184
+ align-items: center;
185
+ justify-content: center;
186
+ height: 100%;
187
+ color: var(--body-text-color-subdued);
188
+ font-style: italic;
189
+ }
190
+ </style>
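
The example renderer expects the three-field `value` shape declared at the top of this file. A sketch of a matching payload (the image path and values are illustrative, not shipped assets):

```python
# Illustrative payload matching Example.svelte's `value` prop
example_value = {
    "image": {"url": "examples/ct_slice.png"},  # hypothetical path
    "analysis": {
        "modality": "CT",
        "point_analysis": {"tissue_type": {"icon": "🟡", "type": "fat"}},
        "segmentation": {"interpretation": {"obesity_risk": "moderate"}},
    },
    "report": "<div class='medical-report'>…</div>",
}
```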
src/frontend/Index.svelte ADDED
@@ -0,0 +1,526 @@
1
+ <script lang="ts">
2
+ import { Block, BlockLabel, Empty, IconButton, Upload, UploadText } from "@gradio/atoms";
3
+ import { Image } from "@gradio/icons";
4
+ import { StatusTracker } from "@gradio/statustracker";
5
+ import type { LoadingStatus } from "@gradio/statustracker";
6
+ import { _ } from "svelte-i18n";
7
+ import { tick } from "svelte";
8
+ import { Upload as UploadIcon } from "@gradio/icons";
9
+
10
+ export let elem_id = "";
11
+ export let elem_classes: string[] = [];
12
+ export let visible = true;
13
+ export let value: {
14
+ image?: any;
15
+ analysis?: any;
16
+ report?: string;
17
+ } | null = null;
18
+ export let label: string;
19
+ export let show_label: boolean;
20
+ export let show_download_button: boolean;
21
+ export let root: string;
22
+ export let proxy_url: null | string;
23
+ export let loading_status: LoadingStatus;
24
+ export let container = true;
25
+ export let scale: number | null = null;
26
+ export let min_width: number | undefined = undefined;
27
+ export let gradio: any;
28
+
29
+ // Analysis parameters
30
+ export let analysis_mode: "structured" | "visual" = "structured";
31
+ export let include_confidence = true;
32
+ export let include_reasoning = true;
33
+ export let modality: "CT" | "CR" | "DX" | "RX" | "DR" = "CT";
34
+ export let task: "analyze_point" | "segment_fat" | "full_analysis" = "full_analysis";
35
+
36
+ let dragging = false;
37
+ let pending_upload = false;
38
+ let uploaded_file: File | null = null;
39
+ let roi = { x: 256, y: 256, radius: 10 };
40
+ let show_roi = false;
41
+ let analysis_results: any = null;
42
+ let visual_report = "";
43
+
44
+ $: value = {
45
+ image: uploaded_file,
46
+ analysis: analysis_results,
47
+ report: visual_report
48
+ };
49
+
50
+ // DICOM and image loading
51
+ async function load_file(file: File) {
52
+ const file_url = URL.createObjectURL(file);
53
+ const file_ext = file.name.split('.').pop()?.toLowerCase() || '';
54
+
55
+ try {
56
+ // Always try DICOM first for files without extensions or with DICOM extensions
57
+ // This matches the backend behavior with force=True
58
+ if (!file_ext || file_ext === 'dcm' || file_ext === 'dicom' ||
59
+ file.type === 'application/dicom' || file.name.startsWith('IM_')) {
60
+ // For DICOM, we need server-side processing
61
+ // Send to backend for processing
62
+ const formData = new FormData();
63
+ formData.append('file', file);
64
+
65
+ // This would call the backend to process DICOM
66
+ const response = await fetch(`${root}/process_dicom`, {
67
+ method: 'POST',
68
+ body: formData
69
+ });
70
+
71
+ if (response.ok) {
72
+ const data = await response.json();
73
+ return data;
74
+ }
75
+ }
76
+
77
+ // Fallback to regular image handling
78
+ return {
79
+ url: file_url,
80
+ name: file.name,
81
+ size: file.size,
82
+ type: file.type || 'application/octet-stream'
83
+ };
84
+ } catch (error) {
85
+ console.error("Error loading file:", error);
86
+ throw error;
87
+ }
88
+ }
89
+
90
+ function handle_upload({ detail }: CustomEvent<File>) {
91
+ pending_upload = true;
92
+ const file = detail;
93
+
94
+ load_file(file).then((data) => {
95
+ uploaded_file = file;
96
+ pending_upload = false;
97
+
98
+ // Trigger analysis
99
+ if (gradio.dispatch) {
100
+ gradio.dispatch("upload", {
101
+ file: file,
102
+ data: data
103
+ });
104
+ }
105
+ }).catch((error) => {
106
+ console.error("Upload error:", error);
107
+ pending_upload = false;
108
+ });
109
+ }
110
+
111
+ function handle_clear() {
112
+ value = null;
113
+ uploaded_file = null;
114
+ analysis_results = null;
115
+ visual_report = "";
116
+ gradio.dispatch("clear");
117
+ }
118
+
119
+ function handle_roi_click(event: MouseEvent) {
120
+ if (!show_roi) return;
121
+
122
+ const rect = (event.target as HTMLElement).getBoundingClientRect();
123
+ roi.x = Math.round(event.clientX - rect.left);
124
+ roi.y = Math.round(event.clientY - rect.top);
125
+
126
+ // Use change event for ROI updates
127
+ if (gradio.dispatch) {
128
+ gradio.dispatch("change", { roi });
129
+ }
130
+ }
131
+
132
+ function create_visual_report(results: any) {
133
+ if (!results) return "";
134
+
135
+ let html = `<div class="medical-report">`;
136
+ html += `<h3>πŸ₯ Medical Image Analysis Report</h3>`;
137
+
138
+ // Basic info
139
+ html += `<div class="report-section">`;
140
+ html += `<h4>πŸ“‹ Basic Information</h4>`;
141
+ html += `<p><strong>Modality:</strong> ${results.modality || 'Unknown'}</p>`;
142
+ html += `<p><strong>Timestamp:</strong> ${results.timestamp || 'N/A'}</p>`;
143
+ html += `</div>`;
144
+
145
+ // Point analysis
146
+ if (results.point_analysis) {
147
+ const pa = results.point_analysis;
148
+ html += `<div class="report-section">`;
149
+ html += `<h4>🎯 Point Analysis</h4>`;
150
+ html += `<p><strong>Location:</strong> (${pa.location?.x}, ${pa.location?.y})</p>`;
151
+
152
+ if (results.modality === 'CT') {
153
+ html += `<p><strong>HU Value:</strong> ${pa.hu_value?.toFixed(1) || 'N/A'}</p>`;
154
+ } else {
155
+ html += `<p><strong>Intensity:</strong> ${pa.intensity?.toFixed(3) || 'N/A'}</p>`;
156
+ }
157
+
158
+ if (pa.tissue_type) {
159
+ html += `<p><strong>Tissue Type:</strong> ${pa.tissue_type.icon || ''} ${pa.tissue_type.type || 'Unknown'}</p>`;
160
+ }
161
+
162
+ if (include_confidence && pa.confidence !== undefined) {
163
+ html += `<p><strong>Confidence:</strong> ${pa.confidence}</p>`;
164
+ }
165
+
166
+ if (include_reasoning && pa.reasoning) {
167
+ html += `<p class="reasoning">πŸ’­ ${pa.reasoning}</p>`;
168
+ }
169
+
170
+ html += `</div>`;
171
+ }
172
+
173
+ // Segmentation results
174
+ if (results.segmentation?.statistics) {
175
+ const stats = results.segmentation.statistics;
176
+
177
+ if (results.modality === 'CT' && stats.total_fat_percentage !== undefined) {
178
+ html += `<div class="report-section">`;
179
+ html += `<h4>πŸ”¬ Fat Segmentation</h4>`;
180
+ html += `<div class="stats-grid">`;
181
+ html += `<div><strong>Total Fat:</strong> ${stats.total_fat_percentage.toFixed(1)}%</div>`;
182
+ html += `<div><strong>Subcutaneous:</strong> ${stats.subcutaneous_fat_percentage.toFixed(1)}%</div>`;
183
+ html += `<div><strong>Visceral:</strong> ${stats.visceral_fat_percentage.toFixed(1)}%</div>`;
184
+ html += `<div><strong>V/S Ratio:</strong> ${stats.visceral_subcutaneous_ratio.toFixed(2)}</div>`;
185
+ html += `</div>`;
186
+
187
+ if (results.segmentation.interpretation) {
188
+ const interp = results.segmentation.interpretation;
189
+ html += `<div class="interpretation">`;
190
+ html += `<p><strong>Obesity Risk:</strong> <span class="risk-${interp.obesity_risk}">${interp.obesity_risk.toUpperCase()}</span></p>`;
191
+ html += `<p><strong>Visceral Risk:</strong> <span class="risk-${interp.visceral_risk}">${interp.visceral_risk.toUpperCase()}</span></p>`;
192
+
193
+ if (interp.recommendations?.length > 0) {
194
+ html += `<p><strong>Recommendations:</strong></p>`;
195
+ html += `<ul>`;
196
+ interp.recommendations.forEach((rec: string) => {
197
+ html += `<li>${rec}</li>`;
198
+ });
199
+ html += `</ul>`;
200
+ }
201
+ html += `</div>`;
202
+ }
203
+ html += `</div>`;
204
+ } else if (results.segmentation.tissue_distribution) {
205
+ html += `<div class="report-section">`;
206
+ html += `<h4>🦴 Tissue Distribution</h4>`;
207
+ html += `<div class="tissue-grid">`;
208
+
209
+ const tissues = results.segmentation.tissue_distribution;
210
+ const icons: Record<string, string> = {
211
+ bone: '🦴',
212
+ soft_tissue: 'πŸ”΄',
213
+ air: '🌫️',
214
+ metal: 'βš™οΈ',
215
+ fat: '🟑',
216
+ fluid: 'πŸ’§'
217
+ };
218
+
219
+ Object.entries(tissues).forEach(([tissue, percentage]) => {
220
+ if (percentage as number > 0) {
221
+ html += `<div class="tissue-item">`;
222
+ html += `<div class="tissue-icon">${icons[tissue] || 'πŸ“'}</div>`;
223
+ html += `<div class="tissue-name">${tissue.replace('_', ' ')}</div>`;
224
+ html += `<div class="tissue-percentage">${(percentage as number).toFixed(1)}%</div>`;
225
+ html += `</div>`;
226
+ }
227
+ });
228
+
229
+ html += `</div>`;
230
+
231
+ if (results.segmentation.clinical_findings?.length > 0) {
232
+ html += `<div class="clinical-findings">`;
233
+ html += `<p><strong>⚠️ Clinical Findings:</strong></p>`;
234
+ html += `<ul>`;
235
+ results.segmentation.clinical_findings.forEach((finding: any) => {
236
+ html += `<li>${finding.description} (Confidence: ${finding.confidence})</li>`;
237
+ });
238
+ html += `</ul>`;
239
+ html += `</div>`;
240
+ }
241
+
242
+ html += `</div>`;
243
+ }
244
+ }
245
+
246
+ // Quality metrics
247
+ if (results.quality_metrics) {
248
+ const quality = results.quality_metrics;
249
+ html += `<div class="report-section">`;
250
+ html += `<h4>πŸ“Š Image Quality</h4>`;
251
+ html += `<p><strong>Overall Quality:</strong> <span class="quality-${quality.overall_quality}">${quality.overall_quality?.toUpperCase() || 'UNKNOWN'}</span></p>`;
252
+
253
+ if (quality.issues?.length > 0) {
254
+ html += `<p><strong>Issues:</strong> ${quality.issues.join(', ')}</p>`;
255
+ }
256
+
257
+ html += `</div>`;
258
+ }
259
+
260
+ html += `</div>`;
261
+
262
+ return html;
263
+ }
264
+
265
+ // Update visual report when analysis changes
266
+ $: if (analysis_results) {
267
+ visual_report = create_visual_report(analysis_results);
268
+ }
269
+ </script>
270
+
271
+ <Block
272
+ {visible}
273
+ {elem_id}
274
+ {elem_classes}
275
+ {container}
276
+ {scale}
277
+ {min_width}
278
+ allow_overflow={false}
279
+ padding={true}
280
+ >
281
+ <StatusTracker
282
+ autoscroll={gradio.autoscroll}
283
+ i18n={gradio.i18n}
284
+ {...loading_status}
285
+ />
286
+
287
+ <BlockLabel
288
+ {show_label}
289
+ Icon={Image}
290
+ label={label || "Medical Image Analyzer"}
291
+ />
292
+
293
+ {#if value === null || !uploaded_file}
294
+ <Upload
295
+ on:load={handle_upload}
296
+ filetype="*"
297
+ {root}
298
+ {dragging}
299
+ >
300
+ <UploadText i18n={gradio.i18n} type="file">
301
+ Drop Medical Image File Here - or - Click to Upload<br/>
302
+ <span style="font-size: 0.9em; color: var(--body-text-color-subdued);">
303
+ Supports: DICOM (.dcm), Images (.png, .jpg), and files without extensions (IM_0001, etc.)
304
+ </span>
305
+ </UploadText>
306
+ </Upload>
307
+ {:else}
308
+ <div class="analyzer-container">
309
+ <div class="controls">
310
+ <IconButton Icon={UploadIcon} on:click={handle_clear} />
311
+
312
+ <select bind:value={modality} class="modality-select">
313
+ <option value="CT">CT</option>
314
+ <option value="CR">CR (X-Ray)</option>
315
+ <option value="DX">DX (X-Ray)</option>
316
+ <option value="RX">RX (X-Ray)</option>
317
+ <option value="DR">DR (X-Ray)</option>
318
+ </select>
319
+
320
+ <select bind:value={task} class="task-select">
321
+ <option value="analyze_point">Point Analysis</option>
322
+ <option value="segment_fat">Fat Segmentation (CT)</option>
323
+ <option value="full_analysis">Full Analysis</option>
324
+ </select>
325
+
326
+ <label class="roi-toggle">
327
+ <input type="checkbox" bind:checked={show_roi} />
328
+ Show ROI
329
+ </label>
330
+ </div>
331
+
332
+ <div class="image-container" on:click={handle_roi_click}>
333
+ {#if uploaded_file}
334
+ <img src={URL.createObjectURL(uploaded_file)} alt="Medical scan" />
335
+
336
+ {#if show_roi}
337
+ <div
338
+ class="roi-marker"
339
+ style="left: {roi.x}px; top: {roi.y}px; width: {roi.radius * 2}px; height: {roi.radius * 2}px;"
340
+ />
341
+ {/if}
342
+ {/if}
343
+ </div>
344
+
345
+ {#if visual_report}
346
+ <div class="report-container">
347
+ {@html visual_report}
348
+ </div>
349
+ {/if}
350
+
351
+ {#if analysis_mode === "structured" && analysis_results}
352
+ <details class="json-output">
353
+ <summary>JSON Output (for AI Agents)</summary>
354
+ <pre>{JSON.stringify(analysis_results, null, 2)}</pre>
355
+ </details>
356
+ {/if}
357
+ </div>
358
+ {/if}
359
+ </Block>
360
+
361
+ <style>
362
+ .analyzer-container {
363
+ display: flex;
364
+ flex-direction: column;
365
+ gap: 1rem;
366
+ }
367
+
368
+ .controls {
369
+ display: flex;
370
+ gap: 0.5rem;
371
+ align-items: center;
372
+ flex-wrap: wrap;
373
+ }
374
+
375
+ .modality-select, .task-select {
376
+ padding: 0.5rem;
377
+ border: 1px solid var(--border-color-primary);
378
+ border-radius: var(--radius-sm);
379
+ background: var(--background-fill-primary);
380
+ }
381
+
382
+ .roi-toggle {
383
+ display: flex;
384
+ align-items: center;
385
+ gap: 0.5rem;
386
+ cursor: pointer;
387
+ }
388
+
389
+ .image-container {
390
+ position: relative;
391
+ overflow: hidden;
392
+ border: 1px solid var(--border-color-primary);
393
+ border-radius: var(--radius-sm);
394
+ cursor: crosshair;
395
+ }
396
+
397
+ .image-container img {
398
+ width: 100%;
399
+ height: auto;
400
+ display: block;
401
+ }
402
+
403
+ .roi-marker {
404
+ position: absolute;
405
+ border: 2px solid #ff0000;
406
+ border-radius: 50%;
407
+ pointer-events: none;
408
+ transform: translate(-50%, -50%);
409
+ box-shadow: 0 0 0 1px rgba(255, 255, 255, 0.5);
410
+ }
411
+
412
+ .report-container {
413
+ background: var(--background-fill-secondary);
414
+ border: 1px solid var(--border-color-primary);
415
+ border-radius: var(--radius-sm);
416
+ padding: 1rem;
417
+ overflow-x: auto;
418
+ }
419
+
420
+ :global(.medical-report) {
421
+ font-family: var(--font);
422
+ color: var(--body-text-color);
423
+ }
424
+
425
+ :global(.medical-report h3) {
426
+ color: var(--body-text-color);
427
+ border-bottom: 2px solid var(--color-accent);
428
+ padding-bottom: 0.5rem;
429
+ margin-bottom: 1rem;
430
+ }
431
+
432
+ :global(.medical-report h4) {
433
+ color: var(--body-text-color);
434
+ margin-top: 1rem;
435
+ margin-bottom: 0.5rem;
436
+ }
437
+
438
+ :global(.report-section) {
439
+ background: var(--background-fill-primary);
440
+ padding: 1rem;
441
+ border-radius: var(--radius-sm);
442
+ margin-bottom: 1rem;
443
+ }
444
+
445
+ :global(.stats-grid), :global(.tissue-grid) {
446
+ display: grid;
447
+ grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
448
+ gap: 0.5rem;
449
+ margin-top: 0.5rem;
450
+ }
451
+
452
+ :global(.tissue-item) {
453
+ text-align: center;
454
+ padding: 0.5rem;
455
+ background: var(--background-fill-secondary);
456
+ border-radius: var(--radius-sm);
457
+ }
458
+
459
+ :global(.tissue-icon) {
460
+ font-size: 2rem;
461
+ margin-bottom: 0.25rem;
462
+ }
463
+
464
+ :global(.tissue-name) {
465
+ font-weight: bold;
466
+ text-transform: capitalize;
467
+ }
468
+
469
+ :global(.tissue-percentage) {
470
+ color: var(--color-accent);
471
+ font-size: 1.2rem;
472
+ font-weight: bold;
473
+ }
474
+
475
+ :global(.reasoning) {
476
+ font-style: italic;
477
+ color: var(--body-text-color-subdued);
478
+ margin-top: 0.5rem;
479
+ }
480
+
481
+ :global(.interpretation) {
482
+ margin-top: 1rem;
483
+ padding: 0.5rem;
484
+ background: var(--background-fill-secondary);
485
+ border-radius: var(--radius-sm);
486
+ }
487
+
488
+ :global(.risk-normal) { color: #27ae60; }
489
+ :global(.risk-moderate) { color: #f39c12; }
490
+ :global(.risk-high), :global(.risk-severe) { color: #e74c3c; }
491
+
492
+ :global(.quality-excellent), :global(.quality-good) { color: #27ae60; }
493
+ :global(.quality-fair) { color: #f39c12; }
494
+ :global(.quality-poor) { color: #e74c3c; }
495
+
496
+ :global(.clinical-findings) {
497
+ margin-top: 1rem;
498
+ padding: 0.5rem;
499
+ background: #fff3cd;
500
+ border-left: 4px solid #ffc107;
501
+ border-radius: var(--radius-sm);
502
+ }
503
+
504
+ .json-output {
505
+ margin-top: 1rem;
506
+ background: var(--background-fill-secondary);
507
+ border: 1px solid var(--border-color-primary);
508
+ border-radius: var(--radius-sm);
509
+ padding: 1rem;
510
+ }
511
+
512
+ .json-output summary {
513
+ cursor: pointer;
514
+ font-weight: bold;
515
+ margin-bottom: 0.5rem;
516
+ }
517
+
518
+ .json-output pre {
519
+ margin: 0;
520
+ overflow-x: auto;
521
+ font-size: 0.875rem;
522
+ background: var(--background-fill-primary);
523
+ padding: 0.5rem;
524
+ border-radius: var(--radius-sm);
525
+ }
526
+ </style>
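
`create_visual_report()` in this file walks a nested result object; the field names below are taken from the template above, with illustrative values only:

```python
# Field names mirror what create_visual_report() reads; values are illustrative
analysis_results = {
    "modality": "CT",
    "timestamp": "2025-06-01T12:00:00",
    "point_analysis": {
        "location": {"x": 256, "y": 256},
        "hu_value": -75.0,
        "tissue_type": {"icon": "🟡", "type": "fat"},
        "confidence": 0.92,
        "reasoning": "HU value falls in the typical adipose range",
    },
    "segmentation": {
        "statistics": {
            "total_fat_percentage": 31.5,
            "subcutaneous_fat_percentage": 22.0,
            "visceral_fat_percentage": 9.5,
            "visceral_subcutaneous_ratio": 0.43,
        },
        "interpretation": {
            "obesity_risk": "moderate",
            "visceral_risk": "normal",
            "recommendations": ["Consider follow-up body composition scan"],
        },
    },
    "quality_metrics": {"overall_quality": "good", "issues": []},
}
```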
src/frontend/gradio.config.js ADDED
@@ -0,0 +1,9 @@
1
+ export default {
2
+ plugins: [],
3
+ svelte: {
4
+ preprocess: [],
5
+ },
6
+ build: {
7
+ target: "modules",
8
+ },
9
+ };
src/frontend/package-lock.json ADDED
The diff for this file is too large to render. See raw diff
 
src/frontend/package.json ADDED
@@ -0,0 +1,34 @@
1
+ {
2
+ "name": "gradio_medical_image_analyzer",
3
+ "version": "0.0.1",
4
+ "description": "Gradio Medical Image Analyzer custom component",
5
+ "type": "module",
6
+ "author": "",
7
+ "license": "MIT",
8
+ "private": false,
9
+ "exports": {
10
+ ".": {
11
+ "gradio": "./Index.svelte",
12
+ "svelte": "./dist/Index.svelte",
13
+ "types": "./dist/Index.svelte.d.ts"
14
+ },
15
+ "./example": {
16
+ "gradio": "./Example.svelte",
17
+ "svelte": "./dist/Example.svelte",
18
+ "types": "./dist/Example.svelte.d.ts"
19
+ },
20
+ "./package.json": "./package.json"
21
+ },
22
+ "dependencies": {
23
+ "@gradio/atoms": "0.16.1",
24
+ "@gradio/icons": "0.12.0",
25
+ "@gradio/statustracker": "0.10.12",
26
+ "@gradio/utils": "0.10.2"
27
+ },
28
+ "devDependencies": {
29
+ "@gradio/preview": "^0.13.1"
30
+ },
31
+ "peerDependencies": {
32
+ "svelte": "^4.0.0"
33
+ }
34
+ }
src/frontend/tsconfig.json ADDED
@@ -0,0 +1,13 @@
1
+ {
2
+ "compilerOptions": {
3
+ "target": "ES2020",
4
+ "module": "ES2020",
5
+ "moduleResolution": "node",
6
+ "esModuleInterop": true,
7
+ "skipLibCheck": true,
8
+ "strict": true,
9
+ "types": ["svelte"]
10
+ },
11
+ "include": ["**/*.ts", "**/*.js", "**/*.svelte"],
12
+ "exclude": ["node_modules"]
13
+ }
src/pyproject.toml ADDED
@@ -0,0 +1,60 @@
1
+ [build-system]
2
+ requires = [
3
+ "hatchling",
4
+ "hatch-requirements-txt",
5
+ "hatch-fancy-pypi-readme>=22.5.0",
6
+ ]
7
+ build-backend = "hatchling.build"
8
+
9
+ [project]
10
+ name = "gradio_medical_image_analyzer"
11
+ version = "0.0.1"
12
+ description = "AI-agent optimized medical image analysis component for Gradio"
13
+ readme = "README.md"
14
+ license = "Apache-2.0"
15
+ requires-python = ">=3.8"
16
+ authors = [{ name = "Markus Clauss Vetsuisse Uni Zurich", email = "markus@data-and-ai-dude.ch" }]
17
+ keywords = [
18
+ "gradio-custom-component",
19
+ "medical-imaging",
20
+ "ai-agents",
21
+ "image-analysis",
22
+ "gradio-template-MedicalImageAnalyzer"
23
+ ]
24
+ # Add dependencies here
25
+ dependencies = [
26
+ "gradio>=4.0,<6.0",
27
+ "numpy>=1.21.0",
28
+ "pillow>=10.0.0",
29
+ "scikit-image>=0.19.0",
30
+ "scipy>=1.7.0",
31
+ "opencv-python>=4.5.0",
32
+ "pydicom>=2.3.0"
33
+ ]
34
+ classifiers = [
35
+ 'Development Status :: 3 - Alpha',
36
+ 'Operating System :: OS Independent',
37
+ 'Programming Language :: Python :: 3',
38
+ 'Programming Language :: Python :: 3 :: Only',
39
+ 'Programming Language :: Python :: 3.8',
40
+ 'Programming Language :: Python :: 3.9',
41
+ 'Programming Language :: Python :: 3.10',
42
+ 'Programming Language :: Python :: 3.11',
43
+ 'Topic :: Scientific/Engineering :: Medical Science Apps.',
44
+ 'Topic :: Scientific/Engineering :: Artificial Intelligence',
45
+ ]
46
+
47
+ [project.urls]
48
+ repository = "https://github.com/datadudech/gradio-medical-image-analyzer"
49
+ documentation = "https://github.com/datadudech/gradio-medical-image-analyzer#readme"
50
+ issues = "https://github.com/datadudech/gradio-medical-image-analyzer/issues"
51
+ space = "https://huggingface.co/spaces/abdullahisamarkus/medical-image-analyzer"
52
+
53
+ [project.optional-dependencies]
54
+ dev = ["build", "twine"]
55
+
56
+ [tool.hatch.build]
57
+ artifacts = ["/backend/gradio_medical_image_analyzer/templates", "*.pyi"]
58
+
59
+ [tool.hatch.build.targets.wheel]
60
+ packages = ["/backend/gradio_medical_image_analyzer"]
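
With this packaging config, a local editable install (e.g. `pip install -e .` run from `src/`) is enough to import the backend. The constructor parameters below are the ones used elsewhere in this commit (see `test_integration.py`):

```python
from gradio_medical_image_analyzer import MedicalImageAnalyzer

analyzer = MedicalImageAnalyzer(
    analysis_mode="structured",
    include_confidence=True,
    include_reasoning=True,
)
```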
src/test_im_files.py ADDED
@@ -0,0 +1,106 @@
1
+ #!/usr/bin/env python3
2
+ """Test handling of IM_0001 and other files without extensions"""
3
+
4
+ import sys
5
+ import os
6
+ import tempfile
7
+ import shutil
8
+
9
+ # Add backend to path
10
+ sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'backend'))
11
+
12
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
13
+
14
+ def test_im_files():
15
+ """Test files without extensions like IM_0001"""
16
+
17
+ print("πŸ₯ Testing Medical Image Analyzer with IM_0001 Files")
18
+ print("=" * 50)
19
+
20
+ analyzer = MedicalImageAnalyzer()
21
+
22
+ # Create test directory
23
+ temp_dir = tempfile.mkdtemp()
24
+
25
+ try:
26
+ # Look for a test DICOM file
27
+ test_dicom = None
28
+ possible_paths = [
29
+ "../vetdicomviewer/tools/data/CTImage.dcm",
30
+ "tools/data/CTImage.dcm",
31
+ "../tests/data/CTImage.dcm"
32
+ ]
33
+
34
+ for path in possible_paths:
35
+ if os.path.exists(path):
36
+ test_dicom = path
37
+ break
38
+
39
+ if test_dicom:
40
+ # Create IM_0001 file (copy DICOM without extension)
41
+ im_file = os.path.join(temp_dir, "IM_0001")
42
+ shutil.copy(test_dicom, im_file)
43
+
44
+ print(f"πŸ“ Created test file: IM_0001 (from {os.path.basename(test_dicom)})")
45
+ print(f"πŸ“ Location: {im_file}")
46
+ print(f"πŸ“ Size: {os.path.getsize(im_file)} bytes")
47
+ print()
48
+
49
+ # Test processing
50
+ print("πŸ”¬ Processing IM_0001 file...")
51
+ try:
52
+ pixel_array, display_array, metadata = analyzer.process_file(im_file)
53
+
54
+ print("βœ… Successfully processed IM_0001!")
55
+ print(f" - Shape: {pixel_array.shape}")
56
+ print(f" - Modality: {metadata.get('modality', 'Unknown')}")
57
+ print(f" - File type: {metadata.get('file_type', 'Unknown')}")
58
+
59
+ if metadata.get('file_type') == 'DICOM':
60
+ print(f" - Patient: {metadata.get('patient_name', 'Anonymous')}")
61
+ print(f" - Study date: {metadata.get('study_date', 'N/A')}")
62
+
63
+ if 'window_center' in metadata and 'window_width' in metadata:
64
+ print(f" - Window Center: {metadata['window_center']:.0f}")
65
+ print(f" - Window Width: {metadata['window_width']:.0f}")
66
+
67
+ # Perform analysis
68
+ print("\n🎯 Performing point analysis...")
69
+ result = analyzer.analyze_image(
70
+ image=pixel_array,
71
+ modality=metadata.get('modality', 'CT'),
72
+ task="analyze_point",
73
+ roi={"x": pixel_array.shape[1]//2, "y": pixel_array.shape[0]//2, "radius": 10}
74
+ )
75
+
76
+ if 'point_analysis' in result:
77
+ pa = result['point_analysis']
78
+ print(f" - HU Value: {pa.get('hu_value', 'N/A')}")
79
+ tissue = pa.get('tissue_type', {})
80
+ print(f" - Tissue: {tissue.get('icon', '')} {tissue.get('type', 'Unknown')}")
81
+ print(f" - Confidence: {pa.get('confidence', 'N/A')}")
82
+
83
+ except Exception as e:
84
+ print(f"❌ Error processing IM_0001: {str(e)}")
85
+ import traceback
86
+ traceback.print_exc()
87
+ else:
88
+ print("⚠️ No test DICOM file found to create IM_0001")
89
+ print(" You can manually test by renaming any DICOM file to IM_0001")
90
+
91
+ print("\n" + "=" * 50)
92
+ print("πŸ“ Implementation Details:")
93
+ print("1. Backend uses: pydicom.dcmread(file_path, force=True)")
94
+ print("2. This allows reading files without extensions")
95
+ print("3. The force=True parameter tells pydicom to try reading as DICOM")
96
+ print("4. If DICOM reading fails, it falls back to regular image processing")
97
+ print("5. Frontend accepts all file types (no restrictions)")
98
+
99
+ finally:
100
+ # Cleanup
101
+ shutil.rmtree(temp_dir)
102
+
103
+ print("\nβœ… The medical_image_analyzer fully supports IM_0001 files!")
104
+
105
+ if __name__ == "__main__":
106
+ test_im_files()
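
The "Implementation Details" printed above describe a try-DICOM-first strategy. A self-contained sketch of that pattern follows; the helper name is illustrative, not the component's actual function:

```python
import numpy as np
import pydicom
from PIL import Image

def read_medical_file(path: str):
    """Try DICOM first (force=True handles extension-less files like IM_0001),
    then fall back to a regular image read."""
    try:
        ds = pydicom.dcmread(path, force=True)
        # Raises if the file has no usable DICOM pixel data
        return ds.pixel_array.astype(np.float32), "DICOM"
    except Exception:
        img = Image.open(path).convert("L")  # grayscale, as the app expects
        return np.asarray(img, dtype=np.float32), "Image"
```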
src/test_integration.py ADDED
@@ -0,0 +1,185 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ Test script for Medical Image Analyzer integration
4
+ """
5
+
6
+ import numpy as np
7
+ import sys
8
+ import os
9
+
10
+ # Add backend to path
11
+ sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'backend'))
12
+
13
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
14
+
15
+ def test_ct_analysis():
16
+ """Test CT image analysis with fat segmentation"""
17
+ print("Testing CT Analysis...")
18
+
19
+ # Create a simulated CT image with different tissue types
20
+ ct_image = np.zeros((512, 512), dtype=np.float32)
21
+
22
+ # Add different tissue regions based on HU values
23
+ # Air region (-1000 HU)
24
+ ct_image[50:150, 50:150] = -1000
25
+
26
+ # Fat region (-75 HU)
27
+ ct_image[200:300, 200:300] = -75
28
+
29
+ # Soft tissue region (40 HU)
30
+ ct_image[350:450, 350:450] = 40
31
+
32
+ # Bone region (300 HU)
33
+ ct_image[100:200, 350:450] = 300
34
+
35
+ # Create analyzer
36
+ analyzer = MedicalImageAnalyzer(
37
+ analysis_mode="structured",
38
+ include_confidence=True,
39
+ include_reasoning=True
40
+ )
41
+
42
+ # Test point analysis
43
+ print("\n1. Testing point analysis...")
44
+ result = analyzer.process_image(
45
+ image=ct_image,
46
+ modality="CT",
47
+ task="analyze_point",
48
+ roi={"x": 250, "y": 250, "radius": 10}
49
+ )
50
+
51
+ print(f" HU Value: {result.get('point_analysis', {}).get('hu_value', 'N/A')}")
52
+ print(f" Tissue Type: {result.get('point_analysis', {}).get('tissue_type', {}).get('type', 'N/A')}")
53
+ print(f" Confidence: {result.get('point_analysis', {}).get('confidence', 'N/A')}")
54
+
55
+ # Test fat segmentation
56
+ print("\n2. Testing fat segmentation...")
57
+ result = analyzer.process_image(
58
+ image=ct_image,
59
+ modality="CT",
60
+ task="segment_fat"
61
+ )
62
+
63
+ if "error" in result.get("segmentation", {}):
64
+ print(f" Error: {result['segmentation']['error']}")
65
+ else:
66
+ stats = result.get("segmentation", {}).get("statistics", {})
67
+ print(f" Total Fat %: {stats.get('total_fat_percentage', 'N/A')}")
68
+ print(f" Subcutaneous Fat %: {stats.get('subcutaneous_fat_percentage', 'N/A')}")
69
+ print(f" Visceral Fat %: {stats.get('visceral_fat_percentage', 'N/A')}")
70
+
71
+ # Test full analysis
72
+ print("\n3. Testing full analysis...")
73
+ result = analyzer.process_image(
74
+ image=ct_image,
75
+ modality="CT",
76
+ task="full_analysis",
77
+ clinical_context="Patient with obesity concerns"
78
+ )
79
+
80
+ print(f" Modality: {result.get('modality', 'N/A')}")
81
+ print(f" Quality: {result.get('quality_metrics', {}).get('overall_quality', 'N/A')}")
82
+ if "clinical_correlation" in result:
83
+ print(f" Clinical Note: {result['clinical_correlation'].get('summary', 'N/A')}")
84
+
85
+
86
+ def test_xray_analysis():
87
+ """Test X-ray image analysis"""
88
+ print("\n\nTesting X-Ray Analysis...")
89
+
90
+ # Create a simulated X-ray image with different intensities
91
+ xray_image = np.zeros((512, 512), dtype=np.float32)
92
+
93
+ # Background soft tissue
94
+ xray_image[:, :] = 0.4
95
+
96
+ # Bone region (high intensity)
97
+ xray_image[100:400, 200:250] = 0.8 # Femur-like structure
98
+
99
+ # Air/lung region (low intensity)
100
+ xray_image[50:200, 50:200] = 0.1
101
+ xray_image[50:200, 312:462] = 0.1
102
+
103
+ # Metal implant (very high intensity)
104
+ xray_image[250:280, 220:230] = 0.95
105
+
106
+ # Create analyzer
107
+ analyzer = MedicalImageAnalyzer(
108
+ analysis_mode="structured",
109
+ include_confidence=True,
110
+ include_reasoning=True
111
+ )
112
+
113
+ # Test full X-ray analysis
114
+ print("\n1. Testing full X-ray analysis...")
115
+ result = analyzer.process_image(
116
+ image=xray_image,
117
+ modality="X-Ray",
118
+ task="full_analysis"
119
+ )
120
+
121
+ if "segmentation" in result:
122
+ segments = result["segmentation"].get("segments", {})
123
+ print(" Detected tissues:")
124
+ for tissue, info in segments.items():
125
+ if info.get("present", False):
126
+ print(f" - {tissue}: {info.get('percentage', 0):.1f}%")
127
+
128
+ # Check for findings
129
+ findings = result["segmentation"].get("clinical_findings", [])
130
+ if findings:
131
+ print(" Clinical findings:")
132
+ for finding in findings:
133
+ print(f" - {finding.get('description', 'Unknown finding')}")
134
+
135
+ # Test point analysis on X-ray
136
+ print("\n2. Testing X-ray point analysis...")
137
+ result = analyzer.process_image(
138
+ image=xray_image,
139
+ modality="X-Ray",
140
+ task="analyze_point",
141
+ roi={"x": 225, "y": 265, "radius": 5} # Metal implant region
142
+ )
143
+
144
+ point_analysis = result.get("point_analysis", {})
145
+ print(f" Tissue Type: {point_analysis.get('tissue_type', {}).get('type', 'N/A')}")
146
+ print(f" Intensity: {point_analysis.get('intensity', 'N/A')}")
147
+
148
+
149
+ def test_error_handling():
150
+ """Test error handling"""
151
+ print("\n\nTesting Error Handling...")
152
+
153
+ analyzer = MedicalImageAnalyzer()
154
+
155
+ # Test with invalid image
156
+ print("\n1. Testing with None image...")
157
+ result = analyzer.process_image(
158
+ image=None,
159
+ modality="CT",
160
+ task="full_analysis"
161
+ )
162
+ print(f" Error handled: {'error' in result}")
163
+
164
+ # Test with invalid modality
165
+ print("\n2. Testing with invalid modality...")
166
+ result = analyzer.process_image(
167
+ image=np.zeros((100, 100)),
168
+ modality="MRI", # Not supported
169
+ task="full_analysis"
170
+ )
171
+ print(f" Processed as: {result.get('modality', 'Unknown')}")
172
+
173
+
174
+ if __name__ == "__main__":
175
+ print("Medical Image Analyzer Integration Test")
176
+ print("=" * 50)
177
+
178
+ test_ct_analysis()
179
+ test_xray_analysis()
180
+ test_error_handling()
181
+
182
+ print("\n" + "=" * 50)
183
+ print("Integration test completed!")
184
+ print("\nNote: This test uses simulated data.")
185
+ print("For real medical images, results will be more accurate.")
wrapper_test.py ADDED
@@ -0,0 +1,835 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ Frontend wrapper test for the Medical Image Analyzer
4
+ Uses standard Gradio components to exercise the backend functionality
5
+ """
6
+
7
+ import gradio as gr
8
+ import numpy as np
9
+ import sys
10
+ import os
11
+ from pathlib import Path
12
+
13
+ # Add backend to path
14
+ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'backend'))
15
+
16
+ from gradio_medical_image_analyzer import MedicalImageAnalyzer
17
+ import cv2
18
+
19
+ def draw_roi_on_image(image, roi_x, roi_y, roi_radius):
20
+ """Draw ROI circle on the image"""
21
+ # Convert to RGB if grayscale
22
+ if len(image.shape) == 2:
23
+ image_rgb = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
24
+ else:
25
+ image_rgb = image.copy()
26
+
27
+ # Draw ROI circle
28
+ center = (int(roi_x), int(roi_y))
29
+ radius = int(roi_radius)
30
+
31
+ # Draw outer circle (white)
32
+ cv2.circle(image_rgb, center, radius, (255, 255, 255), 2)
33
+ # Draw inner circle (red)
34
+ cv2.circle(image_rgb, center, radius-1, (255, 0, 0), 2)
35
+ # Draw center cross
36
+ cv2.line(image_rgb, (center[0]-5, center[1]), (center[0]+5, center[1]), (255, 0, 0), 2)
37
+ cv2.line(image_rgb, (center[0], center[1]-5), (center[0], center[1]+5), (255, 0, 0), 2)
38
+
39
+ return image_rgb
40
+
41
+ def create_fat_overlay(base_image, segmentation_results):
42
+ """Create overlay image with fat segmentation highlighted"""
43
+ # Convert to RGB
44
+ if len(base_image.shape) == 2:
45
+ overlay_img = cv2.cvtColor(base_image, cv2.COLOR_GRAY2RGB)
46
+ else:
47
+ overlay_img = base_image.copy()
48
+
49
+ # Check if we have segmentation masks
50
+ if not segmentation_results or 'segments' not in segmentation_results:
51
+ return overlay_img
52
+
53
+ segments = segmentation_results.get('segments', {})
54
+
55
+ # Apply subcutaneous fat overlay (yellow)
56
+ if 'subcutaneous' in segments and segments['subcutaneous'].get('mask') is not None:
57
+ mask = segments['subcutaneous']['mask']
58
+ yellow_overlay = np.zeros_like(overlay_img)
59
+ yellow_overlay[mask > 0] = [255, 255, 0] # Yellow
60
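+         # cv2.addWeighted blends per pixel: dst = 0.7*base + 0.3*overlay + 0, so the tint stays translucent over the anatomy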
+ overlay_img = cv2.addWeighted(overlay_img, 0.7, yellow_overlay, 0.3, 0)
61
+
62
+ # Apply visceral fat overlay (red)
63
+ if 'visceral' in segments and segments['visceral'].get('mask') is not None:
64
+ mask = segments['visceral']['mask']
65
+ red_overlay = np.zeros_like(overlay_img)
66
+ red_overlay[mask > 0] = [255, 0, 0] # Red
67
+ overlay_img = cv2.addWeighted(overlay_img, 0.7, red_overlay, 0.3, 0)
68
+
69
+ # Add legend
70
+ cv2.putText(overlay_img, "Yellow: Subcutaneous Fat", (10, 30),
71
+ cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
72
+ cv2.putText(overlay_img, "Red: Visceral Fat", (10, 60),
73
+ cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
74
+
75
+ return overlay_img
76
+
77
+ def process_and_analyze(file_obj, task, roi_x, roi_y, roi_radius, symptoms, show_overlay=False):
78
+ """
79
+     Process the uploaded file and run the selected analysis task
80
+ """
81
+ if file_obj is None:
82
+ return None, "No file selected", "", {}, None
83
+
84
+ # Create analyzer instance
85
+ analyzer = MedicalImageAnalyzer(
86
+ analysis_mode="structured",
87
+ include_confidence=True,
88
+ include_reasoning=True
89
+ )
90
+
91
+ try:
92
+ # Process the file (DICOM or image)
93
+ file_path = file_obj.name if hasattr(file_obj, 'name') else str(file_obj)
94
+ pixel_array, display_array, metadata = analyzer.process_file(file_path)
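+         # process_file returns raw pixel values (for analysis), a display-ready copy, and a metadata dict (modality, shape, window settings)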
95
+
96
+ # Update modality from file metadata
97
+ modality = metadata.get('modality', 'CR')
98
+
99
+ # Prepare analysis parameters
100
+ analysis_params = {
101
+ "image": pixel_array,
102
+ "modality": modality,
103
+ "task": task
104
+ }
105
+
106
+ # Add ROI if applicable
107
+ if task in ["analyze_point", "full_analysis"]:
108
+ # Scale ROI coordinates to image size
109
+ h, w = pixel_array.shape
110
+ roi_x_scaled = int(roi_x * w / 512) # Assuming slider max is 512
111
+ roi_y_scaled = int(roi_y * h / 512)
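+             # Note: only the center point is mapped into image space; the radius stays in pixel units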
112
+
113
+ analysis_params["roi"] = {
114
+ "x": roi_x_scaled,
115
+ "y": roi_y_scaled,
116
+ "radius": roi_radius
117
+ }
118
+
119
+ # Add clinical context
120
+ if symptoms:
121
+ analysis_params["clinical_context"] = ", ".join(symptoms)
122
+
123
+ # Perform analysis
124
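+         # (** expands the dict into keyword arguments: image, modality, task, plus roi/clinical_context when set)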
+ results = analyzer.analyze_image(**analysis_params)
125
+
126
+ # Create visual report
127
+ if analyzer.analysis_mode == "visual":
128
+ visual_report = analyzer._create_html_report(results)
129
+ else:
130
+ # Create our own visual report
131
+ visual_report = create_visual_report(results, metadata)
132
+
133
+ # Add metadata info
134
+ info = f"πŸ“„ {metadata.get('file_type', 'Unknown')} | "
135
+ info += f"πŸ₯ {modality} | "
136
+ info += f"πŸ“ {metadata.get('shape', 'Unknown')}"
137
+
138
+ if metadata.get('window_center'):
139
+ info += f" | Window C:{metadata['window_center']:.0f} W:{metadata['window_width']:.0f}"
140
+
141
+ # Create overlay image if requested
142
+ overlay_image = None
143
+ if show_overlay:
144
+ # For ROI visualization
145
+ if task in ["analyze_point", "full_analysis"] and roi_x and roi_y:
146
+                 overlay_image = draw_roi_on_image(display_array.copy(), roi_x_scaled, roi_y_scaled, roi_radius)  # use the same scaled coordinates as the analysis
147
+
148
+ # For fat segmentation overlay
149
+ if task == "segment_fat" and 'segmentation' in results and modality == 'CT':
150
+ # Get segmentation masks from results
151
+ seg_results = {
152
+ 'segments': {
153
+ 'subcutaneous': {'mask': None},
154
+ 'visceral': {'mask': None}
155
+ }
156
+ }
157
+
158
+ # Check if we have actual mask data
159
+ if 'segments' in results['segmentation']:
160
+ seg_results = results['segmentation']
161
+
162
+ overlay_image = create_fat_overlay(display_array.copy(), seg_results)
163
+
164
+ return display_array, info, visual_report, results, overlay_image
165
+
166
+ except Exception as e:
167
+ error_msg = f"Error: {str(e)}"
168
+ return None, error_msg, f"<div style='color: red;'>❌ {error_msg}</div>", {"error": error_msg}, None
169
+
170
+
171
+ def create_visual_report(results, metadata):
172
+ """Creates a visual HTML report with improved styling"""
173
+ html = f"""
174
+ <div class='medical-report' style='font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
175
+ padding: 24px;
176
+ background: #ffffff;
177
+ border-radius: 12px;
178
+ max-width: 100%;
179
+ box-shadow: 0 2px 8px rgba(0,0,0,0.1);
180
+ color: #1a1a1a !important;'>
181
+
182
+ <h2 style='color: #1e40af !important;
183
+ border-bottom: 3px solid #3b82f6;
184
+ padding-bottom: 12px;
185
+ margin-bottom: 20px;
186
+ font-size: 24px;
187
+ font-weight: 600;'>
188
+ πŸ₯ Medical Image Analysis Report
189
+ </h2>
190
+
191
+ <div style='background: #f0f9ff;
192
+ padding: 20px;
193
+ margin: 16px 0;
194
+ border-radius: 8px;
195
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
196
+ <h3 style='color: #1e3a8a !important;
197
+ font-size: 18px;
198
+ font-weight: 600;
199
+ margin-bottom: 12px;'>
200
+             📋 Metadata
201
+ </h3>
202
+ <table style='width: 100%; border-collapse: collapse;'>
203
+ <tr>
204
+ <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>File Type:</strong></td>
205
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('file_type', 'Unknown')}</td>
206
+ </tr>
207
+ <tr>
208
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Modality:</strong></td>
209
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('modality', 'Unknown')}</td>
210
+ </tr>
211
+ <tr>
212
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Image Size:</strong></td>
213
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{metadata.get('shape', 'Unknown')}</td>
214
+ </tr>
215
+ <tr>
216
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Timestamp:</strong></td>
217
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{results.get('timestamp', 'N/A')}</td>
218
+ </tr>
219
+ </table>
220
+ </div>
221
+ """
222
+
223
+ # Point Analysis
224
+ if 'point_analysis' in results:
225
+ pa = results['point_analysis']
226
+ tissue = pa.get('tissue_type', {})
227
+
228
+ html += f"""
229
+ <div style='background: #f0f9ff;
230
+ padding: 20px;
231
+ margin: 16px 0;
232
+ border-radius: 8px;
233
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
234
+ <h3 style='color: #1e3a8a !important;
235
+ font-size: 18px;
236
+ font-weight: 600;
237
+ margin-bottom: 12px;'>
238
+ 🎯 Point Analysis
239
+ </h3>
240
+ <table style='width: 100%; border-collapse: collapse;'>
241
+ <tr>
242
+ <td style='padding: 8px 0; color: #4b5563 !important; width: 40%;'><strong style='color: #374151 !important;'>Position:</strong></td>
243
+ <td style='padding: 8px 0; color: #1f2937 !important;'>({pa.get('location', {}).get('x', 'N/A')}, {pa.get('location', {}).get('y', 'N/A')})</td>
244
+ </tr>
245
+ """
246
+
247
+ if results.get('modality') == 'CT':
248
+ html += f"""
249
+ <tr>
250
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>HU Value:</strong></td>
251
+                 <td style='padding: 8px 0; color: #1f2937 !important; font-weight: 500;'>{f"{pa['hu_value']:.1f}" if isinstance(pa.get('hu_value'), (int, float)) else 'N/A'}</td>
252
+ </tr>
253
+ """
254
+ else:
255
+ html += f"""
256
+ <tr>
257
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Intensity:</strong></td>
258
+                 <td style='padding: 8px 0; color: #1f2937 !important;'>{f"{pa['intensity']:.3f}" if isinstance(pa.get('intensity'), (int, float)) else 'N/A'}</td>
259
+ </tr>
260
+ """
261
+
262
+ html += f"""
263
+ <tr>
264
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Tissue Type:</strong></td>
265
+ <td style='padding: 8px 0; color: #1f2937 !important;'>
266
+ <span style='font-size: 1.3em; vertical-align: middle;'>{tissue.get('icon', '')}</span>
267
+ <span style='font-weight: 500; text-transform: capitalize;'>{tissue.get('type', 'Unknown').replace('_', ' ')}</span>
268
+ </td>
269
+ </tr>
270
+ <tr>
271
+ <td style='padding: 8px 0; color: #4b5563 !important;'><strong style='color: #374151 !important;'>Confidence:</strong></td>
272
+ <td style='padding: 8px 0; color: #1f2937 !important;'>{pa.get('confidence', 'N/A')}</td>
273
+ </tr>
274
+ </table>
275
+ """
276
+
277
+ if 'reasoning' in pa:
278
+ html += f"""
279
+ <div style='margin-top: 12px;
280
+ padding: 12px;
281
+ background: #f0f7ff;
282
+ border-left: 3px solid #0066cc;
283
+ border-radius: 4px;'>
284
+ <p style='margin: 0; color: #4b5563 !important; font-style: italic;'>
285
+                     💭 {pa['reasoning']}
286
+ </p>
287
+ </div>
288
+ """
289
+
290
+ html += "</div>"
291
+
292
+ # Segmentation Results
293
+ if 'segmentation' in results and results['segmentation']:
294
+ seg = results['segmentation']
295
+
296
+ if 'statistics' in seg:
297
+ # Fat segmentation for CT
298
+ stats = seg['statistics']
299
+ html += f"""
300
+ <div style='background: #f0f9ff;
301
+ padding: 20px;
302
+ margin: 16px 0;
303
+ border-radius: 8px;
304
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
305
+ <h3 style='color: #1e3a8a !important;
306
+ font-size: 18px;
307
+ font-weight: 600;
308
+ margin-bottom: 12px;'>
309
+                 🔬 Fat Segmentation Analysis
310
+ </h3>
311
+ <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 16px;'>
312
+ <div style='padding: 16px; background: #ffffff; border-radius: 6px; border: 1px solid #e5e7eb;'>
313
+ <h4 style='color: #6b7280 !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Total Fat</h4>
314
+ <p style='color: #1f2937 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('total_fat_percentage', 0):.1f}%</p>
315
+ </div>
316
+ <div style='padding: 16px; background: #fffbeb; border-radius: 6px; border: 1px solid #fbbf24;'>
317
+ <h4 style='color: #92400e !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Subcutaneous</h4>
318
+ <p style='color: #d97706 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('subcutaneous_fat_percentage', 0):.1f}%</p>
319
+ </div>
320
+ <div style='padding: 16px; background: #fef2f2; border-radius: 6px; border: 1px solid #fca5a5;'>
321
+ <h4 style='color: #991b1b !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>Visceral</h4>
322
+ <p style='color: #dc2626 !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_fat_percentage', 0):.1f}%</p>
323
+ </div>
324
+ <div style='padding: 16px; background: #eff6ff; border-radius: 6px; border: 1px solid #93c5fd;'>
325
+ <h4 style='color: #1e3a8a !important; font-size: 14px; margin: 0 0 8px 0; font-weight: 500;'>V/S Ratio</h4>
326
+ <p style='color: #1e40af !important; font-size: 24px; font-weight: 600; margin: 0;'>{stats.get('visceral_subcutaneous_ratio', 0):.2f}</p>
327
+ </div>
328
+ </div>
329
+ """
330
+
331
+ if 'interpretation' in seg:
332
+ interp = seg['interpretation']
333
+ obesity_color = "#27ae60" if interp.get("obesity_risk") == "normal" else "#f39c12" if interp.get("obesity_risk") == "moderate" else "#e74c3c"
334
+ visceral_color = "#27ae60" if interp.get("visceral_risk") == "normal" else "#f39c12" if interp.get("visceral_risk") == "moderate" else "#e74c3c"
335
+
336
+ html += f"""
337
+ <div style='margin-top: 16px; padding: 16px; background: #f8f9fa; border-radius: 6px;'>
338
+ <h4 style='color: #333; font-size: 16px; font-weight: 600; margin-bottom: 8px;'>Risk Assessment</h4>
339
+ <div style='display: grid; grid-template-columns: 1fr 1fr; gap: 12px;'>
340
+ <div>
341
+ <span style='color: #666; font-size: 14px;'>Obesity Risk:</span>
342
+ <span style='color: {obesity_color}; font-weight: 600; margin-left: 8px;'>{interp.get('obesity_risk', 'N/A').upper()}</span>
343
+ </div>
344
+ <div>
345
+ <span style='color: #666; font-size: 14px;'>Visceral Risk:</span>
346
+ <span style='color: {visceral_color}; font-weight: 600; margin-left: 8px;'>{interp.get('visceral_risk', 'N/A').upper()}</span>
347
+ </div>
348
+ </div>
349
+ """
350
+
351
+ if interp.get('recommendations'):
352
+ html += """
353
+ <div style='margin-top: 12px; padding-top: 12px; border-top: 1px solid #e0e0e0;'>
354
+                     <h5 style='color: #333; font-size: 14px; font-weight: 600; margin-bottom: 8px;'>💡 Recommendations</h5>
355
+ <ul style='margin: 0; padding-left: 20px; color: #555;'>
356
+ """
357
+ for rec in interp['recommendations']:
358
+ html += f"<li style='margin: 4px 0;'>{rec}</li>"
359
+ html += "</ul></div>"
360
+
361
+ html += "</div>"
362
+ html += "</div>"
363
+
364
+ elif 'tissue_distribution' in seg:
365
+ # X-ray tissue distribution
366
+ html += f"""
367
+ <div style='background: white;
368
+ padding: 20px;
369
+ margin: 16px 0;
370
+ border-radius: 8px;
371
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
372
+ <h3 style='color: #333;
373
+ font-size: 18px;
374
+ font-weight: 600;
375
+ margin-bottom: 12px;'>
376
+ 🦴 Tissue Distribution
377
+ </h3>
378
+ <div style='display: grid; grid-template-columns: repeat(auto-fit, minmax(120px, 1fr)); gap: 12px;'>
379
+ """
380
+
381
+ tissue_dist = seg['tissue_distribution']
382
+ tissue_icons = {
383
+             'bone': '🦴', 'soft_tissue': '🔴', 'air': '🌫️',
384
+             'metal': '⚙️', 'fat': '🟡', 'fluid': '💧'
385
+ }
386
+
387
+ tissue_colors = {
388
+ 'bone': '#fff7e6',
389
+ 'soft_tissue': '#fee',
390
+ 'air': '#e6f3ff',
391
+ 'metal': '#f0f0f0',
392
+ 'fat': '#fffbe6',
393
+ 'fluid': '#e6f7ff'
394
+ }
395
+
396
+ for tissue, percentage in tissue_dist.items():
397
+ if percentage > 0:
398
+ icon = tissue_icons.get(tissue, 'πŸ“')
399
+ bg_color = tissue_colors.get(tissue, '#f8f9fa')
400
+ html += f"""
401
+ <div style='text-align: center;
402
+ padding: 16px;
403
+ background: {bg_color};
404
+ border-radius: 8px;
405
+ transition: transform 0.2s;'>
406
+ <div style='font-size: 2.5em; margin-bottom: 8px;'>{icon}</div>
407
+ <div style='font-weight: 600;
408
+ color: #333;
409
+ font-size: 14px;
410
+ margin-bottom: 4px;'>
411
+ {tissue.replace('_', ' ').title()}
412
+ </div>
413
+ <div style='color: #0066cc;
414
+ font-size: 20px;
415
+ font-weight: 700;'>
416
+ {percentage:.1f}%
417
+ </div>
418
+ </div>
419
+ """
420
+
421
+ html += "</div>"
422
+
423
+ if seg.get('clinical_findings'):
424
+ html += """
425
+ <div style='margin-top: 16px;
426
+ padding: 16px;
427
+ background: #fff3cd;
428
+ border-left: 4px solid #ffc107;
429
+ border-radius: 4px;'>
430
+ <h4 style='color: #856404;
431
+ font-size: 16px;
432
+ font-weight: 600;
433
+ margin: 0 0 8px 0;'>
434
+ ⚠️ Clinical Findings
435
+ </h4>
436
+ <ul style='margin: 0; padding-left: 20px; color: #856404;'>
437
+ """
438
+ for finding in seg['clinical_findings']:
439
+ html += f"<li style='margin: 4px 0;'>{finding.get('description', 'Unknown finding')}</li>"
440
+ html += "</ul></div>"
441
+
442
+ html += "</div>"
443
+
444
+ # Quality Assessment
445
+ if 'quality_metrics' in results:
446
+ quality = results['quality_metrics']
447
+ quality_colors = {
448
+ 'excellent': '#27ae60',
449
+ 'good': '#27ae60',
450
+ 'fair': '#f39c12',
451
+ 'poor': '#e74c3c',
452
+ 'unknown': '#95a5a6'
453
+ }
454
+ q_color = quality_colors.get(quality.get('overall_quality', 'unknown'), '#95a5a6')
455
+
456
+ html += f"""
457
+ <div style='background: #f0f9ff;
458
+ padding: 20px;
459
+ margin: 16px 0;
460
+ border-radius: 8px;
461
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);'>
462
+ <h3 style='color: #1e3a8a !important;
463
+ font-size: 18px;
464
+ font-weight: 600;
465
+ margin-bottom: 12px;'>
466
+             📊 Image Quality Assessment
467
+ </h3>
468
+ <div style='display: flex; align-items: center; gap: 16px;'>
469
+ <div>
470
+ <span style='color: #4b5563 !important; font-size: 14px;'>Overall Quality:</span>
471
+ <span style='color: {q_color} !important;
472
+ font-size: 18px;
473
+ font-weight: 700;
474
+ margin-left: 8px;'>
475
+ {quality.get('overall_quality', 'unknown').upper()}
476
+ </span>
477
+ </div>
478
+ </div>
479
+ """
480
+
481
+ if quality.get('issues'):
482
+ html += f"""
483
+ <div style='margin-top: 12px;
484
+ padding: 12px;
485
+ background: #fff3cd;
486
+ border-left: 3px solid #ffc107;
487
+ border-radius: 4px;'>
488
+ <strong style='color: #856404;'>Issues Detected:</strong>
489
+ <ul style='margin: 4px 0 0 0; padding-left: 20px; color: #856404;'>
490
+ """
491
+ for issue in quality['issues']:
492
+ html += f"<li style='margin: 2px 0;'>{issue}</li>"
493
+ html += "</ul></div>"
494
+
495
+ html += "</div>"
496
+
497
+ html += "</div>"
498
+ return html
499
+
500
+
501
+ # Create Gradio interface
502
+ with gr.Blocks(
503
+ title="Medical Image Analyzer - Wrapper Test",
504
+ theme=gr.themes.Soft(
505
+ primary_hue="blue",
506
+ secondary_hue="blue",
507
+ neutral_hue="slate",
508
+ text_size="md",
509
+ spacing_size="md",
510
+ radius_size="md",
511
+ ).set(
512
+ # Medical blue theme colors
513
+ body_background_fill="*neutral_950",
514
+ body_background_fill_dark="*neutral_950",
515
+ block_background_fill="*neutral_900",
516
+ block_background_fill_dark="*neutral_900",
517
+ border_color_primary="*primary_600",
518
+ border_color_primary_dark="*primary_600",
519
+ # Text colors for better contrast
520
+ body_text_color="*neutral_100",
521
+ body_text_color_dark="*neutral_100",
522
+ body_text_color_subdued="*neutral_300",
523
+ body_text_color_subdued_dark="*neutral_300",
524
+ # Button colors
525
+ button_primary_background_fill="*primary_600",
526
+ button_primary_background_fill_dark="*primary_600",
527
+ button_primary_text_color="white",
528
+ button_primary_text_color_dark="white",
529
+ ),
530
+ css="""
531
+ /* Medical blue theme with high contrast */
532
+ :root {
533
+ --medical-blue: #1e40af;
534
+ --medical-blue-light: #3b82f6;
535
+ --medical-blue-dark: #1e3a8a;
536
+ --text-primary: #f9fafb;
537
+ --text-secondary: #e5e7eb;
538
+ --bg-primary: #0f172a;
539
+ --bg-secondary: #1e293b;
540
+ --bg-tertiary: #334155;
541
+ }
542
+
543
+ /* Override default text colors for medical theme */
544
+ * {
545
+ color: var(--text-primary) !important;
546
+ }
547
+
548
+ /* Fix text contrast issues */
549
+ .svelte-12ioyct { color: var(--text-primary) !important; }
550
+
551
+ /* Override table cell colors for better readability */
552
+ td {
553
+ color: var(--text-primary) !important;
554
+ padding: 8px 12px !important;
555
+ }
556
+ td strong {
557
+ color: var(--text-primary) !important;
558
+ font-weight: 600 !important;
559
+ }
560
+
561
+ /* Fix upload component text and translate */
562
+ .wrap.svelte-12ioyct::before {
563
+ content: 'Drop file here' !important;
564
+ color: var(--text-primary) !important;
565
+ }
566
+ .wrap.svelte-12ioyct::after {
567
+ content: 'Click to upload' !important;
568
+ color: var(--text-secondary) !important;
569
+ }
570
+ .wrap.svelte-12ioyct span {
571
+ display: none !important; /* Hide German text */
572
+ }
573
+
574
+ /* Style the file upload area */
575
+ .file-upload {
576
+ border: 2px dashed var(--medical-blue-light) !important;
577
+ border-radius: 8px !important;
578
+ padding: 20px !important;
579
+ text-align: center !important;
580
+ background: var(--bg-secondary) !important;
581
+ transition: all 0.3s ease !important;
582
+ color: var(--text-primary) !important;
583
+ }
584
+
585
+ .file-upload:hover {
586
+ border-color: var(--medical-blue) !important;
587
+ background: var(--bg-tertiary) !important;
588
+ box-shadow: 0 0 20px rgba(59, 130, 246, 0.2) !important;
589
+ }
590
+
591
+ /* Make sure all text in tables is readable */
592
+ table {
593
+ width: 100%;
594
+ border-collapse: collapse;
595
+ }
596
+ th {
597
+ font-weight: 600;
598
+ background: var(--bg-tertiary);
599
+ padding: 8px 12px;
600
+ }
601
+
602
+ /* Ensure report text is readable with white background */
603
+ .medical-report {
604
+ background: #ffffff !important;
605
+ border: 2px solid var(--medical-blue-light) !important;
606
+ border-radius: 8px !important;
607
+ padding: 16px !important;
608
+ color: #1a1a1a !important;
609
+ }
610
+
611
+ .medical-report * {
612
+ color: #1f2937 !important; /* Dark gray text */
613
+ }
614
+
615
+ .medical-report h2 {
616
+ color: #1e40af !important; /* Medical blue for main heading */
617
+ }
618
+
619
+ .medical-report h3, .medical-report h4 {
620
+ color: #1e3a8a !important; /* Darker medical blue for subheadings */
621
+ }
622
+
623
+ .medical-report strong {
624
+ color: #374151 !important; /* Darker gray for labels */
625
+ }
626
+
627
+ .medical-report td {
628
+ color: #1f2937 !important; /* Ensure table text is dark */
629
+ }
630
+
631
+ /* Report sections with light blue background */
632
+ .medical-report > div {
633
+ background: #f0f9ff !important;
634
+ color: #1f2937 !important;
635
+ }
636
+
637
+ /* Medical blue accents for UI elements */
638
+ .gr-button-primary {
639
+ background: var(--medical-blue) !important;
640
+ border-color: var(--medical-blue) !important;
641
+ }
642
+
643
+ .gr-button-primary:hover {
644
+ background: var(--medical-blue-dark) !important;
645
+ border-color: var(--medical-blue-dark) !important;
646
+ }
647
+
648
+ /* Accordion styling with medical theme */
649
+ .gr-accordion {
650
+ border-color: var(--medical-blue-light) !important;
651
+ }
652
+
653
+ /* Slider track in medical blue */
654
+ input[type="range"]::-webkit-slider-track {
655
+ background: var(--bg-tertiary) !important;
656
+ }
657
+
658
+ input[type="range"]::-webkit-slider-thumb {
659
+ background: var(--medical-blue) !important;
660
+ }
661
+
662
+ /* Tab styling */
663
+ .gr-tab-item {
664
+ border-color: var(--medical-blue-light) !important;
665
+ }
666
+
667
+ .gr-tab-item.selected {
668
+ background: var(--medical-blue) !important;
669
+ color: white !important;
670
+ }
671
+ """
672
+ ) as demo:
673
+ gr.Markdown("""
674
+ # πŸ₯ Medical Image Analyzer
675
+
676
+ Supports **DICOM** (.dcm) and all image formats with automatic modality detection!
677
+ """)
678
+
679
+ with gr.Row():
680
+ with gr.Column(scale=1):
681
+ # File upload with custom styling - no file type restrictions
682
+ with gr.Group():
683
+ gr.Markdown("### πŸ“€ Upload Medical Image")
684
+ file_input = gr.File(
685
+ label="Select Medical Image File (.dcm, .dicom, IM_*, .png, .jpg, etc.)",
686
+ file_count="single",
687
+ type="filepath",
688
+ elem_classes="file-upload"
689
+ # Note: NO file_types parameter = accepts ALL files
690
+ )
691
+ gr.Markdown("""
692
+ <small style='color: #666;'>
693
+ Accepts: DICOM (.dcm, .dicom), Images (.png, .jpg, .jpeg, .tiff, .bmp),
694
+ and files without extensions (e.g., IM_0001, IM_0002, etc.)
695
+ </small>
696
+ """)
697
+
698
+ # Task selection
699
+ task = gr.Dropdown(
700
+ choices=[
701
+ ("🎯 Point Analysis", "analyze_point"),
702
+ ("πŸ”¬ Fat Segmentation (CT only)", "segment_fat"),
703
+ ("πŸ“Š Full Analysis", "full_analysis")
704
+ ],
705
+ value="full_analysis",
706
+ label="Analysis Task"
707
+ )
708
+
709
+ # ROI settings
710
+ with gr.Accordion("🎯 Region of Interest (ROI)", open=True):
711
+ roi_x = gr.Slider(0, 512, 256, label="X Position", step=1)
712
+ roi_y = gr.Slider(0, 512, 256, label="Y Position", step=1)
713
+ roi_radius = gr.Slider(5, 50, 10, label="Radius", step=1)
714
+
715
+ # Clinical context
716
+ with gr.Accordion("πŸ₯ Clinical Context", open=False):
717
+ symptoms = gr.CheckboxGroup(
718
+ choices=[
719
+ "Dyspnea", "Chest Pain", "Abdominal Pain",
720
+ "Trauma", "Obesity Screening", "Routine Check"
721
+ ],
722
+ label="Symptoms/Indication"
723
+ )
724
+
725
+ # Visualization options
726
+ with gr.Accordion("🎨 Visualization Options", open=True):
727
+ show_overlay = gr.Checkbox(
728
+ label="Show ROI/Segmentation Overlay",
729
+ value=True,
730
+ info="Display ROI circle or fat segmentation overlay on the image"
731
+ )
732
+
733
+ analyze_btn = gr.Button("πŸ”¬ Analyze", variant="primary", size="lg")
734
+
735
+ with gr.Column(scale=2):
736
+ # Results with tabs for different views
737
+ with gr.Tab("πŸ–ΌοΈ Original Image"):
738
+ image_display = gr.Image(label="Medical Image", type="numpy")
739
+
740
+ with gr.Tab("🎯 Overlay View"):
741
+ overlay_display = gr.Image(label="Image with Overlay", type="numpy")
742
+
743
+ file_info = gr.Textbox(label="File Information", lines=1)
744
+
745
+ with gr.Tab("πŸ“Š Visual Report"):
746
+ report_html = gr.HTML()
747
+
748
+ with gr.Tab("πŸ”§ JSON Output"):
749
+ json_output = gr.JSON(label="Structured Data for AI Agents")
750
+
751
+ # Examples and help
752
+ with gr.Row():
753
+ gr.Markdown("""
754
+ ### πŸ“ Supported Formats
755
+ - **DICOM**: Automatic HU value extraction and modality detection
756
+ - **PNG/JPG**: Interpreted as X-ray (CR) unless filename contains 'CT'
757
+ - **All Formats**: Automatic grayscale conversion
758
+ - **Files without extension**: Supported (e.g., IM_0001)
759
+
760
+ ### 🎯 Usage
761
+ 1. Upload a medical image file
762
+ 2. Select the desired analysis task
763
+ 3. For point analysis: Adjust the ROI position
764
+ 4. Click "Analyze"
765
+
766
+         ### 💡 Tips
767
+ - The ROI sliders will automatically adjust to your image size
768
+ - CT images show HU values, X-rays show intensity values
769
+ - Fat segmentation is only available for CT images
770
+ """)
771
+
772
+ # Connect the interface
773
+ analyze_btn.click(
774
+ fn=process_and_analyze,
775
+ inputs=[file_input, task, roi_x, roi_y, roi_radius, symptoms, show_overlay],
776
+ outputs=[image_display, file_info, report_html, json_output, overlay_display]
777
+ )
778
+
779
+ # Auto-update ROI limits when image is loaded
780
+ def update_roi_on_upload(file_obj):
781
+ if file_obj is None:
782
+ return gr.update(), gr.update()
783
+
784
+ try:
785
+ analyzer = MedicalImageAnalyzer()
786
+ _, _, metadata = analyzer.process_file(file_obj.name if hasattr(file_obj, 'name') else str(file_obj))
787
+
788
+ if 'shape' in metadata:
789
+ h, w = metadata['shape']
790
+ return gr.update(maximum=w-1, value=w//2), gr.update(maximum=h-1, value=h//2)
791
+         except Exception:
792
+ pass
793
+
794
+ return gr.update(), gr.update()
795
+
796
+ file_input.change(
797
+ fn=update_roi_on_upload,
798
+ inputs=[file_input],
799
+ outputs=[roi_x, roi_y]
800
+ )
801
+
802
+ # Add custom JavaScript to translate upload text
803
+ demo.load(
804
+ None,
805
+ None,
806
+ None,
807
+ js="""
808
+ () => {
809
+ // Wait for the page to load
810
+ setTimeout(() => {
811
+ // Find and replace German text in upload component
812
+ const uploadElements = document.querySelectorAll('.wrap.svelte-12ioyct');
813
+ uploadElements.forEach(el => {
814
+ if (el.textContent.includes('Datei hier ablegen')) {
815
+ el.innerHTML = el.innerHTML
816
+ .replace('Datei hier ablegen', 'Drop file here')
817
+ .replace('oder', 'or')
818
+ .replace('Hochladen', 'Click to upload');
819
+ }
820
+ });
821
+
822
+ // Also update the button text if it exists
823
+ const uploadButtons = document.querySelectorAll('button');
824
+ uploadButtons.forEach(btn => {
825
+ if (btn.textContent.includes('Hochladen')) {
826
+ btn.textContent = 'Upload';
827
+ }
828
+ });
829
+ }, 100);
830
+ }
831
+ """
832
+ )
833
+
834
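+ # Running this script directly (python wrapper_test.py) starts a local Gradio server; open the printed URL in a browser to use the test UI.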
+ if __name__ == "__main__":
835
+ demo.launch()