andreped committed on
Commit e7f6bf3
1 Parent(s): 4eaed2f

Refactored + updated gradio app README

Files changed (3)
  1. README.md +1 -1
  2. demo/README.md +30 -0
  3. demo/app.py +0 -8
README.md CHANGED
@@ -4,7 +4,7 @@ colorFrom: indigo
  colorTo: indigo
  sdk: docker
  app_port: 7860
- emoji: 🚀
+ emoji: 🔎
  pinned: false
  license: mit
  app_file: demo/app.py
demo/README.md CHANGED
@@ -11,3 +11,33 @@ app_file: demo/app.py
  ---

  # livermask Hugging Face demo - through docker SDK
+
+ Deploying simple models in a Gradio-based web interface in Hugging Face Spaces is easy.
+ For any other custom pipeline, with various dependencies and challenging behaviour, it
+ might be necessary to use Docker containers instead.
+
+ Deployment through a custom Docker image on top of the existing Gradio image was
+ necessary in this case because `tensorflow` and `gradio` have colliding
+ version requirements. As `livermask` depends on `tf`, the only way around this was
+ to fix the broken dependency, which was handled by reinstalling
+ `typing_extensions` with a version that `gradio` requires for the widgets
+ we use. Luckily, this did not break anything in `tf`, even though `tf` has
+ very strict version requirements for this dependency.
+
+ Anyway, everything works as intended now. For every new push to the main branch,
+ continuous deployment to the Hugging Face `livermask` space is performed through
+ GitHub Actions.
+
+ When the space is updated, the Docker image is rebuilt (with caching where possible).
+ Once that finishes, end users can test the app as they please.
+
+ Right now, the functionality of the app is extremely limited, only offering a widget
+ for uploading a NIfTI file (`.nii` or `.nii.gz`) and visualizing the produced surface
+ of the predicted liver parenchyma 3D volume when processing is finished.
+
+ The analysis process can be monitored from the `Logs` tab next to the `Running` button
+ in the Hugging Face `livermask` space.
+
+ Natural future TODOs include:
+ - [ ] Add a gallery widget to enable scrolling through 2D slices
+ - [ ] Render the segmentation for individual 2D slices as overlays
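
The `typing_extensions` workaround described in this README could be sanity-checked when the container starts. The snippet below is only a minimal sketch of such a check; the script and the version bound are assumptions for illustration and are not part of this commit.

```python
# check_deps.py -- hypothetical start-up check, not part of this commit.
from importlib.metadata import version
from packaging.version import Version

# Assumed lower bound; the exact version that `gradio` needs for its
# widgets is not stated in this commit.
MIN_TYPING_EXTENSIONS = Version("4.4.0")

installed = Version(version("typing_extensions"))
if installed < MIN_TYPING_EXTENSIONS:
    raise RuntimeError(
        f"typing_extensions {installed} is older than the assumed "
        f"minimum {MIN_TYPING_EXTENSIONS} required by the gradio widgets"
    )

# Confirm that the forced reinstall did not break the TensorFlow install.
print("typing_extensions:", installed)
print("tensorflow:", version("tensorflow"))
print("gradio:", version("gradio"))
```

Running such a check as a final step of the Docker build would surface a version collision at build time rather than when the Space starts.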
demo/app.py CHANGED
@@ -30,14 +30,6 @@ def run_model(input_path):
  from livermask.utils.run import run_analysis

  run_analysis(cpu=True, extension='.nii', path=input_path, output='prediction', verbose=True, vessels=False, name="/home/user/app/model.h5", mp_enabled=False)
-
- #cmd_docker = ["python3", "-m", "livermask.livermask", "--input", input_path, "--output", "prediction", "--verbose"]
- #sp.check_call(cmd_docker, shell=True) # @FIXME: shell=True here is not optimal -> starts a shell after calling script
-
- #p = sp.Popen(cmd_docker, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- #stdout, stderr = p.communicate()
- #print("stdout:", stdout)
- #print("stderr:", stderr)


  def load_mesh(mesh_file_name):
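
For reference, the upload-and-visualize flow the README describes maps naturally onto a small Gradio interface. The sketch below is illustrative only: it assumes `gr.File` and `gr.Model3D` widgets and uses a placeholder in place of the real `run_analysis` / surface-extraction pipeline; it does not reproduce the actual `demo/app.py`.

```python
# Illustrative sketch of the interface described in demo/README.md;
# it does not reproduce the real demo/app.py.
import gradio as gr


def segment_liver(nifti_file):
    # Placeholder: the real app calls livermask's run_analysis() on the
    # uploaded .nii/.nii.gz volume and then extracts a surface mesh from
    # the predicted liver parenchyma segmentation.
    return "prediction.obj"  # assumed path to the generated mesh


with gr.Blocks() as demo:
    volume = gr.File(label="CT volume (.nii or .nii.gz)")
    surface = gr.Model3D(label="Predicted liver surface")
    gr.Button("Run segmentation").click(segment_liver, inputs=volume, outputs=surface)

demo.launch()
```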