chansung committed
Commit f0c7c16 • 1 Parent(s): d7a7630

Update app.py

Files changed (1):
  app.py (+2 −5)
app.py CHANGED
@@ -242,19 +242,16 @@ def sepia(input_img):
 title = "SegFormer(ADE20k) in TensorFlow"
 description = """
 
-This is demo TensorFlow SegFormer from 🤗 `transformers` official package. The pre-trained model is optimized to segment scene specific images. We are currently using ONNX model converted from the TensorFlow based SegFormer to improve the latency. The average latency of an inference is 21 and 8 seconds for TensorFlow and ONNX converted models respectively (in Colab).
+This is demo TensorFlow SegFormer from 🤗 `transformers` official package. The pre-trained model is optimized to segment scene specific images. We are **currently using ONNX model converted from the TensorFlow based SegFormer to improve the latency**. The average latency of an inference is **21** and **8** seconds for TensorFlow and ONNX converted models respectively (in Colab). Check out the [repository](https://github.com/deep-diver/segformer-tf-transformers) to find out how to make inference, finetune the model with custom dataset, and further information.
 
 """
 
-article = "Check out the [repository](https://github.com/deep-diver/segformer-tf-transformers) to find out how to make inference, finetune the model with custom dataset, and further information."
-
 demo = gr.Interface(sepia,
                     gr.inputs.Image(type="filepath"),
                     outputs=['plot'],
                     examples=["ADE_val_00000001.jpeg"],
                     allow_flagging='never',
                     title=title,
-                    description=description,
-                    article=article)
+                    description=description)
 
 demo.launch()
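The commit drops the separate `article` argument and appends its markdown to `description`. A minimal sketch of that merge in plain Python, using shortened placeholder strings rather than the app's full text:

```python
# Before the commit: two separate strings were passed to gr.Interface(...).
# Placeholder text; the real app uses the full demo description shown in the diff.
description = """
This is a demo of TensorFlow SegFormer (placeholder text).
"""
article = ("Check out the [repository]"
           "(https://github.com/deep-diver/segformer-tf-transformers) "
           "for inference and fine-tuning details.")

# After the commit: the article markdown lives inside description,
# and the `article=` keyword is no longer passed to gr.Interface(...).
description = description.rstrip("\n") + " " + article + "\n"
```

One practical effect of the merge: in Gradio, `description` renders near the top of the page under the title, while `article` renders below the interface, so the repository link moves up where it is seen before the input widgets.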