Matyáš Boháček committed on
Commit
9f5ba58
1 Parent(s): a39ed5d

Add more md info

Files changed (1)
  1. app.py +7 -16
app.py CHANGED
@@ -106,22 +106,13 @@ def greet(label, video0, video1):
  label = gr.outputs.Label(num_top_classes=5, label="Top class probabilities")
  demo = gr.Interface(fn=greet, inputs=[gr.Dropdown(["Webcam", "Video"], label="Please select the input type:", type="value"), gr.Video(source="webcam", label="Webcam recording", type="mp4"), gr.Video(source="upload", label="Video upload", type="mp4")], outputs=label,
  title="🤟 SPOTER Sign language recognition",
- description="""
-
- Try out our recent model for sign language recognition right in your browser! The model below takes a video of a single sign in American Sign Language as input and provides you with probabilities of the lemmas (equivalent to words in natural language).
-
- ### Our work at CVPR
-
- Our efforts on lightweight and efficient models for sign language recognition were first introduced at WACV with our SPOTER paper. We have now presented a work-in-progress follow-up at CVPR's AVA workshop. Be sure to check out our work and code below:
-
- - **WACV2022** - Original SPOTER paper - [Paper](), [Code]()
- - **CVPR2022 AVA Workshop** - Follow-up WIP - [Extended Abstract](), [Poster]()
-
- ### How to sign?
-
- The model wrapped in this demo was trained on [WLASL100](https://dxli94.github.io/WLASL/), so it only knows selected ASL vocabulary. Take a look at these tutorial video examples, try to replicate them yourself, and have them recognized using the webcam capture below. Have fun!
-
- """,
+ description="""Try out our recent model for sign language recognition right in your browser! The model below takes a video of a single sign in American Sign Language as input and provides you with probabilities of the lemmas (equivalent to words in natural language).
+ ### Our work at CVPR
+ Our efforts on lightweight and efficient models for sign language recognition were first introduced at WACV with our SPOTER paper. We have now presented a work-in-progress follow-up at CVPR's AVA workshop. Be sure to check out our work and code below:
+ - **WACV2022** - Original SPOTER paper - [Paper](), [Code]()
+ - **CVPR2022 AVA Workshop** - Follow-up WIP - [Extended Abstract](), [Poster]()
+ ### How to sign?
+ The model wrapped in this demo was trained on [WLASL100](https://dxli94.github.io/WLASL/), so it only knows selected ASL vocabulary. Take a look at these tutorial video examples, try to replicate them yourself, and have them recognized using the webcam capture below. Have fun!""",
  article="This is joint work of [Matyas Bohacek](https://scholar.google.cz/citations?user=wDy1xBwAAAAJ) and [Zhuo Cao](https://www.linkedin.com/in/zhuo-cao-b0787a1aa/?originalSubdomain=hk). For more info, visit [our website.](https://www.signlanguagerecognition.com)",
  css="""
  @font-face {