Martijn van Beers committed on
Commit
98deba6
1 Parent(s): acf06cd

Reformat description

Use a multi-line string so that it can be formatted more recognizably as
Markdown.

Files changed (1):
  1. app.py +10 -1
app.py CHANGED
@@ -285,7 +285,16 @@ lig = gradio.Interface(
 
 iface = gradio.Parallel(hila, lig,
         title="RoBERTa Explainability",
-        description="Quick comparison demo of explainability for sentiment prediction with RoBERTa. The outputs are from:\n\n* a version of [Hila Chefer's](https://github.com/hila-chefer) [Transformer-Explanability](https://github.com/hila-chefer/Transformer-Explainability/) but without the layerwise relevance propagation (as in [Transformer-MM_explainability](https://github.com/hila-chefer/Transformer-MM-Explainability/)) for a RoBERTa model.\n* [captum](https://captum.ai/)'s LayerIntegratedGradients",
+        description="""
+Quick comparison demo of explainability for sentiment prediction with RoBERTa. The outputs are from:
+
+* a version of [Hila Chefer's](https://github.com/hila-chefer)
+[Transformer-Explanability](https://github.com/hila-chefer/Transformer-Explainability/)
+but without the layerwise relevance propagation (as in
+[Transformer-MM_explainability](https://github.com/hila-chefer/Transformer-MM-Explainability/))
+for a RoBERTa model.
+* [captum](https://captum.ai/)'s LayerIntegratedGradients
+""",
         examples=[
             [
                 "This movie was the best movie I have ever seen! some scenes were ridiculous, but acting was great"