paul hilders committed
Commit 9ee9e02
1 parent: 66dfac7

Update description again

Files changed (1): app.py (+3 −2)
@@ -61,11 +61,12 @@ outputs = [gr.inputs.Image(type='pil', label="Output Image"), "highlight"]
61
 
62
 
63
  description = """A demonstration based on the Generic Attention-model Explainability method for Interpreting Bi-Modal
64
- Transformers by Chefer et al. (2021): https://github.com/hila-chefer/Transformer-MM-Explainability. \n \n
 
65
  This demo shows attributions scores on both the image and the text input when presented CLIP with a
66
  <text,image> pair. Attributions are computed as Gradient-weighted Attention Rollout (Chefer et al.,
67
  2021), and can be thought of as an estimate of the effective attention CLIP pays to its input when
68
- computing a multimodal representation"""
69
 
70
  iface = gr.Interface(fn=run_demo,
71
  inputs=inputs,
 
61
 
62
 
63
  description = """A demonstration based on the Generic Attention-model Explainability method for Interpreting Bi-Modal
64
+ Transformers by Chefer et al. (2021): https://github.com/hila-chefer/Transformer-MM-Explainability.
65
+
66
  This demo shows attributions scores on both the image and the text input when presented CLIP with a
67
  <text,image> pair. Attributions are computed as Gradient-weighted Attention Rollout (Chefer et al.,
68
  2021), and can be thought of as an estimate of the effective attention CLIP pays to its input when
69
+ computing a multimodal representation."""
70
 
71
  iface = gr.Interface(fn=run_demo,
72
  inputs=inputs,
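The substance of this commit is replacing literal `\n \n` escape sequences inside the triple-quoted description with a real blank line, and adding the missing final period. A minimal sketch of why that matters when the string is rendered as Markdown (the strings here are shortened stand-ins, not the actual description text):

```python
# Old form: the paragraph break is spelled with "\n \n" escapes at the
# end of a line, leaving stray spaces between the newline characters.
old_description = """First paragraph. \n \n
Second paragraph."""

# New form: a genuine blank line inside the triple-quoted string, which
# Markdown renderers reliably treat as a paragraph break.
new_description = """First paragraph.

Second paragraph."""

print(repr(old_description))  # newlines with embedded stray spaces
print(repr(new_description))  # a clean "\n\n" paragraph break
```

Both versions contain newline characters, but the old one carries extra spaces between them, which a Markdown renderer (such as the one Gradio uses for `description`) may not treat as a clean paragraph break.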