paul hilders committed on
Commit 8434fc6
1 Parent(s): 9775911

Update descriptions again

Files changed (1)
  1. app.py +1 -1
app.py CHANGED
@@ -63,7 +63,7 @@ outputs = [gr.inputs.Image(type='pil', label="Output Image"), "highlight"]
 description = """A demonstration based on the Generic Attention-model Explainability method for Interpreting Bi-Modal
 Transformers by Chefer et al. (2021): https://github.com/hila-chefer/Transformer-MM-Explainability.
 <br> <br>
-This demo shows attributions scores on both the image and the text input when presented CLIP with a
+This demo shows attributions scores on both the image and the text input when presenting CLIP with a
 <text,image> pair. Attributions are computed as Gradient-weighted Attention Rollout (Chefer et al.,
 2021), and can be thought of as an estimate of the effective attention CLIP pays to its input when
 computing a multimodal representation."""
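
For context, a minimal sketch of how a Gradient-weighted Attention Rollout score can be computed, assuming per-layer attention maps and their gradients have already been captured during a backward pass; the function and argument names below are illustrative placeholders, not the demo's actual code.

import torch

def gradient_weighted_attention_rollout(attentions, gradients):
    """attentions/gradients: lists of per-layer tensors,
    each shaped (heads, tokens, tokens)."""
    num_tokens = attentions[0].shape[-1]
    # Start from the identity: every token initially attends only to itself.
    rollout = torch.eye(num_tokens)
    for attn, grad in zip(attentions, gradients):
        # Weight each attention head by its gradient, keep positive
        # contributions only, then average over heads.
        cam = (grad * attn).clamp(min=0).mean(dim=0)
        # Account for the residual connection and renormalize rows.
        cam = cam + torch.eye(num_tokens)
        cam = cam / cam.sum(dim=-1, keepdim=True)
        # Accumulate attention flow across layers.
        rollout = cam @ rollout
    # rollout[i, j] estimates how much input token j influences token i.
    return rollout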