Files changed (1)
  1. app_dialogue.py +6 -10
app_dialogue.py CHANGED
@@ -340,16 +340,12 @@ with gr.Blocks(title="IDEFICS Playground", theme=gr.themes.Base()) as demo:
            gr.Image(IDEFICS_LOGO, elem_id="banner-image", show_label=False, show_download_button=False)
        with gr.Column(scale=5):
            gr.HTML("""
-            This demo showcases <b>IDEFICS</b>, a open-access large visual language model. Like GPT-4, the multimodal model accepts arbitrary sequences of image and text inputs and produces text outputs. IDEFICS can answer questions about images, describe visual content, create stories grounded in multiple images, etc.<br><br>
-
-            IDEFICS (which stands for <b>I</b>mage-aware <b>D</b>ecoder <b>E</b>nhanced à la <b>F</b>lamingo with <b>I</b>nterleaved <b>C</b>ross-attention<b>S</b>) is an open-access reproduction of <a href="https://huggingface.co/papers/2204.14198" target="_blank">Flamingo</a>, a closed-source visual language model developed by Deepmind. IDEFICS was built solely on publicly available data and models. It is currently the only visual language model of this scale (80 billion parameters) that is available in open-access.<br>
-
-            📚 The variants available in this demo were fine-tuned on a mixture of supervised and instruction fine-tuning datasets to make the models more suitable in conversational settings. For more details, we refer to our <a href="https://huggingface.co/blog/idefics" target="_blank">blog post</a>.<br>
-
-            🅿️ <b>Intended uses:</b> This demo along with the <a href="https://huggingface.co/models?sort=trending&search=HuggingFaceM4%2Fidefics" target="_blank">supporting models</a> are provided as research artifacts to the community. We detail misuses and out-of-scope uses <a href="https://huggingface.co/HuggingFaceM4/idefics-80b#misuse-and-out-of-scope-use" target="_blank">here</a>.<br>
-
-            ⛔️ <b>Limitations:</b> The model can produce factually incorrect texts, hallucinate facts (with or without an image) and will struggle with small details in images. While the model will tend to refuse answering questionable user requests, it can produce problematic outputs (including racist, stereotypical, and disrespectful texts), in particular when prompted to do so. We encourage users to read our findings from evaluating the model for potential biases in the <a href="https://huggingface.co/HuggingFaceM4/idefics-80b#bias-evaluation" target="_blank">model card</a>.<br>
-            """)
+            <p>This demo showcases <strong>IDEFICS</strong>, a open-access large visual language model. Like GPT-4, the multimodal model accepts arbitrary sequences of image and text inputs and produces text outputs. IDEFICS can answer questions about images, describe visual content, create stories grounded in multiple images, etc.</p>
+            <p>IDEFICS (which stands for <strong>I</strong>mage-aware <strong>D</strong>ecoder <strong>E</strong>nhanced à la <strong>F</strong>lamingo with <strong>I</strong>nterleaved <strong>C</strong>ross-attention<strong>S</strong>) is an open-access reproduction of <a href="https://huggingface.co/papers/2204.14198">Flamingo</a>, a closed-source visual language model developed by Deepmind. IDEFICS was built solely on publicly available data and models. It is currently the only visual language model of this scale (80 billion parameters) that is available in open-access.</p>
+            <p>📚 The variants available in this demo were fine-tuned on a mixture of supervised and instruction fine-tuning datasets to make the models more suitable in conversational settings. For more details, we refer to our <a href="https://huggingface.co/blog/idefics">blog post</a>.</p>
+            <p>🅿️ <strong>Intended uses:</strong> This demo along with the <a href="https://huggingface.co/models?sort=trending&amp;search=HuggingFaceM4%2Fidefics">supporting models</a> are provided as research artifacts to the community. We detail misuses and out-of-scope uses <a href="https://huggingface.co/HuggingFaceM4/idefics-80b#misuse-and-out-of-scope-use">here</a>.</p>
+            <p>⛔️ <strong>Limitations:</strong> The model can produce factually incorrect texts, hallucinate facts (with or without an image) and will struggle with small details in images. While the model will tend to refuse answering questionable user requests, it can produce problematic outputs (including racist, stereotypical, and disrespectful texts), in particular when prompted to do so. We encourage users to read our findings from evaluating the model for potential biases in the <a href="https://huggingface.co/HuggingFaceM4/idefics-80b#bias-evaluation">model card</a>.</p>
+            """)

    # with gr.Row():
    # with gr.Column(scale=2):
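
For reference, a minimal standalone sketch (not part of app_dialogue.py) of the pattern the added lines rely on: a static HTML fragment rendered through gr.HTML() inside a gr.Blocks layout. Only gr.HTML and the <p>/<strong> markup come from the diff above; the Row/Column nesting and the shortened text are illustrative assumptions.

import gradio as gr

# Sketch only: the surrounding layout is assumed, not copied from app_dialogue.py.
with gr.Blocks(title="IDEFICS Playground", theme=gr.themes.Base()) as demo:
    with gr.Row():
        with gr.Column(scale=5):
            # Static HTML using semantic <p>/<strong> markup, as in the + lines above.
            gr.HTML(
                """
                <p>This demo showcases <strong>IDEFICS</strong>, an open-access
                large visual language model.</p>
                """
            )

if __name__ == "__main__":
    demo.launch()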