anas-awadalla committed
Commit
84256d5
1 Parent(s): 8cb1584
Files changed (1)
  1. app.py +5 -8
app.py CHANGED
@@ -173,24 +173,21 @@ with gr.Blocks() as demo:
     # As a consequence, you should treat this model as a research prototype and not as a production-ready model. Before using this demo please familiarize yourself with our [model card](https://github.com/mlfoundations/open_flamingo/blob/main/MODEL_CARD.md) and [terms and conditions](https://github.com/mlfoundations/open_flamingo/blob/main/TERMS_AND_CONDITIONS.md)
     gr.Markdown(
         """
-        # 🦩 OpenFlamingo-9B Demo
+        # 🦩 OpenFlamingo Demo

-        Blog post: [An open-source framework for training vision-language models with in-context learning (like GPT-4!)]()
+        Blog posts: #1 [An open-source framework for training vision-language models with in-context learning (like GPT-4!)](https://laion.ai/blog/open-flamingo/) // #2 [OpenFlamingo v2: New Models and Enhanced Training Setup]()\n
         GitHub: [open_flamingo](https://github.com/mlfoundations/open_flamingo)

         In this demo we implement an interactive interface that showcases the in-context learning capabilities of the OpenFlamingo-9B model, a large multimodal model trained on top of LLaMA-7B.
         The model is trained on an interleaved mixture of text and images and is able to generate text conditioned on sequences of images/text. To safeguard against harmful generations, we detect toxic text in the model output and reject it. However, we understand that this is not a perfect solution and we encourage you to use this demo responsibly. If you find that the model is generating harmful text, please report it using this [form](https://forms.gle/StbcPvyyW2p3Pc7z6).
-
-        Note: This model is still a work in progress and is not fully trained. We are releasing it to showcase the capabilities of the framework and to get feedback from the community.
         """
     )

     with gr.Accordion("See terms and conditions"):
         gr.Markdown("""**Please read the following information carefully before proceeding.**
-        OpenFlamingo is a **research prototype** that aims to enable users to interact with AI through both language and images. AI agents equipped with both language and visual understanding can be useful on a larger variety of tasks compared to models that communicate solely via language. By releasing an open-source research prototype, we hope to help the research community better understand the risks and limitations of modern visual-language AI models and accelerate the development of safer and more reliable methods.
+        [OpenFlamingo-9B](https://huggingface.co/openflamingo/OpenFlamingo-9B-vitl-mpt7b) is a **research prototype** that aims to enable users to interact with AI through both language and images. AI agents equipped with both language and visual understanding can be useful on a larger variety of tasks compared to models that communicate solely via language. By releasing an open-source research prototype, we hope to help the research community better understand the risks and limitations of modern visual-language AI models and accelerate the development of safer and more reliable methods.
-        **Limitations.** OpenFlamingo is built on top of the LLaMA large language model developed by Meta AI. Large language models, including LLaMA, are trained on mostly unfiltered internet data, and have been shown to be able to produce toxic, unethical, inaccurate, and harmful content. On top of this, OpenFlamingo’s ability to support visual inputs creates additional risks, since it can be used in a wider variety of applications; image+text models may carry additional risks specific to multimodality. Please use discretion when assessing the accuracy or appropriateness of the model’s outputs, and be mindful before sharing its results.
+        **Limitations.** OpenFlamingo-9B is built on top of the [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) large language model developed by MosaicML. Large language models are trained on mostly unfiltered internet data, and have been shown to be able to produce toxic, unethical, inaccurate, and harmful content. On top of this, OpenFlamingo’s ability to support visual inputs creates additional risks, since it can be used in a wider variety of applications; image+text models may carry additional risks specific to multimodality. Please use discretion when assessing the accuracy or appropriateness of the model’s outputs, and be mindful before sharing its results.
-        **Privacy and data collection.** This demo does NOT store any personal information on its users, and it does NOT store user queries.
-        **Licensing.** As OpenFlamingo is built on top of the LLaMA large language model from Meta AI, the LLaMA license agreement (as documented in the Meta request form) also applies.""")
+        **Privacy and data collection.** This demo does NOT store any personal information on its users, and it does NOT store user queries.""")
     read_tc = gr.Checkbox(
         label="I have read and agree to the terms and conditions")
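The hunk above keeps the `read_tc` checkbox that gates the demo behind the terms and conditions. A minimal sketch of how such a gate can be wired in Gradio Blocks is shown below; `require_tc` and the widget layout are illustrative assumptions for this note, not code from the commit, and the model call is a placeholder.

```python
def require_tc(agreed: bool, prompt: str) -> str:
    # Refuse to run the model until the terms-and-conditions box is ticked.
    if not agreed:
        return "Please read and accept the terms and conditions first."
    # Stand-in for the real OpenFlamingo generation call.
    return f"(model output for: {prompt})"

if __name__ == "__main__":
    import gradio as gr  # imported lazily so the gate helper is testable without Gradio

    with gr.Blocks() as demo:
        with gr.Accordion("See terms and conditions"):
            gr.Markdown("**Please read the following information carefully before proceeding.**")
        read_tc = gr.Checkbox(label="I have read and agree to the terms and conditions")
        prompt = gr.Textbox(label="Prompt")
        output = gr.Textbox(label="Output")
        # The checkbox value is passed to the handler as a plain bool.
        gr.Button("Run").click(require_tc, inputs=[read_tc, prompt], outputs=output)
    demo.launch()
```

Because the gate is an ordinary function rather than logic buried in the UI callback, it can be unit-tested and reused unchanged if the interface moves to another frontend.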