sagnak somaniarushi committed on
Commit
3f6c7a7
1 Parent(s): f4f9521

Update README.md (#13)


- Update README.md (b5c3f7253e2e78da0da77a21d7e8dce15d9abe23)
- Update README.md (04ceb1317abd49bfddc29f8f15220eeda5307708)


Co-authored-by: Arushi Somani <somaniarushi@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +7 -1
README.md CHANGED
@@ -3,7 +3,13 @@ license: cc-by-nc-4.0
 ---
 # Fuyu-8B Model Card
 
-Note: Running Fuyu requires https://github.com/huggingface/transformers/pull/26911, which may require running transformers on main!
+We’re releasing Fuyu-8B, a small version of the multimodal model that powers our product. The model is available on HuggingFace. We think Fuyu-8B is exciting because:
+1. It has a much simpler architecture and training procedure than other multimodal models, which makes it easier to understand, scale, and deploy.
+2. It’s designed from the ground up for digital agents, so it can support arbitrary image resolutions, answer questions about graphs and diagrams, answer UI-based questions, and do fine-grained localization on screen images.
+3. It’s fast: we can get responses for large images in less than 100 milliseconds.
+4. Despite being optimized for our use case, it performs well at standard image-understanding benchmarks such as visual question answering and natural-image captioning.
+
+Please note that **the model we have released is a base model. We expect you to need to finetune the model for specific use cases like verbose captioning or multimodal chat.** In our experience, the model responds well to few-shotting and fine-tuning for a variety of use cases.
 
 ## Model
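The note this commit removes said that running Fuyu required transformers PR #26911, which at the time had not shipped in a release. A minimal setup sketch for that situation (assuming a standard pip environment; once the PR is in a tagged release, a plain `pip install transformers` suffices):

```shell
# Install transformers from the main branch so the Fuyu support
# from https://github.com/huggingface/transformers/pull/26911 is available.
pip install git+https://github.com/huggingface/transformers.git
```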