gordonhu608 committed on
Commit
af7f305
2 Parent(s): 9d5ce89 0c94c41

Merge branch 'main' of https://huggingface.co/mlpc-lab/LoViM_Vicuna

Files changed (1)
  1. README.md  +8 -9
README.md CHANGED
@@ -1,5 +1,4 @@
 ---
-license: bsd-3-clause
 language:
 - en
 pipeline_tag: visual-question-answering
@@ -9,29 +8,29 @@ library_name: transformers
 <br>
 <br>
 
-# LoViM Model Card
+# BLIVA Model Card
 
 ## Model details
 
 **Model type:**
-LoViM is an open-source Vision-Languagde model trained by initializing from InstructBLIP and alignment with Vicuna on multimodal instruction-finetuning data.
+BLIVA is an open-source Vision-Language model trained by initializing from InstructBLIP and alignment with Vicuna on multimodal instruction-finetuning data.
 It composes of an EVA-CLIP vision encoder, a Q-Former, a projection layer and an auto-regressive language model, based on the decoder only transformer architecture.
 
 **Model date:**
-LoViM_Vicuna was trained in July 2023.
+BLIVA_Vicuna was trained in July 2023.
 
 **Paper or resources for more information:**
-https://gordonhu608.github.io/lovim/
+https://gordonhu608.github.io/bliva/
 
 **License:**
-BSD 3-Clause License
+Non-commercial bespoke license
 
 **Where to send questions or comments about the model:**
-https://github.com/mlpc-ucsd/LoViM
+https://github.com/mlpc-ucsd/BLIVA
 
 ## Intended use
 **Primary intended uses:**
-The primary use of LoViM is research on large multimodal models.
+The primary use of BLIVA is research on large multimodal models.
 
 **Primary intended users:**
 The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
@@ -46,4 +45,4 @@ For zero-shot evaluation on general image task, we selected Nocaps, Flickr30K, V
 
 For zero-shot evaluation on text-rich image OCR task, we selected ST-VQA, OCR-VQA, Text-VQA, and Doc-VQA.
 
-More detials are in our github, https://github.com/mlpc-ucsd/LoViM
+More details are in our GitHub repository: https://github.com/mlpc-ucsd/BLIVA
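
The model-type description in the card above (EVA-CLIP vision encoder, Q-Former, projection layer, decoder-only language model) maps onto a simple component composition. Purely as an illustration, here is a minimal PyTorch sketch of that wiring; all class, argument, and dimension names are hypothetical placeholders and do not reflect the actual BLIVA code, which lives at https://github.com/mlpc-ucsd/BLIVA.

```python
import torch
import torch.nn as nn


class BlivaLikeComposition(nn.Module):
    """Toy wiring of the components named in the card:
    EVA-CLIP vision encoder -> Q-Former -> projection -> decoder-only LM."""

    def __init__(self, vision_encoder: nn.Module, qformer: nn.Module,
                 language_model: nn.Module, qformer_dim: int = 768, lm_dim: int = 4096):
        super().__init__()
        self.vision_encoder = vision_encoder              # placeholder for a (typically frozen) EVA-CLIP ViT
        self.qformer = qformer                            # learned queries attend to the image features
        self.projection = nn.Linear(qformer_dim, lm_dim)  # maps query outputs into the LM embedding space
        self.language_model = language_model              # placeholder for a decoder-only LM such as Vicuna

    def forward(self, pixel_values: torch.Tensor, text_embeds: torch.Tensor):
        image_feats = self.vision_encoder(pixel_values)   # (B, num_patches, vision_dim)
        query_out = self.qformer(image_feats)             # (B, num_queries, qformer_dim)
        visual_tokens = self.projection(query_out)        # (B, num_queries, lm_dim)
        # Prepend the projected visual tokens to the text embeddings and decode auto-regressively.
        inputs_embeds = torch.cat([visual_tokens, text_embeds], dim=1)
        return self.language_model(inputs_embeds=inputs_embeds)
```

This sketch only shows how the pieces described in the card fit together; for inference or training, use the released code and weights referenced above.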