Ozhou committed on
Commit
ecc4e55
1 Parent(s): 30423d8

Upload folder using huggingface_hub

This view is limited to 50 files because it contains too many changes. See the raw diff for the complete changeset.

Files changed (50)
  1. .gitattributes +1 -0
  2. .github/ISSUE_TEMPLATE/1-usage.yaml +31 -0
  3. .github/ISSUE_TEMPLATE/2-feature-request.yaml +13 -0
  4. .github/ISSUE_TEMPLATE/3-question.yaml +13 -0
  5. .github/ISSUE_TEMPLATE/4-discussion.yaml +13 -0
  6. .gitignore +29 -0
  7. LICENSE +201 -0
  8. README.md +367 -7
  9. controller.log +67 -0
  10. docs/Customize_Component.md +20 -0
  11. docs/Data.md +29 -0
  12. docs/LLaVA_Bench.md +31 -0
  13. docs/LLaVA_from_LLaMA2.md +29 -0
  14. docs/LoRA.md +46 -0
  15. docs/MODEL_ZOO.md +136 -0
  16. docs/ScienceQA.md +53 -0
  17. gradio_web_server.log +34 -0
  18. images/demo_cli.gif +3 -0
  19. images/llava_example_cmp.png +0 -0
  20. images/llava_logo.png +0 -0
  21. images/llava_v1_5_radar.jpg +0 -0
  22. llava.egg-info/PKG-INFO +403 -0
  23. llava.egg-info/SOURCES.txt +60 -0
  24. llava.egg-info/dependency_links.txt +1 -0
  25. llava.egg-info/requires.txt +25 -0
  26. llava.egg-info/top_level.txt +2 -0
  27. llava/__init__.py +1 -0
  28. llava/__pycache__/__init__.cpython-310.pyc +0 -0
  29. llava/__pycache__/constants.cpython-310.pyc +0 -0
  30. llava/__pycache__/conversation.cpython-310.pyc +0 -0
  31. llava/__pycache__/mm_utils.cpython-310.pyc +0 -0
  32. llava/__pycache__/utils.cpython-310.pyc +0 -0
  33. llava/constants.py +12 -0
  34. llava/conversation.py +381 -0
  35. llava/eval/eval_gpt_review.py +113 -0
  36. llava/eval/eval_gpt_review_bench.py +121 -0
  37. llava/eval/eval_gpt_review_visual.py +118 -0
  38. llava/eval/eval_science_qa.py +99 -0
  39. llava/eval/eval_science_qa_gpt4.py +104 -0
  40. llava/eval/eval_science_qa_gpt4_requery.py +149 -0
  41. llava/eval/generate_webpage_data_from_table.py +111 -0
  42. llava/eval/model_qa.py +85 -0
  43. llava/eval/model_vqa.py +112 -0
  44. llava/eval/model_vqa_science.py +141 -0
  45. llava/eval/qa_baseline_gpt35.py +74 -0
  46. llava/eval/run_llava.py +97 -0
  47. llava/eval/summarize_gpt_review.py +50 -0
  48. llava/eval/table/answer/answer_alpaca-13b.jsonl +80 -0
  49. llava/eval/table/answer/answer_bard.jsonl +0 -0
  50. llava/eval/table/answer/answer_gpt35.jsonl +0 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ images/demo_cli.gif filter=lfs diff=lfs merge=lfs -text
.github/ISSUE_TEMPLATE/1-usage.yaml ADDED
@@ -0,0 +1,31 @@
name: Usage issues
description: Report issues in usage.
title: "[Usage] "
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this form. Please give as detailed a description as possible so that we can better assist with the issue :)
  - type: textarea
    id: what-happened
    attributes:
      label: Describe the issue
      description: Please give as detailed a description as possible so that we can better assist with the issue. Please paste the **FULL** error log here, so that we can better understand the issue. Wrap the log with ``` for better readability in GitHub.
      placeholder: Issue
      value: |
        Issue:

        Command:
        ```
        PASTE THE COMMANDS HERE.
        ```

        Log:
        ```
        PASTE THE LOGS HERE.
        ```

        Screenshots:
        You may attach screenshots if it better explains the issue.
    validations:
      required: true
.github/ISSUE_TEMPLATE/2-feature-request.yaml ADDED
@@ -0,0 +1,13 @@
name: Feature Request
description: Request for a new feature
title: "[Feature request] "
body:
  - type: markdown
    attributes:
      value: |
        Thanks for your interest in our work. Please share your thoughts on the new feature below.
  - type: textarea
    id: feature
    attributes:
      label: feature
      placeholder: Start your thoughts here...
.github/ISSUE_TEMPLATE/3-question.yaml ADDED
@@ -0,0 +1,13 @@
name: Questions
description: General questions about the work
title: "[Question] "
body:
  - type: markdown
    attributes:
      value: |
        Thanks for your interest in our work. For this type of question, it may be more suitable to go to the [Discussions](https://github.com/haotian-liu/LLaVA/discussions) section. If you believe an issue would be better for your request, please continue your post below :)
  - type: textarea
    id: question
    attributes:
      label: Question
      placeholder: Start question here...
.github/ISSUE_TEMPLATE/4-discussion.yaml ADDED
@@ -0,0 +1,13 @@
name: Discussions
description: General discussions about the work
title: "[Discussion] "
body:
  - type: markdown
    attributes:
      value: |
        Thanks for your interest in our work. For this type of topic, it may be more suitable to go to the [Discussions](https://github.com/haotian-liu/LLaVA/discussions) section. If you believe an issue would be better for your request, please continue your post below :)
  - type: textarea
    id: discussion
    attributes:
      label: Discussion
      placeholder: Start discussion here...
.gitignore ADDED
@@ -0,0 +1,29 @@
# Python
__pycache__
*.pyc
*.egg-info
dist

# Log
*.log
*.log.*
*.json
*.jsonl

# Data
!**/alpaca-data-conversation.json

# Editor
.idea
*.swp

# Other
.DS_Store
wandb
output

checkpoints
ckpts*

.ipynb_checkpoints
*.ipynb
LICENSE ADDED
@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
README.md CHANGED
@@ -1,12 +1,372 @@
  ---
  title: ZAK
- emoji: 📚
- colorFrom: pink
- colorTo: purple
+ app_file: /content/LLaVA/llava/serve/gradio_web_server.py
  sdk: gradio
- sdk_version: 4.4.1
+ sdk_version: 3.35.2
- app_file: app.py
- pinned: false
  ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# 🌋 LLaVA: Large Language and Vision Assistant

*Visual instruction tuning towards large language and vision models with GPT-4 level capabilities.*

[[Project Page](https://llava-vl.github.io/)] [[Demo](https://llava.hliu.cc/)] [[Data](https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md)] [[Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)]

**Improved Baselines with Visual Instruction Tuning** [[Paper](https://arxiv.org/abs/2310.03744)] <br>
[Haotian Liu](https://hliu.cc), [Chunyuan Li](https://chunyuan.li/), [Yuheng Li](https://yuheng-li.github.io/), [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/)

**Visual Instruction Tuning** (NeurIPS 2023, **Oral**) [[Paper](https://arxiv.org/abs/2304.08485)]<br>
[Haotian Liu*](https://hliu.cc), [Chunyuan Li*](https://chunyuan.li/), [Qingyang Wu](https://scholar.google.ca/citations?user=HDiw-TsAAAAJ&hl=en/), [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/) (*Equal Contribution)

<p align="center">
  <a href="https://llava.hliu.cc/"><img src="images/llava_logo.png" width="50%"></a> <br>
  Generated by <a href="https://gligen.github.io/">GLIGEN</a> via "a cute lava llama with glasses" and box prompt
</p>


## Release
- [10/5] 🔥 LLaVA-1.5 is out! It achieves SoTA on 11 benchmarks with just simple modifications to the original LLaVA, uses only public data, completes training in ~1 day on a single 8-A100 node, and surpasses methods like Qwen-VL-Chat that use billion-scale data. Check out the [technical report](https://arxiv.org/abs/2310.03744), and explore the [demo](https://llava.hliu.cc/)! Models are available in the [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md), with training and evaluation scripts coming in the next week!
- [9/26] LLaVA is improved with reinforcement learning from human feedback (RLHF) to strengthen fact grounding and reduce hallucination. Check out the new SFT and RLHF checkpoints at the [LLaVA-RLHF](https://llava-rlhf.github.io/) project.
- [9/22] [LLaVA](https://arxiv.org/abs/2304.08485) is accepted by NeurIPS 2023 as an **oral presentation**, and [LLaVA-Med](https://arxiv.org/abs/2306.00890) is accepted by the NeurIPS 2023 Datasets and Benchmarks Track as a **spotlight presentation**.
- [9/20] We summarize our empirical study of training 33B and 65B LLaVA models in a [note](https://arxiv.org/abs/2309.09958). Further, if you are interested in a comprehensive review of the evolution and trends of multimodal foundation models, please check out our recent survey paper ["Multimodal Foundation Models: From Specialists to General-Purpose Assistants"](https://arxiv.org/abs/2309.10020).
<p align="center">
  <img src="https://github.com/Computer-Vision-in-the-Wild/CVinW_Readings/blob/main/images/mfm_evolution.jpeg?raw=true" width=50%/>
</p>

- [7/19] 🔥 We release a major upgrade, including support for LLaMA-2, LoRA training, 4-/8-bit inference, higher resolution (336x336), and a lot more. We release [LLaVA Bench](https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_Bench.md) for benchmarking open-ended visual chat with results from Bard and Bing-Chat. We also support and verify training with RTX 3090 and RTX A6000. Check out [LLaVA-from-LLaMA-2](https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_from_LLaMA2.md) and our [model zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)!
- [6/26] [CVPR 2023 Tutorial](https://vlp-tutorial.github.io/) on **Large Multimodal Models: Towards Building and Surpassing Multimodal GPT-4**! Please check out [[Slides](https://datarelease.blob.core.windows.net/tutorial/vision_foundation_models_2023/slides/Chunyuan_cvpr2023_tutorial_lmm.pdf)] [[Notes](https://arxiv.org/abs/2306.14895)] [[YouTube](https://youtu.be/mkI7EPD1vp8)] [[Bilibili](https://www.bilibili.com/video/BV1Ng4y1T7v3/)].
- [6/11] We released the preview for the most requested feature: DeepSpeed and LoRA support! Please see the documentation [here](./docs/LoRA.md).
- [6/1] We released **LLaVA-Med: Large Language and Vision Assistant for Biomedicine**, a step towards building biomedical domain large language and vision models with GPT-4 level capabilities. Check out the [paper](https://arxiv.org/abs/2306.00890) and [page](https://github.com/microsoft/LLaVA-Med).
- [5/6] We are releasing [LLaVA-Lightning-MPT-7B-preview](https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview), based on MPT-7B-Chat! See [here](#llava-mpt-7b) for more details.
- [5/2] 🔥 We are releasing LLaVA-Lightning! Train a lite multimodal GPT-4 with just $40 in 3 hours! See [here](#train-llava-lightning) for more details.
- [4/27] Thanks to the community effort, LLaVA-13B with 4-bit quantization allows you to run on a GPU with as little as 12GB of VRAM! Try it out [here](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/llava).
- [4/17] 🔥 We released **LLaVA: Large Language and Vision Assistant**. We propose visual instruction tuning, towards building large language and vision models with GPT-4 level capabilities. Check out the [paper](https://arxiv.org/abs/2304.08485) and [demo](https://llava.hliu.cc/).

<!-- <a href="https://llava.hliu.cc/"><img src="assets/demo.gif" width="70%"></a> -->

[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
**Usage and License Notices**: The data and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of LLaMA, Vicuna, and GPT-4. The dataset is CC BY NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.


## Contents
- [Install](#install)
- [LLaVA Weights](#llava-weights)
- [Demo](#demo)
- [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)
- [Dataset](https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md)
- [Train](#train)
- [Evaluation](#evaluation)

## Install

1. Clone this repository and navigate to the LLaVA folder
```bash
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
```

2. Install the package
```Shell
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
```

3. Install additional packages for training
```Shell
pip install ninja
pip install flash-attn --no-build-isolation
```

### Upgrade to the latest code base

```Shell
git pull
pip uninstall transformers
pip install -e .
```

## LLaVA Weights
Please check out our [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md) for all public LLaVA checkpoints, and the instructions on how to use the weights.

## Demo

To run our demo, you need to prepare LLaVA checkpoints locally. Please follow the instructions [here](#llava-weights) to download the checkpoints.
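
If you prefer to fetch a checkpoint onto local disk first, one possible way is via `huggingface_hub` (a minimal sketch; the repo id and target directory below are just examples, and any checkpoint from the Model Zoo works the same way):

```python
# Sketch: download a LLaVA checkpoint for local serving.
# Assumes `pip install huggingface_hub`; the local_dir path is only an example.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="liuhaotian/llava-v1.5-13b",        # any checkpoint from the Model Zoo
    local_dir="./checkpoints/llava-v1.5-13b",   # where to place the files locally
)
print(f"Checkpoint downloaded to {local_path}")
```

The downloaded folder can then be passed to `--model-path` in the commands below.
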
### Gradio Web UI

To launch a Gradio demo locally, please run the following commands one by one. If you plan to launch multiple model workers to compare different checkpoints, you only need to launch the controller and the web server *ONCE*.

#### Launch a controller
```Shell
python -m llava.serve.controller --host 0.0.0.0 --port 10000
```

#### Launch a Gradio web server
```Shell
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload
```
You just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. The list will be updated automatically when you launch a model worker.
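
If the model list still looks empty after launching a worker, you can also query the controller directly. The sketch below uses the `/refresh_all_workers` and `/list_models` endpoints that appear in the bundled `controller.log`; the exact response format may vary between versions, so it is printed as-is:

```python
# Sketch: ask the controller which model workers are currently registered.
# Assumes `pip install requests` and a controller running on localhost:10000.
import requests

controller = "http://localhost:10000"
requests.post(f"{controller}/refresh_all_workers")   # ask workers to re-register
models = requests.post(f"{controller}/list_models")  # then list what is available
print(models.status_code, models.json())             # expected to contain the model names
```
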
#### Launch a model worker

This is the actual *worker* that performs the inference on the GPU. Each worker is responsible for a single model specified in `--model-path`.

```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b
```
Wait until the process finishes loading the model and you see "Uvicorn running on ...". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.

You can launch as many workers as you want, and compare different model checkpoints in the same Gradio interface. Please keep the `--controller` the same, and modify the `--port` and `--worker` to a different port number for each worker.
```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port <different from 40000, say 40001> --worker http://localhost:<change accordingly, i.e. 40001> --model-path <ckpt2>
```

If you are using an Apple device with an M1 or M2 chip, you can specify the mps device by using the `--device` flag: `--device mps`.

#### Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)

If the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090, etc.), you may try running it with multiple GPUs. Our latest code base will automatically try to use multiple GPUs if you have more than one GPU. You can specify which GPUs to use with `CUDA_VISIBLE_DEVICES`. Below is an example of running with the first two GPUs.

```Shell
CUDA_VISIBLE_DEVICES=0,1 python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b
```

#### Launch a model worker (4-bit, 8-bit inference, quantized)

You can launch the model worker with quantized bits (4-bit, 8-bit), which allows you to run the inference with a reduced GPU memory footprint, potentially allowing you to run on a GPU with as little as 12GB of VRAM. Note that inference with quantized bits may not be as accurate as the full-precision model. Simply append `--load-4bit` or `--load-8bit` to the **model worker** command that you are executing. Below is an example of running with 4-bit quantization.

```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b --load-4bit
```

#### Launch a model worker (LoRA weights, unmerged)

You can launch the model worker with LoRA weights, without merging them with the base checkpoint, to save disk space. There is additional loading time, while the inference speed is the same as with merged checkpoints. Unmerged LoRA checkpoints do not have `lora-merge` in the model name, and are usually much smaller (less than 1GB) than the merged checkpoints (13G for 7B, and 25G for 13B).

To load unmerged LoRA weights, you simply need to pass an additional argument `--model-base`, which is the base LLM that is used to train the LoRA weights. You can check the base LLM of each LoRA checkpoint in the [model zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md).

```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1-0719-336px-lora-vicuna-13b-v1.3 --model-base lmsys/vicuna-13b-v1.3
```

### CLI Inference

Chat about images using LLaVA without the need for a Gradio interface. It also supports multiple GPUs, and 4-bit and 8-bit quantized inference. With 4-bit quantization, our LLaVA-1.5-7B uses less than 8GB of VRAM on a single GPU.

```Shell
python -m llava.serve.cli \
    --model-path liuhaotian/llava-v1.5-7b \
    --image-file "https://llava-vl.github.io/static/images/view.jpg" \
    --load-4bit
```

<img src="images/demo_cli.gif" width="70%">

## Train

LLaVA training consists of two stages: (1) feature alignment stage: use approximately 600K filtered CC3M image-text pairs to connect a *frozen pretrained* vision encoder to a *frozen LLM*; (2) visual instruction tuning stage: use 150K GPT-generated multimodal instruction-following data to teach the model to follow multimodal instructions.

LLaVA is trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the `per_device_train_batch_size` and increase the `gradient_accumulation_steps` accordingly. Always keep the global batch size the same: `per_device_train_batch_size` x `gradient_accumulation_steps` x `num_gpus`.
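
As a quick sanity check of that rule, here is a small sketch (the per-device and accumulation values are examples consistent with a 128 global batch size, not prescribed settings):

```python
# Sketch: keep the global batch size fixed when changing the number of GPUs.
def global_batch_size(per_device_train_batch_size: int,
                      gradient_accumulation_steps: int,
                      num_gpus: int) -> int:
    return per_device_train_batch_size * gradient_accumulation_steps * num_gpus

# One possible 8-GPU split that reaches the 128 global batch size used for finetuning.
assert global_batch_size(16, 1, 8) == 128
# Example: on 2 GPUs, raise gradient accumulation to compensate.
assert global_batch_size(16, 4, 2) == 128
```
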
### Hyperparameters
We use a similar set of hyperparameters as Vicuna in finetuning. The hyperparameters used in pretraining and finetuning are provided below.

1. Pretraining

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-13B | 256 | 1e-3 | 1 | 2048 | 0 |

2. Finetuning

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-13B | 128 | 2e-5 | 1 | 2048 | 0 |

### Prepare Vicuna checkpoints

Before you start, prepare our base model Vicuna, which is an instruction-tuned chatbot. Please download its weights [here](https://github.com/lm-sys/FastChat#model-weights).

Vicuna has two versions: v0 and v1; the main difference between them is the prompt format. We support both. To ensure the best performance, you need to specify the correct prompt version corresponding to the weights you download: `v0` for `v0` weights, and `v1` for all Vicuna `v1.x` models.
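
For intuition, the two prompt styles look roughly like the following (illustration only; the authoritative templates are defined in `llava/conversation.py`):

```python
# Illustration only: approximate shape of the two Vicuna prompt styles.
# These strings are paraphrased, not copied from llava/conversation.py.
v0_style = (
    "### Human: What is in this image?\n"
    "### Assistant: "
)
v1_style = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "USER: What is in this image? ASSISTANT: "
)
print(v0_style)
print(v1_style)
```
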
### Pretrain (feature alignment)

Please download the subset of the CC3M dataset we use in the paper [here](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K).

Pretraining takes around 4 hours for LLaVA-13B on 8x A100 (80G). It takes around 2 hours for 7B checkpoints.

We recommend training with DeepSpeed as it can save a lot of GPU RAM. We provide a training script with DeepSpeed [here](https://github.com/haotian-liu/LLaVA/blob/main/scripts/pretrain.sh).

You may run this with a single A100 GPU using the following command. Please note that `per_device_train_batch_size` * `gradient_accumulation_steps` should be equal to 128 to keep the global batch size the same.

<details>
<summary>Pretrain: LLaVA-13B, 1x A100 (80G). Time: ~33 hours.</summary>

```Shell
python llava/train/train_mem.py \
    --model_name_or_path ./checkpoints/vicuna-13b \
    --version [v0 or v1] \
    --data_path /path/to/cc3m_595k.json \
    --image_folder /path/to/cc3m_595k_images \
    --vision_tower openai/clip-vit-large-patch14 \
    --tune_mm_mlp_adapter True \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --bf16 True \
    --output_dir ./checkpoints/llava-13b-pretrain \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2400 \
    --save_total_limit 1 \
    --learning_rate 2e-3 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb
```
</details>


### Visual Instruction Tuning

1. Prepare data

Please download the annotations of our instruction tuning data [llava_instruct_150k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_instruct_150k.json), and download the COCO train2017 images [here](https://cocodataset.org/#download).

2. Start training!

You may download our pretrained projectors in the [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md). It is not recommended to use legacy projectors, as they may be trained with a different version of the codebase, and if any option is off, the model will not function or train as expected.

When we initially released our paper, we used a full 3-epoch schedule on the LLaVA-Instruct-158K dataset. The scripts are provided [here](https://github.com/haotian-liu/LLaVA/blob/main/scripts/finetune_full_schedule.sh).

In our later exploration, we introduced LLaVA-Lightning, as we find that a much faster 1-epoch schedule on LLaVA-Instruct-80K can achieve fast convergence and good performance. With LLaVA-Lightning, we are able to train, validate, and release LLaVA-LLaMA-2 checkpoint previews on the same day as the LLaMA-2 release. If you are interested in learning more about LLaVA-Lightning, please continue to the following section.

### Lightning

LLaVA-Lightning can be trained on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning. When using spot instances, it costs just ~$40.

For LLaVA-Lightning, we create two distilled subsets to ensure both broad concept coverage and efficiency in training. Furthermore, we only perform instruction tuning for 1 epoch, in contrast to 3 epochs in the paper. We find such a schedule is effective and can achieve fast convergence and good performance.

For pretraining, we create a concept-balanced subset of LAION-CC-SBU. It consists of 558K images. Download the data [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain/tree/main).

For instruction tuning, we create a subset of LLaVA-Instruct-150K. It consists of 80K image-instruction pairs, comprising 40K conversation and 40K complex reasoning samples, with non-overlapping images. Download `llava_instruct_80k.json` [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_instruct_80k.json).

#### Hyperparameters

1. Pretraining ([script](https://github.com/haotian-liu/LLaVA/blob/main/scripts/pretrain.sh))

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-Lightning | 128 | 2e-3 | 1 | 2048 | 0 |

2. Visual Instruction Tuning ([script](https://github.com/haotian-liu/LLaVA/blob/main/scripts/finetune.sh))

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-Lightning | 128 | 2e-5 | 1 | 2048 | 0 |

#### LLaVA-MPT-7b
Thanks to LLaVA-Lightning, we are able to train a checkpoint based on MPT-7B-Chat on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning.

**NOTE**: This is a research preview of LLaVA-Lightning based on the MPT-7B-Chat checkpoint. Usage of the model should comply with the MPT-7B-Chat license and agreements.

1. Usage

You do not need to download our checkpoint manually; it will be loaded directly from our Hugging Face model: [`liuhaotian/LLaVA-Lightning-MPT-7B-preview`](https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview).

```Shell
python -m llava.serve.controller --host 0.0.0.0 --port 10000
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/LLaVA-Lightning-MPT-7B-preview
python -m llava.serve.gradio_web_server --controller http://localhost:10000
```

2. Training

We use the same training dataset and hyperparameters as the other *Lightning* checkpoints.

## Evaluation

### GPT-assisted Evaluation

Our GPT-assisted evaluation pipeline for multimodal modeling is provided for a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details.

1. Generate LLaVA responses

```Shell
python model_vqa.py \
    --model-path ./checkpoints/LLaVA-13B-v0 \
    --question-file \
    playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
    --image-folder \
    /path/to/coco2014_val \
    --answers-file \
    /path/to/answer-file-our.jsonl
```

2. Evaluate the generated responses. In our case, [`answer-file-ref.jsonl`](./playground/data/coco2014_val_qa_eval/qa90_gpt4_answer.jsonl) is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided.

```Shell
OPENAI_API_KEY="sk-***********************************" python llava/eval/eval_gpt_review_visual.py \
    --question playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
    --context llava/eval/table/caps_boxes_coco2014_val_80.jsonl \
    --answer-list \
    /path/to/answer-file-ref.jsonl \
    /path/to/answer-file-our.jsonl \
    --rule llava/eval/table/rule.json \
    --output /path/to/review.json
```

3. Summarize the evaluation results

```Shell
python summarize_gpt_review.py
```

## ScienceQA

Please check out the documentation [here](https://github.com/haotian-liu/LLaVA/blob/main/docs/ScienceQA.md).

## Citation

If you find LLaVA useful for your research and applications, please cite using this BibTeX:
```bibtex
@misc{liu2023improvedllava,
      title={Improved Baselines with Visual Instruction Tuning},
      author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Lee, Yong Jae},
      publisher={arXiv:2310.03744},
      year={2023},
}

@misc{liu2023llava,
      title={Visual Instruction Tuning},
      author={Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
      publisher={arXiv:2304.08485},
      year={2023},
}
```

## Acknowledgement

- [Vicuna](https://github.com/lm-sys/FastChat): the codebase we built upon, and our base model Vicuna-13B, which has amazing language capabilities!

## Related Projects

- [Instruction Tuning with GPT-4](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day](https://github.com/microsoft/LLaVA-Med)
- [Otter: In-Context Multi-Modal Instruction Tuning](https://github.com/Luodian/Otter)

For future project ideas, please check out:
- [SEEM: Segment Everything Everywhere All at Once](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)
- [Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything) to detect, segment, and generate anything by marrying [Grounding DINO](https://github.com/IDEA-Research/GroundingDINO) and [Segment-Anything](https://github.com/facebookresearch/segment-anything).
controller.log ADDED
@@ -0,0 +1,67 @@
2023-11-18 09:03:25 | INFO | controller | args: Namespace(host='0.0.0.0', port=10000, dispatch_method='shortest_queue')
2023-11-18 09:03:25 | INFO | controller | Init controller
2023-11-18 09:03:25 | ERROR | stderr | INFO: Started server process [6312]
2023-11-18 09:03:25 | ERROR | stderr | INFO: Waiting for application startup.
2023-11-18 09:03:25 | ERROR | stderr | INFO: Application startup complete.
2023-11-18 09:03:25 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:10000 (Press CTRL+C to quit)
2023-11-18 09:06:26 | INFO | controller | Register a new worker: http://localhost:40000
2023-11-18 09:06:26 | INFO | controller | Register done: http://localhost:40000, {'model_names': ['llava-v1.5-7b-5GB'], 'speed': 1, 'queue_length': 0}
2023-11-18 09:06:26 | INFO | stdout | INFO: 127.0.0.1:55496 - "POST /register_worker HTTP/1.1" 200 OK
2023-11-18 09:06:41 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:06:41 | INFO | stdout | INFO: 127.0.0.1:47384 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:06:56 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:06:56 | INFO | stdout | INFO: 127.0.0.1:51760 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:07:11 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:07:11 | INFO | stdout | INFO: 127.0.0.1:60042 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:07:26 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:07:26 | INFO | stdout | INFO: 127.0.0.1:52662 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:07:41 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:07:41 | INFO | stdout | INFO: 127.0.0.1:46100 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:07:56 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:07:56 | INFO | stdout | INFO: 127.0.0.1:34094 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:08:11 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:08:11 | INFO | stdout | INFO: 127.0.0.1:50558 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:08:26 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:08:26 | INFO | stdout | INFO: 127.0.0.1:34502 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:08:41 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:08:41 | INFO | stdout | INFO: 127.0.0.1:32922 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:08:56 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:08:56 | INFO | stdout | INFO: 127.0.0.1:37724 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:09:11 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:09:11 | INFO | stdout | INFO: 127.0.0.1:53322 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:09:26 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:09:26 | INFO | stdout | INFO: 127.0.0.1:33208 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:09:41 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:09:41 | INFO | stdout | INFO: 127.0.0.1:53010 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:09:56 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:09:56 | INFO | stdout | INFO: 127.0.0.1:34478 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:10:11 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:10:11 | INFO | stdout | INFO: 127.0.0.1:52736 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:10:26 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:10:26 | INFO | stdout | INFO: 127.0.0.1:47056 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:10:41 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:10:41 | INFO | stdout | INFO: 127.0.0.1:38568 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:10:56 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:10:56 | INFO | stdout | INFO: 127.0.0.1:37650 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:11:11 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:11:11 | INFO | stdout | INFO: 127.0.0.1:33520 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:11:26 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:11:26 | INFO | stdout | INFO: 127.0.0.1:54452 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:11:34 | INFO | controller | Register a new worker: http://localhost:40000
2023-11-18 09:11:34 | INFO | controller | Register done: http://localhost:40000, {'model_names': ['llava-v1.5-7b-5GB'], 'speed': 1, 'queue_length': 0}
2023-11-18 09:11:34 | INFO | stdout | INFO: 127.0.0.1:37052 - "POST /refresh_all_workers HTTP/1.1" 200 OK
2023-11-18 09:11:34 | INFO | stdout | INFO: 127.0.0.1:37054 - "POST /list_models HTTP/1.1" 200 OK
2023-11-18 09:11:41 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:11:41 | INFO | stdout | INFO: 127.0.0.1:46636 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:11:56 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:11:56 | INFO | stdout | INFO: 127.0.0.1:41816 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:12:11 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:12:11 | INFO | stdout | INFO: 127.0.0.1:35034 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:12:26 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:12:26 | INFO | stdout | INFO: 127.0.0.1:43630 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:12:41 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-18 09:12:41 | INFO | stdout | INFO: 127.0.0.1:40032 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-18 09:12:43 | ERROR | stderr | INFO: Shutting down
2023-11-18 09:12:43 | ERROR | stderr | INFO: Waiting for application shutdown.
2023-11-18 09:12:43 | ERROR | stderr | INFO: Application shutdown complete.
2023-11-18 09:12:43 | ERROR | stderr | INFO: Finished server process [6312]
docs/Customize_Component.md ADDED
@@ -0,0 +1,20 @@
# Customize Components in LLaVA

This is an initial guide on how to replace the LLMs, visual encoders, etc. with your choice of components.

## LLM

It is quite simple to swap out LLaMA for any other LLM. You can refer to our implementation of [`llava_llama.py`](https://raw.githubusercontent.com/haotian-liu/LLaVA/main/llava/model/language_model/llava_llama.py) for an example of how to replace the LLM.

Although it may seem that it still needs ~100 lines of code, most of them are copied from the original `llama.py` from HF. The only difference is the insertion of some lines for processing the multimodal inputs.

In the `forward` function, you can see that we call `self.prepare_inputs_labels_for_multimodal` to process the multimodal inputs. This function is defined in `LlavaMetaForCausalLM`, and you just need to insert it into the `forward` function of your LLM.

In the `prepare_inputs_for_generation` function, you can see that we add `images` to the `model_inputs`. This is because we need to pass the images to the LLM during generation.

These are basically all the changes you need to make to replace the LLM.
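
Conceptually, that helper replaces the image placeholder token in the input sequence with the projected image features before the LLM sees it. The toy sketch below is not the LLaVA implementation; it only illustrates the splicing idea with plain Python values standing in for embeddings:

```python
# Toy illustration of splicing image features into the token sequence.
# NOT the LLaVA code; it only shows the idea behind prepare_inputs_labels_for_multimodal.
from typing import List

IMAGE_TOKEN = "<image>"  # placeholder token assumed for illustration

def splice_image_features(tokens: List[str], image_features: List[str]) -> List[str]:
    """Replace the image placeholder with the sequence of visual 'embeddings'."""
    out: List[str] = []
    for tok in tokens:
        if tok == IMAGE_TOKEN:
            out.extend(image_features)  # insert all visual tokens at this position
        else:
            out.append(tok)
    return out

# Example: a short prompt with one image placeholder and two visual tokens.
print(splice_image_features(["<s>", IMAGE_TOKEN, "Describe"], ["img_feat_0", "img_feat_1"]))
```
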
## Visual Encoder

You can check out [`clip_encoder.py`](https://github.com/haotian-liu/LLaVA/blob/main/llava/model/multimodal_encoder/clip_encoder.py) on how we implement the CLIP visual encoder.
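
As a rough, simplified stand-in for what that encoder does (not the actual `clip_encoder.py` code), the sketch below extracts second-to-last-layer patch features from CLIP ViT-L/14 with Hugging Face `transformers`, mirroring the `--mm_vision_select_layer -2` setting used in the training scripts; the image path is an arbitrary example:

```python
# Sketch: extract patch features from CLIP ViT-L/14, similar in spirit to
# LLaVA's visual encoder. Assumes `pip install torch transformers pillow`.
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

model_name = "openai/clip-vit-large-patch14"
processor = CLIPImageProcessor.from_pretrained(model_name)
vision_tower = CLIPVisionModel.from_pretrained(model_name)

image = Image.open("view.jpg")                       # any local image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = vision_tower(**inputs, output_hidden_states=True)

# Select the second-to-last layer and drop the [CLS] token, keeping patch tokens.
features = outputs.hidden_states[-2][:, 1:, :]
print(features.shape)  # (1, num_patches, hidden_dim), e.g. (1, 256, 1024) at 224px
```
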
docs/Data.md ADDED
@@ -0,0 +1,29 @@
## Data

| Data file name | Size |
| --- | ---: |
| [llava_instruct_150k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_instruct_150k.json) | 229 MB |
| [llava_instruct_80k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_instruct_80k.json) | 229 MB |
| [conversation_58k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/conversation_58k.json) | 126 MB |
| [detail_23k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/detail_23k.json) | 20.5 MB |
| [complex_reasoning_77k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/complex_reasoning_77k.json) | 79.6 MB |

### Pretraining Dataset
The pretraining dataset used in this release is a subset of the CC-3M dataset, filtered for a more balanced concept coverage distribution. Please see [here](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K) for a detailed description of the dataset structure and how to download the images.

If you already have the CC-3M dataset on your disk, the image names follow this format: `GCC_train_000000000.jpg`. You may edit the `image` field correspondingly if necessary.
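
For example, a small script along these lines can rewrite the `image` field to point at an existing local copy (a sketch; the directory layout and output filename are assumptions):

```python
# Sketch: point the `image` field of chat.json at an existing CC-3M image folder.
# Assumes chat.json is a list of records, each with an "image" filename such as
# "GCC_train_000000000.jpg"; adjust to your actual layout.
import json
import os

cc3m_root = "/path/to/cc3m/images"   # your local CC-3M images

with open("chat.json") as f:
    records = json.load(f)

for rec in records:
    if "image" in rec:
        rec["image"] = os.path.join(cc3m_root, os.path.basename(rec["image"]))

with open("chat_local.json", "w") as f:
    json.dump(records, f)
```
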
| Data | Chat File | Meta Data | Size |
| --- | --- | --- | ---: |
| CC-3M Concept-balanced 595K | [chat.json](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K/blob/main/chat.json) | [metadata.json](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K/blob/main/metadata.json) | 211 MB |
| LAION/CC/SBU BLIP-Caption Concept-balanced 558K | [blip_laion_cc_sbu_558k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain/blob/main/blip_laion_cc_sbu_558k.json) | [metadata.json](#) | 181 MB |

**Important notice**: Upon request from the community, as ~15% of the images in the original CC-3M dataset are no longer accessible, we upload [`images.zip`](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K/blob/main/images.zip) to better support reproducing our work in the research community. It must not be used for any other purpose. The use of these images must comply with the CC-3M license. This may be taken down at any time when requested by the original CC-3M dataset owner or the owners of the referenced images.

### GPT-4 Prompts

We provide our prompts and few-shot samples for GPT-4 queries, to better facilitate research in this domain. Please check out the [`prompts`](playground/data/prompts) folder for three kinds of questions: conversation, detail description, and complex reasoning.

They are organized in the format of `system_message.txt` for the system message, pairs of `abc_caps.txt` for few-shot sample user input, and `abc_conv.txt` for few-shot sample reference output.

Note that you may find them in different formats. For example, `conversation` is in `jsonl`, and detail description is answer-only. The selected format in our preliminary experiments works slightly better than a limited set of alternatives that we tried: `jsonl`, a more natural format, and answer-only. If interested, you may try other variants or conduct a more careful study of this. Contributions are welcome!
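
As an illustration of how these files fit together, the sketch below assembles one few-shot query; the concrete file names, subfolder, and chat-message layout are assumptions, so adapt them to the actual contents of the `prompts` folder:

```python
# Sketch: build a GPT-4 style few-shot query from the prompt files described above.
# File names and the message layout are illustrative assumptions.
from pathlib import Path

prompt_dir = Path("playground/data/prompts/complex_reasoning")   # example subfolder

system_message = (prompt_dir / "system_message.txt").read_text()
fewshot_input = (prompt_dir / "000_caps.txt").read_text()        # few-shot user input
fewshot_output = (prompt_dir / "000_conv.txt").read_text()       # few-shot reference output
new_caps = "A llama wearing glasses stands next to a volcano."   # captions to annotate

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": fewshot_input},
    {"role": "assistant", "content": fewshot_output},
    {"role": "user", "content": new_caps},
]
print(messages)
```
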
docs/LLaVA_Bench.md ADDED
@@ -0,0 +1,31 @@
1
+ # LLaVA-Bench [[Download](https://huggingface.co/datasets/liuhaotian/llava-bench-in-the-wild)]
2
+
3
+ **-Introduction-** Large commercial multimodal chatbots have been released in this week, including
4
+ - [Multimodal Bing-Chat by Microsoft](https://blogs.bing.com/search/july-2023/Bing-Chat-Enterprise-announced,-multimodal-Visual-Search-rolling-out-to-Bing-Chat) (July 18, 2023)
5
+ - [Multimodal Bard by Google](https://bard.google.com/).
6
+
7
+ These chatbots are presumably supported by proprietary large multimodal models (LMM). Compared with the open-source LMM such as LLaVA, proprietary LMM represent the scaling success upperbound of the current SoTA techniques. They share the goal of developing multimodal chatbots that follow human intents to complete various daily-life visual tasks in the wild. While it remains less unexplored how to evaluate multimodal chat ability, it provides useful feedback to study open-source LMMs against the commercial multimodal chatbots. In addition to the *LLaVA-Bench (COCO)* dataset we used to develop the early versions of LLaVA, we are releasing [*LLaVA-Bench (In-the-Wild)*](https://huggingface.co/datasets/liuhaotian/llava-bench-in-the-wild) to the community for the public use.
8
+
9
+ ## LLaVA-Bench (In-the-Wild *[Ongoing work]*)
10
+
11
+ To evaluate the model's capability in more challenging tasks and generalizability to novel domains, we collect a diverse set of 24 images with 60 questions in total, including indoor and outdoor scenes, memes, paintings, sketches, etc, and associate each image with a highly-detailed and manually-curated description and a proper selection of questions. Such design also assesses the model's robustness to different prompts. In this release, we also categorize questions into three categories: conversation (simple QA), detailed description, and complex reasoning. We continue to expand and improve the diversity of the LLaVA-Bench (In-the-Wild). We manually query Bing-Chat and Bard to get the responses.
12
+
13
+ ### Results
14
+
15
+ The score is measured by comparing against a reference answer generated by text-only GPT-4. It is generated by feeding the question, along with the ground truth image annotations as the context. A text-only GPT-4 evaluator rates both answers. We query GPT-4 by putting the reference answer first, and then the answer generated by the candidate model. We upload images at their original resolution to Bard and Bing-Chat to obtain the results.
16
+
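+ As a rough illustration of how such relative scores can be computed (a minimal sketch, assuming the GPT-4 evaluator returns a pair of 1-10 scores per question with the reference score first; this is not the exact logic of `llava/eval/summarize_gpt_review.py`):
+
+ ```python
+ # Minimal sketch: relative score = 100 * (candidate total / reference total), per category.
+ from collections import defaultdict
+
+ def relative_scores(reviews):
+     # reviews: iterable of (category, reference_score, candidate_score) tuples
+     ref_sum, cand_sum = defaultdict(float), defaultdict(float)
+     for category, ref, cand in reviews:
+         ref_sum[category] += ref
+         cand_sum[category] += cand
+     return {c: round(100.0 * cand_sum[c] / ref_sum[c], 1) for c in ref_sum}
+
+ # hypothetical scores for three questions
+ print(relative_scores([("conv", 9.0, 7.5), ("detail", 8.0, 5.5), ("reasoning", 9.0, 8.0)]))
+ ```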
17
+ | Approach | Conversation | Detail | Reasoning | Overall |
18
+ |----------------|--------------|--------|-----------|---------|
19
+ | Bard-0718 | 83.7 | 69.7 | 78.7 | 77.8 |
20
+ | Bing-Chat-0629 | 59.6 | 52.2 | 90.1 | 71.5 |
21
+ | LLaVA-13B-v1-336px-0719 (beam=1) | 64.3 | 55.9 | 81.7 | 70.1 |
22
+ | LLaVA-13B-v1-336px-0719 (beam=5) | 68.4 | 59.9 | 84.3 | 73.5 |
23
+
24
+ Note that Bard sometimes refuses to answer questions about images containing humans, and Bing-Chat blurs the human faces in the images. We also provide the benchmark score for the subset without humans.
25
+
26
+ | Approach | Conversation | Detail | Reasoning | Overall |
27
+ |----------------|--------------|--------|-----------|---------|
28
+ | Bard-0718 | 94.9 | 74.3 | 84.3 | 84.6 |
29
+ | Bing-Chat-0629 | 55.8 | 53.6 | 93.5 | 72.6 |
30
+ | LLaVA-13B-v1-336px-0719 (beam=1) | 62.2 | 56.4 | 82.2 | 70.0 |
31
+ | LLaVA-13B-v1-336px-0719 (beam=5) | 65.6 | 61.7 | 85.0 | 73.6 |
docs/LLaVA_from_LLaMA2.md ADDED
@@ -0,0 +1,29 @@
1
+ # LLaVA (based on Llama 2 LLM, Preview)
2
+
3
+ *NOTE: This is a technical preview. We are still running hyperparameter search, and will release the final model soon. If you'd like to contribute to this, please contact us.*
4
+
5
+ :llama: **-Introduction-** [Llama 2 is an open-source LLM released by Meta AI](https://about.fb.com/news/2023/07/llama-2/) today (July 18, 2023). Compared with its earlier version [Llama 1](https://ai.meta.com/blog/large-language-model-llama-meta-ai/), Llama 2 offers ***stronger language performance*** and a ***longer context window***, and, importantly, is ***commercially usable***! While Llama 2 is changing the LLM market landscape in the language space, its multimodal ability remains unknown. We quickly developed the LLaVA variant based on the latest Llama 2 checkpoints, and release it to the community for public use.
6
+
7
+ You need to apply for and download the latest Llama 2 checkpoints to start your own training (apply [here](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)).
8
+
9
+
10
+ ## Training
11
+
12
+ Please check out [`pretrain.sh`](https://github.com/haotian-liu/LLaVA/blob/main/scripts/pretrain.sh), [`finetune.sh`](https://github.com/haotian-liu/LLaVA/blob/main/scripts/finetune.sh), and [`finetune_lora.sh`](https://github.com/haotian-liu/LLaVA/blob/main/scripts/finetune_lora.sh).
13
+
14
+ ## LLaVA (based on Llama 2), What is different?
15
+
16
+ :volcano: How is the new LLaVA based on Llama 2 different from the one based on Llama 1? The differences in the training process are described below:
17
+ - **Pre-training**. The pre-trained base LLM is changed from Llama 1 to Llama 2.
18
+ - **Language instruction-tuning**. The previous LLaVA model starts with Vicuna, which is instruction-tuned on ShareGPT data from Llama 1; the new LLaVA model starts with Llama 2 Chat, which is an instruction-tuned checkpoint on dialogue data from Llama 2.
19
+ - **Multimodal instruction-tuning**. The same LLaVA-Lightning process is applied.
20
+
21
+
22
+ ### Results
23
+
24
+ - Llama 2 is better at following role-playing instructions, but fails to follow translation instructions.
25
+ - The quantitative evaluation on [LLaVA-Bench](https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_Bench.md) demonstrates on-par multimodal chat performance between the Llama 2 and Llama 1 based LLaVA models.
26
+
27
+
28
+ <img src="../images/llava_example_cmp.png" width="100%">
29
+
docs/LoRA.md ADDED
@@ -0,0 +1,46 @@
1
+ # LLaVA (LoRA, Preview)
2
+
3
+ NOTE: This is a technical preview, and is not yet ready for production use. We are still running hyperparameter search for the LoRA model, and will release the final model soon. If you'd like to contribute to this, please contact us.
4
+
5
+ You need the latest code base for LoRA support (instructions [here](https://github.com/haotian-liu/LLaVA#upgrade-to-latest-code-base)).
6
+
7
+ ## Demo (Web UI)
8
+
9
+ Please execute each of the commands below one by one (after the previous one has finished). The commands are the same as for launching other demos, except for an additional `--model-base` flag to specify the base model to use. Please make sure the base model corresponds to the LoRA checkpoint that you are using. For this technical preview, you need the Vicuna v1.1 (7B) checkpoint (if you do not have it already, follow the instructions [here](https://github.com/lm-sys/FastChat#vicuna-weights)).
10
+
11
+ #### Launch a controller
12
+ ```Shell
13
+ python -m llava.serve.controller --host 0.0.0.0 --port 10000
14
+ ```
15
+
16
+ #### Launch a gradio web server.
17
+ ```Shell
18
+ python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload
19
+ ```
20
+ You just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. It will be automatically updated when you launch a model worker.
21
+
22
+ #### Launch a model worker
23
+ ```Shell
24
+ python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-vicuna-7b-v1.1-lcs_558k-instruct_80k_3e-lora-preview-alpha --model-base /path/to/vicuna-v1.1
25
+ ```
26
+ Wait until the process finishes loading the model and you see "Uvicorn running on ...". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.
27
+
28
+ You can launch as many workers as you want, and compare between different model checkpoints in the same Gradio interface. Please keep the `--controller` the same, and modify the `--port` and `--worker` to a different port number for each worker.
29
+
30
+
31
+ ## Training
32
+
33
+ Please see sample training scripts for [LoRA](https://github.com/haotian-liu/LLaVA/blob/main/scripts/finetune_lora.sh) and [QLoRA](https://github.com/haotian-liu/LLaVA/blob/main/scripts/finetune_qlora.sh).
34
+
35
+ We provide sample DeepSpeed configs: [`zero3.json`](https://github.com/haotian-liu/LLaVA/blob/main/scripts/zero3.json) is more like PyTorch FSDP, and [`zero3_offload.json`](https://github.com/haotian-liu/LLaVA/blob/main/scripts/zero3_offload.json) can further save memory by offloading parameters to the CPU. `zero3.json` is usually faster than `zero3_offload.json` but requires more GPU memory; therefore, we recommend trying `zero3.json` first, and if you run out of GPU memory, trying `zero3_offload.json`. You can also tweak `per_device_train_batch_size` and `gradient_accumulation_steps` in the config to save memory; just make sure that the product `per_device_train_batch_size` x `gradient_accumulation_steps` remains the same so the global batch size is unchanged.
36
+
37
+ If you are having issues with the ZeRO-3 configs and you have enough VRAM, you may try [`zero2.json`](https://github.com/haotian-liu/LLaVA/blob/main/scripts/zero2.json). This consumes slightly more memory than ZeRO-3 and behaves more similarly to PyTorch FSDP, while still supporting parameter-efficient tuning.
38
+
39
+ ## Create Merged Checkpoints
40
+
41
+ ```Shell
42
+ python scripts/merge_lora_weights.py \
43
+ --model-path /path/to/lora_model \
44
+ --model-base /path/to/base_model \
45
+ --save-model-path /path/to/merge_model
46
+ ```
docs/MODEL_ZOO.md ADDED
@@ -0,0 +1,136 @@
1
+ # Model Zoo
2
+
3
+ **To use LLaVA-1.5 checkpoints, your llava package version must be newer than 1.1.0. [Instructions](https://github.com/haotian-liu/LLaVA#upgrade-to-latest-code-base) on how to upgrade.**
4
+
5
+ If you are interested in including any other details in Model Zoo, please open an issue :)
6
+
7
+ The model weights below are *merged* weights. You do not need to apply delta. The usage of LLaVA checkpoints should comply with the base LLM's model license: [Llama 2](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
8
+
9
+ ## LLaVA-v1.5
10
+
11
+ | Version | Size | Schedule | Checkpoint | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MME | MM-Bench | MM-Bench-CN | SEED | LLaVA-Bench-Wild | MM-Vet |
12
+ |----------|----------|-----------|-----------|---|---|---|---|---|---|---|---|---|---|---|---|
13
+ | LLaVA-1.5 | 7B | full_ft-1e | [liuhaotian/llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 1510.7 | 64.3 | 58.3 | 58.6 | 63.4 | 30.5 |
14
+ | LLaVA-1.5 | 13B | full_ft-1e | [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) | 80.0 | 63.3 | 53.6 | 71.6 | 61.3 | 85.9 | 1531.3 | 67.7 | 63.6 | 61.6 | 70.7 | 35.4 |
15
+ | LLaVA-1.5 | 7B | lora-1e | coming soon |
16
+ | LLaVA-1.5 | 13B | lora-1e | coming soon |
17
+
18
+ <p align="center">
19
+ <img src="../images/llava_v1_5_radar.jpg" width="500px"> <br>
20
+ LLaVA-1.5 achieves SoTA performance across 11 benchmarks.
21
+ </p>
22
+
23
+
24
+ ## LLaVA-v1
25
+
26
+ *Note: We recommend using the most capable LLaVA-v1.5 series above for the best performance.*
27
+
28
+ | Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | LLaVA-Bench-Conv | LLaVA-Bench-Detail | LLaVA-Bench-Complex | LLaVA-Bench-Overall | Download |
29
+ |----------|----------------|---------------|----------------------|-----------------|--------------------|------------------|--------------------|---------------------|---------------------|---------------------|
30
+ | Vicuna-13B-v1.3 | CLIP-L-336px | LCS-558K | 1e | LLaVA-Instruct-80K | proj-1e, lora-1e | 64.3 | 55.9 | 81.7 | 70.1 | [LoRA](https://huggingface.co/liuhaotian/llava-v1-0719-336px-lora-vicuna-13b-v1.3) [LoRA-Merged](https://huggingface.co/liuhaotian/llava-v1-0719-336px-lora-merge-vicuna-13b-v1.3) |
31
+ | LLaMA-2-13B-Chat | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | full_ft-1e | 56.7 | 58.6 | 80.0 | 67.9 | [ckpt](https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview) |
32
+ | LLaMA-2-7B-Chat | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | lora-1e | 51.2 | 58.9 | 71.6 | 62.8 | [LoRA](https://huggingface.co/liuhaotian/llava-llama-2-7b-chat-lightning-lora-preview) |
33
+
34
+
35
+ ## Projector weights
36
+
37
+ The model weights below are projector weights we have pretrained. You can use these projector weights for visual instruction tuning. We'll add more projector weights to the model zoo very soon.
38
+
39
+ **NOTE**: These projector weights are only compatible with `llava>=1.0.0`; please check out the latest code base if your local code version is below `v1.0.0`.
40
+
41
+ **NOTE**: When you use our pretrained projector for visual instruction tuning, it is very important to **use the same base LLM and vision encoder** as the ones we used for pretraining the projector. Otherwise, the performance will be very poor.
42
+
43
+ When using these projector weights to instruction-tune your LMM, please make sure that these options are correctly set as follows:
44
+
45
+ ```Shell
46
+ --mm_use_im_start_end False
47
+ --mm_use_im_patch_token False
48
+ ```
49
+
50
+ | Base LLM | Vision Encoder | Projection | Pretrain Data | Pretraining schedule | Download |
51
+ |----------|----------------|---------------|----------------------|----------|----------|
52
+ | Vicuna-13B-v1.5 | CLIP-L-336px | MLP-2x | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-v1.5-mlp2x-336px-pretrain-vicuna-13b-v1.5) |
53
+ | Vicuna-7B-v1.5 | CLIP-L-336px | MLP-2x | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-v1.5-mlp2x-336px-pretrain-vicuna-7b-v1.5) |
54
+ | LLaMA-2-13B-Chat | CLIP-L-336px | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-336px-pretrain-llama-2-13b-chat) |
55
+ | LLaMA-2-7B-Chat | CLIP-L-336px | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-336px-pretrain-llama-2-7b-chat) |
56
+ | LLaMA-2-13B-Chat | CLIP-L | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-pretrain-llama-2-13b-chat) |
57
+ | LLaMA-2-7B-Chat | CLIP-L | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-pretrain-llama-2-7b-chat) |
58
+ | Vicuna-13B-v1.3 | CLIP-L-336px | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-336px-pretrain-vicuna-13b-v1.3) |
59
+ | Vicuna-7B-v1.3 | CLIP-L-336px | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-336px-pretrain-vicuna-7b-v1.3) |
60
+ | Vicuna-13B-v1.3 | CLIP-L | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-pretrain-vicuna-13b-v1.3) |
61
+ | Vicuna-7B-v1.3 | CLIP-L | Linear | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/llava-pretrain-vicuna-7b-v1.3) |
62
+
63
+
64
+ ## Science QA Checkpoints
65
+
66
+ | Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | Download |
67
+ |----------|----------------|---------------|----------------------|-----------------|--------------------|---------------------|
68
+ | Vicuna-13B-v1.3 | CLIP-L | LCS-558K | 1e | ScienceQA | full_ft-12e | [ckpt](https://huggingface.co/liuhaotian/llava-lcs558k-scienceqa-vicuna-13b-v1.3) |
69
+
70
+
71
+ ## Legacy Models (merged weights)
72
+
73
+ The model weights below are *merged* weights. You do not need to apply delta. The usage of LLaVA checkpoints should comply with the base LLM's model license.
74
+
75
+ | Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | Download |
76
+ |----------|----------------|---------------|----------------------|-----------------|--------------------|------------------|
77
+ | MPT-7B-Chat | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | full_ft-1e | [preview](https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview) |
78
+
79
+
80
+ ## Legacy Models (delta weights)
81
+
82
+ The model weights below are *delta* weights. The usage of LLaVA checkpoints should comply with the base LLM's model license: [LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
83
+
84
+ You can add our delta to the original LLaMA weights to obtain the LLaVA weights.
85
+
86
+ Instructions:
87
+
88
+ 1. Get the original LLaMA weights in the huggingface format by following the instructions [here](https://huggingface.co/docs/transformers/main/model_doc/llama).
89
+ 2. Use the following scripts to get LLaVA weights by applying our delta. It will automatically download delta weights from our Hugging Face account. In the script below, we use the delta weights of [`liuhaotian/LLaVA-7b-delta-v0`](https://huggingface.co/liuhaotian/LLaVA-7b-delta-v0) as an example. It can be adapted for other delta weights by changing the `--delta` argument (and base/target accordingly).
90
+
91
+ ```bash
92
+ python3 -m llava.model.apply_delta \
93
+ --base /path/to/llama-7b \
94
+ --target /output/path/to/LLaVA-7B-v0 \
95
+ --delta liuhaotian/LLaVA-7b-delta-v0
96
+ ```
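+ Conceptually, applying the delta adds each delta tensor to the corresponding base LLaMA tensor, keeps the tensors that exist only in the delta (e.g., the multimodal projector), and folds the base embeddings into the overlapping rows of embeddings that were resized for new tokens. Below is a minimal conceptual sketch of that idea operating on plain state dicts; it is not a drop-in replacement for `llava.model.apply_delta`, which additionally handles the tokenizer, config, and model classes.
+
+ ```python
+ # Conceptual sketch of delta application: target = base + delta for shared tensors.
+ from typing import Dict
+ import torch
+
+ def apply_delta_state(base: Dict[str, torch.Tensor], delta: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
+     target = {}
+     for name, delta_param in delta.items():
+         base_param = base.get(name)
+         if base_param is None:
+             target[name] = delta_param.clone()           # parameter introduced by LLaVA (e.g. mm_projector)
+         elif base_param.shape == delta_param.shape:
+             target[name] = base_param + delta_param      # ordinary delta: add onto the base weight
+         else:
+             merged = delta_param.clone()                 # e.g. embeddings grown for new tokens:
+             merged[: base_param.shape[0]] += base_param  # add the base into the overlapping rows
+             target[name] = merged
+     return target
+ ```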
97
+
98
+ | Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | Download |
99
+ |----------|----------------|---------------|----------------------|-----------------|--------------------|------------------|
100
+ | Vicuna-13B-v1.1 | CLIP-L | CC-595K | 1e | LLaVA-Instruct-158K | full_ft-3e | [delta-weights](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v1-1) |
101
+ | Vicuna-7B-v1.1 | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | full_ft-1e | [delta-weights](https://huggingface.co/liuhaotian/LLaVA-Lightning-7B-delta-v1-1) |
102
+ | Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | LLaVA-Instruct-158K | full_ft-3e | [delta-weights](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0) |
103
+ | Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | ScienceQA | full_ft-12e | [delta-weights](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0-science_qa) |
104
+ | Vicuna-7B-v0 | CLIP-L | CC-595K | 1e | LLaVA-Instruct-158K | full_ft-3e | [delta-weights](https://huggingface.co/liuhaotian/LLaVA-7b-delta-v0) |
105
+
106
+
107
+
108
+ ## Legacy Projector weights
109
+
110
+ The following projector weights are deprecated, and the support for them may be removed in the future. They do not support zero-shot inference. Please use the projector weights in the [table above](#projector-weights) if possible.
111
+
112
+ **NOTE**: When you use our pretrained projector for visual instruction tuning, it is very important to **use the same base LLM and vision encoder** as the ones we used for pretraining the projector. Otherwise, the performance will be very poor.
113
+
114
+ When using these projector weights to instruction-tune your LMM, please make sure that these options are correctly set as follows:
115
+
116
+ ```Shell
117
+ --mm_use_im_start_end True
118
+ --mm_use_im_patch_token False
119
+ ```
120
+
121
+ | Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Download |
122
+ |----------|----------------|---------------|----------------------|----------|
123
+ | Vicuna-7B-v1.1 | CLIP-L | LCS-558K | 1e | [projector](https://huggingface.co/liuhaotian/LLaVA-Pretrained-Projectors/blob/main/LLaVA-7b-pretrain-projector-v1-1-LCS-558K-blip_caption.bin) |
124
+ | Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | [projector](https://huggingface.co/liuhaotian/LLaVA-Pretrained-Projectors/blob/main/LLaVA-13b-pretrain-projector-v0-CC3M-595K-original_caption.bin) |
125
+ | Vicuna-7B-v0 | CLIP-L | CC-595K | 1e | [projector](https://huggingface.co/liuhaotian/LLaVA-Pretrained-Projectors/blob/main/LLaVA-7b-pretrain-projector-v0-CC3M-595K-original_caption.bin) |
126
+
127
+ When using these projector weights to instruction-tune your LMM, please make sure that these options are correctly set as follows:
128
+
129
+ ```Shell
130
+ --mm_use_im_start_end False
131
+ --mm_use_im_patch_token False
132
+ ```
133
+
134
+ | Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Download |
135
+ |----------|----------------|---------------|----------------------|----------|
136
+ | Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | [projector](https://huggingface.co/liuhaotian/LLaVA-Pretrained-Projectors/blob/main/LLaVA-13b-pretrain-projector-v0-CC3M-595K-original_caption-no_im_token.bin) |
docs/ScienceQA.md ADDED
@@ -0,0 +1,53 @@
1
+ ### ScienceQA
2
+
3
+ #### Prepare Data
4
+ 1. Please see ScienceQA [repo](https://github.com/lupantech/ScienceQA) for setting up the dataset.
5
+ 2. Generate the ScienceQA dataset in LLaVA conversation-style format.
6
+
7
+ ```Shell
8
+ python scripts/convert_sqa_to_llava.py \
9
+ convert_to_llava \
10
+ --base-dir /path/to/ScienceQA/data/scienceqa \
11
+ --prompt-format "QCM-LEA" \
12
+ --split {train,val,minival,test,minitest}
13
+ ```
14
+
15
+ #### Training
16
+
17
+ 1. Pretraining
18
+
19
+ You can download our pretrained projector weights from our [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md), or train your own projector weights using [`pretrain.sh`](https://github.com/haotian-liu/LLaVA/blob/main/scripts/pretrain.sh).
20
+
21
+ 2. Finetuning
22
+
23
+ See [`finetune_sqa.sh`](https://github.com/haotian-liu/LLaVA/blob/main/scripts/finetune_sqa.sh).
24
+
25
+ #### Evaluation
26
+
27
+ 1. Multiple-GPU inference
28
+ You may evaluate with multiple GPUs and concatenate the generated jsonl files (a minimal concatenation sketch is given after the single-GPU instructions below). Please refer to our scripts for [batch evaluation](https://github.com/haotian-liu/LLaVA/blob/main/scripts/sqa_eval_batch.sh) and [results gathering](https://github.com/haotian-liu/LLaVA/blob/main/scripts/sqa_eval_gather.sh).
29
+
30
+ 2. Single-GPU inference
31
+
32
+ (a) Generate LLaVA responses on ScienceQA dataset
33
+
34
+ ```Shell
35
+ python -m llava.eval.model_vqa_science \
36
+ --model-path liuhaotian/llava-lcs558k-scienceqa-vicuna-13b-v1.3 \
37
+ --question-file /path/to/ScienceQA/data/scienceqa/llava_test_QCM-LEA.json \
38
+ --image-folder /path/to/ScienceQA/data/scienceqa/images/test \
39
+ --answers-file vqa/results/ScienceQA/test_llava-13b.jsonl \
40
+ --conv-mode llava_v1
41
+ ```
42
+
43
+ (b) Evaluate the generated responses
44
+
45
+ ```Shell
46
+ python eval_science_qa.py \
47
+ --base-dir /path/to/ScienceQA/data/scienceqa \
48
+ --result-file vqa/results/ScienceQA/test_llava-13b.jsonl \
49
+ --output-file vqa/results/ScienceQA/test_llava-13b_output.json \
50
+ --output-result vqa/results/ScienceQA/test_llava-13b_result.json
51
+ ```
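+ For multi-GPU inference (step 1 above), the per-GPU answer files need to be concatenated into a single jsonl before evaluation. A minimal sketch of that concatenation is shown below; the `*_chunk*.jsonl` naming is hypothetical, so adapt the glob pattern to however your batch-evaluation run names its outputs.
+
+ ```python
+ # Minimal sketch: merge per-GPU jsonl answer files into one file for eval_science_qa.py.
+ import glob
+
+ def merge_jsonl(pattern, output_path):
+     with open(output_path, "w") as out:
+         for path in sorted(glob.glob(pattern)):
+             with open(path) as f:
+                 for line in f:
+                     if line.strip():                     # skip empty lines
+                         out.write(line.rstrip("\n") + "\n")
+
+ # hypothetical chunk naming; adjust to your own output files
+ merge_jsonl("vqa/results/ScienceQA/test_llava-13b_chunk*.jsonl",
+             "vqa/results/ScienceQA/test_llava-13b.jsonl")
+ ```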
52
+
53
+ For reference, we attach our prediction file [`test_sqa_llava_lcs_558k_sqa_12e_vicuna_v1_3_13b.json`](https://github.com/haotian-liu/LLaVA/blob/main/llava/eval/table/results/test_sqa_llava_lcs_558k_sqa_12e_vicuna_v1_3_13b.json) and [`test_sqa_llava_13b_v0.json`](https://github.com/haotian-liu/LLaVA/blob/main/llava/eval/table/results/test_sqa_llava_13b_v0.json) for comparison when reproducing our results, as well as for further analysis in detail.
gradio_web_server.log ADDED
@@ -0,0 +1,34 @@
1
+ 2023-11-18 09:11:34 | INFO | gradio_web_server | args: Namespace(host='0.0.0.0', port=None, controller_url='http://localhost:10000', concurrency_count=10, model_list_mode='reload', share=True, moderate=False, embed=False)
2
+ 2023-11-18 09:11:34 | INFO | gradio_web_server | Models: ['llava-v1.5-7b-5GB']
3
+ 2023-11-18 09:11:34 | INFO | gradio_web_server | Namespace(host='0.0.0.0', port=None, controller_url='http://localhost:10000', concurrency_count=10, model_list_mode='reload', share=True, moderate=False, embed=False)
4
+ 2023-11-18 09:11:35 | INFO | stdout | Running on local URL: http://0.0.0.0:7860
5
+ 2023-11-18 09:11:44 | INFO | stdout | Running on public URL: https://e759bfbc461ab6868f.gradio.live
6
+ 2023-11-18 09:11:44 | INFO | stdout |
7
+ 2023-11-18 09:11:44 | INFO | stdout | This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
8
+ 2023-11-18 09:12:43 | INFO | stdout | Keyboard interruption in main thread... closing server.
9
+ 2023-11-18 09:12:43 | ERROR | stderr | Traceback (most recent call last):
10
+ 2023-11-18 09:12:43 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 2058, in block_thread
11
+ 2023-11-18 09:12:43 | ERROR | stderr | time.sleep(0.1)
12
+ 2023-11-18 09:12:43 | ERROR | stderr | KeyboardInterrupt
13
+ 2023-11-18 09:12:43 | ERROR | stderr |
14
+ 2023-11-18 09:12:43 | ERROR | stderr | During handling of the above exception, another exception occurred:
15
+ 2023-11-18 09:12:43 | ERROR | stderr |
16
+ 2023-11-18 09:12:43 | ERROR | stderr | Traceback (most recent call last):
17
+ 2023-11-18 09:12:43 | ERROR | stderr | File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
18
+ 2023-11-18 09:12:43 | ERROR | stderr | return _run_code(code, main_globals, None,
19
+ 2023-11-18 09:12:43 | ERROR | stderr | File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
20
+ 2023-11-18 09:12:43 | ERROR | stderr | exec(code, run_globals)
21
+ 2023-11-18 09:12:43 | ERROR | stderr | File "/content/LLaVA/llava/serve/gradio_web_server.py", line 413, in <module>
22
+ 2023-11-18 09:12:43 | ERROR | stderr | demo.queue(
23
+ 2023-11-18 09:12:43 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1975, in launch
24
+ 2023-11-18 09:12:43 | ERROR | stderr | self.block_thread()
25
+ 2023-11-18 09:12:43 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 2061, in block_thread
26
+ 2023-11-18 09:12:43 | ERROR | stderr | self.server.close()
27
+ 2023-11-18 09:12:43 | ERROR | stderr | File "/usr/local/lib/python3.10/dist-packages/gradio/networking.py", line 43, in close
28
+ 2023-11-18 09:12:43 | ERROR | stderr | self.thread.join()
29
+ 2023-11-18 09:12:43 | ERROR | stderr | File "/usr/lib/python3.10/threading.py", line 1096, in join
30
+ 2023-11-18 09:12:43 | ERROR | stderr | self._wait_for_tstate_lock()
31
+ 2023-11-18 09:12:43 | ERROR | stderr | File "/usr/lib/python3.10/threading.py", line 1116, in _wait_for_tstate_lock
32
+ 2023-11-18 09:12:43 | ERROR | stderr | if lock.acquire(block, timeout):
33
+ 2023-11-18 09:12:43 | ERROR | stderr | KeyboardInterrupt
34
+ 2023-11-18 09:12:43 | INFO | stdout | Killing tunnel 0.0.0.0:7860 <> https://e759bfbc461ab6868f.gradio.live
images/demo_cli.gif ADDED

Git LFS Details

  • SHA256: 09227563f4fe04f077587eeb7b7c33ace2fbb8830e6cc9cfce03a25a57c43bfe
  • Pointer size: 133 Bytes
  • Size of remote file: 10 MB
images/llava_example_cmp.png ADDED
images/llava_logo.png ADDED
images/llava_v1_5_radar.jpg ADDED
llava.egg-info/PKG-INFO ADDED
@@ -0,0 +1,403 @@
1
+ Metadata-Version: 2.1
2
+ Name: llava
3
+ Version: 1.1.0
4
+ Summary: Towards GPT-4 like large language and visual assistant.
5
+ Project-URL: Homepage, https://llava-vl.github.io
6
+ Project-URL: Bug Tracker, https://github.com/haotian-liu/LLaVA/issues
7
+ Classifier: Programming Language :: Python :: 3
8
+ Classifier: License :: OSI Approved :: Apache Software License
9
+ Requires-Python: >=3.8
10
+ Description-Content-Type: text/markdown
11
+ License-File: LICENSE
12
+ Requires-Dist: einops
13
+ Requires-Dist: fastapi
14
+ Requires-Dist: gradio==3.35.2
15
+ Requires-Dist: markdown2[all]
16
+ Requires-Dist: numpy
17
+ Requires-Dist: requests
18
+ Requires-Dist: sentencepiece
19
+ Requires-Dist: tokenizers>=0.12.1
20
+ Requires-Dist: torch
21
+ Requires-Dist: torchvision
22
+ Requires-Dist: uvicorn
23
+ Requires-Dist: wandb
24
+ Requires-Dist: shortuuid
25
+ Requires-Dist: httpx==0.24.0
26
+ Requires-Dist: deepspeed==0.9.5
27
+ Requires-Dist: peft==0.4.0
28
+ Requires-Dist: transformers==4.31.0
29
+ Requires-Dist: accelerate==0.21.0
30
+ Requires-Dist: bitsandbytes==0.41.0
31
+ Requires-Dist: scikit-learn==1.2.2
32
+ Requires-Dist: sentencepiece==0.1.99
33
+ Requires-Dist: einops==0.6.1
34
+ Requires-Dist: einops-exts==0.0.4
35
+ Requires-Dist: timm==0.6.13
36
+ Requires-Dist: gradio_client==0.2.9
37
+
38
+ # 🌋 LLaVA: Large Language and Vision Assistant
39
+
40
+ *Visual instruction tuning towards large language and vision models with GPT-4 level capabilities.*
41
+
42
+ [[Project Page](https://llava-vl.github.io/)] [[Demo](https://llava.hliu.cc/)] [[Data](https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md)] [[Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)]
43
+
44
+ **Improved Baselines with Visual Instruction Tuning** [[Paper](https://arxiv.org/abs/2310.03744)] <br>
45
+ [Haotian Liu](https://hliu.cc), [Chunyuan Li](https://chunyuan.li/), [Yuheng Li](https://yuheng-li.github.io/), [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/)
46
+
47
+ **Visual Instruction Tuning** (NeurIPS 2023, **Oral**) [[Paper](https://arxiv.org/abs/2304.08485)]<br>
48
+ [Haotian Liu*](https://hliu.cc), [Chunyuan Li*](https://chunyuan.li/), [Qingyang Wu](https://scholar.google.ca/citations?user=HDiw-TsAAAAJ&hl=en/), [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/) (*Equal Contribution)
49
+
50
+ <p align="center">
51
+ <a href="https://llava.hliu.cc/"><img src="images/llava_logo.png" width="50%"></a> <br>
52
+ Generated by <a href="https://gligen.github.io/">GLIGEN</a> via "a cute lava llama with glasses" and box prompt
53
+ </p>
54
+
55
+
56
+ ## Release
57
+ - [10/5] 🔥 LLaVA-1.5 is out! It achieves SoTA on 11 benchmarks with just simple modifications to the original LLaVA, utilizes all public data, completes training in ~1 day on a single 8-A100 node, and surpasses methods like Qwen-VL-Chat that use billion-scale data. Check out the [technical report](https://arxiv.org/abs/2310.03744) and explore the [demo](https://llava.hliu.cc/)! Models are available in the [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md), with training and evaluation scripts coming in the next week!
58
+ - [9/26] LLaVA is improved with reinforcement learning from human feedback (RLHF) for better fact grounding and reduced hallucination. Check out the new SFT and RLHF checkpoints at the [[LLaVA-RLHF]](https://llava-rlhf.github.io/) project.
59
+ - [9/22] [LLaVA](https://arxiv.org/abs/2304.08485) is accepted by NeurIPS 2023 as an **oral presentation**, and [LLaVA-Med](https://arxiv.org/abs/2306.00890) is accepted by the NeurIPS 2023 Datasets and Benchmarks Track as a **spotlight presentation**.
60
+ - [9/20] We summarize our empirical study of training 33B and 65B LLaVA models in a [note](https://arxiv.org/abs/2309.09958). Further, if you are interested in a comprehensive review of the evolution and trends of multimodal foundation models, please check out our recent survey paper ["Multimodal Foundation Models: From Specialists to General-Purpose Assistants"](https://arxiv.org/abs/2309.10020).
61
+ <p align="center">
62
+ <img src="https://github.com/Computer-Vision-in-the-Wild/CVinW_Readings/blob/main/images/mfm_evolution.jpeg?raw=true" width=50%/>
63
+ </p>
64
+
65
+ - [7/19] 🔥 We release a major upgrade, including support for LLaMA-2, LoRA training, 4-/8-bit inference, higher resolution (336x336), and a lot more. We release [LLaVA Bench](https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_Bench.md) for benchmarking open-ended visual chat with results from Bard and Bing-Chat. We also support and verify training with RTX 3090 and RTX A6000. Check out [LLaVA-from-LLaMA-2](https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_from_LLaMA2.md), and our [model zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)!
66
+ - [6/26] [CVPR 2023 Tutorial](https://vlp-tutorial.github.io/) on **Large Multimodal Models: Towards Building and Surpassing Multimodal GPT-4**! Please check out [[Slides](https://datarelease.blob.core.windows.net/tutorial/vision_foundation_models_2023/slides/Chunyuan_cvpr2023_tutorial_lmm.pdf)] [[Notes](https://arxiv.org/abs/2306.14895)] [[YouTube](https://youtu.be/mkI7EPD1vp8)] [[Bilibili](https://www.bilibili.com/video/BV1Ng4y1T7v3/)].
67
+ - [6/11] We released the preview for the most requested feature: DeepSpeed and LoRA support! Please see the documentation [here](./docs/LoRA.md).
68
+ - [6/1] We released **LLaVA-Med: Large Language and Vision Assistant for Biomedicine**, a step towards building biomedical-domain large language and vision models with GPT-4 level capabilities. Check out the [paper](https://arxiv.org/abs/2306.00890) and [page](https://github.com/microsoft/LLaVA-Med).
69
+ - [5/6] We are releasing [LLaVA-Lightning-MPT-7B-preview](https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview), based on MPT-7B-Chat! See [here](#LLaVA-MPT-7b) for more details.
70
+ - [5/2] 🔥 We are releasing LLaVA-Lightning! Train a lite, multimodal GPT-4 with just $40 in 3 hours! See [here](#train-llava-lightning) for more details.
71
+ - [4/27] Thanks to the community effort, LLaVA-13B with 4-bit quantization allows you to run on a GPU with as few as 12GB VRAM! Try it out [here](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/llava).
72
+ - [4/17] 🔥 We released **LLaVA: Large Language and Vision Assistant**. We propose visual instruction tuning, towards building large language and vision models with GPT-4 level capabilities. Check out the [paper](https://arxiv.org/abs/2304.08485) and [demo](https://llava.hliu.cc/).
73
+
74
+ <!-- <a href="https://llava.hliu.cc/"><img src="assets/demo.gif" width="70%"></a> -->
75
+
76
+ [![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
77
+ [![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
78
+ **Usage and License Notices**: The data and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of LLaMA, Vicuna, and GPT-4. The dataset is CC BY NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.
79
+
80
+
81
+ ## Contents
82
+ - [Install](#install)
83
+ - [LLaVA Weights](#llava-weights)
84
+ - [Demo](#Demo)
85
+ - [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)
86
+ - [Dataset](https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md)
87
+ - [Train](#train)
88
+ - [Evaluation](#evaluation)
89
+
90
+ ## Install
91
+
92
+ 1. Clone this repository and navigate to LLaVA folder
93
+ ```bash
94
+ git clone https://github.com/haotian-liu/LLaVA.git
95
+ cd LLaVA
96
+ ```
97
+
98
+ 2. Install Package
99
+ ```Shell
100
+ conda create -n llava python=3.10 -y
101
+ conda activate llava
102
+ pip install --upgrade pip # enable PEP 660 support
103
+ pip install -e .
104
+ ```
105
+
106
+ 3. Install additional packages for training cases
107
+ ```
108
+ pip install ninja
109
+ pip install flash-attn --no-build-isolation
110
+ ```
111
+
112
+ ### Upgrade to latest code base
113
+
114
+ ```Shell
115
+ git pull
116
+ pip uninstall transformers
117
+ pip install -e .
118
+ ```
119
+
120
+ ## LLaVA Weights
121
+ Please check out our [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md) for all public LLaVA checkpoints, and the instructions of how to use the weights.
122
+
123
+ ## Demo
124
+
125
+ To run our demo, you need to prepare LLaVA checkpoints locally. Please follow the instructions [here](#llava-weights) to download the checkpoints.
126
+
127
+ ### Gradio Web UI
128
+
129
+ To launch a Gradio demo locally, please run the following commands one by one. If you plan to launch multiple model workers to compare between different checkpoints, you only need to launch the controller and the web server *ONCE*.
130
+
131
+ #### Launch a controller
132
+ ```Shell
133
+ python -m llava.serve.controller --host 0.0.0.0 --port 10000
134
+ ```
135
+
136
+ #### Launch a gradio web server.
137
+ ```Shell
138
+ python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload
139
+ ```
140
+ You just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. It will be automatically updated when you launch a model worker.
141
+
142
+ #### Launch a model worker
143
+
144
+ This is the actual *worker* that performs the inference on the GPU. Each worker is responsible for a single model specified in `--model-path`.
145
+
146
+ ```Shell
147
+ python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b
148
+ ```
149
+ Wait until the process finishes loading the model and you see "Uvicorn running on ...". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.
150
+
151
+ You can launch as many workers as you want, and compare between different model checkpoints in the same Gradio interface. Please keep the `--controller` the same, and modify the `--port` and `--worker` to a different port number for each worker.
152
+ ```Shell
153
+ python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port <different from 40000, say 40001> --worker http://localhost:<change accordingly, i.e. 40001> --model-path <ckpt2>
154
+ ```
155
+
156
+ If you are using an Apple device with an M1 or M2 chip, you can specify the mps device by using the `--device` flag: `--device mps`.
157
+
158
+ #### Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)
159
+
160
+ If the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090, etc.), you may try running it with multiple GPUs. Our latest code base will automatically try to use multiple GPUs if you have more than one GPU. You can specify which GPUs to use with `CUDA_VISIBLE_DEVICES`. Below is an example of running with the first two GPUs.
161
+
162
+ ```Shell
163
+ CUDA_VISIBLE_DEVICES=0,1 python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b
164
+ ```
165
+
166
+ #### Launch a model worker (4-bit, 8-bit inference, quantized)
167
+
168
+ You can launch the model worker with quantized bits (4-bit, 8-bit), which allows you to run the inference with reduced GPU memory footprint, potentially allowing you to run on a GPU with as few as 12GB VRAM. Note that inference with quantized bits may not be as accurate as the full-precision model. Simply append `--load-4bit` or `--load-8bit` to the **model worker** command that you are executing. Below is an example of running with 4-bit quantization.
169
+
170
+ ```Shell
171
+ python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b --load-4bit
172
+ ```
173
+
174
+ #### Launch a model worker (LoRA weights, unmerged)
175
+
176
+ You can launch the model worker with LoRA weights, without merging them with the base checkpoint, to save disk space. There will be additional loading time, while the inference speed is the same as the merged checkpoints. Unmerged LoRA checkpoints do not have `lora-merge` in the model name, and are usually much smaller (less than 1GB) than the merged checkpoints (13G for 7B, and 25G for 13B).
177
+
178
+ To load unmerged LoRA weights, you simply need to pass an additional argument `--model-base`, which is the base LLM that is used to train the LoRA weights. You can check the base LLM of each LoRA weights in the [model zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md).
179
+
180
+ ```Shell
181
+ python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1-0719-336px-lora-vicuna-13b-v1.3 --model-base lmsys/vicuna-13b-v1.3
182
+ ```
183
+
184
+ ### CLI Inference
185
+
186
+ Chat about images using LLaVA without the need for a Gradio interface. It also supports multiple GPUs, as well as 4-bit and 8-bit quantized inference. With 4-bit quantization, our LLaVA-1.5-7B uses less than 8GB of VRAM on a single GPU.
187
+
188
+ ```Shell
189
+ python -m llava.serve.cli \
190
+ --model-path liuhaotian/llava-v1.5-7b \
191
+ --image-file "https://llava-vl.github.io/static/images/view.jpg" \
192
+ --load-4bit
193
+ ```
194
+
195
+ <img src="images/demo_cli.gif" width="70%">
196
+
197
+ ## Train
198
+
199
+ LLaVA training consists of two stages: (1) feature alignment stage: use approximately 600K filtered CC3M image-text pairs to connect a *frozen pretrained* vision encoder to a *frozen LLM*; (2) visual instruction tuning stage: use 150K GPT-generated multimodal instruction-following data to teach the model to follow multimodal instructions.
200
+
201
+ LLaVA is trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the `per_device_train_batch_size` and increase the `gradient_accumulation_steps` accordingly. Always keep the global batch size the same: `per_device_train_batch_size` x `gradient_accumulation_steps` x `num_gpus`.
202
+
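+ For example, a quick way to sanity-check the accumulation steps for a smaller setup (a minimal sketch; the helper function is hypothetical):
+
+ ```python
+ # Keep the global batch size fixed:
+ # global_batch = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
+ def gradient_accumulation_steps(global_batch, per_device_batch, num_gpus):
+     assert global_batch % (per_device_batch * num_gpus) == 0, "pick a per-device batch size that divides evenly"
+     return global_batch // (per_device_batch * num_gpus)
+
+ # e.g. the 128 global batch used for finetuning, on 4 GPUs with a per-device batch of 8:
+ print(gradient_accumulation_steps(128, 8, 4))  # -> 4
+ ```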
203
+ ### Hyperparameters
204
+ We use a similar set of hyperparameters as Vicuna in finetuning. The hyperparameters used in both pretraining and finetuning are provided below.
205
+
206
+ 1. Pretraining
207
+
208
+ | Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
209
+ | --- | ---: | ---: | ---: | ---: | ---: |
210
+ | LLaVA-13B | 256 | 1e-3 | 1 | 2048 | 0 |
211
+
212
+ 2. Finetuning
213
+
214
+ | Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
215
+ | --- | ---: | ---: | ---: | ---: | ---: |
216
+ | LLaVA-13B | 128 | 2e-5 | 1 | 2048 | 0 |
217
+
218
+ ### Prepare Vicuna checkpoints
219
+
220
+ Before you start, prepare our base model Vicuna, which is an instruction-tuned chatbot. Please download its weights [here](https://github.com/lm-sys/FastChat#model-weights).
221
+
222
+ Vicuna has two versions, v0 and v1; the main difference between them is the prompt format. We support both. To ensure the best performance, you need to specify the correct prompt version corresponding to the weights you download: `v0` for `v0` weights, and `v1` for all Vicuna `v1.x` models.
223
+
224
+ ### Pretrain (feature alignment)
225
+
226
+ Please download the subset of the CC3M dataset we use in the paper [here](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K).
227
+
228
+ Pretraining takes around 4 hours for LLaVA-13B on 8x A100 (80G), and around 2 hours for the 7B checkpoints.
229
+
230
+ We recommend training with DeepSpeed as it can save a lot of GPU RAM. We provide a DeepSpeed training script [here](https://github.com/haotian-liu/LLaVA/blob/main/scripts/pretrain.sh).
231
+
232
+ You may run this with a single A100 GPU with the following code. Please note that the `per_device_train_batch_size` * `gradient_accumulation_steps` should be equal to 128 to keep the global batch size the same.
233
+
234
+ <details>
235
+ <summary>Pretrain: LLaVA-13B, 1x A100 (80G). Time: ~33 hours.</summary>
236
+
237
+ ```Shell
238
+ python llava/train/train_mem.py \
239
+ --model_name_or_path ./checkpoints/vicuna-13b \
240
+ --version [v0 or v1] \
241
+ --data_path /path/to/cc3m_595k.json \
242
+ --image_folder /path/to/cc3m_595k_images \
243
+ --vision_tower openai/clip-vit-large-patch14 \
244
+ --tune_mm_mlp_adapter True \
245
+ --mm_vision_select_layer -2 \
246
+ --mm_use_im_start_end False \
247
+ --mm_use_im_patch_token False \
248
+ --bf16 True \
249
+ --output_dir ./checkpoints/llava-13b-pretrain \
250
+ --num_train_epochs 1 \
251
+ --per_device_train_batch_size 16 \
252
+ --per_device_eval_batch_size 4 \
253
+ --gradient_accumulation_steps 8 \
254
+ --evaluation_strategy "no" \
255
+ --save_strategy "steps" \
256
+ --save_steps 2400 \
257
+ --save_total_limit 1 \
258
+ --learning_rate 2e-3 \
259
+ --weight_decay 0. \
260
+ --warmup_ratio 0.03 \
261
+ --lr_scheduler_type "cosine" \
262
+ --logging_steps 1 \
263
+ --tf32 True \
264
+ --model_max_length 2048 \
265
+ --gradient_checkpointing True \
266
+ --lazy_preprocess True \
267
+ --report_to wandb
268
+ ```
269
+ </details>
270
+
271
+
272
+ ### Visual Instruction Tuning
273
+
274
+ 1. Prepare data
275
+
276
+ Please download the annotation of our instruction tuning data [llava_instruct_158k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_instruct_150k.json), and download the COCO train2017 images [here](https://cocodataset.org/#download).
277
+
278
+ 2. Start training!
279
+
280
+ You may download our pretrained projectors in [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md). It is not recommended to use legacy projectors, as they may be trained with a different version of the codebase, and if any option is off, the model will not function/train as we expected.
281
+
282
+ When we initially released our paper, we used a full 3-epoch schedule on the LLaVA-Instruct-158K dataset. The scripts are provided [here](https://github.com/haotian-liu/LLaVA/blob/main/scripts/finetune_full_schedule.sh).
283
+
284
+ In our later exploration, we introduced LLaVA-Lightning, as we find that a much faster 1-epoch schedule on LLaVA-Instruct-80K can achieve fast convergence and good performance. With LLaVA-Lightning, we were able to train, validate, and release preview LLaVA-LLaMA-2 checkpoints on the same day as the LLaMA-2 release. If you are interested in learning more about LLaVA-Lightning, please continue to the following section.
285
+
286
+ ### Lightning
287
+
288
+ LLaVA-Lightning can be trained on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning. When using spot instances, it costs just ~$40.
289
+
290
+ For LLaVA-Lightning, we create two distilled subsets to ensure both broad concept coverage and training efficiency. Furthermore, we only perform instruction tuning for 1 epoch, in contrast to 3 epochs in the paper. We find this schedule is effective and can achieve fast convergence and good performance.
291
+
292
+ For pretraining, we create a concept-balanced subset of LAION-CC-SBU. It consists of 558K images. Download data [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain/tree/main).
293
+
294
+ For instruction tuning, we create a subset of LLaVA-Instruct-150K. It consists of 80K image-instruction pairs, comprising 40K conversation and 40K complex reasoning samples, with non-overlapping images. Download `llava_instruct_80k.json` [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_instruct_80k.json).
295
+
296
+ #### Hyperparameters
297
+
298
+ 1. Pretraining ([script](https://github.com/haotian-liu/LLaVA/blob/main/scripts/pretrain.sh))
299
+
300
+ | Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
301
+ | --- | ---: | ---: | ---: | ---: | ---: |
302
+ | LLaVA-Lightning | 128 | 2e-3 | 1 | 2048 | 0 |
303
+
304
+ 2. Visual Instruction Tuning ([script](https://github.com/haotian-liu/LLaVA/blob/main/scripts/finetune.sh))
305
+
306
+ | Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
307
+ | --- | ---: | ---: | ---: | ---: | ---: |
308
+ | LLaVA-Lightning | 128 | 2e-5 | 1 | 2048 | 0 |
309
+
310
+ #### LLaVA-MPT-7b
311
+ Thanks to LLaVA-Lightning, we are able to train a checkpoint based on MPT-7B-Chat on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning.
312
+
313
+ **NOTE**: This is a research preview of the LLaVA-Lightning based on MPT-7B-chat checkpoint. The usage of the model should comply with MPT-7B-chat license and agreements.
314
+
315
+ 1. Usage
316
+
317
+ You do not need to download our checkpoint; it will be loaded directly from our Hugging Face model: [`liuhaotian/LLaVA-Lightning-MPT-7B-preview`](https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview).
318
+
319
+ ```Shell
320
+ python -m llava.serve.controller --host 0.0.0.0 --port 10000
321
+ python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/LLaVA-Lightning-MPT-7B-preview
322
+ python -m llava.serve.gradio_web_server --controller http://localhost:10000
323
+ ```
324
+
325
+ 2. Training
326
+
327
+ We use the same training dataset and hyperparameters as the other *Lightning* checkpoints.
328
+
329
+ ## Evaluation
330
+
331
+ ### GPT-assisted Evaluation
332
+
333
+ Our GPT-assisted evaluation pipeline for multimodal modeling is provided for a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details.
334
+
335
+ 1. Generate LLaVA responses
336
+
337
+ ```Shell
338
+ python model_vqa.py \
339
+ --model-path ./checkpoints/LLaVA-13B-v0 \
340
+ --question-file \
341
+ playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
342
+ --image-folder \
343
+ /path/to/coco2014_val \
344
+ --answers-file \
345
+ /path/to/answer-file-our.jsonl
346
+ ```
347
+
348
+ 2. Evaluate the generated responses. In our case, [`answer-file-ref.jsonl`](./playground/data/coco2014_val_qa_eval/qa90_gpt4_answer.jsonl) is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided.
349
+
350
+ ```Shell
351
+ OPENAI_API_KEY="sk-***********************************" python llava/eval/eval_gpt_review_visual.py \
352
+ --question playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
353
+ --context llava/eval/table/caps_boxes_coco2014_val_80.jsonl \
354
+ --answer-list \
355
+ /path/to/answer-file-ref.jsonl \
356
+ /path/to/answer-file-our.jsonl \
357
+ --rule llava/eval/table/rule.json \
358
+ --output /path/to/review.json
359
+ ```
360
+
361
+ 3. Summarize the evaluation results
362
+
363
+ ```Shell
364
+ python summarize_gpt_review.py
365
+ ```
366
+
367
+ ## ScienceQA
368
+
369
+ Please check out the documentation [here](https://github.com/haotian-liu/LLaVA/blob/main/docs/ScienceQA.md).
370
+
371
+ ## Citation
372
+
373
+ If you find LLaVA useful for your research and applications, please cite using this BibTeX:
374
+ ```bibtex
375
+
376
+ @misc{liu2023improvedllava,
377
+ title={Improved Baselines with Visual Instruction Tuning},
378
+ author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Lee, Yong Jae},
379
+ publisher={arXiv:2310.03744},
380
+ year={2023},
381
+ }
382
+
383
+ @misc{liu2023llava,
384
+ title={Visual Instruction Tuning},
385
+ author={Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
386
+ publisher={arXiv:2304.08485},
387
+ year={2023},
388
+ }
389
+ ```
390
+
391
+ ## Acknowledgement
392
+
393
+ - [Vicuna](https://github.com/lm-sys/FastChat): the codebase we built upon, and our base model Vicuna-13B that has the amazing language capabilities!
394
+
395
+ ## Related Projects
396
+
397
+ - [Instruction Tuning with GPT-4](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
398
+ - [LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day](https://github.com/microsoft/LLaVA-Med)
399
+ - [Otter: In-Context Multi-Modal Instruction Tuning](https://github.com/Luodian/Otter)
400
+
401
+ For future project ideas, please check out:
402
+ - [SEEM: Segment Everything Everywhere All at Once](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)
403
+ - [Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything) to detect, segment, and generate anything by marrying [Grounding DINO](https://github.com/IDEA-Research/GroundingDINO) and [Segment-Anything](https://github.com/facebookresearch/segment-anything).
llava.egg-info/SOURCES.txt ADDED
@@ -0,0 +1,60 @@
1
+ LICENSE
2
+ README.md
3
+ pyproject.toml
4
+ llava/__init__.py
5
+ llava/constants.py
6
+ llava/conversation.py
7
+ llava/mm_utils.py
8
+ llava/utils.py
9
+ llava.egg-info/PKG-INFO
10
+ llava.egg-info/SOURCES.txt
11
+ llava.egg-info/dependency_links.txt
12
+ llava.egg-info/requires.txt
13
+ llava.egg-info/top_level.txt
14
+ llava/eval/eval_gpt_review.py
15
+ llava/eval/eval_gpt_review_bench.py
16
+ llava/eval/eval_gpt_review_visual.py
17
+ llava/eval/eval_science_qa.py
18
+ llava/eval/eval_science_qa_gpt4.py
19
+ llava/eval/eval_science_qa_gpt4_requery.py
20
+ llava/eval/generate_webpage_data_from_table.py
21
+ llava/eval/model_qa.py
22
+ llava/eval/model_vqa.py
23
+ llava/eval/model_vqa_science.py
24
+ llava/eval/qa_baseline_gpt35.py
25
+ llava/eval/run_llava.py
26
+ llava/eval/summarize_gpt_review.py
27
+ llava/model/__init__.py
28
+ llava/model/apply_delta.py
29
+ llava/model/builder.py
30
+ llava/model/consolidate.py
31
+ llava/model/llava_arch.py
32
+ llava/model/make_delta.py
33
+ llava/model/utils.py
34
+ llava/model/language_model/llava_llama.py
35
+ llava/model/language_model/llava_mpt.py
36
+ llava/model/language_model/mpt/adapt_tokenizer.py
37
+ llava/model/language_model/mpt/attention.py
38
+ llava/model/language_model/mpt/blocks.py
39
+ llava/model/language_model/mpt/configuration_mpt.py
40
+ llava/model/language_model/mpt/custom_embedding.py
41
+ llava/model/language_model/mpt/flash_attn_triton.py
42
+ llava/model/language_model/mpt/hf_prefixlm_converter.py
43
+ llava/model/language_model/mpt/meta_init_context.py
44
+ llava/model/language_model/mpt/modeling_mpt.py
45
+ llava/model/language_model/mpt/norm.py
46
+ llava/model/language_model/mpt/param_init_fns.py
47
+ llava/model/multimodal_encoder/builder.py
48
+ llava/model/multimodal_encoder/clip_encoder.py
49
+ llava/model/multimodal_projector/builder.py
50
+ llava/serve/__init__.py
51
+ llava/serve/cli.py
52
+ llava/serve/controller.py
53
+ llava/serve/gradio_web_server.py
54
+ llava/serve/model_worker.py
55
+ llava/serve/register_worker.py
56
+ llava/serve/test_message.py
57
+ llava/train/llama_flash_attn_monkey_patch.py
58
+ llava/train/llava_trainer.py
59
+ llava/train/train.py
60
+ llava/train/train_mem.py
llava.egg-info/dependency_links.txt ADDED
@@ -0,0 +1 @@
1
+
llava.egg-info/requires.txt ADDED
@@ -0,0 +1,25 @@
1
+ einops
2
+ fastapi
3
+ gradio==3.35.2
4
+ markdown2[all]
5
+ numpy
6
+ requests
7
+ sentencepiece
8
+ tokenizers>=0.12.1
9
+ torch
10
+ torchvision
11
+ uvicorn
12
+ wandb
13
+ shortuuid
14
+ httpx==0.24.0
15
+ deepspeed==0.9.5
16
+ peft==0.4.0
17
+ transformers==4.31.0
18
+ accelerate==0.21.0
19
+ bitsandbytes==0.41.0
20
+ scikit-learn==1.2.2
21
+ sentencepiece==0.1.99
22
+ einops==0.6.1
23
+ einops-exts==0.0.4
24
+ timm==0.6.13
25
+ gradio_client==0.2.9
llava.egg-info/top_level.txt ADDED
@@ -0,0 +1,2 @@
1
+ images
2
+ llava
llava/__init__.py ADDED
@@ -0,0 +1 @@
1
+ from .model import LlavaLlamaForCausalLM
llava/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (176 Bytes). View file
 
llava/__pycache__/constants.cpython-310.pyc ADDED
Binary file (438 Bytes). View file
 
llava/__pycache__/conversation.cpython-310.pyc ADDED
Binary file (10.3 kB). View file
 
llava/__pycache__/mm_utils.cpython-310.pyc ADDED
Binary file (4.24 kB). View file
 
llava/__pycache__/utils.cpython-310.pyc ADDED
Binary file (4 kB). View file
 
llava/constants.py ADDED
@@ -0,0 +1,12 @@
1
+ CONTROLLER_HEART_BEAT_EXPIRATION = 30
2
+ WORKER_HEART_BEAT_INTERVAL = 15
3
+
4
+ LOGDIR = "."
5
+
6
+ # Model Constants
7
+ IGNORE_INDEX = -100
8
+ IMAGE_TOKEN_INDEX = -200
9
+ DEFAULT_IMAGE_TOKEN = "<image>"
10
+ DEFAULT_IMAGE_PATCH_TOKEN = "<im_patch>"
11
+ DEFAULT_IM_START_TOKEN = "<im_start>"
12
+ DEFAULT_IM_END_TOKEN = "<im_end>"
llava/conversation.py ADDED
@@ -0,0 +1,381 @@
1
+ import dataclasses
2
+ from enum import auto, Enum
3
+ from typing import List, Tuple
4
+
5
+
6
+ class SeparatorStyle(Enum):
7
+ """Different separator style."""
8
+ SINGLE = auto()
9
+ TWO = auto()
10
+ MPT = auto()
11
+ PLAIN = auto()
12
+ LLAMA_2 = auto()
13
+
14
+
15
+ @dataclasses.dataclass
16
+ class Conversation:
17
+ """A class that keeps all conversation history."""
18
+ system: str
19
+ roles: List[str]
20
+ messages: List[List[str]]
21
+ offset: int
22
+ sep_style: SeparatorStyle = SeparatorStyle.SINGLE
23
+ sep: str = "###"
24
+ sep2: str = None
25
+ version: str = "Unknown"
26
+
27
+ skip_next: bool = False
28
+
29
+ def get_prompt(self):
30
+ messages = self.messages
31
+ if len(messages) > 0 and type(messages[0][1]) is tuple:
32
+ messages = self.messages.copy()
33
+ init_role, init_msg = messages[0].copy()
34
+ init_msg = init_msg[0].replace("<image>", "").strip()
35
+ if 'mmtag' in self.version:
36
+ messages[0] = (init_role, init_msg)
37
+ messages.insert(0, (self.roles[0], "<Image><image></Image>"))
38
+ messages.insert(1, (self.roles[1], "Received."))
39
+ else:
40
+ messages[0] = (init_role, "<image>\n" + init_msg)
41
+
42
+ if self.sep_style == SeparatorStyle.SINGLE:
43
+ ret = self.system + self.sep
44
+ for role, message in messages:
45
+ if message:
46
+ if type(message) is tuple:
47
+ message, _, _ = message
48
+ ret += role + ": " + message + self.sep
49
+ else:
50
+ ret += role + ":"
51
+ elif self.sep_style == SeparatorStyle.TWO:
52
+ seps = [self.sep, self.sep2]
53
+ ret = self.system + seps[0]
54
+ for i, (role, message) in enumerate(messages):
55
+ if message:
56
+ if type(message) is tuple:
57
+ message, _, _ = message
58
+ ret += role + ": " + message + seps[i % 2]
59
+ else:
60
+ ret += role + ":"
61
+ elif self.sep_style == SeparatorStyle.MPT:
62
+ ret = self.system + self.sep
63
+ for role, message in messages:
64
+ if message:
65
+ if type(message) is tuple:
66
+ message, _, _ = message
67
+ ret += role + message + self.sep
68
+ else:
69
+ ret += role
70
+ elif self.sep_style == SeparatorStyle.LLAMA_2:
71
+ wrap_sys = lambda msg: f"<<SYS>>\n{msg}\n<</SYS>>\n\n"
72
+ wrap_inst = lambda msg: f"[INST] {msg} [/INST]"
73
+ ret = ""
74
+
75
+ for i, (role, message) in enumerate(messages):
76
+ if i == 0:
77
+ assert message, "first message should not be none"
78
+ assert role == self.roles[0], "first message should come from user"
79
+ if message:
80
+ if type(message) is tuple:
81
+ message, _, _ = message
82
+ if i == 0: message = wrap_sys(self.system) + message
83
+ if i % 2 == 0:
84
+ message = wrap_inst(message)
85
+ ret += self.sep + message
86
+ else:
87
+ ret += " " + message + " " + self.sep2
88
+ else:
89
+ ret += ""
90
+ ret = ret.lstrip(self.sep)
91
+ elif self.sep_style == SeparatorStyle.PLAIN:
92
+ seps = [self.sep, self.sep2]
93
+ ret = self.system
94
+ for i, (role, message) in enumerate(messages):
95
+ if message:
96
+ if type(message) is tuple:
97
+ message, _, _ = message
98
+ ret += message + seps[i % 2]
99
+ else:
100
+ ret += ""
101
+ else:
102
+ raise ValueError(f"Invalid style: {self.sep_style}")
103
+
104
+ return ret
105
+
106
+ def append_message(self, role, message):
107
+ self.messages.append([role, message])
108
+
109
+ def get_images(self, return_pil=False):
110
+ images = []
111
+ for i, (role, msg) in enumerate(self.messages[self.offset:]):
112
+ if i % 2 == 0:
113
+ if type(msg) is tuple:
114
+ import base64
115
+ from io import BytesIO
116
+ from PIL import Image
117
+ msg, image, image_process_mode = msg
118
+ if image_process_mode == "Pad":
119
+ def expand2square(pil_img, background_color=(122, 116, 104)):
120
+ width, height = pil_img.size
121
+ if width == height:
122
+ return pil_img
123
+ elif width > height:
124
+ result = Image.new(pil_img.mode, (width, width), background_color)
125
+ result.paste(pil_img, (0, (width - height) // 2))
126
+ return result
127
+ else:
128
+ result = Image.new(pil_img.mode, (height, height), background_color)
129
+ result.paste(pil_img, ((height - width) // 2, 0))
130
+ return result
131
+ image = expand2square(image)
132
+ elif image_process_mode in ["Default", "Crop"]:
133
+ pass
134
+ elif image_process_mode == "Resize":
135
+ image = image.resize((336, 336))
136
+ else:
137
+ raise ValueError(f"Invalid image_process_mode: {image_process_mode}")
138
+ max_hw, min_hw = max(image.size), min(image.size)
139
+ aspect_ratio = max_hw / min_hw
140
+ max_len, min_len = 800, 400
141
+ shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw))
142
+ longest_edge = int(shortest_edge * aspect_ratio)
143
+ W, H = image.size
144
+ if longest_edge != max(image.size):
145
+ if H > W:
146
+ H, W = longest_edge, shortest_edge
147
+ else:
148
+ H, W = shortest_edge, longest_edge
149
+ image = image.resize((W, H))
150
+ if return_pil:
151
+ images.append(image)
152
+ else:
153
+ buffered = BytesIO()
154
+ image.save(buffered, format="PNG")
155
+ img_b64_str = base64.b64encode(buffered.getvalue()).decode()
156
+ images.append(img_b64_str)
157
+ return images
158
+
159
+ def to_gradio_chatbot(self):
160
+ ret = []
161
+ for i, (role, msg) in enumerate(self.messages[self.offset:]):
162
+ if i % 2 == 0:
163
+ if type(msg) is tuple:
164
+ import base64
165
+ from io import BytesIO
166
+ msg, image, image_process_mode = msg
167
+ max_hw, min_hw = max(image.size), min(image.size)
168
+ aspect_ratio = max_hw / min_hw
169
+ max_len, min_len = 800, 400
170
+ shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw))
171
+ longest_edge = int(shortest_edge * aspect_ratio)
172
+ W, H = image.size
173
+ if H > W:
174
+ H, W = longest_edge, shortest_edge
175
+ else:
176
+ H, W = shortest_edge, longest_edge
177
+ image = image.resize((W, H))
178
+ buffered = BytesIO()
179
+ image.save(buffered, format="JPEG")
180
+ img_b64_str = base64.b64encode(buffered.getvalue()).decode()
181
+ img_str = f'<img src="data:image/png;base64,{img_b64_str}" alt="user upload image" />'
182
+ msg = img_str + msg.replace('<image>', '').strip()
183
+ ret.append([msg, None])
184
+ else:
185
+ ret.append([msg, None])
186
+ else:
187
+ ret[-1][-1] = msg
188
+ return ret
189
+
190
+ def copy(self):
191
+ return Conversation(
192
+ system=self.system,
193
+ roles=self.roles,
194
+ messages=[[x, y] for x, y in self.messages],
195
+ offset=self.offset,
196
+ sep_style=self.sep_style,
197
+ sep=self.sep,
198
+ sep2=self.sep2,
199
+ version=self.version)
200
+
201
+ def dict(self):
202
+ if len(self.get_images()) > 0:
203
+ return {
204
+ "system": self.system,
205
+ "roles": self.roles,
206
+ "messages": [[x, y[0] if type(y) is tuple else y] for x, y in self.messages],
207
+ "offset": self.offset,
208
+ "sep": self.sep,
209
+ "sep2": self.sep2,
210
+ }
211
+ return {
212
+ "system": self.system,
213
+ "roles": self.roles,
214
+ "messages": self.messages,
215
+ "offset": self.offset,
216
+ "sep": self.sep,
217
+ "sep2": self.sep2,
218
+ }
219
+
220
+
221
+ conv_vicuna_v0 = Conversation(
222
+ system="A chat between a curious human and an artificial intelligence assistant. "
223
+ "The assistant gives helpful, detailed, and polite answers to the human's questions.",
224
+ roles=("Human", "Assistant"),
225
+ messages=(
226
+ ("Human", "What are the key differences between renewable and non-renewable energy sources?"),
227
+ ("Assistant",
228
+ "Renewable energy sources are those that can be replenished naturally in a relatively "
229
+ "short amount of time, such as solar, wind, hydro, geothermal, and biomass. "
230
+ "Non-renewable energy sources, on the other hand, are finite and will eventually be "
231
+ "depleted, such as coal, oil, and natural gas. Here are some key differences between "
232
+ "renewable and non-renewable energy sources:\n"
233
+ "1. Availability: Renewable energy sources are virtually inexhaustible, while non-renewable "
234
+ "energy sources are finite and will eventually run out.\n"
235
+ "2. Environmental impact: Renewable energy sources have a much lower environmental impact "
236
+ "than non-renewable sources, which can lead to air and water pollution, greenhouse gas emissions, "
237
+ "and other negative effects.\n"
238
+ "3. Cost: Renewable energy sources can be more expensive to initially set up, but they typically "
239
+ "have lower operational costs than non-renewable sources.\n"
240
+ "4. Reliability: Renewable energy sources are often more reliable and can be used in more remote "
241
+ "locations than non-renewable sources.\n"
242
+ "5. Flexibility: Renewable energy sources are often more flexible and can be adapted to different "
243
+ "situations and needs, while non-renewable sources are more rigid and inflexible.\n"
244
+ "6. Sustainability: Renewable energy sources are more sustainable over the long term, while "
245
+ "non-renewable sources are not, and their depletion can lead to economic and social instability.\n")
246
+ ),
247
+ offset=2,
248
+ sep_style=SeparatorStyle.SINGLE,
249
+ sep="###",
250
+ )
251
+
252
+ conv_vicuna_v1 = Conversation(
253
+ system="A chat between a curious user and an artificial intelligence assistant. "
254
+ "The assistant gives helpful, detailed, and polite answers to the user's questions.",
255
+ roles=("USER", "ASSISTANT"),
256
+ version="v1",
257
+ messages=(),
258
+ offset=0,
259
+ sep_style=SeparatorStyle.TWO,
260
+ sep=" ",
261
+ sep2="</s>",
262
+ )
263
+
264
+ conv_llama_2 = Conversation(
265
+ system="""You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
266
+
267
+ If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.""",
268
+ roles=("USER", "ASSISTANT"),
269
+ version="llama_v2",
270
+ messages=(),
271
+ offset=0,
272
+ sep_style=SeparatorStyle.LLAMA_2,
273
+ sep="<s>",
274
+ sep2="</s>",
275
+ )
276
+
277
+ conv_llava_llama_2 = Conversation(
278
+ system="You are a helpful language and vision assistant. "
279
+ "You are able to understand the visual content that the user provides, "
280
+ "and assist the user with a variety of tasks using natural language.",
281
+ roles=("USER", "ASSISTANT"),
282
+ version="llama_v2",
283
+ messages=(),
284
+ offset=0,
285
+ sep_style=SeparatorStyle.LLAMA_2,
286
+ sep="<s>",
287
+ sep2="</s>",
288
+ )
289
+
290
+ conv_mpt = Conversation(
291
+ system="""<|im_start|>system
292
+ A conversation between a user and an LLM-based AI assistant. The assistant gives helpful and honest answers.""",
293
+ roles=("<|im_start|>user\n", "<|im_start|>assistant\n"),
294
+ version="mpt",
295
+ messages=(),
296
+ offset=0,
297
+ sep_style=SeparatorStyle.MPT,
298
+ sep="<|im_end|>",
299
+ )
300
+
301
+ conv_llava_plain = Conversation(
302
+ system="",
303
+ roles=("", ""),
304
+ messages=(
305
+ ),
306
+ offset=0,
307
+ sep_style=SeparatorStyle.PLAIN,
308
+ sep="\n",
309
+ )
310
+
311
+ conv_llava_v0 = Conversation(
312
+ system="A chat between a curious human and an artificial intelligence assistant. "
313
+ "The assistant gives helpful, detailed, and polite answers to the human's questions.",
314
+ roles=("Human", "Assistant"),
315
+ messages=(
316
+ ),
317
+ offset=0,
318
+ sep_style=SeparatorStyle.SINGLE,
319
+ sep="###",
320
+ )
321
+
322
+ conv_llava_v0_mmtag = Conversation(
323
+ system="A chat between a curious user and an artificial intelligence assistant. "
324
+ "The assistant is able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language."
325
+ "The visual content will be provided with the following format: <Image>visual content</Image>.",
326
+ roles=("Human", "Assistant"),
327
+ messages=(
328
+ ),
329
+ offset=0,
330
+ sep_style=SeparatorStyle.SINGLE,
331
+ sep="###",
332
+ version="v0_mmtag",
333
+ )
334
+
335
+ conv_llava_v1 = Conversation(
336
+ system="A chat between a curious human and an artificial intelligence assistant. "
337
+ "The assistant gives helpful, detailed, and polite answers to the human's questions.",
338
+ roles=("USER", "ASSISTANT"),
339
+ version="v1",
340
+ messages=(),
341
+ offset=0,
342
+ sep_style=SeparatorStyle.TWO,
343
+ sep=" ",
344
+ sep2="</s>",
345
+ )
346
+
347
+ conv_llava_v1_mmtag = Conversation(
348
+ system="A chat between a curious user and an artificial intelligence assistant. "
349
+ "The assistant is able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language."
350
+ "The visual content will be provided with the following format: <Image>visual content</Image>.",
351
+ roles=("USER", "ASSISTANT"),
352
+ messages=(),
353
+ offset=0,
354
+ sep_style=SeparatorStyle.TWO,
355
+ sep=" ",
356
+ sep2="</s>",
357
+ version="v1_mmtag",
358
+ )
359
+
360
+ default_conversation = conv_vicuna_v0
361
+ conv_templates = {
362
+ "default": conv_vicuna_v0,
363
+ "v0": conv_vicuna_v0,
364
+ "v1": conv_vicuna_v1,
365
+ "vicuna_v1": conv_vicuna_v1,
366
+ "llama_2": conv_llama_2,
367
+
368
+ "plain": conv_llava_plain,
369
+ "v0_plain": conv_llava_plain,
370
+ "llava_v0": conv_llava_v0,
371
+ "v0_mmtag": conv_llava_v0_mmtag,
372
+ "llava_v1": conv_llava_v1,
373
+ "v1_mmtag": conv_llava_v1_mmtag,
374
+ "llava_llama_2": conv_llava_llama_2,
375
+
376
+ "mpt": conv_mpt,
377
+ }
378
+
379
+
380
+ if __name__ == "__main__":
381
+ print(default_conversation.get_prompt())
llava/eval/eval_gpt_review.py ADDED
@@ -0,0 +1,113 @@
1
+ import argparse
2
+ import json
3
+ import os
4
+
5
+ import openai
6
+ import tqdm
7
+ import ray
8
+ import time
9
+
10
+ NUM_SECONDS_TO_SLEEP = 3
11
+
12
+ @ray.remote(num_cpus=4)
13
+ def get_eval(content: str, max_tokens: int):
14
+ while True:
15
+ try:
16
+ response = openai.ChatCompletion.create(
17
+ model='gpt-4',
18
+ messages=[{
19
+ 'role': 'system',
20
+ 'content': 'You are a helpful and precise assistant for checking the quality of the answer.'
21
+ }, {
22
+ 'role': 'user',
23
+ 'content': content,
24
+ }],
25
+ temperature=0.2, # TODO: figure out which temperature is best for evaluation
26
+ max_tokens=max_tokens,
27
+ )
28
+ break
29
+ except openai.error.RateLimitError:
30
+ pass
31
+ except Exception as e:
32
+ print(e)
33
+ time.sleep(NUM_SECONDS_TO_SLEEP)
34
+
35
+ print('success!')
36
+ return response['choices'][0]['message']['content']
37
+
38
+
39
+ def parse_score(review):
40
+ try:
41
+ score_pair = review.split('\n')[0]
42
+ score_pair = score_pair.replace(',', ' ')
43
+ sp = score_pair.split(' ')
44
+ if len(sp) == 2:
45
+ return [float(sp[0]), float(sp[1])]
46
+ else:
47
+ print('error', review)
48
+ return [-1, -1]
49
+ except Exception as e:
50
+ print(e)
51
+ print('error', review)
52
+ return [-1, -1]
53
+
54
+
55
+ if __name__ == '__main__':
56
+ parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
57
+ parser.add_argument('-q', '--question')
58
+ # parser.add_argument('-a', '--answer')
59
+ parser.add_argument('-a', '--answer-list', nargs='+', default=[])
60
+ parser.add_argument('-r', '--rule')
61
+ parser.add_argument('-o', '--output')
62
+ parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
63
+ args = parser.parse_args()
64
+
65
+ ray.init()
66
+
67
+ f_q = open(os.path.expanduser(args.question))
68
+ f_ans1 = open(os.path.expanduser(args.answer_list[0]))
69
+ f_ans2 = open(os.path.expanduser(args.answer_list[1]))
70
+ rule_dict = json.load(open(os.path.expanduser(args.rule), 'r'))
71
+
72
+ review_file = open(f'{args.output}', 'w')
73
+
74
+ js_list = []
75
+ handles = []
76
+ idx = 0
77
+ for ques_js, ans1_js, ans2_js in zip(f_q, f_ans1, f_ans2):
78
+ # if idx == 1:
79
+ # break
80
+
81
+ ques = json.loads(ques_js)
82
+ ans1 = json.loads(ans1_js)
83
+ ans2 = json.loads(ans2_js)
84
+
85
+ category = json.loads(ques_js)['category']
86
+ if category in rule_dict:
87
+ rule = rule_dict[category]
88
+ else:
89
+ rule = rule_dict['default']
90
+ prompt = rule['prompt']
91
+ role = rule['role']
92
+ content = (f'[Question]\n{ques["text"]}\n\n'
93
+ f'[{role} 1]\n{ans1["text"]}\n\n[End of {role} 1]\n\n'
94
+ f'[{role} 2]\n{ans2["text"]}\n\n[End of {role} 2]\n\n'
95
+ f'[System]\n{prompt}\n\n')
96
+ js_list.append({
97
+ 'id': idx+1,
98
+ 'question_id': ques['question_id'],
99
+ 'answer1_id': ans1['answer_id'],
100
+ 'answer2_id': ans2['answer_id'],
101
+ 'category': category})
102
+ idx += 1
103
+ handles.append(get_eval.remote(content, args.max_tokens))
104
+ # To avoid the rate limit set by OpenAI
105
+ time.sleep(NUM_SECONDS_TO_SLEEP)
106
+
107
+ reviews = ray.get(handles)
108
+ for idx, review in enumerate(reviews):
109
+ scores = parse_score(review)
110
+ js_list[idx]['content'] = review
111
+ js_list[idx]['tuple'] = scores
112
+ review_file.write(json.dumps(js_list[idx]) + '\n')
113
+ review_file.close()
llava/eval/eval_gpt_review_bench.py ADDED
@@ -0,0 +1,121 @@
1
+ import argparse
2
+ import json
3
+ import os
4
+
5
+ import openai
6
+ import time
7
+
8
+ NUM_SECONDS_TO_SLEEP = 0.5
9
+
10
+
11
+ def get_eval(content: str, max_tokens: int):
12
+ while True:
13
+ try:
14
+ response = openai.ChatCompletion.create(
15
+ model='gpt-4-0314',
16
+ messages=[{
17
+ 'role': 'system',
18
+ 'content': 'You are a helpful and precise assistant for checking the quality of the answer.'
19
+ }, {
20
+ 'role': 'user',
21
+ 'content': content,
22
+ }],
23
+ temperature=0.2, # TODO: figure out which temperature is best for evaluation
24
+ max_tokens=max_tokens,
25
+ )
26
+ break
27
+ except openai.error.RateLimitError:
28
+ pass
29
+ except Exception as e:
30
+ print(e)
31
+ time.sleep(NUM_SECONDS_TO_SLEEP)
32
+
33
+ return response['choices'][0]['message']['content']
34
+
35
+
36
+ def parse_score(review):
37
+ try:
38
+ score_pair = review.split('\n')[0]
39
+ score_pair = score_pair.replace(',', ' ')
40
+ sp = score_pair.split(' ')
41
+ if len(sp) == 2:
42
+ return [float(sp[0]), float(sp[1])]
43
+ else:
44
+ print('error', review)
45
+ return [-1, -1]
46
+ except Exception as e:
47
+ print(e)
48
+ print('error', review)
49
+ return [-1, -1]
50
+
51
+
52
+ if __name__ == '__main__':
53
+ parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
54
+ parser.add_argument('-q', '--question')
55
+ parser.add_argument('-c', '--context')
56
+ parser.add_argument('-a', '--answer-list', nargs='+', default=[])
57
+ parser.add_argument('-r', '--rule')
58
+ parser.add_argument('-o', '--output')
59
+ parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
60
+ args = parser.parse_args()
61
+
62
+ f_q = open(os.path.expanduser(args.question))
63
+ f_ans1 = open(os.path.expanduser(args.answer_list[0]))
64
+ f_ans2 = open(os.path.expanduser(args.answer_list[1]))
65
+ rule_dict = json.load(open(os.path.expanduser(args.rule), 'r'))
66
+
67
+ if os.path.isfile(os.path.expanduser(args.output)):
68
+ cur_reviews = [json.loads(line) for line in open(os.path.expanduser(args.output))]
69
+ else:
70
+ cur_reviews = []
71
+
72
+ review_file = open(f'{args.output}', 'a')
73
+
74
+ context_list = [json.loads(line) for line in open(os.path.expanduser(args.context))]
75
+ image_to_context = {context['image']: context for context in context_list}
76
+
77
+ handles = []
78
+ idx = 0
79
+ for ques_js, ans1_js, ans2_js in zip(f_q, f_ans1, f_ans2):
80
+ ques = json.loads(ques_js)
81
+ ans1 = json.loads(ans1_js)
82
+ ans2 = json.loads(ans2_js)
83
+
84
+ inst = image_to_context[ques['image']]
85
+
86
+ if isinstance(inst['caption'], list):
87
+ cap_str = '\n'.join(inst['caption'])
88
+ else:
89
+ cap_str = inst['caption']
90
+
91
+ category = 'llava_bench_' + json.loads(ques_js)['category']
92
+ if category in rule_dict:
93
+ rule = rule_dict[category]
94
+ else:
95
+ assert False, f"Visual QA category not found in rule file: {category}."
96
+ prompt = rule['prompt']
97
+ role = rule['role']
98
+ content = (f'[Context]\n{cap_str}\n\n'
99
+ f'[Question]\n{ques["text"]}\n\n'
100
+ f'[{role} 1]\n{ans1["text"]}\n\n[End of {role} 1]\n\n'
101
+ f'[{role} 2]\n{ans2["text"]}\n\n[End of {role} 2]\n\n'
102
+ f'[System]\n{prompt}\n\n')
103
+ cur_js = {
104
+ 'id': idx+1,
105
+ 'question_id': ques['question_id'],
106
+ 'answer1_id': ans1.get('answer_id', ans1['question_id']),
107
+ 'answer2_id': ans2.get('answer_id', ans2['answer_id']),
108
+ 'category': category
109
+ }
110
+ if idx >= len(cur_reviews):
111
+ review = get_eval(content, args.max_tokens)
112
+ scores = parse_score(review)
113
+ cur_js['content'] = review
114
+ cur_js['tuple'] = scores
115
+ review_file.write(json.dumps(cur_js) + '\n')
116
+ review_file.flush()
117
+ else:
118
+ print(f'Skipping {idx} as we already have it.')
119
+ idx += 1
120
+ print(idx)
121
+ review_file.close()
llava/eval/eval_gpt_review_visual.py ADDED
@@ -0,0 +1,118 @@
1
+ import argparse
2
+ import json
3
+ import os
4
+
5
+ import openai
6
+ import time
7
+
8
+ NUM_SECONDS_TO_SLEEP = 0.5
9
+
10
+
11
+ def get_eval(content: str, max_tokens: int):
12
+ while True:
13
+ try:
14
+ response = openai.ChatCompletion.create(
15
+ model='gpt-4-0314',
16
+ messages=[{
17
+ 'role': 'system',
18
+ 'content': 'You are a helpful and precise assistant for checking the quality of the answer.'
19
+ }, {
20
+ 'role': 'user',
21
+ 'content': content,
22
+ }],
23
+ temperature=0.2, # TODO: figure out which temperature is best for evaluation
24
+ max_tokens=max_tokens,
25
+ )
26
+ break
27
+ except openai.error.RateLimitError:
28
+ pass
29
+ except Exception as e:
30
+ print(e)
31
+ time.sleep(NUM_SECONDS_TO_SLEEP)
32
+
33
+ return response['choices'][0]['message']['content']
34
+
35
+
36
+ def parse_score(review):
37
+ try:
38
+ score_pair = review.split('\n')[0]
39
+ score_pair = score_pair.replace(',', ' ')
40
+ sp = score_pair.split(' ')
41
+ if len(sp) == 2:
42
+ return [float(sp[0]), float(sp[1])]
43
+ else:
44
+ print('error', review)
45
+ return [-1, -1]
46
+ except Exception as e:
47
+ print(e)
48
+ print('error', review)
49
+ return [-1, -1]
50
+
51
+
52
+ if __name__ == '__main__':
53
+ parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
54
+ parser.add_argument('-q', '--question')
55
+ parser.add_argument('-c', '--context')
56
+ parser.add_argument('-a', '--answer-list', nargs='+', default=[])
57
+ parser.add_argument('-r', '--rule')
58
+ parser.add_argument('-o', '--output')
59
+ parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
60
+ args = parser.parse_args()
61
+
62
+ f_q = open(os.path.expanduser(args.question))
63
+ f_ans1 = open(os.path.expanduser(args.answer_list[0]))
64
+ f_ans2 = open(os.path.expanduser(args.answer_list[1]))
65
+ rule_dict = json.load(open(os.path.expanduser(args.rule), 'r'))
66
+
67
+ if os.path.isfile(os.path.expanduser(args.output)):
68
+ cur_reviews = [json.loads(line) for line in open(os.path.expanduser(args.output))]
69
+ else:
70
+ cur_reviews = []
71
+
72
+ review_file = open(f'{args.output}', 'a')
73
+
74
+ context_list = [json.loads(line) for line in open(os.path.expanduser(args.context))]
75
+ image_to_context = {context['image']: context for context in context_list}
76
+
77
+ handles = []
78
+ idx = 0
79
+ for ques_js, ans1_js, ans2_js in zip(f_q, f_ans1, f_ans2):
80
+ ques = json.loads(ques_js)
81
+ ans1 = json.loads(ans1_js)
82
+ ans2 = json.loads(ans2_js)
83
+
84
+ inst = image_to_context[ques['image']]
85
+ cap_str = '\n'.join(inst['captions'])
86
+ box_str = '\n'.join([f'{instance["category"]}: {instance["bbox"]}' for instance in inst['instances']])
87
+
88
+ category = json.loads(ques_js)['category']
89
+ if category in rule_dict:
90
+ rule = rule_dict[category]
91
+ else:
92
+ assert False, f"Visual QA category not found in rule file: {category}."
93
+ prompt = rule['prompt']
94
+ role = rule['role']
95
+ content = (f'[Context]\n{cap_str}\n\n{box_str}\n\n'
96
+ f'[Question]\n{ques["text"]}\n\n'
97
+ f'[{role} 1]\n{ans1["text"]}\n\n[End of {role} 1]\n\n'
98
+ f'[{role} 2]\n{ans2["text"]}\n\n[End of {role} 2]\n\n'
99
+ f'[System]\n{prompt}\n\n')
100
+ cur_js = {
101
+ 'id': idx+1,
102
+ 'question_id': ques['question_id'],
103
+ 'answer1_id': ans1.get('answer_id', ans1['question_id']),
104
+ 'answer2_id': ans2.get('answer_id', ans2['answer_id']),
105
+ 'category': category
106
+ }
107
+ if idx >= len(cur_reviews):
108
+ review = get_eval(content, args.max_tokens)
109
+ scores = parse_score(review)
110
+ cur_js['content'] = review
111
+ cur_js['tuple'] = scores
112
+ review_file.write(json.dumps(cur_js) + '\n')
113
+ review_file.flush()
114
+ else:
115
+ print(f'Skipping {idx} as we already have it.')
116
+ idx += 1
117
+ print(idx)
118
+ review_file.close()
llava/eval/eval_science_qa.py ADDED
@@ -0,0 +1,99 @@
1
+ import argparse
2
+ import json
3
+ import os
4
+ import re
5
+ import random
6
+
7
+
8
+ def get_args():
9
+ parser = argparse.ArgumentParser()
10
+ parser.add_argument('--base-dir', type=str)
11
+ parser.add_argument('--result-file', type=str)
12
+ parser.add_argument('--output-file', type=str)
13
+ parser.add_argument('--output-result', type=str)
14
+ parser.add_argument('--split', type=str, default='test')
15
+ parser.add_argument('--options', type=list, default=["A", "B", "C", "D", "E"])
16
+ return parser.parse_args()
17
+
18
+
19
+ def convert_caps(results):
20
+ fakecaps = []
21
+ for result in results:
22
+ image_id = result['question_id']
23
+ caption = result['text']
24
+ fakecaps.append({"image_id": int(image_id), "caption": caption})
25
+ return fakecaps
26
+
27
+
28
+ def get_pred_idx(prediction, choices, options):
29
+ """
30
+ Get the index (e.g. 2) from the prediction (e.g. 'C')
31
+ """
32
+ if prediction in options[:len(choices)]:
33
+ return options.index(prediction)
34
+ else:
35
+ return random.choice(range(len(choices)))
36
+
37
+
38
+ if __name__ == "__main__":
39
+ args = get_args()
40
+
41
+ base_dir = args.base_dir
42
+ split_indices = json.load(open(os.path.join(base_dir, "pid_splits.json")))[args.split]
43
+ problems = json.load(open(os.path.join(base_dir, "problems.json")))
44
+ predictions = [json.loads(line) for line in open(args.result_file)]
45
+ predictions = {pred['question_id']: pred for pred in predictions}
46
+ split_problems = {idx: problems[idx] for idx in split_indices}
47
+
48
+ results = {'correct': [], 'incorrect': []}
49
+ sqa_results = {}
50
+ sqa_results['acc'] = None
51
+ sqa_results['correct'] = None
52
+ sqa_results['count'] = None
53
+ sqa_results['results'] = {}
54
+ sqa_results['outputs'] = {}
55
+
56
+ for prob_id, prob in split_problems.items():
57
+ if prob_id not in predictions:
58
+ continue
59
+ pred = predictions[prob_id]
60
+ pred_text = pred['text']
61
+
62
+ pattern = re.compile(r'The answer is ([A-Z]).')
63
+ res = pattern.findall(pred_text)
64
+ if len(res) == 1:
65
+ answer = res[0] # 'A', 'B', ...
66
+ else:
67
+ answer = "FAILED"
68
+
69
+ pred_idx = get_pred_idx(answer, prob['choices'], args.options)
70
+
71
+ analysis = {
72
+ 'question_id': prob_id,
73
+ 'parsed_ans': answer,
74
+ 'ground_truth': args.options[prob['answer']],
75
+ 'question': pred['prompt'],
76
+ 'pred': pred_text,
77
+ 'is_multimodal': '<image>' in pred['prompt'],
78
+ }
79
+
80
+ sqa_results['results'][prob_id] = get_pred_idx(answer, prob['choices'], args.options)
81
+ sqa_results['outputs'][prob_id] = pred_text
82
+
83
+ if pred_idx == prob['answer']:
84
+ results['correct'].append(analysis)
85
+ else:
86
+ results['incorrect'].append(analysis)
87
+
88
+ correct = len(results['correct'])
89
+ total = len(results['correct']) + len(results['incorrect'])
90
+ print(f'Total: {total}, Correct: {correct}, Accuracy: {correct / total * 100:.2f}%')
91
+
92
+ sqa_results['acc'] = correct / total * 100
93
+ sqa_results['correct'] = correct
94
+ sqa_results['count'] = total
95
+
96
+ with open(args.output_file, 'w') as f:
97
+ json.dump(results, f, indent=2)
98
+ with open(args.output_result, 'w') as f:
99
+ json.dump(sqa_results, f, indent=2)
llava/eval/eval_science_qa_gpt4.py ADDED
@@ -0,0 +1,104 @@
1
+ import argparse
2
+ import json
3
+ import os
4
+ import re
5
+ import random
6
+ from collections import defaultdict
7
+
8
+
9
+ def get_args():
10
+ parser = argparse.ArgumentParser()
11
+ parser.add_argument('--base-dir', type=str)
12
+ parser.add_argument('--gpt4-result', type=str)
13
+ parser.add_argument('--our-result', type=str)
14
+ parser.add_argument('--split', type=str, default='test')
15
+ parser.add_argument('--options', type=list, default=["A", "B", "C", "D", "E"])
16
+ return parser.parse_args()
17
+
18
+
19
+ def convert_caps(results):
20
+ fakecaps = []
21
+ for result in results:
22
+ image_id = result['question_id']
23
+ caption = result['text']
24
+ fakecaps.append({"image_id": int(image_id), "caption": caption})
25
+ return fakecaps
26
+
27
+
28
+ def get_pred_idx(prediction, choices, options):
29
+ """
30
+ Get the index (e.g. 2) from the prediction (e.g. 'C')
31
+ """
32
+ if prediction in options[:len(choices)]:
33
+ return options.index(prediction)
34
+ else:
35
+ return random.choice(range(len(choices)))
36
+
37
+
38
+ if __name__ == "__main__":
39
+ args = get_args()
40
+
41
+ base_dir = args.base_dir
42
+ split_indices = json.load(open(os.path.join(base_dir, "pid_splits.json")))[args.split]
43
+ problems = json.load(open(os.path.join(base_dir, "problems.json")))
44
+ our_predictions = [json.loads(line) for line in open(args.our_result)]
45
+ our_predictions = {pred['question_id']: pred for pred in our_predictions}
46
+ split_problems = {idx: problems[idx] for idx in split_indices}
47
+
48
+ gpt4_predictions = json.load(open(args.gpt4_result))['outputs']
49
+
50
+ results = defaultdict(lambda: 0)
51
+
52
+ for prob_id, prob in split_problems.items():
53
+ if prob_id not in our_predictions:
54
+ continue
55
+ if prob_id not in gpt4_predictions:
56
+ continue
57
+ our_pred = our_predictions[prob_id]['text']
58
+ gpt4_pred = gpt4_predictions[prob_id]
59
+
60
+ pattern = re.compile(r'The answer is ([A-Z]).')
61
+ our_res = pattern.findall(our_pred)
62
+ if len(our_res) == 1:
63
+ our_answer = our_res[0] # 'A', 'B', ...
64
+ else:
65
+ our_answer = "FAILED"
66
+ gpt4_res = pattern.findall(gpt4_pred)
67
+ if len(gpt4_res) == 1:
68
+ gpt4_answer = gpt4_res[0] # 'A', 'B', ...
69
+ else:
70
+ gpt4_answer = "FAILED"
71
+
72
+ our_pred_idx = get_pred_idx(our_answer, prob['choices'], args.options)
73
+ gpt4_pred_idx = get_pred_idx(gpt4_answer, prob['choices'], args.options)
74
+
75
+ if gpt4_answer == 'FAILED':
76
+ results['gpt4_failed'] += 1
77
+ # continue
78
+ gpt4_pred_idx = our_pred_idx
79
+ # if our_pred_idx != prob['answer']:
80
+ # print(our_predictions[prob_id]['prompt'])
81
+ # print('-----------------')
82
+ # print(f'LECTURE: {prob["lecture"]}')
83
+ # print(f'SOLUTION: {prob["solution"]}')
84
+ # print('=====================')
85
+ else:
86
+ # continue
87
+ pass
88
+ # gpt4_pred_idx = our_pred_idx
89
+
90
+ if gpt4_pred_idx == prob['answer']:
91
+ results['correct'] += 1
92
+ else:
93
+ results['incorrect'] += 1
94
+
95
+
96
+ if gpt4_pred_idx == prob['answer'] or our_pred_idx == prob['answer']:
97
+ results['correct_upperbound'] += 1
98
+
99
+ correct = results['correct']
100
+ total = results['correct'] + results['incorrect']
101
+ print(f'Total: {total}, Correct: {correct}, Accuracy: {correct / total * 100:.2f}%')
102
+ print(f'Total: {total}, Correct (upper): {results["correct_upperbound"]}, Accuracy: {results["correct_upperbound"] / total * 100:.2f}%')
103
+ print(f'Total: {total}, GPT-4 NO-ANS (RANDOM): {results["gpt4_failed"]}, Percentage: {results["gpt4_failed"] / total * 100:.2f}%')
104
+
llava/eval/eval_science_qa_gpt4_requery.py ADDED
@@ -0,0 +1,149 @@
1
+ import argparse
2
+ import json
3
+ import os
4
+ import re
5
+ import random
6
+ from collections import defaultdict
7
+
8
+
9
+ def get_args():
10
+ parser = argparse.ArgumentParser()
11
+ parser.add_argument('--base-dir', type=str)
12
+ parser.add_argument('--gpt4-result', type=str)
13
+ parser.add_argument('--requery-result', type=str)
14
+ parser.add_argument('--our-result', type=str)
15
+ parser.add_argument('--output-result', type=str)
16
+ parser.add_argument('--split', type=str, default='test')
17
+ parser.add_argument('--options', type=list, default=["A", "B", "C", "D", "E"])
18
+ return parser.parse_args()
19
+
20
+
21
+ def convert_caps(results):
22
+ fakecaps = []
23
+ for result in results:
24
+ image_id = result['question_id']
25
+ caption = result['text']
26
+ fakecaps.append({"image_id": int(image_id), "caption": caption})
27
+ return fakecaps
28
+
29
+
30
+ def get_pred_idx(prediction, choices, options):
31
+ """
32
+ Get the index (e.g. 2) from the prediction (e.g. 'C')
33
+ """
34
+ if prediction in options[:len(choices)]:
35
+ return options.index(prediction)
36
+ else:
37
+ return random.choice(range(len(choices)))
38
+
39
+
40
+ if __name__ == "__main__":
41
+ args = get_args()
42
+
43
+ base_dir = args.base_dir
44
+ split_indices = json.load(open(os.path.join(base_dir, "pid_splits.json")))[args.split]
45
+ problems = json.load(open(os.path.join(base_dir, "problems.json")))
46
+ our_predictions = [json.loads(line) for line in open(args.our_result)]
47
+ our_predictions = {pred['question_id']: pred for pred in our_predictions}
48
+ split_problems = {idx: problems[idx] for idx in split_indices}
49
+
50
+ requery_predictions = [json.loads(line) for line in open(args.requery_result)]
51
+ requery_predictions = {pred['question_id']: pred for pred in requery_predictions}
52
+
53
+ gpt4_predictions = json.load(open(args.gpt4_result))['outputs']
54
+
55
+ results = defaultdict(lambda: 0)
56
+
57
+ sqa_results = {}
58
+ sqa_results['acc'] = None
59
+ sqa_results['correct'] = None
60
+ sqa_results['count'] = None
61
+ sqa_results['results'] = {}
62
+ sqa_results['outputs'] = {}
63
+
64
+ for prob_id, prob in split_problems.items():
65
+ if prob_id not in our_predictions:
66
+ assert False
67
+ if prob_id not in gpt4_predictions:
68
+ assert False
69
+ our_pred = our_predictions[prob_id]['text']
70
+ gpt4_pred = gpt4_predictions[prob_id]
71
+ if prob_id not in requery_predictions:
72
+ results['missing_requery'] += 1
73
+ requery_pred = "MISSING"
74
+ else:
75
+ requery_pred = requery_predictions[prob_id]['text']
76
+
77
+ pattern = re.compile(r'The answer is ([A-Z]).')
78
+ our_res = pattern.findall(our_pred)
79
+ if len(our_res) == 1:
80
+ our_answer = our_res[0] # 'A', 'B', ...
81
+ else:
82
+ our_answer = "FAILED"
83
+
84
+ requery_res = pattern.findall(requery_pred)
85
+ if len(requery_res) == 1:
86
+ requery_answer = requery_res[0] # 'A', 'B', ...
87
+ else:
88
+ requery_answer = "FAILED"
89
+
90
+ gpt4_res = pattern.findall(gpt4_pred)
91
+ if len(gpt4_res) == 1:
92
+ gpt4_answer = gpt4_res[0] # 'A', 'B', ...
93
+ else:
94
+ gpt4_answer = "FAILED"
95
+
96
+ our_pred_idx = get_pred_idx(our_answer, prob['choices'], args.options)
97
+ gpt4_pred_idx = get_pred_idx(gpt4_answer, prob['choices'], args.options)
98
+ requery_pred_idx = get_pred_idx(requery_answer, prob['choices'], args.options)
99
+
100
+ results['total'] += 1
101
+
102
+ if gpt4_answer == 'FAILED':
103
+ results['gpt4_failed'] += 1
104
+ if gpt4_pred_idx == prob['answer']:
105
+ results['gpt4_correct'] += 1
106
+ if our_pred_idx == prob['answer']:
107
+ results['gpt4_ourvisual_correct'] += 1
108
+ elif gpt4_pred_idx == prob['answer']:
109
+ results['gpt4_correct'] += 1
110
+ results['gpt4_ourvisual_correct'] += 1
111
+
112
+ if our_pred_idx == prob['answer']:
113
+ results['our_correct'] += 1
114
+
115
+ if requery_answer == 'FAILED':
116
+ sqa_results['results'][prob_id] = our_pred_idx
117
+ if our_pred_idx == prob['answer']:
118
+ results['requery_correct'] += 1
119
+ else:
120
+ sqa_results['results'][prob_id] = requery_pred_idx
121
+ if requery_pred_idx == prob['answer']:
122
+ results['requery_correct'] += 1
123
+ else:
124
+ print(f"""
125
+ Question ({args.options[prob['answer']]}): {our_predictions[prob_id]['prompt']}
126
+ Our ({our_answer}): {our_pred}
127
+ GPT-4 ({gpt4_answer}): {gpt4_pred}
128
+ Requery ({requery_answer}): {requery_pred}
129
+ print("=====================================")
130
+ """)
131
+
132
+ if gpt4_pred_idx == prob['answer'] or our_pred_idx == prob['answer']:
133
+ results['correct_upperbound'] += 1
134
+
135
+ total = results['total']
136
+ print(f'Total: {total}, Our-Correct: {results["our_correct"]}, Accuracy: {results["our_correct"] / total * 100:.2f}%')
137
+ print(f'Total: {total}, GPT-4-Correct: {results["gpt4_correct"]}, Accuracy: {results["gpt4_correct"] / total * 100:.2f}%')
138
+ print(f'Total: {total}, GPT-4 NO-ANS (RANDOM): {results["gpt4_failed"]}, Percentage: {results["gpt4_failed"] / total * 100:.2f}%')
139
+ print(f'Total: {total}, GPT-4-OursVisual-Correct: {results["gpt4_ourvisual_correct"]}, Accuracy: {results["gpt4_ourvisual_correct"] / total * 100:.2f}%')
140
+ print(f'Total: {total}, Requery-Correct: {results["requery_correct"]}, Accuracy: {results["requery_correct"] / total * 100:.2f}%')
141
+ print(f'Total: {total}, Correct upper: {results["correct_upperbound"]}, Accuracy: {results["correct_upperbound"] / total * 100:.2f}%')
142
+
143
+ sqa_results['acc'] = results["requery_correct"] / total * 100
144
+ sqa_results['correct'] = results["requery_correct"]
145
+ sqa_results['count'] = total
146
+
147
+ with open(args.output_result, 'w') as f:
148
+ json.dump(sqa_results, f, indent=2)
149
+
llava/eval/generate_webpage_data_from_table.py ADDED
@@ -0,0 +1,111 @@
1
+ """Generate json file for webpage."""
2
+ import json
3
+ import os
4
+ import re
5
+
6
+ # models = ['llama', 'alpaca', 'gpt35', 'bard']
7
+ models = ['vicuna']
8
+
9
+
10
+ def read_jsonl(path: str, key: str=None):
11
+ data = []
12
+ with open(os.path.expanduser(path)) as f:
13
+ for line in f:
14
+ if not line:
15
+ continue
16
+ data.append(json.loads(line))
17
+ if key is not None:
18
+ data.sort(key=lambda x: x[key])
19
+ data = {item[key]: item for item in data}
20
+ return data
21
+
22
+
23
+ def trim_hanging_lines(s: str, n: int) -> str:
24
+ s = s.strip()
25
+ for _ in range(n):
26
+ s = s.split('\n', 1)[1].strip()
27
+ return s
28
+
29
+
30
+ if __name__ == '__main__':
31
+ questions = read_jsonl('table/question.jsonl', key='question_id')
32
+
33
+ # alpaca_answers = read_jsonl('table/answer/answer_alpaca-13b.jsonl', key='question_id')
34
+ # bard_answers = read_jsonl('table/answer/answer_bard.jsonl', key='question_id')
35
+ # gpt35_answers = read_jsonl('table/answer/answer_gpt35.jsonl', key='question_id')
36
+ # llama_answers = read_jsonl('table/answer/answer_llama-13b.jsonl', key='question_id')
37
+ vicuna_answers = read_jsonl('table/answer/answer_vicuna-13b.jsonl', key='question_id')
38
+ ours_answers = read_jsonl('table/results/llama-13b-hf-alpaca.jsonl', key='question_id')
39
+
40
+ review_vicuna = read_jsonl('table/review/review_vicuna-13b_llama-13b-hf-alpaca.jsonl', key='question_id')
41
+ # review_alpaca = read_jsonl('table/review/review_alpaca-13b_vicuna-13b.jsonl', key='question_id')
42
+ # review_bard = read_jsonl('table/review/review_bard_vicuna-13b.jsonl', key='question_id')
43
+ # review_gpt35 = read_jsonl('table/review/review_gpt35_vicuna-13b.jsonl', key='question_id')
44
+ # review_llama = read_jsonl('table/review/review_llama-13b_vicuna-13b.jsonl', key='question_id')
45
+
46
+ records = []
47
+ for qid in questions.keys():
48
+ r = {
49
+ 'id': qid,
50
+ 'category': questions[qid]['category'],
51
+ 'question': questions[qid]['text'],
52
+ 'answers': {
53
+ # 'alpaca': alpaca_answers[qid]['text'],
54
+ # 'llama': llama_answers[qid]['text'],
55
+ # 'bard': bard_answers[qid]['text'],
56
+ # 'gpt35': gpt35_answers[qid]['text'],
57
+ 'vicuna': vicuna_answers[qid]['text'],
58
+ 'ours': ours_answers[qid]['text'],
59
+ },
60
+ 'evaluations': {
61
+ # 'alpaca': review_alpaca[qid]['text'],
62
+ # 'llama': review_llama[qid]['text'],
63
+ # 'bard': review_bard[qid]['text'],
64
+ 'vicuna': review_vicuna[qid]['content'],
65
+ # 'gpt35': review_gpt35[qid]['text'],
66
+ },
67
+ 'scores': {
68
+ 'vicuna': review_vicuna[qid]['tuple'],
69
+ # 'alpaca': review_alpaca[qid]['score'],
70
+ # 'llama': review_llama[qid]['score'],
71
+ # 'bard': review_bard[qid]['score'],
72
+ # 'gpt35': review_gpt35[qid]['score'],
73
+ },
74
+ }
75
+
76
+ # cleanup data
77
+ cleaned_evals = {}
78
+ for k, v in r['evaluations'].items():
79
+ v = v.strip()
80
+ lines = v.split('\n')
81
+ # trim the first line if it's a pair of numbers
82
+ if re.match(r'\d+[, ]+\d+', lines[0]):
83
+ lines = lines[1:]
84
+ v = '\n'.join(lines)
85
+ cleaned_evals[k] = v.replace('Assistant 1', "**Assistant 1**").replace('Assistant 2', '**Assistant 2**')
86
+
87
+ r['evaluations'] = cleaned_evals
88
+ records.append(r)
89
+
90
+ # Reorder the records, this is optional
91
+ for r in records:
92
+ if r['id'] <= 20:
93
+ r['id'] += 60
94
+ else:
95
+ r['id'] -= 20
96
+ for r in records:
97
+ if r['id'] <= 50:
98
+ r['id'] += 10
99
+ elif 50 < r['id'] <= 60:
100
+ r['id'] -= 50
101
+ for r in records:
102
+ if r['id'] == 7:
103
+ r['id'] = 1
104
+ elif r['id'] < 7:
105
+ r['id'] += 1
106
+
107
+ records.sort(key=lambda x: x['id'])
108
+
109
+ # Write to file
110
+ with open('webpage/data.json', 'w') as f:
111
+ json.dump({'questions': records, 'models': models}, f, indent=2)
llava/eval/model_qa.py ADDED
@@ -0,0 +1,85 @@
1
+ import argparse
2
+ from transformers import AutoTokenizer, AutoModelForCausalLM, StoppingCriteria
3
+ import torch
4
+ import os
5
+ import json
6
+ from tqdm import tqdm
7
+ import shortuuid
8
+
9
+ from llava.conversation import default_conversation
10
+ from llava.utils import disable_torch_init
11
+
12
+
13
+ # new stopping implementation
14
+ class KeywordsStoppingCriteria(StoppingCriteria):
15
+ def __init__(self, keywords, tokenizer, input_ids):
16
+ self.keywords = keywords
17
+ self.tokenizer = tokenizer
18
+ self.start_len = None
19
+ self.input_ids = input_ids
20
+
21
+ def __call__(self, output_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
22
+ if self.start_len is None:
23
+ self.start_len = self.input_ids.shape[1]
24
+ else:
25
+ outputs = self.tokenizer.batch_decode(output_ids[:, self.start_len:], skip_special_tokens=True)[0]
26
+ for keyword in self.keywords:
27
+ if keyword in outputs:
28
+ return True
29
+ return False
30
+
31
+
32
+ @torch.inference_mode()
33
+ def eval_model(model_name, questions_file, answers_file):
34
+ # Model
35
+ disable_torch_init()
36
+ model_name = os.path.expanduser(model_name)
37
+ tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
38
+ model = AutoModelForCausalLM.from_pretrained(model_name,
39
+ torch_dtype=torch.float16).cuda()
40
+
41
+
42
+ ques_file = open(os.path.expanduser(questions_file), "r")
43
+ ans_file = open(os.path.expanduser(answers_file), "w")
44
+ for i, line in enumerate(tqdm(ques_file)):
45
+ idx = json.loads(line)["question_id"]
46
+ qs = json.loads(line)["text"]
47
+ cat = json.loads(line)["category"]
48
+ conv = default_conversation.copy()
49
+ conv.append_message(conv.roles[0], qs)
50
+ prompt = conv.get_prompt()
51
+ inputs = tokenizer([prompt])
52
+ input_ids = torch.as_tensor(inputs.input_ids).cuda()
53
+ stopping_criteria = KeywordsStoppingCriteria([conv.sep], tokenizer, input_ids)
54
+ output_ids = model.generate(
55
+ input_ids,
56
+ do_sample=True,
57
+ use_cache=True,
58
+ temperature=0.7,
59
+ max_new_tokens=1024,
60
+ stopping_criteria=[stopping_criteria])
61
+ outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
62
+ try:
63
+ index = outputs.index(conv.sep, len(prompt))
64
+ except ValueError:
65
+ outputs += conv.sep
66
+ index = outputs.index(conv.sep, len(prompt))
67
+
68
+ outputs = outputs[len(prompt) + len(conv.roles[1]) + 2:index].strip()
69
+ ans_id = shortuuid.uuid()
70
+ ans_file.write(json.dumps({"question_id": idx,
71
+ "text": outputs,
72
+ "answer_id": ans_id,
73
+ "model_id": model_name,
74
+ "metadata": {}}) + "\n")
75
+ ans_file.flush()
76
+ ans_file.close()
77
+
78
+ if __name__ == "__main__":
79
+ parser = argparse.ArgumentParser()
80
+ parser.add_argument("--model-name", type=str, default="facebook/opt-350m")
81
+ parser.add_argument("--question-file", type=str, default="tables/question.jsonl")
82
+ parser.add_argument("--answers-file", type=str, default="answer.jsonl")
83
+ args = parser.parse_args()
84
+
85
+ eval_model(args.model_name, args.question_file, args.answers_file)
llava/eval/model_vqa.py ADDED
@@ -0,0 +1,112 @@
1
+ import argparse
2
+ import torch
3
+ import os
4
+ import json
5
+ from tqdm import tqdm
6
+ import shortuuid
7
+
8
+ from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
9
+ from llava.conversation import conv_templates, SeparatorStyle
10
+ from llava.model.builder import load_pretrained_model
11
+ from llava.utils import disable_torch_init
12
+ from llava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria
13
+
14
+ from PIL import Image
15
+ import math
16
+
17
+
18
+ def split_list(lst, n):
19
+ """Split a list into n (roughly) equal-sized chunks"""
20
+ chunk_size = math.ceil(len(lst) / n) # integer division
21
+ return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
22
+
23
+
24
+ def get_chunk(lst, n, k):
25
+ chunks = split_list(lst, n)
26
+ return chunks[k]
27
+
28
+
29
+ def eval_model(args):
30
+ # Model
31
+ disable_torch_init()
32
+ model_path = os.path.expanduser(args.model_path)
33
+ model_name = get_model_name_from_path(model_path)
34
+ tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
35
+
36
+ questions = [json.loads(q) for q in open(os.path.expanduser(args.question_file), "r")]
37
+ questions = get_chunk(questions, args.num_chunks, args.chunk_idx)
38
+ answers_file = os.path.expanduser(args.answers_file)
39
+ os.makedirs(os.path.dirname(answers_file), exist_ok=True)
40
+ ans_file = open(answers_file, "w")
41
+ for line in tqdm(questions):
42
+ idx = line["question_id"]
43
+ image_file = line["image"]
44
+ qs = line["text"]
45
+ cur_prompt = qs
46
+ if model.config.mm_use_im_start_end:
47
+ qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs
48
+ else:
49
+ qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
50
+
51
+ conv = conv_templates[args.conv_mode].copy()
52
+ conv.append_message(conv.roles[0], qs)
53
+ conv.append_message(conv.roles[1], None)
54
+ prompt = conv.get_prompt()
55
+
56
+ input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
57
+
58
+ image = Image.open(os.path.join(args.image_folder, image_file))
59
+ image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'][0]
60
+
61
+ stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
62
+ keywords = [stop_str]
63
+ stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)
64
+
65
+ with torch.inference_mode():
66
+ output_ids = model.generate(
67
+ input_ids,
68
+ images=image_tensor.unsqueeze(0).half().cuda(),
69
+ do_sample=True,
70
+ temperature=args.temperature,
71
+ top_p=args.top_p,
72
+ num_beams=args.num_beams,
73
+ # no_repeat_ngram_size=3,
74
+ max_new_tokens=1024,
75
+ use_cache=True)
76
+
77
+ input_token_len = input_ids.shape[1]
78
+ n_diff_input_output = (input_ids != output_ids[:, :input_token_len]).sum().item()
79
+ if n_diff_input_output > 0:
80
+ print(f'[Warning] {n_diff_input_output} output_ids are not the same as the input_ids')
81
+ outputs = tokenizer.batch_decode(output_ids[:, input_token_len:], skip_special_tokens=True)[0]
82
+ outputs = outputs.strip()
83
+ if outputs.endswith(stop_str):
84
+ outputs = outputs[:-len(stop_str)]
85
+ outputs = outputs.strip()
86
+
87
+ ans_id = shortuuid.uuid()
88
+ ans_file.write(json.dumps({"question_id": idx,
89
+ "prompt": cur_prompt,
90
+ "text": outputs,
91
+ "answer_id": ans_id,
92
+ "model_id": model_name,
93
+ "metadata": {}}) + "\n")
94
+ ans_file.flush()
95
+ ans_file.close()
96
+
97
+ if __name__ == "__main__":
98
+ parser = argparse.ArgumentParser()
99
+ parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
100
+ parser.add_argument("--model-base", type=str, default=None)
101
+ parser.add_argument("--image-folder", type=str, default="")
102
+ parser.add_argument("--question-file", type=str, default="tables/question.jsonl")
103
+ parser.add_argument("--answers-file", type=str, default="answer.jsonl")
104
+ parser.add_argument("--conv-mode", type=str, default="llava_v1")
105
+ parser.add_argument("--num-chunks", type=int, default=1)
106
+ parser.add_argument("--chunk-idx", type=int, default=0)
107
+ parser.add_argument("--temperature", type=float, default=0.2)
108
+ parser.add_argument("--top_p", type=float, default=None)
109
+ parser.add_argument("--num_beams", type=int, default=1)
110
+ args = parser.parse_args()
111
+
112
+ eval_model(args)
llava/eval/model_vqa_science.py ADDED
@@ -0,0 +1,141 @@
1
+ import argparse
2
+ import torch
3
+ import os
4
+ import json
5
+ from tqdm import tqdm
6
+ import shortuuid
7
+
8
+ from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
9
+ from llava.conversation import conv_templates, SeparatorStyle
10
+ from llava.model.builder import load_pretrained_model
11
+ from llava.utils import disable_torch_init
12
+ from llava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria
13
+
14
+ from PIL import Image
15
+ import math
16
+
17
+
18
+ def split_list(lst, n):
19
+ """Split a list into n (roughly) equal-sized chunks"""
20
+ chunk_size = math.ceil(len(lst) / n) # integer division
21
+ return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
22
+
23
+
24
+ def get_chunk(lst, n, k):
25
+ chunks = split_list(lst, n)
26
+ return chunks[k]
27
+
28
+
29
+ def eval_model(args):
30
+ # Model
31
+ disable_torch_init()
32
+ model_path = os.path.expanduser(args.model_path)
33
+ model_name = get_model_name_from_path(model_path)
34
+ tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
35
+
36
+ questions = json.load(open(os.path.expanduser(args.question_file), "r"))
37
+ questions = get_chunk(questions, args.num_chunks, args.chunk_idx)
38
+ answers_file = os.path.expanduser(args.answers_file)
39
+ os.makedirs(os.path.dirname(answers_file), exist_ok=True)
40
+ ans_file = open(answers_file, "w")
41
+ for i, line in enumerate(tqdm(questions)):
42
+ idx = line["id"]
43
+ question = line['conversations'][0]
44
+ qs = question['value'].replace('<image>', '').strip()
45
+ cur_prompt = qs
46
+
47
+ if 'image' in line:
48
+ image_file = line["image"]
49
+ image = Image.open(os.path.join(args.image_folder, image_file))
50
+ image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'][0]
51
+ images = image_tensor.unsqueeze(0).half().cuda()
52
+ if getattr(model.config, 'mm_use_im_start_end', False):
53
+ qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs
54
+ else:
55
+ qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
56
+ cur_prompt = '<image>' + '\n' + cur_prompt
57
+ else:
58
+ images = None
59
+
60
+ conv = conv_templates[args.conv_mode].copy()
61
+ conv.append_message(conv.roles[0], qs)
62
+ conv.append_message(conv.roles[1], None)
63
+ prompt = conv.get_prompt()
64
+
65
+ input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
66
+
67
+ stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
68
+ keywords = [stop_str]
69
+ stopping_criteria = [KeywordsStoppingCriteria(keywords, tokenizer, input_ids)] if conv.version == "v0" else None
70
+
71
+ with torch.inference_mode():
72
+ output_ids = model.generate(
73
+ input_ids,
74
+ images=images,
75
+ do_sample=True,
76
+ temperature=0.2,
77
+ max_new_tokens=1024,
78
+ use_cache=True,
79
+ stopping_criteria=stopping_criteria,
80
+ )
81
+
82
+ input_token_len = input_ids.shape[1]
83
+ n_diff_input_output = (input_ids != output_ids[:, :input_token_len]).sum().item()
84
+ if n_diff_input_output > 0:
85
+ print(f'[Warning] {n_diff_input_output} output_ids are not the same as the input_ids')
86
+ outputs = tokenizer.batch_decode(output_ids[:, input_token_len:], skip_special_tokens=True)[0]
87
+ outputs = outputs.strip()
88
+ if outputs.endswith(stop_str):
89
+ outputs = outputs[:-len(stop_str)]
90
+ outputs = outputs.strip()
91
+
92
+ # prompt for answer
93
+ if args.answer_prompter:
94
+ outputs_reasoning = outputs
95
+ input_ids = tokenizer_image_token(prompt + outputs_reasoning + ' ###\nANSWER:', tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
96
+
97
+ with torch.inference_mode():
98
+ output_ids = model.generate(
99
+ input_ids,
100
+ images=images,
101
+ do_sample=True,
102
+ temperature=0.2,
103
+ max_new_tokens=64,
104
+ use_cache=True,
105
+ stopping_criteria=[stopping_criteria])
106
+
107
+ input_token_len = input_ids.shape[1]
108
+ n_diff_input_output = (input_ids != output_ids[:, :input_token_len]).sum().item()
109
+ if n_diff_input_output > 0:
110
+ print(f'[Warning] {n_diff_input_output} output_ids are not the same as the input_ids')
111
+ outputs = tokenizer.batch_decode(output_ids[:, input_token_len:], skip_special_tokens=True)[0]
112
+ outputs = outputs.strip()
113
+ if outputs.endswith(stop_str):
114
+ outputs = outputs[:-len(stop_str)]
115
+ outputs = outputs.strip()
116
+ outputs = outputs_reasoning + '\n The answer is ' + outputs
117
+
118
+ ans_id = shortuuid.uuid()
119
+ ans_file.write(json.dumps({"question_id": idx,
120
+ "prompt": cur_prompt,
121
+ "text": outputs,
122
+ "answer_id": ans_id,
123
+ "model_id": model_name,
124
+ "metadata": {}}) + "\n")
125
+ ans_file.flush()
126
+ ans_file.close()
127
+
128
+ if __name__ == "__main__":
129
+ parser = argparse.ArgumentParser()
130
+ parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
131
+ parser.add_argument("--model-base", type=str, default=None)
132
+ parser.add_argument("--image-folder", type=str, default="")
133
+ parser.add_argument("--question-file", type=str, default="tables/question.json")
134
+ parser.add_argument("--answers-file", type=str, default="answer.jsonl")
135
+ parser.add_argument("--conv-mode", type=str, default="llava_v0")
136
+ parser.add_argument("--num-chunks", type=int, default=1)
137
+ parser.add_argument("--chunk-idx", type=int, default=0)
138
+ parser.add_argument("--answer-prompter", action="store_true")
139
+ args = parser.parse_args()
140
+
141
+ eval_model(args)
llava/eval/qa_baseline_gpt35.py ADDED
@@ -0,0 +1,74 @@
+ """Generate answers with GPT-3.5"""
+ # Note: you need to be using OpenAI Python v0.27.0 for the code below to work
+ import argparse
+ import json
+ import os
+ import time
+ import concurrent.futures
+
+ import openai
+ import tqdm
+ import shortuuid
+
+ MODEL = 'gpt-3.5-turbo'
+ MODEL_ID = 'gpt-3.5-turbo:20230327'
+
+ def get_answer(question_id: int, question: str, max_tokens: int):
+     ans = {
+         'answer_id': shortuuid.uuid(),
+         'question_id': question_id,
+         'model_id': MODEL_ID,
+     }
+     for _ in range(3):
+         try:
+             response = openai.ChatCompletion.create(
+                 model=MODEL,
+                 messages=[{
+                     'role': 'system',
+                     'content': 'You are a helpful assistant.'
+                 }, {
+                     'role': 'user',
+                     'content': question,
+                 }],
+                 max_tokens=max_tokens,
+             )
+             ans['text'] = response['choices'][0]['message']['content']
+             return ans
+         except Exception as e:
+             print('[ERROR]', e)
+             ans['text'] = '#ERROR#'
+             time.sleep(1)
+     return ans
+
+
+ if __name__ == '__main__':
+     parser = argparse.ArgumentParser(description='ChatGPT answer generation.')
+     parser.add_argument('-q', '--question')
+     parser.add_argument('-o', '--output')
+     parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
+     args = parser.parse_args()
+
+     questions_dict = {}
+     with open(os.path.expanduser(args.question)) as f:
+         for line in f:
+             if not line:
+                 continue
+             q = json.loads(line)
+             questions_dict[q['question_id']] = q['text']
+
+     answers = []
+
+     with concurrent.futures.ThreadPoolExecutor(max_workers=32) as executor:
+         futures = []
+         for qid, question in questions_dict.items():
+             future = executor.submit(get_answer, qid, question, args.max_tokens)
+             futures.append(future)
+
+         for future in tqdm.tqdm(concurrent.futures.as_completed(futures), total=len(futures)):
+             answers.append(future.result())
+
+     answers.sort(key=lambda x: x['question_id'])
+
+     with open(os.path.expanduser(args.output), 'w') as f:
+         table = [json.dumps(ans) for ans in answers]
+         f.write('\n'.join(table))
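As a usage note, here is a small sketch (not part of the repository) of the question file this baseline expects: one JSON object per line with `question_id` and `text`, matching the keys read in the loop above. The example questions are only illustrative, and the legacy `openai` v0.27 client used by the script needs an API key (for instance via the `OPENAI_API_KEY` environment variable).

```python
# Hypothetical example: build a two-question JSONL file matching the keys the
# script reads, then run it with
#   python llava/eval/qa_baseline_gpt35.py -q questions.jsonl -o answers.jsonl
import json

questions = [
    {"question_id": 1, "text": "How can I improve my time management skills?"},
    {"question_id": 2, "text": "What are the most effective ways to deal with stress?"},
]
with open("questions.jsonl", "w") as f:
    for q in questions:
        f.write(json.dumps(q) + "\n")
```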
llava/eval/run_llava.py ADDED
@@ -0,0 +1,97 @@
+ import argparse
+ import torch
+
+ from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
+ from llava.conversation import conv_templates, SeparatorStyle
+ from llava.model.builder import load_pretrained_model
+ from llava.utils import disable_torch_init
+ from llava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria
+
+ from PIL import Image
+
+ import requests
+ from io import BytesIO
+
+
+ def load_image(image_file):
+     if image_file.startswith('http') or image_file.startswith('https'):
+         response = requests.get(image_file)
+         image = Image.open(BytesIO(response.content)).convert('RGB')
+     else:
+         image = Image.open(image_file).convert('RGB')
+     return image
+
+
+ def eval_model(args):
+     # Model
+     disable_torch_init()
+
+     model_name = get_model_name_from_path(args.model_path)
+     tokenizer, model, image_processor, context_len = load_pretrained_model(args.model_path, args.model_base, model_name)
+
+     qs = args.query
+     if model.config.mm_use_im_start_end:
+         qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs
+     else:
+         qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
+
+     if 'llama-2' in model_name.lower():
+         conv_mode = "llava_llama_2"
+     elif "v1" in model_name.lower():
+         conv_mode = "llava_v1"
+     elif "mpt" in model_name.lower():
+         conv_mode = "mpt"
+     else:
+         conv_mode = "llava_v0"
+
+     if args.conv_mode is not None and conv_mode != args.conv_mode:
+         print('[WARNING] the auto inferred conversation mode is {}, while `--conv-mode` is {}, using {}'.format(conv_mode, args.conv_mode, args.conv_mode))
+     else:
+         args.conv_mode = conv_mode
+
+     conv = conv_templates[args.conv_mode].copy()
+     conv.append_message(conv.roles[0], qs)
+     conv.append_message(conv.roles[1], None)
+     prompt = conv.get_prompt()
+
+     image = load_image(args.image_file)
+     image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].half().cuda()
+
+     input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
+
+     stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
+     keywords = [stop_str]
+     stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)
+
+     with torch.inference_mode():
+         output_ids = model.generate(
+             input_ids,
+             images=image_tensor,
+             do_sample=True,
+             temperature=0.2,
+             max_new_tokens=1024,
+             use_cache=True,
+             stopping_criteria=[stopping_criteria])
+
+     input_token_len = input_ids.shape[1]
+     n_diff_input_output = (input_ids != output_ids[:, :input_token_len]).sum().item()
+     if n_diff_input_output > 0:
+         print(f'[Warning] {n_diff_input_output} output_ids are not the same as the input_ids')
+     outputs = tokenizer.batch_decode(output_ids[:, input_token_len:], skip_special_tokens=True)[0]
+     outputs = outputs.strip()
+     if outputs.endswith(stop_str):
+         outputs = outputs[:-len(stop_str)]
+     outputs = outputs.strip()
+     print(outputs)
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
+     parser.add_argument("--model-base", type=str, default=None)
+     parser.add_argument("--image-file", type=str, required=True)
+     parser.add_argument("--query", type=str, required=True)
+     parser.add_argument("--conv-mode", type=str, default=None)
+     args = parser.parse_args()
+
+     eval_model(args)
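For reference, a hedged sketch of driving `eval_model` from Python instead of the command line. It assumes the `llava` package from this repository is installed and a CUDA GPU is available; the checkpoint name and image path below are placeholders, not values taken from the repository.

```python
# Hypothetical programmatic invocation of the CLI above.
from argparse import Namespace

from llava.eval.run_llava import eval_model

args = Namespace(
    model_path="path/to/llava-checkpoint",  # placeholder checkpoint directory or hub id
    model_base=None,
    image_file="path/to/image.jpg",         # placeholder local image (an http(s) URL also works)
    query="What is unusual about this image?",
    conv_mode=None,                          # let the script infer the conversation template
)
eval_model(args)
```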
llava/eval/summarize_gpt_review.py ADDED
@@ -0,0 +1,50 @@
+ import json
+ import os
+ from collections import defaultdict
+
+ import numpy as np
+
+ import argparse
+
+ def parse_args():
+     parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
+     parser.add_argument('-d', '--dir', default=None)
+     parser.add_argument('-f', '--files', nargs='*', default=None)
+     parser.add_argument('-i', '--ignore', nargs='*', default=None)
+     return parser.parse_args()
+
+
+ if __name__ == '__main__':
+     args = parse_args()
+
+     if args.ignore is not None:
+         args.ignore = [int(x) for x in args.ignore]
+
+     if args.files is not None and len(args.files) > 0:
+         review_files = args.files
+     else:
+         review_files = [x for x in os.listdir(args.dir) if x.endswith('.jsonl') and (x.startswith('gpt4_text') or x.startswith('reviews_') or x.startswith('review_'))]
+
+     for review_file in sorted(review_files):
+         config = os.path.basename(review_file).replace('gpt4_text_', '').replace('.jsonl', '')
+         scores = defaultdict(list)
+         print(config)
+         with open(os.path.join(args.dir, review_file) if args.dir is not None else review_file) as f:
+             for review_str in f:
+                 review = json.loads(review_str)
+                 if args.ignore is not None and review['question_id'] in args.ignore:
+                     continue
+                 if 'category' in review:
+                     scores[review['category']].append(review['tuple'])
+                     scores['all'].append(review['tuple'])
+                 else:
+                     if 'tuple' in review:
+                         scores['all'].append(review['tuple'])
+                     else:
+                         scores['all'].append(review['score'])
+         for k, v in sorted(scores.items()):
+             stats = np.asarray(v).mean(0).tolist()
+             stats = [round(x, 3) for x in stats]
+             # print(k, stats, round(stats[1]/stats[0]*100, 1))
+             print(k, round(stats[1]/stats[0]*100, 1))
+         print('=================================')
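To make the printed number concrete, here is a tiny illustration (with made-up scores, not from the repository) of the metric this script reports: per category it averages the per-question score pairs and prints the second mean as a percentage of the first.

```python
# Illustration only: the score pairs below are invented.
import numpy as np

example_tuples = [[8.0, 7.0], [9.0, 8.5], [7.5, 7.5]]  # per-question [first_score, second_score] pairs
stats = np.asarray(example_tuples).mean(0)              # column-wise means: [8.167, 7.667]
print(round(stats[1] / stats[0] * 100, 1))              # relative score -> 93.9
```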
llava/eval/table/answer/answer_alpaca-13b.jsonl ADDED
@@ -0,0 +1,80 @@
1
+ {"question_id": 1, "text": "Improving time management skills involves setting priorities, breaking tasks into smaller chunks, delegating tasks, avoiding multitasking, and taking regular breaks. Additionally, it is important to have a positive attitude and be realistic with goals. Making a to-do list, tracking time, and using technology to automate mundane tasks can also help improve time management skills.", "answer_id": "kEL9ifUHDeYuAXzevje2se", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
2
+ {"question_id": 2, "text": "The most effective ways to deal with stress include identifying and avoiding the sources of stress, developing healthy coping mechanisms, and engaging in relaxation techniques such as meditation, deep breathing, and physical exercise. Additionally, it is important to take care of one's mental and physical health, get enough rest and nutrition, and practice self-care.", "answer_id": "VcF3NrWGXhhxLkDVurNrwq", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
3
+ {"question_id": 3, "text": "The main differences between Python and JavaScript programming languages are in their syntax, structure, and the type of applications they are designed for. Python is a general-purpose language designed to be easy to read and write, while JavaScript is primarily used for web development. Python is a compiled language, while JavaScript is an interpreted language. Python is strongly typed, while JavaScript is not. Python is typically faster than JavaScript in terms of execution, while JavaScript is better suited to create interactive web pages.", "answer_id": "LpvtyQi9QdSgRrgGDxiGrT", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
4
+ {"question_id": 4, "text": "Working from home can be a great way to increase productivity, but it does require some strategic planning and tactics. Here are some tips for increasing your productivity while working from home: \n1. Establish a routine: Having a regular schedule and sticking to it can help you stay focused and motivated.\n2. Create a distraction-free zone: Eliminate distractions by turning off notifications and setting a \u201cdo not disturb\u201d mode.\n3. Take breaks: Working for long stretches without taking breaks can lead to burnout and decreased productivity.\n4. Exercise: Exercise can help improve focus and productivity.\n5. Get organized: Having a clean and organized workspace can help you stay on task.\n6. Set goals: Set short-term and long-term goals to keep you motivated and on track.\n7. Connect with colleagues: Don\u2019t be afraid to reach out to colleagues for help or advice.\n8. Track your progress: Keep track of your progress and celebrate your successes.\n9. Unplug: Taking breaks and unplugging from work can help you recharge and come back to work more refreshed.", "answer_id": "7zQm8cSTJhPtPdZdxbcfrX", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
5
+ {"question_id": 5, "text": "Sure. Quantum computing is a new technology that uses quantum mechanical phenomena to perform operations on data. Unlike traditional computers, which process information in binary form, quantum computers can process information in multiple states at the same time, allowing them to perform operations faster and with more accuracy. It also allows them to solve problems that classical computers struggle with. Quantum computing is still in its infancy, but it has the potential to revolutionize the computing industry and open up new possibilities in artificial intelligence, cryptography, and other fields.", "answer_id": "UrLEH82RHwqqLt2LyvYSKj", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
6
+ {"question_id": 6, "text": "Plant-based protein sources include legumes, grains, nuts, and seeds, and are generally lower in fat and calories than animal-based proteins. Animal-based proteins come from sources such as meat, eggs, and dairy, and are generally higher in fat and calories than plant-based proteins. Plant-based proteins are also generally lower in cholesterol and saturated fat than animal-based proteins. Additionally, plant-based proteins require fewer resources for production, such as water and land, than animal-based proteins.", "answer_id": "fpRdMTdnfirosQixuf2Gez", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
7
+ {"question_id": 7, "text": "Developing critical thinking skills involves actively questioning assumptions, taking multiple perspectives, analyzing information, and coming to logical conclusions. You can practice these skills by posing questions to yourself and working through problems, as well as engaging in thoughtful discussion with others.", "answer_id": "PvGmsCJSNFcvQKmPTnnd7s", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
8
+ {"question_id": 8, "text": "The major challenges faced by the education sector include lack of access to quality education, low educational outcomes, high costs of education, lack of adequate infrastructure and teaching materials, overcrowding in classrooms, and gender inequality. Other challenges include lack of access to technology and digital learning materials, outdated curriculum and pedagogical practices, and inadequate teacher training.", "answer_id": "n4ANAbpR3gvLPP8poPfKZ6", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
9
+ {"question_id": 9, "text": "The primary factors that influence consumer behavior include cultural, social, and personal factors. Cultural factors include shared beliefs, values, and customs within a society, while social factors include the influence of family, friends, and peers. Personal factors include an individual's lifestyle, attitudes, beliefs, and past experiences. Other factors such as advertising, marketing, and price also have an effect on consumer behavior.", "answer_id": "STJ36GrgQMcaUi7zaoNPit", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
10
+ {"question_id": 10, "text": "The most effective strategies for conflict resolution in the workplace include:\n1. Establishing open communication: Encouraging open and honest communication between parties is key to resolving any conflict.\n2. Listening actively: Active listening allows both parties to feel heard and allows for a better understanding of the other person's perspective.\n3. Identifying root causes: Understanding the underlying cause of the conflict can help to find more effective solutions that address the core issue.\n4. Focusing on interests, not positions: Focusing on the interests of both parties can help to find creative solutions that satisfy the needs of everyone involved.\n5. Developing mutual respect: Respect is essential to any successful resolution. Building mutual respect can help to create a healthier work environment.\n6. Negotiation: Working together to negotiate a mutually beneficial solution can help to create an agreement that both parties are happy with.\n7. Taking responsibility: Taking responsibility for one\u2019s words and actions can help to create a culture of accountability in the workplace.", "answer_id": "425SwYvqKPAXFGTYKXB7Cs", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
11
+ {"question_id": 11, "text": "The use of single-use plastic bottles has a number of negative implications on the environment. They are not biodegradable and can take hundreds of years to break down. This means they are around for a long time, releasing potentially harmful chemicals into the environment, and they can be easily mistaken for food by animals. Single-use plastic bottles can also be a source of micro-plastics, which are very small particles of plastic that are difficult to clean up and can be consumed by fish and other aquatic life, leading to health problems in humans who consume those fish. On the other hand, reusable bottles are a much more environmentally friendly choice since they do not produce any waste. They are also often made of materials that are safer for human health, such as stainless steel or glass. Reusable bottles are also more cost-effective since they do not need to be replaced as often as single-use plastic bottles.", "answer_id": "VbNAuj6KAkMdLJQXMo22oK", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
12
+ {"question_id": 12, "text": "When designing an inclusive and accessible public transportation system, some factors to consider include providing frequent and reliable service, accessible stops and vehicles, a variety of fare options, clear and consistent signage and information, comfortable seating and space for wheelchairs and other assistive devices, and security and safety protocols. Additionally, public transportation systems should be integrated with other services such as public parking, bike and scooter sharing, and shuttles to provide a comprehensive and accessible experience.", "answer_id": "CNGqAeu2QJbQ4QGzHJDPdq", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
13
+ {"question_id": 13, "text": "Governments can use fiscal and monetary policies to combat economic recessions. Fiscal policies involve the government spending money or cutting taxes in order to stimulate the economy, while monetary policies involve changing the money supply or interest rates to influence economic activity. These policies can be used to stabilize the economy, create jobs, and reduce poverty.", "answer_id": "E8w2qYqnm8iqCrSkUv62sz", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
14
+ {"question_id": 14, "text": "Language and cultural barriers can lead to misunderstandings and make it more difficult to communicate and form relationships in multicultural societies. Without a common language or shared culture, people can have difficulty understanding each other's perspectives and beliefs, which can lead to mistrust and a lack of understanding between different ethnic groups. To overcome these barriers, it is important to make an effort to learn about different cultures, be open-minded, and take the time to understand each other.", "answer_id": "8o5yMymfzo6kzmp9GK5MWr", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
15
+ {"question_id": 15, "text": "Artificial intelligence can be used to improve the quality and efficiency of healthcare delivery in a variety of ways. AI can be used to assist with diagnosing diseases by comparing symptoms and medical history to images of known diseases and medical conditions. AI can also be used to analyze laboratory results and patient records to identify potential problems and develop treatment plans. AI can be used to automate administrative tasks and reduce paperwork, as well as identify potential drug interactions and side effects. AI can also be used to automate appointment reminders, facilitate communication between doctors and patients, and even provide virtual health coaching to help patients manage their conditions.", "answer_id": "kbJVEEsdsSScEq5Y5furr7", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
16
+ {"question_id": 16, "text": "CRISPR-Cas9 is a recently developed gene editing technology that has revolutionized the way scientists are able to edit genomes. The technology uses a guide RNA to direct the Cas9 enzyme to a specific location in the genome, where it will cut the DNA strands. This allows for the insertion or deletion of DNA sequences, which can be used to modify the genetic code of an organism. Potential applications include treating genetic diseases, increasing crop yields, and creating pest-resistant crops. Ethically, the biggest concern is the potential misuse of the technology, which could lead to unintended consequences or be used to alter humanity in ways that could harm us.", "answer_id": "CMUL5ULZuR7YC5EPzCBN2N", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
17
+ {"question_id": 17, "text": "Vaccinations work by stimulating the body's immune system to protect against infectious diseases. Herd immunity is a concept whereby a population is protected against a certain disease when a certain percentage of the population has immunity to the disease, either through vaccination or having already contracted the disease. This is because when enough people are vaccinated, it reduces the spread of the disease and prevents it from spreading to those who are not immune.", "answer_id": "kEmDDQyNqSkyFihYEEBpuR", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
18
+ {"question_id": 18, "text": "Social media platforms can have a significant influence on how people consume and share news. By providing instant access to news sources and allowing users to easily share content, these platforms can shape the public's information landscape and the spread of misinformation. Some potential implications of this are that users may not always be able to discern between reliable and unreliable sources, and misleading information may be more easily spread and go viral. This can lead to confusion and mistrust among users, and can potentially be exploited for nefarious purposes. To mitigate these risks, it is important for users to be aware of the potential biases and agendas of the sources they are consuming, and for social media platforms to take proactive steps to curb the spread of misinformation.", "answer_id": "Qs3grQsqFVGK9EVkCkf9PB", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
19
+ {"question_id": 19, "text": "Cultural, social, and economic factors all play a role in determining an individual's food choices. For example, people growing up in households with limited economic means may be more likely to consume cheaper, less nutritious foods due to economic constraints. Additionally, cultural norms and values can influence food choices, such as the emphasis on large portions typical in some American cultural settings. Finally, social factors such as peer pressure and advertising can also sway an individual's food choices. By understanding and taking into account these various factors, health promotion strategies can be tailored and targeted to help individuals make healthier dietary decisions.", "answer_id": "kzZ6dKN7hkRWjqtdHr7Qns", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
20
+ {"question_id": 20, "text": "Natural selection is the process by which certain traits or characteristics become more or less common in a population over time. This is due to the fact that those individuals with certain advantageous traits are more likely to survive, reproduce, and pass those traits on to their offspring. Over time, this leads to the evolution and adaptation of species to their environment.", "answer_id": "DPPDG6YGFJij2GCmRL66PU", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
21
+ {"question_id": 21, "text": "I am a brave knight, loyal to my king and country, and I have come to serve and protect them both. I am skilled in the arts of war and peace, and I am proud to be here at the royal court.", "answer_id": "D62FjDb4nZANzPpfSfsiyn", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
22
+ {"question_id": 22, "text": "Avast ye maties! To the bottom of the sea we go! In search of hidden riches and fortune, we shall brave the depths and face any danger that comes our way. For glory and honor, let us set sail and find the lost Ark!", "answer_id": "k7E4NNw5kyj9DmvP5Pu2zb", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
23
+ {"question_id": 23, "text": "If I were a Shakespearean character, I would declare my love for someone in a soliloquy with passionate words and metaphors. I would express my love for them with descriptions of their beauty and the joy I feel when I am with them. I would confess my hopes and dreams for our future together and declare my devotion to them. I would make my love for them known in a way that could never be forgotten.", "answer_id": "KFocjVCejYrU3YmLjAqoUF", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
24
+ {"question_id": 24, "text": "As a superhero, my origin story is very special. I was born with special abilities that I had to keep hidden for a long time. I was scared to use them at first, but eventually I discovered that I could use my powers to protect people and make the world a better place. I've had to face a lot of challenges along the way, but I've also had a lot of help from friends and family. Now, I use my abilities to help people in need and fight for justice.", "answer_id": "dq8Sm9djS7e7y9sG9vmMJf", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
25
+ {"question_id": 25, "text": "If I were a time traveler from the year 3000, I would tell people about the incredible advancements in technology, such as the ability to travel through time, space, and dimensions; the development of intelligent robots and autonomous vehicles; the emergence of virtual reality and augmented reality; and the rise of artificial intelligence and machine learning.", "answer_id": "XZ8fG8e6u7CyKd2moK6abe", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
26
+ {"question_id": 26, "text": "The game was a nail-biter, with both teams trading leads throughout the game. With only seconds left on the clock, the home team made a bold move and passed the ball to their star player, who took the ball down the court and made a layup at the buzzer to seal the victory for the home team!", "answer_id": "oKaXHfoK4pXwrefFWXmeA8", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
27
+ {"question_id": 27, "text": "My signature dish is a seamless blend of traditional and modern cooking techniques. I use only the freshest ingredients to create a unique and unforgettable dining experience. The dish is a perfect balance of flavors and textures, with a subtle hint of my personal style. It is a dish that I am proud to call my own.", "answer_id": "ZwiZfvDWm7SETKNBfDk7Mb", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
28
+ {"question_id": 28, "text": "At the summit of Mount Everest, you are filled with a sense of accomplishment and joy. The view from the top is absolutely breathtaking - you can see for miles and miles, with the majestic Himalayan mountain range stretching out in all directions. It is a truly unforgettable experience.", "answer_id": "DxYopRe2LcTJMy3FWu6btd", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
29
+ {"question_id": 29, "text": "As a colonist on Mars, my daily life is filled with challenges. Finding resources and creating a sustainable environment is a priority. I face a number of challenges including extreme temperature fluctuations, limited access to resources, and the difficulty of travelling to and from the planet. Additionally, I must be mindful of my physical and mental health since I am so far from home. Despite these challenges, I am grateful to be able to explore and experience this new world.", "answer_id": "WC3UJVh4jQ5RUkpcRMU98L", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
30
+ {"question_id": 30, "text": "In the post-apocalyptic world, I am a survivor by necessity. I scavenge for food and supplies, and I'm always on the lookout for potential allies. I've encountered a few people who have managed to survive, and together we have formed an alliance to help each other. We hunt for food, build shelter, and work together to stay alive. We also share knowledge and skills, like how to start a fire or how to use a weapon. We look out for each other, and our alliance has strengthened our chances of survival.", "answer_id": "gTvgn6ksDjGGgdprw6AG5A", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
31
+ {"question_id": 31, "text": "There are a few ways to tell if a restaurant is popular among locals or mainly attracts tourists. Firstly, look at the clientele - if the majority of people there are tourists, it's likely that the restaurant is mainly attracting tourists. Secondly, check online reviews - if the reviews are mainly from tourists, then it's likely that the restaurant is popular with tourists. Finally, look at the prices - if the prices are higher than average for the area, it could be a sign that the restaurant is popular with locals. This information can be useful to get an idea of what kind of experience to expect, as locals might know about different aspects of the restaurant that tourists wouldn't.", "answer_id": "3q7giCk2BA3Ye4Tm9HC2iw", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
32
+ {"question_id": 32, "text": "Some subtle clues that suggest someone is pretending to understand a topic or conversation when they are actually confused or uninformed include: not asking any questions or engaging in the conversation, avoiding eye contact, fidgeting or stammering when speaking, repeating questions or comments made by other people, and nodding along without any signs of understanding.", "answer_id": "hRGsxy86v26SC4yAQS29X4", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
33
+ {"question_id": 33, "text": "Some people prefer the tactile and visual experience of using a paper map, and others may prefer to ask for directions from locals in order to get a more personalized experience. Additionally, GPS devices and smartphone apps can sometimes be inaccurate or have limited battery life, while a paper map or asking for directions may be more reliable.", "answer_id": "3n49A5ggJERfXYrLns3ZeU", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
34
+ {"question_id": 34, "text": "One way to tell if someone is genuinely interested in a conversation is to observe their body language and facial expressions. Are they making an effort to maintain eye contact? Are they leaning in and actively listening to what you are saying? Do they ask questions and provide relevant answers? If so, it is likely that they are genuinely interested in the conversation. Additionally, if someone is simply being polite, they may not ask questions or engage in the conversation as much, and may not make an effort to maintain eye contact.", "answer_id": "ErCpFtPuYVru4oTTk4WrxG", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
35
+ {"question_id": 35, "text": "Shopping at a small, locally-owned business can benefit the local community by keeping money in the area and supporting local jobs. Additionally, these businesses tend to offer a more personal experience and higher quality products than large chain stores. Furthermore, shopping at small businesses can help create a sense of place and community, and can help maintain a unique local culture.", "answer_id": "PTNoCRMZWoJk8HaKX7fW45", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
36
+ {"question_id": 36, "text": "There are several ways to assess the credibility of a source of information. Firstly, you can look at the author's credentials and experience in the relevant field. Secondly, you can check the source of the information, such as whether it is from a reliable website or publication. Thirdly, you can look at the evidence presented in the article and whether it is backed up by reliable sources. Finally, you can read other people's reviews or comments about the article to get a better idea of its credibility.", "answer_id": "n8cFs9KENNwZ4z3SR4iXTr", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
37
+ {"question_id": 37, "text": "Some people enjoy the sensation of being scared because it can create a feeling of excitement, enhance their emotional state, and provide a sense of thrill and adventure. Others may avoid these experiences because they are afraid of the unknown, or because they don't enjoy the feeling of being scared. Everyone is different, and some people may be more attracted to thrilling and exciting activities while others may prefer calmer activities.", "answer_id": "GzxL9mmEK5RzKqRbqBMUVC", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
38
+ {"question_id": 38, "text": "By observing the behavior of others in a social situation, one can gain clues as to the cultural norms and expectations of a group. For example, watching how people interact with one another, how they address each other, how they handle disagreements, and how they go about solving problems can provide insight into the cultural values of the group. Additionally, observing body language, facial expressions, and other nonverbal cues can offer clues as to the accepted norms of behavior in a particular culture.", "answer_id": "QpoHFgb9SzwuaXQQUuBUQD", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
39
+ {"question_id": 39, "text": "It is an interesting question, and one that has been debated for quite some time. I think there are valid arguments on both sides. On the one hand, exploring space is a remarkable human endeavor and could lead to tremendous scientific discoveries and technological advances. On the other hand, there are many pressing issues that need to be addressed on Earth, such as poverty, inequality, and climate change. Each side would argue that their cause is more important, and it is ultimately up to each individual to decide which one they feel more strongly about.", "answer_id": "Fxe6MS4GpP3LMDUwzY2cPA", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
40
+ {"question_id": 40, "text": "It is important to strike a balance between job creation and technological progress. Automation can increase efficiency and productivity, but it should not come at the expense of job security and people's livelihoods. Therefore, it is essential to create policies and initiatives that promote both job creation and technological progress. This could include investing in training and education to ensure that people have the skills necessary to compete in the modern job market, as well as incentivizing companies to invest in technologies that create jobs and stimulate economic growth.", "answer_id": "mJiQ2FGR4Xb8kmhZjharkw", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
41
+ {"question_id": 41, "text": "On average, the human eye blinks about 20 times per minute, or about 14,400 times per day. In a lifetime, this means that the average human will blink roughly 50 million times. This may seem like a lot, but it serves an important purpose. Blinking helps to keep the eyes lubricated and prevents them from drying out. It also helps to spread tears over the surface of the eye, washing away foreign particles and keeping the eye clean. Additionally, blinking helps to reduce the risk of eye infections by helping to clear away bacteria and other foreign substances.", "answer_id": "6Kph4RHRKEZ4YUoaHuEhBv", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
42
+ {"question_id": 42, "text": "A grain of salt contains 102.98 atoms. To calculate this, we first need to know the atomic weight of a single atom. The atomic weight of an atom is the number of protons and neutrons in the nucleus of an atom, which determines its atomic mass. The atomic weight of a single atom of salt is 58.943 g/atom. Therefore, a grain of salt contains 102.98 atoms, which is equivalent to 60.98 grams.", "answer_id": "WBwpBQwhxn5kxLDb7MschC", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
43
+ {"question_id": 43, "text": "Approximately 2000 lightning strikes occur on Earth each day. This is because the atmospheric conditions must come together in a particular way for a lightning strike to occur. Firstly, a large amount of electric charge must accumulate in the atmosphere, typically in a storm system. Then, the air must become increasingly unstable, leading to rising air and a strong updraft. This causes an electric breakdown of the air, and then an exchange of electricity occurs from the cloud to the ground, forming a lightning bolt. As these conditions are necessary for a lightning strike to occur, about 2000 lightning strikes happen on Earth each day.", "answer_id": "kf8nahQVci2ZLaYikagB7U", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
44
+ {"question_id": 44, "text": "It would take about 10 million balloons to lift a house like in the movie Up. The balloons would need to be filled with helium in order for the house to be lifted. Each balloon would need to be filled with about 89.1 cubic feet of helium in order to lift 500 pounds. To calculate how many balloons would be needed, simply multiply the weight of the house (264.72 lbs) by the number of cubic feet of helium needed to lift 500 pounds (89.1). Therefore, it would take 10 million balloons to lift a house like in the movie Up.", "answer_id": "Gptgryd4o2dC8V5aqRmeJJ", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
45
+ {"question_id": 45, "text": "According to a 2017 study, over 6.3 billion text messages are sent globally in a minute. This number is expected to grow to 6.8 billion by 2021. This is due to the increasing number of smartphones, better network coverage, and the increasing popularity of texting for communication. Furthermore, the increase in smartphones and better network coverage has also led to the rise of instant messaging applications such as WhatsApp, which has further increased the amount of text messages sent per minute.", "answer_id": "RfBWW8ZhdfTuTMb454Un4o", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
46
+ {"question_id": 46, "text": "It is estimated that over 1 billion words are spoken on Earth every day. This is due to the fact that humans are highly social creatures, and language is the primary way that we communicate with each other. In addition, language is a key component of all cultures and societies, and is constantly being used to exchange information and ideas. Furthermore, language is integral to the way we think and process information, as well as to how we connect with each other on an emotional level. Therefore, it is not surprising that so many words are spoken on a daily basis.", "answer_id": "neGgLYm47JvqN8qkw8VeoW", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
47
+ {"question_id": 47, "text": "On average, about 100 snowflakes fall per second during a typical winter. This number can vary drastically depending on the location and severity of the storm. Snowflakes are made up of tiny ice crystals, which form in the atmosphere when the temperature is below freezing and there is moisture in the air. As air moves and mixes, the crystals come together to form snowflakes. The shape, size and number of snowflakes depend on the atmospheric conditions, such as temperature and humidity, when they form.", "answer_id": "KSePbzLwsYasR3aui4HU8h", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
48
+ {"question_id": 48, "text": "It is estimated that all the books ever written would total about 100 billion pages. This is based on the fact that there are roughly 130 million unique titles in existence, and each book on average contains about 200 pages. This means that if all the books were printed out, the stack of paper would be about 400 miles high. Additionally, if each book was represented by a single page, the total number of pages would be approximately 100 billion. Throughout history, books have served as a way to preserve knowledge and information, as well as to share ideas and stories with others. As technology advances, books are increasingly available in digital formats, which allows for easier access and storage. Regardless of the medium, books will continue to be a valuable source of information for generations to come.", "answer_id": "SWWfLpP2gnWHLbYRZXNsW3", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
49
+ {"question_id": 49, "text": "Since the beginning of life, the Earth has orbited the Sun a total of 4.3 billion times. This is calculated by multiplying the average length of a year (365.25 days) by the number of years since life began (approximately 4.5 billion years). Life on Earth relies on the Sun's energy reaching the Earth's atmosphere and being redistributed by the atmosphere, the land surface, and the oceans. Without this energy, life on Earth would not be possible. Therefore, the Earth's orbit around the Sun is integral to the sustainability of life on Earth.", "answer_id": "WaBwKYhs7eAG22qCGLH2j3", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
50
+ {"question_id": 50, "text": "Since the beginning of recorded music, countless songs have been recorded. The exact number is difficult to quantify, but conservative estimates put the number of songs recorded throughout history at several hundred thousand. This includes songs from all genres, from classical music to modern pop. The invention of recording technology has allowed musicians to preserve their music and share it with the world. This has enabled generations to access and enjoy music from all eras. With the rise of digital music, the number of recordings available has grown exponentially, and new songs are being recorded all the time.", "answer_id": "MfMJeE9om7qyBbqopHouf4", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
51
+ {"question_id": 51, "text": "If the Internet had been invented during the Renaissance period, it likely would have been used primarily by the wealthy and powerful to access knowledge and information, as there was no widespread system of public education at the time. It also would have been a much slower and more expensive process to access information, as there were no high-speed connections or digital networks available.", "answer_id": "TjWPRDM6JFpPF8xeRptCKb", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
52
+ {"question_id": 52, "text": "If the Aztecs had successfully repelled the Spanish conquistadors, their culture and way of life would have likely remained intact. The Spanish would not have been able to colonize Mexico, and their influence on the region would have been much less. It's likely that the Aztecs would have continued to expand their empire and control the areas that are now parts of Central and South America. Additionally, the Native American populations in the area would have likely been able to maintain more of their traditional customs and beliefs, and their languages may not have been as heavily impacted by Spanish.", "answer_id": "iR2tYTsWTFENEP7Qy9RgtX", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
53
+ {"question_id": 53, "text": "If the Black Death had not occurred in the 14th century, the population of Europe may have continued to grow unabated, leading to more rapid urbanization and economic growth. It's likely that the Renaissance would have begun sooner, and the scientific revolution may have occurred earlier as well. Without the Black Death, there may have been no need for the industrial revolution, or at least it may have occurred later and on a smaller scale. Additionally, the spread of diseases may have been slower, leading to greater population density in certain areas.", "answer_id": "AZdS8xAi3GwAmCqkNSnnwv", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
54
+ {"question_id": 54, "text": "If Isaac Newton had focused on biology instead of physics, he might have made important discoveries in the field of medicine and biology. He may have studied the human body and its functions in greater detail, and possibly even made important breakthroughs in treating diseases. He may also have studied the behavior of different animals in the wild and made important observations about their habits and habitats.", "answer_id": "VmwifF2JD5osYKDTqv2ZRS", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
55
+ {"question_id": 55, "text": "If the Beatles had never formed as a band, the music world would have been drastically different. The British Invasion of the 1960s might not have happened, or it could have happened in a much different way. The cultural impact of the Beatles, including their influence on other musicians, fashion and society, may never have existed.", "answer_id": "mUL5UPj3qDGaCriEjL2U3B", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
56
+ {"question_id": 56, "text": "If Alan Turing had not cracked the Enigma code during World War II, the outcome of the war could have been very different. It's possible that the Allies would have eventually defeated the Axis powers, but it likely would have taken much longer and cost many more lives. With the Enigma code cracked, the Allies were able to gain a critical advantage over the Axis powers and ultimately win the war.", "answer_id": "dVdwUoVrAQJDuWxiodykiw", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
57
+ {"question_id": 57, "text": "If the Suez Canal had never been constructed, it would have major implications for international trade and navigation. The Suez Canal is an important shipping route connecting the Mediterranean Sea to the Red Sea, which in turn connects to the Indian Ocean. This allows for a shorter and more direct route for ships traveling between Europe and Asia, reducing travel time significantly. Without the Suez Canal, ships would have to travel around the Cape of Good Hope at the southern tip of Africa, which is much more time consuming and costly. This would make international trade more difficult and expensive, and could hinder global economic growth.", "answer_id": "EiNn9jjfy7dga6xfCtLtF8", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
58
+ {"question_id": 58, "text": "If the Maya civilization had not mysteriously collapsed, it is likely that their influence and power would have continued to grow. They would have likely continued to build large cities and monuments, develop their writing and astronomical systems, and expand their trade networks. Additionally, their advances in agriculture and water management may have led to increased food production and a larger population.", "answer_id": "eqG9f2R9hXVyZrZMpcqAYq", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
59
+ {"question_id": 59, "text": "If Christopher Columbus had not discovered the Americas, the indigenous people of the Americas would have continued to live in relative isolation from the rest of the world. It is likely that European colonization and the consequences of that would have been delayed, leading to a different modern world. It is also possible that other Europeans would have made the voyage across the Atlantic and discovered the Americas instead.", "answer_id": "ex42CLEzDVC2TxAvMCtYQJ", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
60
+ {"question_id": 60, "text": "If Vincent van Gogh had been successful during his lifetime, he may have been able to achieve the recognition and financial stability that he desired. He may have been able to travel and gain more exposure for his artwork, as well as connect with other like-minded individuals. He may have been able to create more artwork and expand his portfolio. He may also have been able to find a mentor or patron who could have helped him further his career.", "answer_id": "RfFANSwHeCm3Pqe2AJ86Dk", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
61
+ {"question_id": 61, "text": "The program should read a text file line by line. It should then check each line for the specified word, counting up each time the word is found. Finally, it should output the number of occurrences of the word.", "answer_id": "nzxdtWmgQhRkHeVAanVogZ", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
62
+ {"question_id": 62, "text": "The following Python function can be used to find the longest common subsequence of two input strings using dynamic programming:\ndef longest_common_subsequence(s1, s2):\n\"\"\"This function finds the longest common subsequence of two strings.\n\n Arguments:\n s1 {string} -- The first string\n s2 {string} -- The second string\n\n Returns:\n {int} -- The length of the longest common subsequence\n\n\"\"\"\n n1 = len(s1)\n n2 = len(s2)\n \n # Creating an empty matrix to store the results\n results = [[None]*(n2 + 1) for _ in range(n1 + 1)]\n \n # Initializing the first row to 0\n for i in range(n1 + 1):\n results[i][0] = 0\n \n # Filling the first row with 0\n for i in range(n1):\n results[i][0] = 0\n \n # Calculating the longest common subsequence\n for i in range(n1 + 1):\n for j in range(n2 + 1):\n if (i == 0 or j == 0):\n results[i][j] = 0\n elif (s1[i-1] == s2[j-1] and (results[i-1][j-1] < (i-j) or results[i-1][j-1] == 0)):\n results[i][j] = max(results[i-1][j-1] + 1, i+j)\n else:\n results[i][j] = 0\n \n return results[n1][n2]", "answer_id": "kRo6z5qmeiSM8Xz6BPJHLC", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
63
+ {"question_id": 63, "text": "The following regular expression can be used to validate an email address in Python:\n^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\\.[a-zA-Z0-9-.]+$", "answer_id": "7kj8odEE79UBdR4pT6u6TY", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
64
+ {"question_id": 64, "text": "Here is a program to find the nth Fibonacci number using dynamic programming:\n1. Set a variable 'n' to the nth Fibonacci number.\n2. Set two other variables 'a' and 'b' to 0 and 1 respectively.\n3. Set a loop from 0 to (n-1).\n4. In the loop, set another loop from 0 to (a+b).\n5. In the inner loop, if 'a' is even and 'b' is odd, set 'a' to a+1 and 'b' to b+1.\n6. If 'a' is odd and 'b' is even, set 'a' to a-1 and 'b' to b-1.\n7. After the inner loop, set 'n' to 'a' and break out of the outer loop.\n8. Print the value of 'n'.\nThis program uses the idea of dynamic programming to compute the nth Fibonacci number. The program first sets two auxiliary variables 'a' and 'b' to 0 and 1 respectively. Then it sets a loop from 0 to (n-1). In the loop, it sets another loop from 0 to (a+b). In the inner loop, if 'a' is even and 'b' is odd, it sets 'a' to a+1 and 'b' to b+1. If 'a' is odd and 'b' is even, it sets 'a' to a-1 and 'b' to b-1. After the inner loop, it sets 'n' to 'a' and breaks out of the outer loop. Finally, it prints the value of 'n'.", "answer_id": "UULZ4GLz9j9ot7tN7x3ySD", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
65
+ {"question_id": 65, "text": "A binary search algorithm can be used to find a specific element in a sorted array by repeatedly splitting the array in half and examining the middle element to determine whether the target element is greater than, less than, or equal to it. If the target element is found, the algorithm immediately returns the index of the element. If it isn\u2019t, the array is split in half, with one half discarded based on the comparison. The process is repeated until the element is found or the array is completely searched.", "answer_id": "Rh8P3bxSJx4p6nCZkSZBrh", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
66
+ {"question_id": 66, "text": "Implementing a queue using two stacks in Python is relatively simple. First, you need to create two stacks, which will act as the queue. Next, you need to define an enqueue and dequeue method, which will add and remove items from the queue. Finally, you need to create an instance of the queue class and start using it.", "answer_id": "R3aYUUtY46ht4ToU8US96x", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
67
+ {"question_id": 67, "text": "The program can be implemented using two nested for loops. The inner loop can be used to iterate through the elements of the first array, while the outer loop can be used to iterate through the elements of the second array. If the two elements being compared are the same, the program should mark those elements with a \"found\" flag and move on. Once the inner loop completes, the program should check if the \"found\" flag is set for all elements in the second array. If it is, then the common elements have been found.", "answer_id": "MQJFbH3KoFaSMmPgmK9LPa", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
68
+ {"question_id": 68, "text": "The value of f(2) is 1.", "answer_id": "dBWqryuzG6BHRdPgrWWpbH", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
69
+ {"question_id": 69, "text": "x = 6.", "answer_id": "W74byrCbE7XiJWjcPTZS7x", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
70
+ {"question_id": 70, "text": "The length of the line segment is 8 units.", "answer_id": "7WC5quGKdv8hADCxWaWZ57", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
71
+ {"question_id": 71, "text": "Yes, I can help you write a formal email to a potential business partner proposing a joint venture. Here's a template that you can use:\nSubject Line: Proposing a Joint Venture\nDear [Name],\nI am writing to propose a joint venture between our two companies. I believe that by combining our resources, we can create a powerful and profitable partnership. Our combined experience and knowledge in the industry will help us to create something unique and special.\nI am confident that our partnership will bring mutual benefits, such as increased profits and access to new markets. Please let me know if you are interested, and we can discuss the details.\nThank you for your time and consideration. I look forward to hearing from you soon.\nSincerely,\n[Your Name]", "answer_id": "AZER7D3RKZ9F9SXHiMjdqM", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
72
+ {"question_id": 72, "text": "Yes, I can help you write a resignation letter to your current employer. The letter should be written in a polite and professional tone and should express your appreciation for the opportunities and experiences you have gained during your employment. It should also include a clear statement of your intention to resign and the date of your last day. Here is a template you can use:\nDear [Employer],\nI am writing to inform you of my intention to resign from my position as [Position], effective [date]. I have appreciated my time here and I am grateful for the opportunities and experiences I have gained during my employment. \nI thank you for your support and understanding.\nSincerely, [Your Name]", "answer_id": "MSrdDafr77UvSHCnsPMSP3", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
73
+ {"question_id": 73, "text": "The letter of recommendation should be structured in a formal manner. Begin by introducing the student and explaining your relationship to them. Describe their qualifications, including their academic performance and relevant skills, and mention any particular accomplishments or awards they have received. Explain why the student is a strong candidate for the program and how they will make a positive contribution. End the letter by reaffirming your recommendation and offering your contact information for further questions.", "answer_id": "hxkjUkDkXhGP78Vo74B4WE", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
74
+ {"question_id": 74, "text": "Dear valued customers, \nWe are excited to announce the launch of our new software solution \u2013 a revolutionary product designed to transform the way you do business! Our new software is an intuitive and powerful tool that can help you streamline processes, save time and money, and increase overall productivity. \nAt [Company Name], we are committed to helping you stay ahead of the competition, and we believe that our new software solution is the perfect tool to help you achieve your goals. Our experienced team of professionals has worked hard to ensure that this product meets the highest standards of quality, and we are confident that it will make a real difference for your business. \nWe invite you to join us in this journey of transformation and discover how our new software can help you achieve your vision. Sign up now and get a free demo to learn more about this revolutionary product. \nThank you for your trust and support.", "answer_id": "hv9jpG9PxeJ9AnFdNzH3Jv", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
75
+ {"question_id": 75, "text": "mer,\nWe apologize for the delay you experienced with your order and thank you for your patience. We have identified the cause of the issue and have taken steps to ensure it does not happen again. We value your business and hope to be able to serve you better in the future.", "answer_id": "X5BVHT5pFpWxB88SzERrEG", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
76
+ {"question_id": 76, "text": "The video will explore the history and cultural significance of jazz music. We will begin by discussing its roots in West African cultural music traditions and its journey through the 19th and 20th centuries, focusing on the key musicians, composers, and genres that have defined the genre. We will also explore the impact of jazz on popular culture, such as in film and literature, as well as its influence on other musical styles. Finally, we will examine the legacy of jazz and its continued importance today.", "answer_id": "Af2SufKwvDBriNXnZX9UoP", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
77
+ {"question_id": 77, "text": "I have composed a travel blog post about my recent trip to Hawaii. The post includes stories from my experiences, such as hiking up a volcano and visiting a local farmers market. I also included plenty of photos to give the reader a sense of place. I highlighted some of the unique cultural experiences, such as an evening spent listening to live Hawaiian music, as well as must-see attractions, like the beautiful beaches and lush rainforests. I ended the post with a personal reflection on the trip and why it was so special to me.", "answer_id": "HVnqVdSGRwQfqSsiYxtTTn", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
78
+ {"question_id": 78, "text": "I recently watched the science fiction movie \"The Future is Blue\" and it was an incredible cinematic experience. The movie follows the story of a scientist who travels back in time to save the world from a deadly virus. The plot was interesting and engaging, with plenty of twists and turns to keep you guessing. The characters were also well developed and likable, making it easy to get invested in their story. The special effects were stunning and added a lot of visual oomph to the movie. All in all, I highly recommend this movie - it's an exciting and thought-provoking ride from start to finish.", "answer_id": "8hcmmjzyMyhFJ8ZwqJJfvw", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
79
+ {"question_id": 79, "text": "The podcast episode would begin by introducing the topic of how streaming platforms have changed the music industry. This would include discussing the impact of streaming services like Spotify and Apple Music on music consumption, the effect on musicians' ability to make money, and how streaming has changed the way people find and discover music. The episode would then explore the effects of streaming on the music industry from multiple angles, such as the impact on musicians and songwriters, how streaming has changed the sound of popular music, and the implications for the music industry's future. Finally, the episode would conclude by discussing how streaming has both benefited and hurt the music industry, and what the future holds for music consumption.", "answer_id": "CU6gGfJqQmwZNxEG2n2Uda", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
80
+ {"question_id": 80, "text": "The symphony concert was a wonderful experience. The orchestra played beautifully, with a rich and soulful sound that filled the hall. The audience was captivated, and you could feel the energy of the music in the air. The performance was a perfect balance of classic and modern pieces, and the conductor's passion was contagious. The atmosphere was festive, and the encore performance was a perfect way to end the evening. All in all, it was a brilliant concert and a delight for the ears.", "answer_id": "5SWbCoLV6NRrq5RtWQK4jp", "model_id": "alpaca-13b:v1", "metadata": {"huggingface_argument": {"do_sample": true, "temperature": 0.7, "max_new_tokens": 1024}}}
llava/eval/table/answer/answer_bard.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
llava/eval/table/answer/answer_gpt35.jsonl ADDED
The diff for this file is too large to render. See raw diff