Commit 68671c7 (parent: 50fc0fe) by teowu: Update README.md

Files changed (1): README.md (+161, -0)

---
license: apache-2.0
datasets:
- teowu/Q-Instruct
language:
- en
library_name: transformers
---

```bibtex
@misc{wu2023qinstruct,
      title={Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models},
      author={Haoning Wu and Zicheng Zhang and Erli Zhang and Chaofeng Chen and Liang Liao and Annan Wang and Kaixin Xu and Chunyi Li and Jingwen Hou and Guangtao Zhai and Geng Xue and Wenxiu Sun and Qiong Yan and Weisi Lin},
      year={2023},
      eprint={2311.06783},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

# Model Card for mPLUG-Owl2 (Q-Instruct, preview v0.1)

<!-- Provide a quick summary of what the model is/does. -->

This is a preview version of mPLUG-Owl2 fine-tuned with the [Q-Instruct](https://huggingface.co/datasets/teowu/Q-Instruct) dataset, aimed at improving the low-level visual abilities (e.g., perceiving and reasoning about image quality) of multi-modality foundation models.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Q-Future Project @ S-Lab, NTU, led by @teowu
- **Model type:** Multi-modality causal language model
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** mPLUG-Owl2

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/Q-Future/Q-Instruct
- **Paper:** https://arxiv.org/abs/2311.06783
- **Demo:** https://huggingface.co/spaces/teowu/Q-Instruct-on-mPLUG-Owl-2

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

Install the mPLUG-Owl2 codebase:

```shell
git clone https://github.com/X-PLUG/mPLUG-Owl.git
cd mPLUG-Owl/mPLUG-Owl2/
pip install -e .
```
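
To confirm the editable install worked, here is a minimal import check; it only exercises the helper that the usage snippet below relies on:

```python
# Verify the mplug_owl2 package is importable after `pip install -e .`.
from mplug_owl2.mm_utils import get_model_name_from_path

print(get_model_name_from_path("teowu/mplug_owl2_7b_448_qinstruct_preview_v0.1"))
```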

Use:

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # pin inference to a single GPU

from mplug_owl2.mm_utils import get_model_name_from_path
from eval_scripts.mplug_owl_2.run_mplug_owl2 import eval_model  # script from the Q-Instruct repository

model_path = "teowu/mplug_owl2_7b_448_qinstruct_preview_v0.1"
prompt = "Rate the quality of the image. Think step by step."
image_file = "fig/sausage.jpg"

# Build a lightweight args object with the fields eval_model expects.
args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
})()

eval_model(args)
```
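
The same pattern extends to a folder of images; a minimal sketch reusing the variables from the snippet above (`fig/*.jpg` is a hypothetical location):

```python
import glob

# Score every JPEG in the folder with the same prompt; model_path, prompt,
# get_model_name_from_path, and eval_model come from the snippet above.
for image_file in sorted(glob.glob("fig/*.jpg")):
    args = type('Args', (), {
        "model_path": model_path,
        "model_base": None,
        "model_name": get_model_name_from_path(model_path),
        "query": prompt,
        "conv_mode": None,
        "image_file": image_file,
        "sep": ",",
    })()
    eval_model(args)  # run inference for this image
```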

### Downstream Use

Not yet supported.

### Out-of-Scope Use

This model is intended for low-level visual perception and understanding tasks; it is not intended to serve as a general-purpose visual assistant.

## Bias, Risks, and Limitations

See Section F of [our paper](https://arxiv.org/abs/2311.06783).

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the install and inference code under [Direct Use](#direct-use) above to get started with the model.

## Training Details

### Training Data

The model is instruction-tuned on the [Q-Instruct dataset](https://huggingface.co/datasets/teowu/Q-Instruct).
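
To take a quick look at the data, a minimal sketch with the `datasets` library (assuming the repository's files load with the default builder):

```python
from datasets import load_dataset

# Load the Q-Instruct instruction-tuning data from the Hugging Face Hub.
ds = load_dataset("teowu/Q-Instruct")
print(ds)
```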

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

TBA.

## Model Examination

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** NVIDIA A100 80G
- **Hours used:** 256 GPU-hours (32 GPUs × 8 hours)
- **Cloud Provider:** N/A
- **Compute Region:** Asia Pacific
- **Carbon Emitted:** N/A
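
No measured figure is reported; as a rough, unofficial back-of-envelope in the spirit of the calculator above (both constants below are assumptions, not measurements):

```python
# Back-of-envelope CO2 estimate following the Lacoste et al. (2019) recipe.
gpu_hours = 256           # 32 GPUs x 8 hours, from the list above
avg_power_kw = 0.4        # assumed average A100 80G board draw, in kW
grid_kgco2_per_kwh = 0.5  # assumed grid carbon intensity

energy_kwh = gpu_hours * avg_power_kw            # ~102 kWh
emissions_kg = energy_kwh * grid_kgco2_per_kwh   # ~51 kg CO2eq
print(f"~{energy_kwh:.0f} kWh, ~{emissions_kg:.0f} kg CO2eq")
```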

## Model Card Contact

Haoning Wu (@teowu)