---
license: apache-2.0
datasets:
- teowu/Q-Instruct
language:
- en
library_name: transformers
---
```bibtex
@misc{wu2023qinstruct,
title={Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models},
author={Haoning Wu and Zicheng Zhang and Erli Zhang and Chaofeng Chen and Liang Liao and Annan Wang and Kaixin Xu and Chunyi Li and Jingwen Hou and Guangtao Zhai and Geng Xue and Wenxiu Sun and Qiong Yan and Weisi Lin},
year={2023},
eprint={2311.06783},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{ye2023mplugowl2,
title={mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration},
author={Qinghao Ye and Haiyang Xu and Jiabo Ye and Ming Yan and Anwen Hu and Haowei Liu and Qi Qian and Ji Zhang and Fei Huang and Jingren Zhou},
year={2023},
eprint={2311.04257},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Model Card for mplug_owl2_7b_448_qinstruct_preview_v0.1
<!-- Provide a quick summary of what the model is/does. -->
This is a preview (v0.1) checkpoint of mPLUG-Owl2 fine-tuned on the [Q-Instruct](https://huggingface.co/datasets/teowu/Q-Instruct) dataset to improve its low-level visual abilities, such as perceiving and describing image quality.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Q-Future Project @ S-Lab, NTU, led by @teowu
- **Model type:** Multi-modality Causal Language Model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** mPLUG-Owl2
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/Q-Future/Q-Instruct
- **Paper:** https://arxiv.org/abs/2311.06783
- **Demo:** https://huggingface.co/spaces/teowu/Q-Instruct-on-mPLUG-Owl-2
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
Install:
```shell
git clone https://github.com/X-PLUG/mPLUG-Owl.git
cd mPLUG-Owl/mPLUG-Owl2/
pip install -e .
```
Use:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # select a single GPU

from mplug_owl2.mm_utils import get_model_name_from_path
# eval_scripts/ is part of the Q-Instruct repository (https://github.com/Q-Future/Q-Instruct),
# so run this snippet from the root of that repository.
from eval_scripts.mplug_owl_2.run_mplug_owl2 import eval_model

model_path = "teowu/mplug_owl2_7b_448_qinstruct_preview_v0.1"
prompt = "Rate the quality of the image. Think step by step."
image_file = "fig/sausage.jpg"

# Build a lightweight args object that mirrors the script's command-line arguments.
args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
})()

eval_model(args)
```
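Because `args` simply mirrors the reference script's command-line arguments, the same pattern can be reused programmatically. Below is a minimal, hedged sketch for querying several images in a loop; the extra image path is a placeholder, and if the script follows the usual single-shot design it will reload the checkpoint on every call, so this is only convenient for a handful of images.
```python
from mplug_owl2.mm_utils import get_model_name_from_path
from eval_scripts.mplug_owl_2.run_mplug_owl2 import eval_model

model_path = "teowu/mplug_owl2_7b_448_qinstruct_preview_v0.1"
prompt = "Rate the quality of the image. Think step by step."

# Placeholder image paths for illustration only.
for image_file in ["fig/sausage.jpg", "fig/another_image.jpg"]:
    args = type('Args', (), {
        "model_path": model_path,
        "model_base": None,
        "model_name": get_model_name_from_path(model_path),
        "query": prompt,
        "conv_mode": None,
        "image_file": image_file,
        "sep": ",",
    })()
    eval_model(args)  # the reference script prints the generated answer
```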
### Downstream Use
Not yet supported.
### Out-of-Scope Use
Use as a general-purpose visual assistant is out of scope; this model is intended only for low-level visual perception and understanding tasks.
## Bias, Risks, and Limitations
See Section F of our paper: https://arxiv.org/abs/2311.06783.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
See the Direct Use section above for installation and inference code.
## Training Details
### Training Data
https://huggingface.co/datasets/teowu/Q-Instruct
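As a minimal sketch for inspecting the data locally (this assumes only the standard `huggingface_hub` API; the exact file layout of the dataset repository may differ, and image archives may be distributed separately as described in the Q-Instruct repository):
```python
from huggingface_hub import snapshot_download

# Download the Q-Instruct dataset repository to a local cache directory.
local_dir = snapshot_download(
    repo_id="teowu/Q-Instruct",
    repo_type="dataset",
)
print("Dataset files downloaded to:", local_dir)
```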
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
TBA.
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** NVIDIA A100 80GB
- **Hours used:** 256 GPU-hours (32 GPUs × 8 hours)
- **Cloud Provider:** N/A
- **Compute Region:** Asia Pacific
- **Carbon Emitted:** N/A
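No emissions figure is reported, but a rough order-of-magnitude estimate can be derived from the numbers above. The per-GPU power draw and grid carbon intensity below are assumptions for illustration, not measured values, and overheads such as data-center PUE are ignored.
```python
# Rough CO2 estimate from the training footprint reported in this card.
# ASSUMPTIONS (not measured): ~400 W average draw per A100 80GB,
# and a grid carbon intensity of ~0.5 kgCO2eq per kWh.
gpu_hours = 256            # 32 GPUs x 8 hours (from this card)
avg_power_kw = 0.4         # assumed average power per GPU, in kW
carbon_intensity = 0.5     # assumed kgCO2eq per kWh

energy_kwh = gpu_hours * avg_power_kw           # ~102 kWh
emissions_kg = energy_kwh * carbon_intensity    # ~51 kgCO2eq
print(f"~{energy_kwh:.0f} kWh, ~{emissions_kg:.0f} kg CO2eq (rough estimate)")
```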
## Model Card Contact
Haoning Wu, @teowu