---
license: apache-2.0
datasets:
- yiye2023/GUIChat
- yiye2023/GUIEnv
- yiye2023/GUIAct
language:
- en
tags:
- GUI
- Agent
- minicpm
---
# πŸ“±πŸ–₯️ GUIDance: Vision Langauge Models as Your Screen Guide
Introducing the GUIDance, Model that trained on GUICourse! πŸŽ‰
By leveraging extensive OCR pretraining with grounding ability, we unlock the potential of parsing-free methods for GUIAgent.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63f706dfe94ed998c463ed66/5d4rJFWjKn-c-iOXJKYXF.png)
# News
- 2024-07-09: 🚀 We released MiniCPM-GUIDance on Hugging Face.
- 2024-03-09: 📦 We open-sourced GUICourse: [GUIAct](https://huggingface.co/datasets/yiye2023/GUIAct), [GUIChat](https://huggingface.co/datasets/yiye2023/GUIChat), [GUIEnv](https://huggingface.co/datasets/yiye2023/GUIEnv)
# ToDo
- [ ] Update detailed task type prompt
- [ ] Batch inference
# Example
Install all dependencies with pip:
```
Pillow==10.1.0
timm==0.9.10
torch==2.1.2
torchvision==0.16.2
transformers==4.40.0
sentencepiece==0.1.99
flash_attn==2.4.2
```
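For example, you can install the pinned versions above in one command (or save the list to a `requirements.txt` first):
```
pip install Pillow==10.1.0 timm==0.9.10 torch==2.1.2 torchvision==0.16.2 transformers==4.40.0 sentencepiece==0.1.99 flash_attn==2.4.2
```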
First, we suggest you clone this Hugging Face repo with git, or download it with huggingface-cli.
```
git lfs install
git clone https://huggingface.co/RhapsodyAI/minicpm-guidance
```
or
```
huggingface-cli download RhapsodyAI/minicpm-guidance
```
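By default, `huggingface-cli download` stores the files in the local Hugging Face cache. If you prefer a fixed directory that `MODEL_PATH` in the snippet below can point at, you can pass `--local-dir`:
```
huggingface-cli download RhapsodyAI/minicpm-guidance --local-dir minicpm-guidance
```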
```python
from transformers import AutoProcessor, AutoTokenizer, AutoModel
from PIL import Image
import torch

MODEL_PATH = '/path/to/minicpm-guidance'

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(MODEL_PATH, trust_remote_code=True)
# If flash_attn is unavailable, fall back to eager attention:
# model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True, attn_implementation="eager", torch_dtype=torch.bfloat16)
model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True, torch_dtype=torch.bfloat16)
model.cuda().eval()

# Currently only batch size 1 is supported.
example_messages = [
    [
        {
            "role": "user",
            "content": Image.open("/path/to/test.png").convert('RGB')
        },
        {
            "role": "user",
            "content": "What is this?"
        }
    ]
]

inputs = processor(example_messages, padding_side="right")

# Move every tensor, including tensors nested inside lists, to the GPU.
for key in inputs:
    if isinstance(inputs[key], list):
        for i in range(len(inputs[key])):
            if isinstance(inputs[key][i], torch.Tensor):
                inputs[key][i] = inputs[key][i].cuda()
    if isinstance(inputs[key], torch.Tensor):
        inputs[key] = inputs[key].cuda()

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=64, do_sample=False, num_beams=3)

texts = tokenizer.batch_decode(outputs.cpu().tolist())
for text in texts:
    print('-' * 20)
    print(text)
```
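Since the processor can return tensors nested inside Python lists, the device-moving loop above can also be factored into a small reusable helper. A minimal sketch; the name `move_to_device` is ours, not part of this repo:
```python
import torch

def move_to_device(batch: dict, device: str = "cuda") -> dict:
    """Move tensors, including tensors nested one level inside lists, to `device`."""
    for key, value in batch.items():
        if isinstance(value, torch.Tensor):
            batch[key] = value.to(device)
        elif isinstance(value, list):
            batch[key] = [v.to(device) if isinstance(v, torch.Tensor) else v
                          for v in value]
    return batch
```
With it, the explicit loop above reduces to `inputs = move_to_device(inputs)`.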
# Citation
If you find our work useful, please consider citing us:
```
@misc{chen2024guicourse,
title={GUICourse: From General Vision Language Models to Versatile GUI Agents},
author={Wentong Chen and Junbo Cui and Jinyi Hu and Yujia Qin and Junjie Fang and Yue Zhao and Chongyi Wang and Jun Liu and Guirong Chen and Yupeng Huo and Yuan Yao and Yankai Lin and Zhiyuan Liu and Maosong Sun},
year={2024},
journal={arXiv preprint arXiv:2406.11317},
}
```