---
language:
- ja
tags:
- heron
- vision
- image-captioning
- VQA
pipeline_tag: image-to-text
license:
- apache-2.0
inference: false
---
# Heron GIT Japanese ELYZA Llama 2 Fast 7B
![heron](./heron_image.png)
## Model Overview
Heron GIT Japanese ELYZA Llama 2 Fast 7B is a vision-language model that can hold conversations about input images.<br>
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the library code for details.
## Usage
Follow [the installation guide](https://github.com/turingmotors/heron/#1-clone-this-repository).
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor
from heron.models.git_llm.git_llama import GitLlamaForCausalLM
device_id = 0
# prepare a pretrained model
model = GitLlamaForCausalLM.from_pretrained('turing-motors/heron-chat-git-ELYZA-fast-7b-v0')
model.eval()
model.to(f"cuda:{device_id}")
# prepare a processor
processor = AutoProcessor.from_pretrained('turing-motors/heron-chat-git-ELYZA-fast-7b-v0')
# prepare inputs
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = f"##human: これは何の写真ですか?\n##gpt: "
# do preprocessing
inputs = processor(
    text,
    image,
    return_tensors="pt",
    truncation=True,
)
inputs = {k: v.to(f"cuda:{device_id}") for k, v in inputs.items()}
# set eos token
eos_token_id_list = [
    processor.tokenizer.pad_token_id,
    processor.tokenizer.eos_token_id,
]
# do inference
with torch.no_grad():
    out = model.generate(**inputs, max_length=256, do_sample=False, eos_token_id=eos_token_id_list)
# print result
print(processor.tokenizer.batch_decode(out))
```
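The decoded output contains the full prompt followed by the model's generated reply. The helper below is a minimal sketch, not part of the heron API: it strips the prompt using the `##human:`/`##gpt:` markers from the example above and builds a follow-up turn, assuming `</s>` is the tokenizer's end-of-sequence token.

```python
# Illustrative helper, not part of the heron API: take the text after the
# last "##gpt:" marker from the prompt format above and trim special tokens.
def extract_reply(decoded: str) -> str:
    reply = decoded.split("##gpt:")[-1]
    return reply.replace("</s>", "").strip()  # "</s>" assumed as the EOS token

reply = extract_reply(processor.tokenizer.batch_decode(out)[0])
print(reply)

# A follow-up turn can reuse the same format by appending the history
# (hypothetical second question: "Where do you think this photo was taken?").
text = (
    "##human: これは何の写真ですか?\n"
    f"##gpt: {reply}\n"
    "##human: この写真はどこで撮られたと思いますか?\n"
    "##gpt: "
)
```

The new `text` can then be passed through the same preprocessing and generation steps shown above.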
## Model Details
* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adaptor type**: [GIT](https://arxiv.org/abs/2205.14100)
* **Language Model**: [ELYZA Japanese Llama-2 7B fast instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-fast-instruct)
* **Language(s)**: Japanese
* **License**: This model is licensed under [the LLAMA 2 Community License](https://github.com/facebookresearch/llama/blob/main/LICENSE).
### Training
This model was trained in two phases. In the first phase, the GIT Adaptor was trained on Japanese STAIR Captions. In the second phase, the model was fine-tuned with LoRA on LLaVA-Instruct-150K-JA and the Japanese Visual Genome VQA dataset.
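As a rough illustration of the second phase, a LoRA setup with the `peft` library might look like the sketch below. All hyperparameters shown (`r`, `lora_alpha`, `target_modules`, dropout) are assumed placeholder values, not the settings used to train this model; the heron repository's training configs are authoritative.

```python
# Illustrative only: the rank, alpha, and target modules below are
# assumed placeholder values, not the settings used for this model.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # Llama attention projections (assumed)
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Wrap the language model so that only the LoRA adapters are trainable.
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()
```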
### Training Dataset
- [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA)
- [Japanese STAIR Captions](http://captions.stair.center/)
- [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa)
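LLaVA-Instruct-150K-JA is hosted on the Hugging Face Hub, so it should be loadable with the `datasets` library (a sketch, assuming the repository's default configuration):

```python
from datasets import load_dataset

# Load the Japanese LLaVA instruction-tuning data from the Hub.
ds = load_dataset("turing-motors/LLaVA-Instruct-150K-JA")
print(ds)
```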
## Use and Limitations
### Intended Use
This model is intended for use in chat-like applications and for research purposes.
### Limitations
The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.
## How to cite
```bibtex
@misc{GitElyzaFast,
    url = {https://huggingface.co/turing-motors/heron-chat-git-ELYZA-fast-7b-v0},
    title = {Heron GIT Japanese ELYZA Llama 2 Fast 7B},
    author = {Yuichi Inoue and Kotaro Tanahashi and Yu Yamaguchi},
}
```
## Citations
```bibtex
@misc{elyzallama2023,
    title = {ELYZA-japanese-Llama-2-7b},
    url = {https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b},
    author = {Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura},
    year = {2023},
}
```