---
language:
- ja
tags:
- heron
- vision
- image-captioning
- VQA
pipeline_tag: image-to-text
license: cc-by-nc-4.0
inference: false
---
# Heron BLIP Japanese StableLM Base 7B v1

![heron](./heron_image.png)

## Model Description
Heron BLIP Japanese StableLM Base 7B is a vision-language model that can converse about input images.<br>
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details.

## Usage

First follow [the installation guide](https://github.com/turingmotors/heron/), then run the example below.

```python
import requests
import torch
from PIL import Image

from heron.models.video_blip import VideoBlipForConditionalGeneration, VideoBlipProcessor
from transformers import LlamaTokenizer

device_id = 0
device = f"cuda:{device_id}"

MODEL_NAME = "turing-motors/heron-chat-blip-ja-stablelm-base-7b-v1"

# load the model in fp16 and move it to the GPU
model = VideoBlipForConditionalGeneration.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, ignore_mismatched_sizes=True
)
model.eval()
model.to(device)

# prepare a processor: the image processor comes from BLIP-2, the tokenizer
# from Japanese StableLM
processor = VideoBlipProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
tokenizer = LlamaTokenizer.from_pretrained(
    "novelai/nerdstash-tokenizer-v1", additional_special_tokens=["▁▁"]
)
processor.tokenizer = tokenizer

# prepare inputs
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# "What is amusing about this image?"
text = "##human: この画像の面白い点は何ですか?\n##gpt: "

# do preprocessing
inputs = processor(
    text=text,
    images=image,
    return_tensors="pt",
    truncation=True,
)

inputs = {k: v.to(device) for k, v in inputs.items()}
inputs["pixel_values"] = inputs["pixel_values"].to(device, torch.float16)

# stop generation at padding, EOS, or the "##" that opens the next turn
eos_token_id_list = [
    processor.tokenizer.pad_token_id,
    processor.tokenizer.eos_token_id,
    int(tokenizer.convert_tokens_to_ids("##")),
]

# do inference (greedy decoding)
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_length=256,
        do_sample=False,
        eos_token_id=eos_token_id_list,
        no_repeat_ngram_size=2,
    )

# print result
print(processor.tokenizer.batch_decode(out))
```
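
Generation stops at `##`, the token that opens the next turn, so the `##human:`/`##gpt:` format extends naturally to multi-turn conversation. A small helper for building such prompts might look like this (`build_prompt` is a hypothetical convenience function, not part of the heron API):

```python
def build_prompt(history: list[tuple[str, str]], question: str) -> str:
    """Concatenate past (question, answer) turns and a new question
    into the ##human:/##gpt: format shown above."""
    prompt = ""
    for q, a in history:
        prompt += f"##human: {q}\n##gpt: {a}\n"
    return prompt + f"##human: {question}\n##gpt: "

# single-turn prompt, identical to the one used above
text = build_prompt([], "この画像の面白い点は何ですか?")  # "What is amusing about this image?"
```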

## Model Details
* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adapter type**: [BLIP-2](https://arxiv.org/abs/2301.12597)
* **Language Model**: [Japanese StableLM Base Alpha](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b)
* **Language(s)**: Japanese

### Training
In the first phase, only the BLIP-2 adapter was trained, using STAIR Captions. In the second phase, the model was fine-tuned with LoRA on [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA) and Japanese Visual Genome; a generic sketch of such a LoRA setup follows.
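
The heron training scripts define the exact configuration; purely to illustrate the technique, applying LoRA with [peft](https://github.com/huggingface/peft) looks roughly like this (the base model, rank, and target modules below are placeholders, not the values used for this checkpoint):

```python
# A generic LoRA sketch with peft, shown on a small public model so it runs
# anywhere; it does NOT reproduce this checkpoint's training configuration.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

lora_config = LoraConfig(
    r=16,                                 # illustrative rank, not the real value
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # OPT attention projections
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()        # only the adapter weights are trainable
```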

### Training Datasets

- STAIR Captions (adapter pre-training)
- [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA) (LoRA fine-tuning)
- Japanese Visual Genome (LoRA fine-tuning)
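
The instruction-tuning data is public on the Hub. Assuming it resolves with the standard `datasets` loader and exposes a `train` split (both assumptions, not stated on this card), one record can be inspected like this:

```python
from datasets import load_dataset

# Assumption: the generic loader handles this dataset and a "train" split exists.
ds = load_dataset("turing-motors/LLaVA-Instruct-150K-JA", split="train")
print(ds[0])  # inspect one record's conversation schema
```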

## Use and Limitations

### Intended Use

This model is intended for use in chat-like applications and for research purposes.

### Limitations

The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.

## How to cite

```bibtex
@misc{BlipJapaneseStableLM,
    url = {https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0},
    title = {Heron BLIP Japanese StableLM Base 7B},
    author = {Tanahashi, Kotaro and Inoue, Yuichi and Yamaguchi, Yu}
}
```

## Citations

```bibtex
@misc{JapaneseInstructBLIPAlpha,
    url = {https://huggingface.co/stabilityai/japanese-instructblip-alpha},
    title = {Japanese InstructBLIP Alpha},
    author = {Shing, Makoto and Akiba, Takuya}
}
```