shauray committed
Commit cb7a1b5
1 Parent(s): 65c4151

Create README.md

Files changed (1):
  1. README.md +65 -0

README.md ADDED
---
inference: false
language:
- en
tags:
- LLaMA
- MultiModal
---
*This is a Hugging Face-friendly model; the original can be found at https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-preview*
<br>

# LLaVA 13B Model Card

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.

**Model date:**
LLaVA-LLaMA-2-13B-Chat-Preview was trained in July 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/

## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues

## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 80K GPT-generated multimodal instruction-following data (see the illustrative record sketch below).
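
The instruction data is distributed as JSON conversation records. The sketch below indicates the general shape only; the field names follow the LLaVA project's publicly released instruction files, and every value shown is a made-up placeholder:

```python
# Illustrative shape of one GPT-generated instruction record.
# Field names follow the public LLaVA instruction JSON releases;
# all values below are made-up placeholders, not real dataset entries.
record = {
    "id": "example-0000",          # placeholder sample identifier
    "image": "example-0000.jpg",   # image file the dialogue refers to
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is happening in this scene?"},
        {"from": "gpt", "value": "A wooden dock extends over a calm lake ..."},
    ],
}
```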

## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 90 visual reasoning questions from 30 unique images randomly sampled from COCO val 2014; each image is associated with three types of questions: conversational, detailed description, and complex reasoning. We utilize GPT-4 to judge the model outputs.
We also evaluate our model on the ScienceQA dataset. Our synergy with GPT-4 sets a new state-of-the-art on the dataset.
See https://llava-vl.github.io/ for more details.
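
For context, the GPT-4-as-judge step can be sketched as follows. This is an illustrative outline using the `openai` Python client, not the exact prompt or rubric used by the LLaVA authors; the `judge` helper, the system prompt, and the 1-10 scale are all assumptions here:

```python
# Illustrative GPT-4-as-judge sketch; NOT the LLaVA paper's exact prompt or rubric.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def judge(question: str, answer: str) -> str:
    """Hypothetical helper: ask GPT-4 to rate a model answer from 1 to 10."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You rate answers to visual reasoning questions."},
            {"role": "user",
             "content": f"Question: {question}\nAnswer: {answer}\n"
                        "Rate the answer from 1 to 10 and briefly justify the score."},
        ],
    )
    return response.choices[0].message.content
```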

## Usage
Usage is as follows:

```python
import requests
from PIL import Image
from transformers import LlavaProcessor, LlavaLlamaForCausalLM

PATH_TO_CONVERTED_WEIGHTS = "shauray/Llava-Llama-2-13B-hf"

model = LlavaLlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
processor = LlavaProcessor.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)

# Download an example image from the LLaVA project page
url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

prompt = "How would you best describe the image given?"
inputs = processor(text=prompt, images=image, return_tensors="pt")

# Generate a response (max_new_tokens leaves room for a full answer)
generate_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generate_ids, skip_special_tokens=True)[0])
"""The photograph shows a wooden dock floating on the water, with mountains in the background. It is an idyllic scene that captures both
nature and human-made structures at their finest moments of beauty or tranquility depending upon one's perspective as they gaze into it"""
```
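
In full precision a 13B checkpoint is heavy, so GPU inference in half precision is usually preferable. Below is a minimal sketch, assuming a CUDA device is available; the `pixel_values` key follows the usual transformers processor convention and is an assumption for this checkpoint's custom classes:

```python
import torch

# Minimal sketch: half-precision inference on a CUDA GPU (assumes one is available).
model = LlavaLlamaForCausalLM.from_pretrained(
    PATH_TO_CONVERTED_WEIGHTS, torch_dtype=torch.float16
).to("cuda")

inputs = processor(text=prompt, images=image, return_tensors="pt")
inputs = {k: v.to("cuda") for k, v in inputs.items()}
# Cast image features to the model dtype; "pixel_values" is the key usually
# produced by transformers image processors (an assumption here).
if "pixel_values" in inputs:
    inputs["pixel_values"] = inputs["pixel_values"].half()

generate_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generate_ids, skip_special_tokens=True)[0])
```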