luodian committed
Commit fb59574 · verified · 1 Parent(s): 889091a

Update README.md

Files changed (1)
  1. README.md +66 -61
README.md CHANGED
@@ -256,7 +256,7 @@ extra_gated_fields:
 ---


- # StarCoder
+ # LLaVA-OneVision

 ![banner](https://i.postimg.cc/pL17YtG4/WX20240508-220230-2x.png)

@@ -273,91 +273,96 @@ Play with the model on the [LLaVA OneVision Chat](https://llava-onevision.lmms-l

 ## Model Summary

- The LLaVA-OneVision models are 0.5/7/72B parameter models trained on [LLaVA-OneVision] from (https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data), based on Qwen2 language model with a context window of 32K tokens.
+ The LLaVA-OneVision models are 0.5/7/72B-parameter models trained on [LLaVA-OneVision](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data), based on the Qwen2 language model with a context window of 32K tokens.

 - **Repository:** [LLaVA-VL/LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT?tab=readme-ov-file)
- - **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- - **Paper:** [💫StarCoder: May the source be with you!](https://arxiv.org/abs/2305.06161)
- - **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
- - **Languages:** 80+ Programming languages
+ - **Project Website:** [llava-onevision.lmms-lab.com](https://llava-onevision.lmms-lab.com)
+ - **Paper:** [LLaVA-OneVision: Easy Visual Task Transfer](https://arxiv.org/abs/2408.03326)
+ - **Point of Contact:** [Bo Li](mailto:drluodian@gmail.com)
+ - **Languages:** English, Chinese


 ## Use

 ### Intended use

- The model was trained on GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, by using the [Tech Assistant prompt](https://huggingface.co/datasets/bigcode/ta-prompt) you can turn it into a capable technical assistant.
+ The model was trained on the [LLaVA-OneVision Dataset](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data) and can interact with single images, multi-image inputs, and videos; a multi-image sketch follows the generation example below.

 **Feel free to share your generations in the Community tab!**

 ### Generation
 ```python
- # pip install -q transformers
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- checkpoint = "bigcode/starcoder"
- device = "cuda" # for GPU usage or "cpu" for CPU usage
-
- tokenizer = AutoTokenizer.from_pretrained(checkpoint)
- model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
-
- inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
- outputs = model.generate(inputs)
- print(tokenizer.decode(outputs[0]))
- ```
-
- ### Fill-in-the-middle
- Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
-
- ```python
- input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
- inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
- outputs = model.generate(inputs)
- print(tokenizer.decode(outputs[0]))
+ # pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
+ from llava.model.builder import load_pretrained_model
+ from llava.mm_utils import process_images, tokenizer_image_token
+ from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
+ from llava.conversation import conv_templates
+
+ from PIL import Image
+ import requests
+ import copy
+ import torch
+ import warnings
+
+ warnings.filterwarnings("ignore")
+
+ # Load the checkpoint; pass any other llava_model_args you need here.
+ pretrained = "lmms-lab/llava-onevision-qwen2-0.5b-si"
+ model_name = "llava_qwen"
+ device = "cuda"
+ device_map = "auto"
+ tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map)
+ model.eval()
+
+ # Fetch an example image and preprocess it into model-ready tensors.
+ url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
+ image = Image.open(requests.get(url, stream=True).raw)
+ image_tensor = process_images([image], image_processor, model.config)
+ image_tensor = [_image.to(dtype=torch.float16, device=device) for _image in image_tensor]
+
+ # Build the prompt; make sure you use the correct chat template for each model.
+ conv_template = "qwen_1_5"
+ question = DEFAULT_IMAGE_TOKEN + "\nWhat is shown in this image?"
+ conv = copy.deepcopy(conv_templates[conv_template])
+ conv.append_message(conv.roles[0], question)
+ conv.append_message(conv.roles[1], None)
+ prompt_question = conv.get_prompt()
+
+ input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
+ image_sizes = [image.size]
+
+ # Greedy decoding over the image + question.
+ cont = model.generate(
+     input_ids,
+     images=image_tensor,
+     image_sizes=image_sizes,
+     do_sample=False,
+     temperature=0,
+     max_new_tokens=4096,
+ )
+ text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)
+ print(text_outputs)
  ```
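+
+ ### Multi-image generation
+
+ The card above states the model also handles multi-image inputs and videos (the `-si` checkpoint loaded above is the single-image-tuned release; the `*-ov` releases go through the OneVision stage for multi-image/video use). The sketch below is illustrative rather than taken from the original card: it reuses the objects loaded above, assumes `process_images` and `tokenizer_image_token` accept image lists the way current LLaVA-NeXT builds do, and the second URL is just an example file. Video inference follows the same pattern with sampled frames; see the LLaVA-NeXT repository for the video recipe.
+
+ ```python
+ # Illustrative sketch (not from the original card): two images, one prompt.
+ urls = [
+     "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true",
+     "https://github.com/haotian-liu/LLaVA/blob/main/images/llava_logo.png?raw=true",  # example second image
+ ]
+ images = [Image.open(requests.get(u, stream=True).raw) for u in urls]
+ image_tensors = process_images(images, image_processor, model.config)
+ image_tensors = [t.to(dtype=torch.float16, device=device) for t in image_tensors]
+
+ # One <image> placeholder per image, in the same order as the tensors.
+ question = "\n".join([DEFAULT_IMAGE_TOKEN] * len(images)) + "\nWhat do these two images have in common?"
+ conv = copy.deepcopy(conv_templates[conv_template])
+ conv.append_message(conv.roles[0], question)
+ conv.append_message(conv.roles[1], None)
+ input_ids = tokenizer_image_token(conv.get_prompt(), tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
+
+ out = model.generate(
+     input_ids,
+     images=image_tensors,
+     image_sizes=[im.size for im in images],
+     do_sample=False,
+     temperature=0,
+     max_new_tokens=512,
+ )
+ print(tokenizer.batch_decode(out, skip_special_tokens=True))
+ ```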

- ### Attribution & Other Requirements
-
- The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/search) that let's you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
-
- # Limitations
-
- The model has been trained on source code from 80+ programming languages. The predominant natural language in source code is English although other languages are also present. As such the model is capable of generating code snippets provided some context but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits. At this time there is no mechanism to detect content previously generated by the model. See [the paper](https://arxiv.org/pdf/2305.06161.pdf) for an in-depth discussion of the model limitations.
-
 # Training

 ## Model

- - **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- - **Pretraining steps:** 250k
- - **Pretraining tokens:** 1 trillion
+ - **Architecture:** SigLIP-SO400M vision encoder + Qwen2 language model
+ - **Pretraining Stage:** LCS-558K, 1 epoch, projector
+ - **Mid Stage:** A mixture of 4.7M high-quality synthetic data, 1 epoch, full model
+ - **Final-Image Stage:** A mixture of 3.6M single-image data, 1 epoch, full model
+ - **OneVision Stage:** A mixture of 1.6M single-image/multi-image/video data, 1 epoch, full model
 - **Precision:** bfloat16

- ## Hardware
-
- - **GPUs:** 512 Tesla A100
- - **Training time:** 24 days (320,256 GPU hours pretraining + 11,208 GPU hours Python fine-tuning)
- - **Training FLOPS:** 8.46E+22
+ ## Hardware & Software

- ## Software
-
- - **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
+ - **GPUs:** 256 × NVIDIA Tesla A100 (for training the whole model series)
+ - **Orchestration:** [Hugging Face Trainer](https://huggingface.co/docs/transformers/main_classes/trainer)
 - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- - **BP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
-
- # License
- The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
-
- **Email contact@bigcode-project.org with questions related to the license agreement and for appeals relating to use restrictions.**

 # Citation
 ```
- @article{li2023starcoder,
- title={StarCoder: may the source be with you!},
- author={Raymond Li and Loubna Ben Allal and Yangtian Zi and Niklas Muennighoff and Denis Kocetkov and Chenghao Mou and Marc Marone and Christopher Akiki and Jia Li and Jenny Chim and Qian Liu and Evgenii Zheltonozhskii and Terry Yue Zhuo and Thomas Wang and Olivier Dehaene and Mishig Davaadorj and Joel Lamy-Poirier and João Monteiro and Oleh Shliazhko and Nicolas Gontier and Nicholas Meade and Armel Zebaze and Ming-Ho Yee and Logesh Kumar Umapathi and Jian Zhu and Benjamin Lipkin and Muhtasham Oblokulov and Zhiruo Wang and Rudra Murthy and Jason Stillerman and Siva Sankalp Patel and Dmitry Abulkhanov and Marco Zocca and Manan Dey and Zhihan Zhang and Nour Fahmy and Urvashi Bhattacharyya and Wenhao Yu and Swayam Singh and Sasha Luccioni and Paulo Villegas and Maxim Kunakov and Fedor Zhdanov and Manuel Romero and Tony Lee and Nadav Timor and Jennifer Ding and Claire Schlesinger and Hailey Schoelkopf and Jan Ebert and Tri Dao and Mayank Mishra and Alex Gu and Jennifer Robinson and Carolyn Jane Anderson and Brendan Dolan-Gavitt and Danish Contractor and Siva Reddy and Daniel Fried and Dzmitry Bahdanau and Yacine Jernite and Carlos Muñoz Ferrandis and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
- year={2023},
- eprint={2305.06161},
- archivePrefix={arXiv},
- primaryClass={cs.CL}
+ @article{li2024llavaonevision,
+ title={LLaVA-OneVision: Easy Visual Task Transfer},
+ author={Li, Bo and Zhang, Yuanhan and Guo, Dong and Zhang, Renrui and Li, Feng and Zhang, Hao and Zhang, Kaichen and Li, Yanwei and Liu, Ziwei and Li, Chunyuan},
+ journal={arXiv preprint arXiv:2408.03326},
+ year={2024}
 }
 ```