Model Card for LLaVa-8x7B

The LLaVa-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts model. LLaVa-8x7B outperforms Llama 3 70B on most benchmarks we tested.

Warning

This repo contains weights that are compatible with vLLM serving of the model as well as with the Hugging Face transformers library. It is based on the original LLaVa torrent release, but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
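
As a rough sketch of the vLLM path mentioned above (assuming a recent vLLM install and that this repository id is accepted directly as a vLLM model id; this is an illustration, not an official serving recipe):

from vllm import LLM, SamplingParams

# assumption: the Hugging Face repository id also works as a vLLM model id
llm = LLM(model="Satyam-Singh/LLava-8x7B-v0.1")
sampling_params = SamplingParams(max_tokens=20)

outputs = llm.generate(["Hello my name is"], sampling_params)
print(outputs[0].outputs[0].text)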

Run the model

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Satyam-Singh/LLava-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

By default, transformers loads the model in full precision. You may therefore want to further reduce the memory requirements by using the optimizations available in the HF ecosystem:

In half-precision

Note: float16 precision only works on GPU devices.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Satyam-Singh/LLava-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Lower precision (8-bit & 4-bit) using bitsandbytes

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Satyam-Singh/LLava-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
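
The heading above also mentions 8-bit. A minimal sketch of the same example with 8-bit loading, assuming bitsandbytes and accelerate are installed (load_in_8bit is the legacy shortcut; newer transformers releases prefer passing a BitsAndBytesConfig):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Satyam-Singh/LLava-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 8-bit quantization via bitsandbytes; the model is placed on GPU automatically
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))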

Load the model with Flash Attention 2

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Satyam-Singh/LLava-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Flash Attention 2 requires a half-precision dtype and a GPU
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, attn_implementation="flash_attention_2").to(0)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
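
Note that Flash Attention 2 also requires the flash-attn package to be installed and a supported GPU; see the transformers documentation for details.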

Notice

LLava-8x7B-v0.1 is a pretrained base model and therefore does not have any moderation mechanisms.

Love From The LLaVa AI & UniVerse Unique AI Team
