---
license: mit
---

# Compressed LLM Model Zone

The models are prepared by the Visual Informatics Group @ The University of Texas at Austin (VITA-group).

License: MIT License

## Setup environment

```bash
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install transformers==4.31.0
pip install huggingface_hub accelerate
```

## How to use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = 'llama-2-7b'
comp_degree = 0.1
comp_method = 'sparsegpt_unstructured'
model_path = f'vita-group/comp-{base_model}_{comp_method}_s{comp_degree}'
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b')
input_ids = tokenizer('Hello! I am a VITA-compressed-LLM chatbot!', return_tensors='pt').input_ids.to(model.device)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
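For unstructured pruning methods such as those above, the compression degree `s` corresponds to the fraction of weights set to zero. A quick way to sanity-check a loaded checkpoint is to measure its actual weight sparsity; the helper below is a generic sketch (not part of this repo's API) that works on any PyTorch `state_dict`:

```python
import torch

def weight_sparsity(state_dict):
    """Fraction of exactly-zero entries across all floating-point tensors."""
    zeros, total = 0, 0
    for tensor in state_dict.values():
        if torch.is_floating_point(tensor):
            zeros += (tensor == 0).sum().item()
            total += tensor.numel()
    return zeros / total if total else 0.0

# Toy example; on a real checkpoint, pass model.state_dict() instead.
sd = {'w': torch.tensor([[0.0, 1.0], [0.0, 2.0]])}
print(weight_sparsity(sd))  # 0.5
```

On a model pruned with `s0.5`, this should report a value close to 0.5 over the pruned weight matrices.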
| Base Model | Model Size | Compression Method | Compression Degree |
|------------|------------|------------------------|---------------------|
| Llama-2 | 7b | magnitude_unstructured | s0.1 |
| Llama-2 | 7b | magnitude_unstructured | s0.2 |
| Llama-2 | 7b | magnitude_unstructured | s0.3 |
| Llama-2 | 7b | magnitude_unstructured | s0.5 |
| Llama-2 | 7b | magnitude_unstructured | s0.6 |
| Llama-2 | 7b | sparsegpt_unstructured | s0.1 |
| Llama-2 | 7b | sparsegpt_unstructured | s0.2 |
| Llama-2 | 7b | sparsegpt_unstructured | s0.3 |
| Llama-2 | 7b | sparsegpt_unstructured | s0.5 |
| Llama-2 | 7b | sparsegpt_unstructured | s0.6 |
| Llama-2 | 7b | wanda_unstructured | s0.1 |
| Llama-2 | 7b | wanda_unstructured | s0.2 |
| Llama-2 | 7b | wanda_unstructured | s0.3 |
| Llama-2 | 7b | wanda_unstructured | s0.5 |
| Llama-2 | 7b | wanda_unstructured | s0.6 |
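The checkpoints above can be enumerated programmatically. A minimal sketch, assuming the `vita-group/comp-{base_model}_{method}_s{degree}` naming pattern from the usage example holds for every row:

```python
# Build the Hub path for every compressed checkpoint listed in the table.
base_model = 'llama-2-7b'
methods = ['magnitude_unstructured', 'sparsegpt_unstructured', 'wanda_unstructured']
degrees = [0.1, 0.2, 0.3, 0.5, 0.6]

model_paths = [
    f'vita-group/comp-{base_model}_{method}_s{degree}'
    for method in methods
    for degree in degrees
]
print(len(model_paths))  # 15, one per table row
```

Each entry in `model_paths` can be passed directly to `AutoModelForCausalLM.from_pretrained` as shown in the usage example.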