---
tags:
  - gptq
  - 4bit
  - int4
  - gptqmodel
  - modelcloud
  - llama-3.1
  - 70b
  - instruct
---

This model was quantized to 4-bit with [GPTQModel](https://github.com/ModelCloud/GPTQModel) using the following configuration:

- bits: 4
- group_size: 128
- desc_act: true
- static_groups: false
- sym: true
- lm_head: false
- damp_percent: 0.01
- true_sequential: true
- model_name_or_path: ""
- model_file_base_name: "model"
- quant_method: "gptq"
- checkpoint_format: "gptq"
- meta:
  - quantizer: "gptqmodel:0.9.9-dev0"
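
For reference, a quantization run producing a configuration like the one above could look roughly like the sketch below. This is an illustration, not the exact script used: the calibration texts are placeholders, the base model id is presumed from the repo name, and the `QuantizeConfig` / `from_pretrained` signatures follow the gptqmodel 0.9.x API and may differ in other releases.

```python
# Sketch only (assumes the gptqmodel 0.9.x API; pip install gptqmodel).
# The calibration texts below are placeholders -- a real run would use
# a few hundred representative samples.
from transformers import AutoTokenizer
from gptqmodel import GPTQModel, QuantizeConfig

base_model = "meta-llama/Meta-Llama-3.1-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)

calibration_dataset = [
    tokenizer(text, return_tensors="pt")
    for text in ["Placeholder calibration sample.", "Another placeholder sample."]
]

quant_config = QuantizeConfig(
    bits=4,             # 4-bit integer weights
    group_size=128,     # quantize weights in groups of 128 columns
    desc_act=True,      # process columns in activation order
    sym=True,           # symmetric quantization
    damp_percent=0.01,  # Hessian dampening
)

model = GPTQModel.from_pretrained(base_model, quant_config)
model.quantize(calibration_dataset)
model.save_quantized("Meta-Llama-3.1-70B-Instruct-gptq-4bit")
```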

Here is an inference example using GPTQModel:

```python
from transformers import AutoTokenizer
from gptqmodel import GPTQModel

model_name = "ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit"

prompt = [{"role": "user", "content": "I am in Shanghai, preparing to visit the natural history museum. Can you tell me the best way to"}]

# Load the tokenizer and the quantized model from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = GPTQModel.from_quantized(model_name)

# Build the chat-formatted input and generate
input_tensor = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids=input_tensor.to(model.device), max_new_tokens=100)

# Decode only the newly generated tokens, skipping the prompt
result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```
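
Since the checkpoint uses the standard `gptq` format, it can usually also be loaded through plain transformers, provided a GPTQ backend (for example, optimum with auto-gptq or gptqmodel) is installed. A minimal sketch under that assumption:

```python
# Sketch: loading the same checkpoint via transformers (assumes a GPTQ
# backend such as optimum + auto-gptq is installed in the environment).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
```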