
A GPTQ-quantized version of [abacaj/phi-2-super](https://huggingface.co/abacaj/phi-2-super).
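GPTQ stores the model's linear-layer weights as packed low-bit integers alongside FP16 scales, which is why the checkpoint is much smaller than the FP16 original. As a quick sanity check, you can inspect the quantization settings shipped with the checkpoint; this is a minimal sketch, assuming the Hub id `liuxiong332/phi-2-super-gptq` (adjust to a local path if you downloaded the weights):

```python
from transformers import AutoConfig

# Repo id assumed from this model card; swap in a local path if needed.
config = AutoConfig.from_pretrained("liuxiong332/phi-2-super-gptq")

# GPTQ checkpoints carry their quantization settings (bits, group size, ...)
# in config.json; print them if present.
print(getattr(config, "quantization_config", None))
```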

## Usage

```python
# GPTQ loading through transformers typically needs: pip install optimum auto-gptq
import transformers
import torch

if __name__ == "__main__":
    # model_name = "abacaj/phi-2-super"  # the original, unquantized model
    model_name = "./models/phi-2-super-gptq"  # local path to the GPTQ checkpoint
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

    model = transformers.AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="cuda",
    ).eval()

    messages = [{"role": "user", "content": "Hello, who are you?"}]
    inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(
        model.device
    )
    # Prompt length in tokens, used below to strip the prompt from the output.
    input_ids_cutoff = inputs.size(dim=1)

    with torch.no_grad():
        generated_ids = model.generate(
            input_ids=inputs,
            use_cache=True,
            max_new_tokens=512,
            temperature=0.2,
            top_p=0.95,
            do_sample=True,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )

    # Decode only the newly generated tokens, not the echoed prompt.
    completion = tokenizer.decode(
        generated_ids[0][input_ids_cutoff:],
        skip_special_tokens=True,
    )

    print(completion)
```
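For interactive use, the same `generate` call can stream tokens to stdout as they are produced. This is a minimal sketch using transformers' `TextStreamer`, reusing `model`, `tokenizer`, and `inputs` from the snippet above (sampling parameters copied from it):

```python
import torch
from transformers import TextStreamer

# Prints decoded tokens as they are generated;
# skip_prompt=True suppresses echoing the prompt itself.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.no_grad():
    model.generate(
        input_ids=inputs,
        streamer=streamer,
        use_cache=True,
        max_new_tokens=512,
        temperature=0.2,
        top_p=0.95,
        do_sample=True,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )
```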
Model size: 601M params (Safetensors; tensor types I32 · FP16)