
This is a generative model based on ai-forever/ruGPT-3.5-13B, quantized to 8-bit with GPTQ.
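The example below assumes the `transformers` and `auto_gptq` packages are installed (e.g. `pip install transformers auto-gptq`) and that a CUDA device is available.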

## Usage example

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Load the quantized weights and the matching tokenizer.
model = AutoGPTQForCausalLM.from_quantized('Gaivoronsky/ruGPT-3.5-13B-8bit', device="cuda:0", use_triton=False)
tokenizer = AutoTokenizer.from_pretrained('Gaivoronsky/ruGPT-3.5-13B-8bit')

# Prompt in the dialogue format used by the model
# ("Human: How much does a giraffe weigh? Assistant: ").
request = "Человек: Сколько весит жираф? Помощник: "
encoded_input = tokenizer(request, return_tensors='pt',
                          add_special_tokens=False).to('cuda')
output = model.generate(
    **encoded_input,
    num_beams=2,
    do_sample=True,
    max_new_tokens=100
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
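Building on the example above, a small helper can wrap a question in the same dialogue format and return only the newly generated text. This is an illustrative sketch, not part of the original card; the `ask` name and generation parameters are assumptions:

```python
# Minimal helper sketch reusing the model and tokenizer loaded above.
def ask(question: str, max_new_tokens: int = 100) -> str:
    # Wrap the question in the dialogue format the model expects.
    prompt = f"Человек: {question} Помощник: "
    encoded = tokenizer(prompt, return_tensors='pt',
                        add_special_tokens=False).to('cuda')
    output = model.generate(
        **encoded,
        num_beams=2,
        do_sample=True,
        max_new_tokens=max_new_tokens,
    )
    # Decode only the newly generated tokens, skipping the prompt itself.
    new_tokens = output[0][encoded['input_ids'].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(ask("Сколько весит жираф?"))  # "How much does a giraffe weigh?"
```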