rinna/youri-7b-gptq


Overview

rinna/youri-7b-gptq is a quantized version of rinna/youri-7b created with AutoGPTQ. The quantized model is about 4x smaller than the original, so it requires less memory and provides faster inference.

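The exact calibration data and quantization settings used to produce this checkpoint are not documented here; the following is only a rough sketch of the standard AutoGPTQ quantization workflow, with an illustrative bit width, group size, and calibration text:

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# Illustrative settings; not necessarily the configuration used for this model
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False,
)

tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b")
model = AutoGPTQForCausalLM.from_pretrained("rinna/youri-7b", quantize_config)

# Calibration examples: tokenized text (a real recipe would use a larger corpus)
examples = [tokenizer("西田幾多郎は、日本の哲学者である。")]

model.quantize(examples)
model.save_quantized("youri-7b-gptq-local", use_safetensors=True)
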

Benchmarking

Please refer to rinna's LM benchmark page.

How to use the model

import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Load the tokenizer and the GPTQ-quantized weights
tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-gptq")
model = AutoGPTQForCausalLM.from_quantized("rinna/youri-7b-gptq", use_safetensors=True)

# Prompt for continuation; special tokens are handled via the generate arguments
text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")

# Sample a 200-token continuation with nucleus sampling
with torch.no_grad():
    output_ids = model.generate(
        input_ids=token_ids.to(model.device),
        max_new_tokens=200,
        min_new_tokens=200,
        do_sample=True,
        temperature=1.0,
        top_p=0.95,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

# Decode the generated ids (prompt + continuation) back to text
output = tokenizer.decode(output_ids.tolist()[0])
print(output)

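If optimum and auto-gptq are installed, recent versions of transformers can also load GPTQ checkpoints directly through AutoModelForCausalLM. This is not the loading path documented above, only a sketch of an assumed-compatible alternative:

# Assumed alternative: transformers' built-in GPTQ support (requires optimum
# and auto-gptq); verify compatibility before relying on this path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-gptq")
model = AutoModelForCausalLM.from_pretrained(
    "rinna/youri-7b-gptq",
    device_map="auto",          # place layers on the available GPU(s)
    torch_dtype=torch.float16,  # keep non-quantized modules in fp16
)
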
Tokenization

The model uses the original llama-2 tokenizer.

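As a quick illustration, a minimal encode/decode round trip with this tokenizer (the exact token ids depend on the vocabulary and are not shown here):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-gptq")

text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False)  # list of integer ids
print(token_ids)
print(tokenizer.decode(token_ids))  # decodes back to the original string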

How to cite

@misc{rinna-youri-7b-gptq,
    title = {rinna/youri-7b-gptq},
    author = {Wakatsuki, Toshiaki and Zhao, Tianyu and Sawada, Kei},
    url = {https://huggingface.co/rinna/youri-7b-gptq}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}

License

The Llama 2 license
