---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama2
datasets:
- mc4
- cc100
- oscar
- wikipedia
- EleutherAI/pile
language:
- ja
- en
inference: false
---
# rinna/youri-7b-gptq
## Overview

`rinna/youri-7b-gptq` is the quantized version of [rinna/youri-7b](https://huggingface.co/rinna/youri-7b), produced with AutoGPTQ. The quantized model is 4x smaller than the original, so it requires less memory and provides faster inference; a rough size estimate is sketched after the list below.
* **Library**: Refer to the original model for library details.
* **Model architecture**: Refer to the original model for architecture details.
* **Continual pre-training**: Refer to the original model for pre-training details.
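As a sanity check of the 4x figure, here is a back-of-the-envelope estimate (a minimal sketch: the 7B parameter count is approximate, and the real on-disk size also depends on GPTQ group size and packing metadata):

```python
# Back-of-the-envelope weight-size estimate: fp16 vs. 4-bit GPTQ.
# Assumes ~7e9 parameters; actual sizes vary with group size and metadata.
params = 7e9
fp16_gib = params * 2 / 1024**3    # fp16 stores 2 bytes per weight
int4_gib = params * 0.5 / 1024**3  # 4-bit packing stores ~0.5 bytes per weight
print(f"fp16 ~{fp16_gib:.1f} GiB, 4-bit ~{int4_gib:.1f} GiB, "
      f"~{fp16_gib / int4_gib:.0f}x smaller")
```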
## Authors

* Toshiaki Wakatsuki
* Tianyu Zhao
* Kei Sawada
## Benchmarking

Please refer to rinna's LM benchmark page.
## How to use the model

```python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Load the tokenizer and the 4-bit GPTQ-quantized weights.
tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-gptq")
model = AutoGPTQForCausalLM.from_quantized("rinna/youri-7b-gptq", use_safetensors=True)

# Japanese prompt: "Kitaro Nishida is, ..."
text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")

# Sample a continuation of exactly 200 new tokens.
with torch.no_grad():
    output_ids = model.generate(
        input_ids=token_ids.to(model.device),
        max_new_tokens=200,
        min_new_tokens=200,
        do_sample=True,
        temperature=1.0,
        top_p=0.95,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
```
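If you prefer deterministic output over sampling, greedy decoding is a minimal variation on the call above (reusing `model`, `tokenizer`, and `token_ids`; this is plain `transformers` `generate` usage, not specific to this model):

```python
# Greedy decoding: deterministic continuation, no temperature/top_p sampling.
with torch.no_grad():
    output_ids = model.generate(
        input_ids=token_ids.to(model.device),
        max_new_tokens=200,
        do_sample=False,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id
    )
print(tokenizer.decode(output_ids.tolist()[0]))
```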
## Tokenization

The model uses the original llama-2 tokenizer.
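To see how the llama-2 SentencePiece vocabulary segments Japanese input, you can inspect the tokenizer directly (a minimal sketch; the comments describe typical behavior rather than guaranteed output):

```python
from transformers import AutoTokenizer

# The tokenizer shipped with the quantized model is llama-2's original one.
tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-gptq")
pieces = tokenizer.tokenize("西田幾多郎は、")
print(pieces)       # SentencePiece pieces; rare kanji may fall back to byte tokens
print(len(pieces))  # number of tokens the prompt consumes
```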
## How to cite

```bibtex
@misc{RinnaYouri7bGPTQ,
    url={https://huggingface.co/rinna/youri-7b-gptq},
    title={rinna/youri-7b-gptq},
    author={Wakatsuki, Toshiaki and Zhao, Tianyu and Sawada, Kei}
}
```