---
license: apache-2.0
language:
- ja
---
|
# Mixtral-8x7B-v0.1-japanese
|
|
|
Mixtral-8x7B-v0.1-japanese is a model built on [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) with an expanded vocabulary and continued pretraining.

For details, see the [ABEJA tech blog](https://tech-blog.abeja.asia/).
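Since the specifics of the vocabulary expansion are covered in the blog post, a quick way to see its effect locally is to compare how the base Mixtral tokenizer and this model's tokenizer split the same Japanese sentence. This is a minimal sketch assuming both tokenizers load via `AutoTokenizer`; the printed vocabulary sizes and token counts are whatever the released tokenizers report, not figures claimed here.

```python
from transformers import AutoTokenizer

# Base tokenizer vs. the vocabulary-expanded Japanese tokenizer.
base_tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
ja_tok = AutoTokenizer.from_pretrained("abeja/Mixtral-8x7B-v0.1-japanese")

text = "人とAIが協調するためには、"

# A vocabulary expanded for Japanese should generally need fewer tokens per sentence.
print("base:", base_tok.vocab_size, "vocab,", len(base_tok.encode(text)), "tokens")
print("ja:  ", ja_tok.vocab_size, "vocab,", len(ja_tok.encode(text)), "tokens")
```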
|
|
|
|
|
# Usage
|
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "abeja/Mixtral-8x7B-v0.1-japanese"

# Load the model in half precision, sharded across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    use_cache=True,
    device_map="auto",
)
model.eval()

text = "人とAIが協調するためには、"
input_ids = tokenizer.encode(text, return_tensors="pt")

# Greedy decoding of up to 256 new tokens.
with torch.no_grad():
    output_ids = model.generate(
        input_ids.to(model.device),
        max_new_tokens=256,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
output = tokenizer.decode(output_ids.tolist()[0], skip_special_tokens=True)
print(output)
```
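The example above decodes greedily. For more varied continuations, `generate()` also supports sampling; the flags below (`do_sample`, `temperature`, `top_p`) are standard transformers options, and the values are illustrative rather than settings recommended by the model authors.

```python
# Sampling variant; temperature/top_p values are illustrative only.
with torch.no_grad():
    output_ids = model.generate(
        input_ids.to(model.device),
        max_new_tokens=256,
        do_sample=True,    # sample instead of greedy decoding
        temperature=0.7,   # softens the next-token distribution
        top_p=0.95,        # nucleus sampling cutoff
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output_ids.tolist()[0], skip_special_tokens=True))
```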
|
|
|
# Developers
|
- Kentaro Nakanishi
- Keisuke Fujimoto
- Kyo Hattori
- Shinya Otani
- Shogo Muranushi

(*) In alphabetical order
|
|
|
|