SabbatH 2x7B
Model Description
SabbatH 2x7B is a Japanese language model created by combining two models, Antler-RP-ja-westlake-chatvector and Hameln-japanese-mistral-7B, with a Mixture of Experts (MoE) approach, using chatntq-ja-7b-v1.0 as the base model.
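As a rough intuition for what the MoE combination means: each token is routed by a small gating network that mixes the outputs of the expert networks (here, the two 7B models' feed-forward blocks). The sketch below is a toy illustration in pure Python, not SabbatH's actual routing code; the expert functions and router logits are made up for illustration.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_forward(token, experts, router_logits, top_k=2):
    """Combine expert outputs, weighted by the router's top-k softmax gates."""
    gates = softmax(router_logits)
    # keep only the top_k experts and renormalize their gate weights
    ranked = sorted(range(len(experts)), key=lambda i: gates[i], reverse=True)[:top_k]
    total = sum(gates[i] for i in ranked)
    return sum((gates[i] / total) * experts[i](token) for i in ranked)

# two "experts": trivial scalar functions standing in for the 7B FFN blocks
experts = [lambda x: x * 2.0, lambda x: x + 1.0]
out = moe_forward(3.0, experts, router_logits=[0.2, 1.0], top_k=2)
```

With `top_k=1` the router would pick only the highest-gated expert; with `top_k=2` (as in Mixtral-style 2x7B models) both experts contribute, weighted by the gates.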
Usage
Ensure you are using Transformers 4.34.0 or newer.
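One way to enforce that requirement is a simple numeric comparison of dotted version strings; the helper below is a minimal sketch (it assumes plain `X.Y.Z` versions and does not handle suffixes like `.dev0`).

```python
def version_at_least(version: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. "4.34.0"."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(version) >= parse(minimum)

# e.g. check the installed library against the card's requirement:
# import transformers
# assert version_at_least(transformers.__version__, "4.34.0")
```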
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Elizezen/SabbatH-2x7B")
model = AutoModelForCausalLM.from_pretrained(
    "Elizezen/SabbatH-2x7B",
    torch_dtype="auto",
)
model.eval()

if torch.cuda.is_available():
    model = model.to("cuda")

# Prompt: the opening of Natsume Sōseki's "I Am a Cat"
# ("I am a cat. As yet I have no name. And now I ...")
input_ids = tokenizer.encode(
    "吾輩は猫である。名前はまだない。そんな吾輩は今、",
    add_special_tokens=True,
    return_tensors="pt"
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=512,
    temperature=1,
    top_p=0.95,
    do_sample=True,
)

# decode only the newly generated tokens, not the prompt
out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
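The `generate` call above samples with a temperature and nucleus (top-p) filtering. The pure-Python sketch below shows roughly what those two knobs do to the next-token distribution; it is illustrative only, not the transformers internals, and the example logits are made up.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def apply_temperature(logits, temperature):
    """Higher temperature flattens the distribution; lower sharpens it."""
    return [l / temperature for l in logits]

def top_p_filter(probs, top_p=0.95):
    """Keep the smallest set of tokens whose cumulative probability >= top_p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}  # renormalized over kept tokens

logits = [2.0, 1.0, 0.1, -1.0]            # hypothetical scores for 4 tokens
probs = softmax(apply_temperature(logits, temperature=1.0))
kept = top_p_filter(probs, top_p=0.95)     # the lowest-probability token is cut
```

Raising `temperature` above 1 makes low-probability tokens more likely to survive the cutoff; lowering `top_p` trims the tail more aggressively.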
"""
output example:
็ชใใๅคใ่ฆใฆใใใ
ใใใใ้ณฅใ้ฃใใงใใใ
็ฎใซใคใใฎใฏ้็ฉบใจใๅคงใใใฎ็ฝใ็พฝใฐใใ้ณใ ใฃใใใใไปฅๅคใซไฝใใชใใๅพ่ผฉใฏ็ชใใ้ใใฆ้จๅฑใ่ฆๅใใใใ็นๅฅๅคใใฃใๆงๅญใใชใใใใงใใใ
ใใใฆใจโฆใ
ๆใ ใใใฎใพใพใใฃใจๅคใซๅฑ
็ถใใใฐ้คใใใใฏ่ฒฐใใใใ็ฅใใใใใใไปฅไธใซ็นๅฅใใใซๅฟ
่ฆใช็ฉใฏ็กใใๅฝผๅฅณใใกใฏๅฎถไบใใใฆใใๆงๅญใ ใใ้ฃๆๅบซใฎไธญ่บซใพใงๆๆกใใใใฆใใชใ็บใ่ชๅใใ้คใญใ ใใซ่กใใใจใๅบๆฅใชใใใใใใใๅฟ
่ฆๆงใๆใใชใใฃใใ
ใโฆโฆโฆใ
ๅพ่ผฉใฏ่ใ่พผใใงใใพใฃใใไฝใใใใฐ่ฏใใฎใ ใใใ๏ผ็ ใใซใคใใซใฏๆฉ้ใใๆ้ใงใใ็บใใใใๅบๆฅใชใใใใฎๅฎถใง็ๆดปใใฆ๏ผๆฅ็ฎใซ็ชๅ
ฅใใใใจใใฆใใๅพ่ผฉใฏ้ๆนใซๆฎใใฆใใใ
ใตใจๆใฃใไบใใใใไบบ้็ใ่ฆๅญฆใใใฎใไธใคใฎๆใงใฏใชใใ ใใใ๏ผใใ่ใใๅพ่ผฉใฏ็ชใใๅคใซๅบใฆใฟใใๅฐ้ขใซ้ใ็ซใกใๅจใใ่ฆๆธกใใฆใฟใใจโฆ
ใใใใใใใฏใ
็ฎใฎๅใซๅฐใใชไบบใๅฑ
ใใ่ไธญใๆฒใใฃใฆใใใใใงใใ็บใๅนดๅฏใใใ็ฅใใชใใใใใช
"""
Intended Use
The primary purpose of this language model is to assist in generating novels. While it can handle a variety of prompts, it may not excel at instruction-following. Note that the model's responses are not censored, and sensitive content may occasionally be generated.