

A cute robot wearing a kimono writes calligraphy with one single brush — Stable Diffusion XL

Model Description

japanese-stablelm-base-beta-7b is a 7B-parameter decoder-only language model based on Llama-2-7b that has been fine-tuned on a diverse collection of Japanese data, with the intent of maximizing downstream performance on Japanese language tasks.

For an instruction-following model, check Japanese-StableLM-Instruct-Beta-7B. Both the base and instruct models are also available in a larger 70B size. For a model with faster inference, see Japanese-StableLM-Base-JA_Vocab-Beta-7B, or its instruction-following version.


Usage

First install the additional dependencies in requirements.txt:

pip install -r requirements.txt
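If requirements.txt is unavailable, installing the snippet's direct dependencies by hand should also work (an assumption about the file's contents: sentencepiece is needed by the Llama tokenizer and accelerate by device_map="auto"):

pip install transformers accelerate sentencepiece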

Then start generating text with japanese-stablelm-base-beta-7b by using the following code snippet:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "stabilityai/japanese-stablelm-base-beta-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The next line may need to be modified depending on the environment
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
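# For example (an assumed alternative, not part of the original card), a
# CPU-only machine could load the model in full precision instead:
#   model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)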

prompt = """
AI で科学研究を加速するには、

input_ids = tokenizer.encode(

# this is for reproducibility.
# feel free to change to get different results
seed = 23
torch.manual_seed(seed)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=128,
    temperature=0.99,
    top_p=0.95,
    do_sample=True,
)

out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)

We suggest experimenting with different generation configurations (top_p, repetition_penalty, etc.) to find the best settings for your task. For example, use a higher temperature for roleplay tasks and a lower temperature for reasoning.
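As a minimal sketch of this idea (the preset values below are illustrative assumptions, not official recommendations), the settings can be bundled into GenerationConfig objects and swapped per task:

from transformers import GenerationConfig

# A more deterministic preset for reasoning-style prompts
# (assumed values, for illustration only).
reasoning_config = GenerationConfig(
    do_sample=True,
    temperature=0.3,
    top_p=0.9,
    repetition_penalty=1.1,
    max_new_tokens=128,
)

# A more diverse preset for roleplay-style prompts.
roleplay_config = GenerationConfig(
    do_sample=True,
    temperature=1.0,
    top_p=0.95,
    repetition_penalty=1.05,
    max_new_tokens=256,
)

# Reuse input_ids from the snippet above and pick a preset per task.
tokens = model.generate(
    input_ids.to(device=model.device),
    generation_config=reasoning_config,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))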

Model Details

  • Model type: japanese-stablelm-base-beta-7b is an auto-regressive language model based on the Llama-2 transformer architecture.
  • Language(s): Japanese
  • License: Llama 2 Community License.
  • Contact: For questions and comments about the model, please join Stable Community Japan. For future announcements / information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.

Training Dataset

Roughly 100B tokens from a mixture of Japanese and English corpora were used for continued pre-training.

Use and Limitations

Intended Use

The model is intended to be used by all individuals as a foundation for application-specific fine-tuning without strict limitations on commercial use.

Limitations and bias

The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, and this can be reflected in the model's generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.


Authors

This model was developed by the Research & Development team at Stability AI Japan; development was co-led by Takuya Akiba and Meng Lee.


Acknowledgements

We thank Meta Research for releasing Llama 2 under an open license for others to build on.

We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us collect a large amount of Japanese pre-training data. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.

We are also appreciative of AI Novelist/Sta (Bit192, Inc.) and the numerous contributors from Stable Community Japan for assisting us in gathering a large amount of high-quality Japanese textual data for model training.

How to cite

@misc{JapaneseStableLMBaseBeta7B,
      title={Japanese StableLM Base Beta 7B},
      author={Lee, Meng and Nakamura, Fujiki and Shing, Makoto and McCann, Paul and Akiba, Takuya and Orii, Naoki},
      url={https://huggingface.co/stabilityai/japanese-stablelm-base-beta-7b}
}