Sarashina1-7B

This repository provides Japanese language models trained by SB Intuitions.

How to use

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed
 
model = AutoModelForCausalLM.from_pretrained("sbintuitions/sarashina1-7b", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina1-7b")
# If you want to use the slow tokenizer
# tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina1-7b", use_fast=False, revision="slow-tokenizer")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
set_seed(123)
 
text = generator(
    "おはようございます、今日の天気は",
    max_length=30,
    do_sample=True,
    pad_token_id=tokenizer.pad_token_id,
    num_return_sequences=3,
)
 
for t in text:
  print(t)
 
# Example outputs generated by the sarashina1-7b model:
# {'generated_text': 'おはようございます、今日の天気は晴れ!!最高気温は15度、最低気温は7度です。今日も1日頑張りましょー♪写真は、去年'}
# {'generated_text': 'おはようございます、今日の天気は曇り:cloud:です。 雨予報なので、洗濯物は家の中へ。 :city_sunrise:の見える時間。 今日は'}
# {'generated_text': 'おはようございます、今日の天気は、晴れ、気温も10度以上に上がるそうです、お日様が当たっていると15度くらいになると思います、朝の'}
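
If you prefer to call model.generate directly instead of the text-generation pipeline, here is a minimal sketch reusing the model and tokenizer loaded above (the sampling settings mirror the pipeline example and are illustrative, not official recommendations):

inputs = tokenizer("おはようございます、今日の天気は", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_length=30,
    do_sample=True,
    pad_token_id=tokenizer.pad_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))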

Configuration

| Parameters | Vocab size | Training tokens | Architecture | Position type | Layers | Hidden dim | Attention heads |
| :--------- | :--------- | :-------------- | :----------- | :------------ | :----- | :--------- | :-------------- |
| 7B         | 51200      | 1.0T            | GPTNeoX      | RoPE          | 32     | 4096       | 32              |
| 13B        | 51200      | 1.0T            | GPTNeoX      | RoPE          | 40     | 5120       | 40              |
| 65B        | 51200      | 800B            | GPTNeoX      | RoPE          | 80     | 8192       | 64              |
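
These hyperparameters can also be checked programmatically. A minimal sketch, assuming the repository exposes the standard GPTNeoX configuration fields in transformers (values shown are for the 7B model):

from transformers import AutoConfig

# Loading only the configuration avoids downloading the model weights.
config = AutoConfig.from_pretrained("sbintuitions/sarashina1-7b")
print(config.num_hidden_layers)     # expected: 32
print(config.hidden_size)           # expected: 4096
print(config.num_attention_heads)   # expected: 32
print(config.vocab_size)            # expected: 51200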

Training Corpus

We used the Japanese portion of the Common Crawl corpus, the largest publicly available web corpus, as our training dataset. To clean the training corpus, we used CCNet and HojiChar. After cleaning, the corpus contains about 550B tokens.
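
For reference, HojiChar expresses cleaning as a pipeline of composable text filters. The sketch below shows that general pattern; the specific filters and thresholds are illustrative assumptions, not the actual filter set used to build this corpus:

from hojichar import Compose, document_filters

# A hypothetical cleaning pipeline in HojiChar's Compose/filter style.
cleaner = Compose([
    document_filters.JSONLoader(key="text"),   # read the text field from a JSONL record
    document_filters.AcceptJapanese(),         # keep documents detected as Japanese
    document_filters.DocumentLengthFilter(min_doc_len=50),  # drop very short documents (assumed threshold)
    document_filters.JSONDumper(),             # serialize the surviving document back to JSON
])

print(cleaner('{"text": "おはようございます、今日の天気は晴れです。"}'))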

Tokenization

We use a SentencePiece tokenizer with a unigram language model and byte-fallback. We do not apply pre-tokenization with a Japanese tokenizer, so users can feed raw sentences directly into the tokenizer.
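
Because no pre-tokenization is applied, raw Japanese text can be passed straight to the tokenizer. A minimal sketch:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina1-7b")

# Encode a raw sentence directly; the unigram model handles segmentation,
# and byte-fallback covers any characters outside the vocabulary.
ids = tokenizer.encode("おはようございます、今日の天気は")
print(ids)
print(tokenizer.convert_ids_to_tokens(ids))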

Ethical Considerations and Limitations

Sarashina1 has not been instruction-tuned yet, so it may generate meaningless sequences, inaccurate statements, or biased/objectionable outputs. Before using sarashina1 in applications, we ask developers to tune the model based on human preferences and safety considerations.

License

MIT License
