
TinyLlama + Japanese pre-training (50,004 steps)

How to use

Hugging Face

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

tokenizer = AutoTokenizer.from_pretrained("lightblue/karasu-1.1B")
model = AutoModelForCausalLM.from_pretrained("lightblue/karasu-1.1B", torch_dtype=torch.bfloat16, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]  # "You are an AI assistant."
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})  # "Who is the Prime Minister of the United Kingdom?"

# Render the chat messages into a single prompt string using the model's chat template.
prompt = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)

# Greedy decoding, up to 100 new tokens; return only the generated continuation.
pipe(prompt, max_new_tokens=100, do_sample=False, temperature=0.0, return_full_text=False)

vLLM

from vllm import LLM, SamplingParams

# Greedy decoding, up to 100 new tokens.
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="lightblue/karasu-1.1B")

messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]  # "You are an AI assistant."
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})  # "Who is the Prime Minister of the United Kingdom?"
# Render the chat with the model's chat template; on newer vLLM versions the
# tokenizer can instead be fetched with llm.get_tokenizer().
prompt = llm.llm_engine.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

Base checkpoint

TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T
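For reference, here is a minimal sketch (not part of the original card) that loads the configurations of the base checkpoint and of karasu-1.1B so they can be compared side by side; only the model IDs come from this card, the comparison itself is illustrative.

from transformers import AutoConfig

# Load both configurations from the Hugging Face Hub.
base_config = AutoConfig.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T")
karasu_config = AutoConfig.from_pretrained("lightblue/karasu-1.1B")

# karasu-1.1B continues pre-training from this TinyLlama checkpoint for a
# further 50,004 steps on Japanese data, so comparing the two configs shows
# what (if anything) changed besides the weights.
print(base_config.architectures, karasu_config.architectures)
print(base_config.vocab_size, karasu_config.vocab_size)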

Training datasets (total ~3B)

A filtered and then sampled set drawn from the following Japanese corpora (an illustrative loading sketch follows the list):

  • OSCAR (Japanese)
  • mC4 (Japanese)
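The exact filtering and sampling pipeline is not published in this card. The sketch below only illustrates, under stated assumptions, how the Japanese portions of OSCAR and mC4 can be streamed from the Hugging Face Hub and passed through a simple length filter; the dataset IDs/configs ("oscar"/"unshuffled_deduplicated_ja", "mc4"/"ja") and the length threshold are assumptions, not the authors' actual criteria.

from datasets import load_dataset

# Assumed Hub dataset IDs/configs for the Japanese splits; newer versions of
# the datasets library may additionally require trust_remote_code=True.
oscar_ja = load_dataset("oscar", "unshuffled_deduplicated_ja", split="train", streaming=True)
mc4_ja = load_dataset("mc4", "ja", split="train", streaming=True)

def keep(example):
    # Hypothetical filter: drop very short documents. The real filtering
    # criteria behind "filtered then sampled" are not specified in this card.
    return len(example["text"]) > 200

# Stream a few filtered examples from each corpus for inspection.
for name, stream in [("OSCAR (ja)", oscar_ja), ("mC4 (ja)", mc4_ja)]:
    for i, example in enumerate(stream.filter(keep)):
        print(name, example["text"][:80])
        if i >= 2:
            break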

Developed by

Lightblue

Engineers

Peter Devine

Sho Higuchi

Advisors

Yuuki Yamanaka

Atom Sonoda

Project manager

Shunichi Taniguchi

Tomioka Wataru

Dataset evaluator

Renju Aoki
