QuantFactory/Mistral-Nemo-Japanese-Instruct-2408-GGUF

This is a quantized version of cyberagent/Mistral-Nemo-Japanese-Instruct-2408, created using llama.cpp.

Original Model Card

Mistral-Nemo-Japanese-Instruct-2408

Model Description

This is a Japanese continually pre-trained model based on mistralai/Mistral-Nemo-Instruct-2407.

Usage

Make sure to update your transformers installation via pip install --upgrade transformers.

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model = AutoModelForCausalLM.from_pretrained("cyberagent/Mistral-Nemo-Japanese-Instruct-2408", device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("cyberagent/Mistral-Nemo-Japanese-Instruct-2408")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "system", "content": "あなたは親切なAIアシスタントです。"},  # "You are a helpful AI assistant."
    {"role": "user", "content": "AIによって私たちの暮らしはどのように変わりますか?"}  # "How will AI change our lives?"
]

input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids,
                            max_new_tokens=1024,
                            do_sample=True,  # enable sampling so that temperature takes effect
                            temperature=0.5,
                            streamer=streamer)
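The usage above loads the original full-precision model with transformers. Since this repository ships GGUF files, they can also be run directly with llama.cpp's `llama-cli`. A minimal sketch, assuming a 4-bit file named `Mistral-Nemo-Japanese-Instruct-2408.Q4_K_M.gguf` has been downloaded (the exact filenames in this repository may differ):

```shell
# Run the quantized model in interactive conversation mode with llama.cpp.
# -m: path to the downloaded GGUF file (the filename here is an assumption)
# -n: maximum number of tokens to generate
# --temp: sampling temperature, matching the transformers example above
llama-cli -m Mistral-Nemo-Japanese-Instruct-2408.Q4_K_M.gguf \
          -cnv \
          -n 1024 \
          --temp 0.5
```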

Prompt Format

ChatML Format

<s><|im_start|>system
あなたは親切なAIアシスタントです。<|im_end|>
<|im_start|>user
AIによって私たちの暮らしはどのように変わりますか?<|im_end|>
<|im_start|>assistant
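The layout above is what `tokenizer.apply_chat_template` produces from the `messages` list. As a hand-rolled illustration (not the tokenizer's own template logic), the mapping can be sketched in a few lines:

```python
# Build a ChatML-formatted prompt string matching the layout shown above.
def build_chatml_prompt(messages, add_generation_prompt=True):
    prompt = "<s>"  # BOS token prepended once at the start
    for m in messages:
        # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers.
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "あなたは親切なAIアシスタントです。"},
    {"role": "user", "content": "AIによって私たちの暮らしはどのように変わりますか?"},
]
print(build_chatml_prompt(messages))
```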

License

Apache-2.0

Author

Ryosuke Ishigami

How to cite

@misc{cyberagent-mistral-nemo-japanese-instruct-2408,
      title={Mistral-Nemo-Japanese-Instruct-2408},
      url={https://huggingface.co/cyberagent/Mistral-Nemo-Japanese-Instruct-2408},
      author={Ryosuke Ishigami},
      year={2024},
}
GGUF Details

Model size: 12.2B params
Architecture: llama
Quantization variants: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
