---
license: apache-2.0
language:
- ja
- en
tags:
- japanese
- causal-lm
inference: false
---
# CyberAgentLM2-7B (CALM2-7B)
## Model Description
CyberAgentLM2 is a decoder-only language model pre-trained on 1.3T tokens of publicly available Japanese and English datasets.
Variant: [CyberAgentLM2-Chat](https://huggingface.co/cyberagent/calm2-7b-chat)
## Requirements
- transformers >= 4.34.1
- accelerate
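
Both packages are available from PyPI; a typical install (assuming a standard pip environment) looks like:

```bash
pip install "transformers>=4.34.1" accelerate
```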
## Usage
```python
import transformers
from packaging import version
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# This model requires transformers >= 4.34.1.
assert version.parse(transformers.__version__) >= version.parse("4.34.1")

# Load the weights with automatic device placement and dtype selection.
model = AutoModelForCausalLM.from_pretrained(
    "cyberagent/calm2-7b", device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/calm2-7b")

# Print tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "AIによって私達の暮らしは、"  # "Owing to AI, our lives will ..." (open-ended Japanese prompt)
token_ids = tokenizer.encode(prompt, return_tensors="pt")

output_ids = model.generate(
    input_ids=token_ids.to(model.device),
    max_new_tokens=100,
    do_sample=True,
    temperature=0.9,
    streamer=streamer,
)
```
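
To keep the completion as a Python string rather than only streaming it to stdout, decode the generated ids; this short sketch continues from the variables defined in the block above:

```python
# `output_ids` contains the prompt followed by the newly generated tokens.
completion = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(completion)
```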
## Model Details
* **Model size**: 7B
* **Trained tokens**: 1.3T
* **Context length**: 4096 tokens (see the truncation sketch after this list)
* **Model type**: Transformer-based Language Model
* **Language(s)**: Japanese, English
* **Developed by**: [CyberAgent, Inc.](https://www.cyberagent.co.jp/)
* **License**: Apache-2.0
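
Because the context window is 4096 tokens, longer inputs should be truncated (or chunked) before generation. A minimal sketch using the tokenizer's built-in truncation; the repeated string below is just a stand-in for an over-long document:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cyberagent/calm2-7b")

# Clip over-long input to the model's 4096-token context window.
inputs = tokenizer(
    "long document ... " * 5000,  # stand-in for an over-long document
    truncation=True,
    max_length=4096,
    return_tensors="pt",
)
print(inputs["input_ids"].shape)  # at most (1, 4096)
```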
## Author
[Ryosuke Ishigami](https://huggingface.co/rishigami)
## Citations
```tex
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```