|
--- |
|
license: apache-2.0 |
|
language: |
|
- en |
|
- ja |
|
programming_language: |
|
- C |
|
- C++ |
|
- C# |
|
- Go |
|
- Java |
|
- JavaScript |
|
- Lua |
|
- PHP |
|
- Python |
|
- Ruby |
|
- Rust |
|
- Scala |
|
- TypeScript |
|
pipeline_tag: text-generation |
|
inference: false |
|
--- |
|
|
|
# llm-jp-13b-v1.0-mdsfmt |
|
|
|
This repository provides large language models (Megatron-DeepSpeed format) developed by [LLM-jp](https://llm-jp.nii.ac.jp/), a collaborative project launched in Japan. **Hugging Face Transformers format models are available [here](https://huggingface.co/llm-jp).** |
|
|
|
| Model Variant | |
|
| :--- | |
|
|**Pre-trained models** <span style="color:red">(Megatron-DeepSpeed format)</span>| |
|
| [llm-jp-13b-v1.0-mdsfmt](https://huggingface.co/llm-jp/llm-jp-13b-v1.0-mdsfmt) | |
|
| [llm-jp-13b-v1.0-mdsfmt-itr87870](https://huggingface.co/llm-jp/llm-jp-13b-v1.0-mdsfmt-itr87870) | |
|
| [llm-jp-1.3b-v1.0-mdsfmt](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0-mdsfmt) | |
|
| [llm-jp-1.3b-v1.0-mdsfmt-itr87430](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0-mdsfmt-itr87430) | |
|
|
|
|
|
`llm-jp-13b-v1.0-mdsfmt-itr87870` and `llm-jp-1.3b-v1.0-mdsfmt-itr87430` were trained on approximately 270B tokens.

`llm-jp-13b-v1.0-mdsfmt` and `llm-jp-1.3b-v1.0-mdsfmt` were obtained by further training `llm-jp-13b-v1.0-mdsfmt-itr87870` and `llm-jp-1.3b-v1.0-mdsfmt-itr87430`, respectively, on an additional (potentially) higher-quality 27B tokens to finalize the pre-training.
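
For reference, here is a minimal sketch of fetching one of these Megatron-DeepSpeed checkpoints locally with the `huggingface_hub` Python package; the local directory name is arbitrary, and the resulting path is what you would point your Megatron-DeepSpeed setup at as its checkpoint directory:

```python
# Minimal sketch: download the Megatron-DeepSpeed format checkpoint files locally.
# Assumes the huggingface_hub package is installed; the local directory name is arbitrary.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="llm-jp/llm-jp-13b-v1.0-mdsfmt",
    local_dir="llm-jp-13b-v1.0-mdsfmt",
)
print(local_dir)  # path containing the downloaded checkpoint files
```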
|
|
|
|
|
## Model Details |
|
|
|
- **Model type:** Transformer-based Language Model |
|
- **Total seen tokens:** 300B |
|
|
|
|Model|Params|Layers|Hidden size|Heads|Context length| |
|
|:---:|:---:|:---:|:---:|:---:|:---:| |
|
|13b model|13b|40|5120|40|2048| |
|
|1.3b model|1.3b|24|2048|16|2048| |
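
As a rough sanity check of these hyperparameters (a back-of-the-envelope estimate, not an exact count), the standard decoder-only estimate of roughly `12 * layers * hidden^2` parameters for the attention and MLP blocks, plus `vocab * hidden` for the embeddings, reproduces the stated model sizes:

```python
# Back-of-the-envelope parameter estimate for a decoder-only Transformer:
# ~12 * layers * hidden^2 for the attention/MLP blocks, plus vocab * hidden for embeddings.
def approx_params(layers: int, hidden: int, vocab: int = 50_570) -> int:
    return 12 * layers * hidden**2 + vocab * hidden

print(f"13b model:  {approx_params(40, 5120) / 1e9:.1f}B")   # ~12.8B
print(f"1.3b model: {approx_params(24, 2048) / 1e9:.2f}B")   # ~1.31B
```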
|
|
|
|
|
## Training |
|
|
|
- **Pre-training:** |
|
- **Hardware:** 96 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/)) |
|
- **Software:** Megatron-DeepSpeed |
|
|
|
|
|
## Tokenizer |
|
The tokenizer of this model is based on the Unigram byte-fallback model of [huggingface/tokenizers](https://github.com/huggingface/tokenizers).
|
The vocabulary entries were converted from [`llm-jp-tokenizer v2.1 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.1). |
|
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure.
|
- **Model:** Hugging Face Fast Tokenizer using a Unigram byte-fallback model, which requires `tokenizers>=0.14.0`
|
- **Training algorithm:** SentencePiece Unigram byte-fallback |
|
- **Training data:** A subset of the datasets for model pre-training |
|
- **Vocabulary size:** 50,570 (mixed vocabulary of Japanese, English, and source code) |
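
A minimal usage sketch of the tokenizer, assuming the Hugging Face Transformers format counterpart of this checkpoint (assumed here to be `llm-jp/llm-jp-13b-v1.0`; this repository itself only holds the Megatron-DeepSpeed checkpoint) and `tokenizers>=0.14.0`:

```python
# Minimal sketch: load the fast tokenizer from the Transformers-format repository.
# Assumes transformers is installed with tokenizers>=0.14.0; the repo id below is the
# assumed Transformers-format counterpart, not this Megatron-DeepSpeed format repo.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-v1.0")
ids = tokenizer("自然言語処理を勉強しています。")["input_ids"]
print(len(ids), tokenizer.decode(ids))
```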
|
|
|
|
|
## Datasets |
|
|
|
### Pre-training |
|
|
|
The models have been pre-trained using a blend of the following datasets. |
|
|
|
| Language | Dataset | Tokens | |
|
|:---:|:---:|:---:| |
|
|Japanese|[Wikipedia](https://huggingface.co/datasets/wikipedia)|1.5B|

||[mC4](https://huggingface.co/datasets/mc4)|136B|

|English|[Wikipedia](https://huggingface.co/datasets/wikipedia)|5B|

||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|135B|

|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|10B|
|
|
|
The pre-training was conducted sequentially over a total of 10 folds of non-overlapping data, each consisting of approximately 27-28B tokens.

We finalized the pre-training with an additional (potentially) higher-quality 27B tokens drawn from the same source datasets listed above as the 10-fold data.
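
A quick check of the token budget (approximate, since the per-fold counts are only given as 27-28B) shows how this schedule matches the 300B total seen tokens stated under "Model Details":

```python
# Approximate token budget: 10 folds of ~27.3B tokens plus the final 27B-token stage.
folds, tokens_per_fold, final_stage = 10, 27.3e9, 27e9
total = folds * tokens_per_fold + final_stage
print(f"{total / 1e9:.0f}B")  # ~300B
```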
|
|
|
|
|
## Evaluation |
|
|
|
You can view the evaluation results of several LLMs on this [leaderboard](http://wandb.me/llm-jp-leaderboard). We used [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) for the evaluation. |
|
|
|
|
|
## Risks and Limitations |
|
|
|
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations. |
|
|
|
|
|
## Send Questions to |
|
|
|
llm-jp(at)nii.ac.jp |
|
|
|
|
|
## License |
|
|
|
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
|
|
|
|
|
## Model Card Authors |
|
*The names are listed in alphabetical order.* |
|
|
|
Hirokazu Kiyomaru, Hiroshi Matsuda, Jun Suzuki, Namgi Han, Saku Sugawara, Shota Sasaki, Shuhei Kurita, Taishi Nakamura, Takumi Okamoto. |