losyer8 committed 66f29c4 (1 parent: 69fb1a3)

Update README.md

Files changed (1): README.md (+13 -10)
README.md CHANGED
@@ -23,7 +23,7 @@ inference: false
 
 # llm-jp-13b-v1.0-mdsfmt
 
- This repository provides large language models (Megatron-DeepSpeed format) developed by [LLM-jp](https://llm-jp.nii.ac.jp/), a collaborative project launched in Japan. **transformers format models are available [here](https://huggingface.co/llm-jp).**
 
 | Model Variant |
 | :--- |
@@ -34,10 +34,12 @@ This repository provides large language models (Megatron-DeepSpeed format) devel
 | [llm-jp-1.3b-v1.0-mdsfmt-itr87430](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0-mdsfmt-itr87430) |
 
 
- [llm-jp-13b-v1.0-mdsfmt-itr87870](https://huggingface.co/llm-jp/llm-jp-13b-v1.0-mdsfmt-itr87870)
- and [llm-jp-1.3b-v1.0-mdsfmt-itr87430](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0-mdsfmt-itr87430) were originally trained with approximately 270B+ tokens.
- [llm-jp-13b-v1.0-mdsfmt](https://huggingface.co/llm-jp/llm-jp-13b-v1.0-mdsfmt) and [llm-jp-1.3b-v1.0-mdsfmt](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0-mdsfmt)
- are models that have been further trained by adding high-quality 27B tokens.
 
 
 ## Model Details
@@ -60,8 +62,8 @@ and [llm-jp-1.3b-v1.0-mdsfmt-itr87430](https://huggingface.co/llm-jp/llm-jp-1.3b
 
 ## Tokenizer
 The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
- The vocab entries were converted from [`llm-jp-tokenizer v2.1 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.1).
- Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-ja-tokenizer` for the details of vocab constuction steps.
 - **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model which requires `tokenizers>=0.14.0`
 - **Training algorithm:** SentencePiece Unigram byte-fallback
 - **Training data:** A subset of the datasets for model pre-training
@@ -72,7 +74,7 @@ Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-
 
 ### Pre-training
 
- The models have been pre-trained on approximately 287.5B tokens, sourced from a blend of the following datasets.
 
 | Language | Dataset | Tokens |
 |:---:|:---:|:---:|
@@ -82,7 +84,8 @@ The models have been pre-trained on approximately 287.5B tokens, sourced from a
 ||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|135B
 |Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|10B
 
- Pretraining was done by 10-hold shards that consists approx. 27-28B tokens. We further finalized the pretraining with additional cleaned 27B tokens data.
 
 
 ## Evaluation
@@ -108,4 +111,4 @@ llm-jp(at)nii.ac.jp
 ## Model Card Authors
 *The names are listed in alphabetical order.*
 
- Namgi Han, Hirokazu Kiyomaru, Hiroshi Matsuda, Shota Sasaki, Shuhei Kurita, Taishi Nakamura, Takumi Okamoto.
 
 # llm-jp-13b-v1.0-mdsfmt
 
+ This repository provides large language models (Megatron-DeepSpeed format) developed by [LLM-jp](https://llm-jp.nii.ac.jp/), a collaborative project launched in Japan. **Hugging Face Transformers format models are available [here](https://huggingface.co/llm-jp).**
 
 | Model Variant |
 | :--- |
 
 | [llm-jp-1.3b-v1.0-mdsfmt-itr87430](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0-mdsfmt-itr87430) |
 
 
+ `llm-jp-13b-v1.0-mdsfmt-itr87870`
+ and `llm-jp-1.3b-v1.0-mdsfmt-itr87430`
+ were originally trained with approximately 270B+ tokens.
+ `llm-jp-13b-v1.0-mdsfmt`
+ and `llm-jp-1.3b-v1.0-mdsfmt`
+ are models further trained on an additional 27B tokens of (potentially) high-quality data, starting from `llm-jp-13b-v1.0-mdsfmt-itr87870` and `llm-jp-1.3b-v1.0-mdsfmt-itr87430`, respectively, to finalize the pre-training.
 
 
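For reference, a minimal usage sketch for the Hugging Face Transformers-format counterparts of these checkpoints. The repository id `llm-jp/llm-jp-13b-v1.0` below is an assumption based on the naming above; check https://huggingface.co/llm-jp for the exact model names.

```python
# Minimal sketch (assumed repository id); requires transformers, tokenizers>=0.14.0,
# torch, and accelerate (for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llm-jp/llm-jp-13b-v1.0"  # assumed name of the Transformers-format release

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.float16
)

inputs = tokenizer("自然言語処理とは何か", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
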
  ## Model Details
 
 
 ## Tokenizer
 The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
+ The vocabulary entries were converted from [`llm-jp-tokenizer v2.1 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.1).
+ Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure.
 - **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model which requires `tokenizers>=0.14.0`
 - **Training algorithm:** SentencePiece Unigram byte-fallback
 - **Training data:** A subset of the datasets for model pre-training
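
A short sketch of the byte-fallback behaviour described above, again assuming the Transformers-format repository id `llm-jp/llm-jp-13b-v1.0`: characters outside the 50k vocabulary are split into raw byte pieces instead of being mapped to an unknown token, so any input stays decodable.

```python
# Sketch: inspect Unigram byte-fallback tokenization (assumed repository id;
# requires tokenizers>=0.14.0 as noted above).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-v1.0")

for text in ["自然言語処理", "🦜 rare symbols"]:
    tokens = tokenizer.tokenize(text)
    print(text, "->", tokens)  # out-of-vocabulary characters appear as byte pieces such as <0xF0>
    ids = tokenizer.encode(text, add_special_tokens=False)
    print(tokenizer.decode(ids))  # byte fallback keeps the round trip lossless
```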
 
 
 ### Pre-training
 
+ The models have been pre-trained using a blend of the following datasets.
 
 | Language | Dataset | Tokens |
 |:---:|:---:|:---:|
 
 ||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|135B
 |Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|10B
 
+ The pre-training was conducted sequentially over a total of 10 non-overlapping folds of data, each consisting of approximately 27-28B tokens.
+ We finalized the pre-training with an additional 27B tokens of (potentially) high-quality data drawn from the same source datasets listed above as the 10-fold data.
 
 
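A back-of-the-envelope reading of the two stages above; the fold sizes are approximate values taken from the text, not exact token counts.

```python
# Rough token budget implied by the description above (approximate values only).
num_folds = 10
tokens_per_fold = 27.5e9      # each fold holds roughly 27-28B tokens
finalization_tokens = 27e9    # additional (potentially) high-quality data

itr_checkpoint_tokens = num_folds * tokens_per_fold               # ~275B -> the *-itr* checkpoints
final_model_tokens = itr_checkpoint_tokens + finalization_tokens  # ~300B -> the final models

print(f"*-itr* checkpoints: ~{itr_checkpoint_tokens / 1e9:.0f}B tokens")
print(f"final models:       ~{final_model_tokens / 1e9:.0f}B tokens")
```
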
  ## Evaluation
 
 ## Model Card Authors
 *The names are listed in alphabetical order.*
 
+ Hirokazu Kiyomaru, Hiroshi Matsuda, Jun Suzuki, Namgi Han, Saku Sugawara, Shota Sasaki, Shuhei Kurita, Taishi Nakamura, Takumi Okamoto.