sho-takase committed on
Commit 82650c2
1 Parent(s): 9248683

initial commit

Files changed (5)
  1. README.md +54 -0
  2. config.json +25 -0
  3. pytorch_model.bin +3 -0
  4. spiece.model +3 -0
  5. tokenizer_config.json +15 -0
README.md ADDED
@@ -0,0 +1,54 @@
+ # japanese-large-lm-3.6b
+
+ This repository provides a 3.6B-parameter Japanese language model trained by [LINE Corporation](https://linecorp.com/ja/).
+
+ ## How to use
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed
+
+ model = AutoModelForCausalLM.from_pretrained("line-corporation/japanese-large-lm-3.6b", torch_dtype=torch.float16)
+ tokenizer = AutoTokenizer.from_pretrained("line-corporation/japanese-large-lm-3.6b", use_fast=False)
+ generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
+ set_seed(101)
+
+ text = generator(
+     "おはようございます、今日の天気は",
+     max_length=30,
+     do_sample=True,
+     pad_token_id=tokenizer.pad_token_id,
+     num_return_sequences=5,
+ )
+
+ for t in text:
+     print(t)
+
+ # Example of generated output:
+ # [{'generated_text': 'おはようございます、今日の天気は雨模様ですね。梅雨のこの時期の 朝は洗濯物が乾きにくいなど、主婦にとっては悩みどころですね。 では、'},
+ #  {'generated_text': 'おはようございます、今日の天気は晴れ。 気温は8°C位です。 朝晩は結構冷え込むようになりました。 寒くなってくると、...'},
+ #  {'generated_text': 'おはようございます、今日の天気は曇りです。 朝起きたら雪が軽く積もっていた。 寒さもそれほどでもありません。 日中は晴れるみたいですね。'},
+ #  {'generated_text': 'おはようございます、今日の天気は☁のち☀です。 朝の気温5°C、日中も21°Cと 暖かい予報です'},
+ #  {'generated_text': 'おはようございます、今日の天気は晴天ですが涼しい1日です、気温は午後になり低くなり25°Cくらい、風も強いようですので、'}]
+ ```
+
+ ## Model architecture
+ | Model | Vocab size | Architecture | Position type | Layers | Hidden dim | Attention heads |
+ | :---: | :--------: | :----------- | :-----------: | :----: | :--------: | :-------------: |
+ | 1.7B  | 51200      | GPT2         | Absolute      | 24     | 2304       | 24              |
+ | 3.6B  | 51200      | GPTNeoX      | RoPE          | 30     | 3072       | 32              |
+
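+ The 3.6B row above can be cross-checked against the `config.json` added in this commit; a minimal sketch, assuming `transformers` is installed and the Hub is reachable:
+
+ ```python
+ from transformers import AutoConfig
+
+ # Configuration shipped with this repository (see config.json in this commit).
+ config = AutoConfig.from_pretrained("line-corporation/japanese-large-lm-3.6b")
+
+ print(config.model_type)           # gpt_neox
+ print(config.num_hidden_layers)    # 30
+ print(config.hidden_size)          # 3072
+ print(config.num_attention_heads)  # 32
+ print(config.vocab_size)           # 51200
+ ```
+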
+ ## Training Corpus
+ Our training corpus consists of the Japanese portions of publicly available corpora such as C4, CC-100, and Oscar.
+ We also incorporated Web texts crawled by our in-house system.
+ The total size of our training corpus is about 650 GB.
+ The trained model achieves a perplexity of 7.50 on our internal validation set of Japanese C4.
+
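+ Perplexity here is the exponential of the average token-level cross-entropy. The internal validation set is not released, but the sketch below shows how such a score can be computed for an arbitrary text, assuming a GPU and the packages from the usage example above (the sample sentence is a stand-in, not validation data):
+
+ ```python
+ import math
+
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("line-corporation/japanese-large-lm-3.6b", use_fast=False)
+ model = AutoModelForCausalLM.from_pretrained(
+     "line-corporation/japanese-large-lm-3.6b", torch_dtype=torch.float16
+ ).cuda()
+
+ text = "おはようございます、今日の天気は晴れです。"  # stand-in for a validation document
+ inputs = tokenizer(text, return_tensors="pt").to(model.device)
+
+ with torch.no_grad():
+     # Passing labels makes the model return the mean cross-entropy over predicted tokens.
+     loss = model(**inputs, labels=inputs["input_ids"]).loss
+
+ print(math.exp(loss.item()))  # perplexity of this text under the model
+ ```
+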
+ ## Tokenization
+ We use a SentencePiece tokenizer with a unigram language model and byte-fallback.
+ We **do not** apply pre-tokenization with a Japanese tokenizer.
+ Thus, a user may directly feed raw sentences into the tokenizer.
+
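+ For example, raw Japanese text can be passed to the tokenizer as-is; a minimal sketch (the sample sentence is arbitrary):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("line-corporation/japanese-large-lm-3.6b", use_fast=False)
+
+ # No word segmentation (MeCab, Juman, etc.) is needed before calling the tokenizer.
+ tokens = tokenizer.tokenize("おはようございます、今日の天気は晴れです。")
+ print(tokens)
+
+ # Round trip through ids and back to text.
+ ids = tokenizer.encode("おはようございます、今日の天気は晴れです。")
+ print(tokenizer.decode(ids))
+ ```
+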
+ ## License
+ [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "GPTNeoXForCausalLM"
+   ],
+   "bos_token_id": 2,
+   "classifier_dropout": 0.1,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_size": 3072,
+   "initializer_range": 0.02,
+   "intermediate_size": 12288,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 2048,
+   "model_type": "gpt_neox",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 30,
+   "rotary_emb_base": 10000,
+   "rotary_pct": 1.0,
+   "tie_word_embeddings": true,
+   "torch_dtype": "float16",
+   "transformers_version": "4.29.2",
+   "use_cache": true,
+   "use_parallel_residual": false,
+   "vocab_size": 51200
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:baa5571b7827fa31387b9d98877ccee8f5e9859633c7f4136f6c37ab4c4c41a1
+ size 7237734117
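This entry is a Git LFS pointer rather than the weights themselves. A minimal sketch for fetching the actual ~7.2 GB binary, assuming the `huggingface_hub` package is installed (the `transformers` calls in the README download it automatically):

```python
from huggingface_hub import hf_hub_download

# Resolves the LFS pointer above and downloads the weight file to the local cache.
path = hf_hub_download(
    repo_id="line-corporation/japanese-large-lm-3.6b",
    filename="pytorch_model.bin",
)
print(path)
```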
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c5c56a739832923347681ed8a03a9cbf5afb6d1fe60089a5b01dd2dd063ab71
+ size 1208648
tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "extra_ids": 0,
+   "do_lower_case": false,
+   "keep_accents": true,
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "unk_token": "<unk>",
+   "pad_token": "<pad>",
+   "mask_token": "<mask>",
+   "cls_token": "<cls>",
+   "sep_token": "<sep>",
+   "sp_model_kwargs": {},
+   "special_tokens_map_file": null,
+   "tokenizer_class": "T5Tokenizer"
+ }
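The `tokenizer_class` declared above is presumably why the README loads the tokenizer with `use_fast=False`, so the SentencePiece-based slow tokenizer is used. A minimal sketch confirming the class and the special tokens listed here, assuming the Hub is reachable:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("line-corporation/japanese-large-lm-3.6b", use_fast=False)

print(type(tokenizer).__name__)  # T5Tokenizer, per tokenizer_class above
print(tokenizer.pad_token)       # <pad> -- used as pad_token_id in the README generation example
print(tokenizer.eos_token)       # </s>
print(tokenizer.unk_token)       # <unk>
```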