Takeshi Kojima committed on
Commit
98be71c
1 Parent(s): 61fb8bf

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -2,7 +2,7 @@
 license: cc-by-nc-4.0
 ---
 
-# tsubaki-10b
+# weblab-10b
 
 # Overview
 This repository provides a Japanese-centric multilingual GPT-NeoX model of 10 billion parameters.
@@ -26,8 +26,8 @@ This repository provides a Japanese-centric multilingual GPT-NeoX model of 10 bi
 
 | Variant | Link |
 | :-- | :--|
-| tsubaki-10b-instruction-sft | https://huggingface.co/Kojima777/tsubaki-10b-instruction-sft |
-| tsubaki-10b | https://huggingface.co/Kojima777/tsubaki-10b |
+| weblab-10b-instruction-sft | https://huggingface.co/Kojima777/weblab-10b-instruction-sft |
+| weblab-10b | https://huggingface.co/Kojima777/weblab-10b |
 
 * **Authors**
 
@@ -43,8 +43,8 @@ This repository provides a Japanese-centric multilingual GPT-NeoX model of 10 bi
 
 | Model | Average | JCommonsenseQA | JNLI | MARC-ja | JSQuAD |
 | :-- | :-- | :-- | :-- | :-- | :-- |
-| tsubaki-10b-instruction-sft | 79.04 | 74.35 | 65.65 | 96.06 | 80.09 |
-| tsubaki-10b | 67.27 | 65.86 | 54.19 | 84.49 | 64.54 |
+| weblab-10b-instruction-sft | 79.04 | 74.35 | 65.65 | 96.06 | 80.09 |
+| weblab-10b | 67.27 | 65.86 | 54.19 | 84.49 | 64.54 |
 
 ---
 
@@ -54,8 +54,8 @@ This repository provides a Japanese-centric multilingual GPT-NeoX model of 10 bi
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("Kojima777/tsubaki-10b", use_fast=False)
-model = AutoModelForCausalLM.from_pretrained("Kojima777/tsubaki-10b")
+tokenizer = AutoTokenizer.from_pretrained("Kojima777/weblab-10b", use_fast=False)
+model = AutoModelForCausalLM.from_pretrained("Kojima777/weblab-10b")
 
 if torch.cuda.is_available():
     model = model.to("cuda")