Tags: Text Generation · Transformers · PyTorch · Safetensors · Japanese · English · gpt_neox · text-generation-inference
tianyuz committed
Commit fb6e99b
1 Parent(s): 9f59c67

Update README.md

Files changed (1):
  1. README.md (+4 -2)
README.md CHANGED

```diff
@@ -43,6 +43,7 @@ This repository provides an English-Japanese bilingual GPT-NeoX model of 3.8 bil
 | Variant | Link |
 | :-- | :--|
 | Bilingual 4B MiniGPT4 | https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4 |
+| Bilingual 4B PPO | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-ppo |
 | Bilingual 4B SFT | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft |
 | Bilingual 4B 8K | https://huggingface.co/rinna/bilingual-gpt-neox-4b-8k |
 | Bilingual 4B | https://huggingface.co/rinna/bilingual-gpt-neox-4b |
@@ -67,11 +68,12 @@ This repository provides an English-Japanese bilingual GPT-NeoX model of 3.8 bil
 
 | Model | 4-task average accuracy | 6-task average accuracy |
 | :-- | :-- | :-- |
-| bilingual-gpt-neox-4b-instruction-sft | 59.25 | 60.59 |
+| bilingual-gpt-neox-4b-instruction-ppo | 61.01 | 61.16 |
+| bilingual-gpt-neox-4b-instruction-sft | 61.02 | 61.69 |
 | **bilingual-gpt-neox-4b** | **56.12** | **51.83** |
 | japanese-gpt-neox-3.6b-instruction-ppo | 59.86 | 60.07 |
 | japanese-gpt-neox-3.6b | 55.07 | 50.32 |
-
+
 * **English benchmark**
 
 Using the [EleutherAI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/master), we found the bilingual-gpt-neox-4b performs comparably with English/multilingual models of similar sizes.
```
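The "4-task average accuracy" and "6-task average accuracy" columns in the benchmark table are most naturally read as plain arithmetic means over per-task accuracies reported by the evaluation harness. A minimal sketch of that aggregation; the per-task scores here are illustrative placeholders, not the actual harness results:

```python
# Sketch of how an "N-task average accuracy" column can be derived:
# a plain mean over per-task accuracies (in %). The scores below are
# ILLUSTRATIVE placeholders, not the real lm-evaluation-harness output.

def average_accuracy(task_scores):
    """Return the mean of per-task accuracies, rounded to 2 decimals."""
    return round(sum(task_scores) / len(task_scores), 2)

# Hypothetical per-task accuracies for a 4-task run.
four_task_scores = [55.0, 58.5, 60.0, 50.5]
print(average_accuracy(four_task_scores))  # 56.0
```

This assumes an unweighted mean; if the harness run weighted tasks differently (e.g. by example count), the table values would not be a simple average.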