duoqi committed · Commit 9d4d4d1 · verified · 1 Parent(s): fd2228a

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -7,6 +7,6 @@ license: apache-2.0
 tags:
 - llm
 ---
-A Llama version of Nanbeige2-16B-Chat, which can be loaded by LlamaForCausalLM.
+Introduction
 
-Nanbeige-16B is a 16 billion parameter language model developed by Nanbeige LLM Lab. It was pre-trained on 2.5T tokens, drawing on a large amount of high-quality internet corpus, various books, code, and more, and has achieved good results on various authoritative evaluation datasets.
+[Nanbeige2-16B-Chat](https://huggingface.co/Nanbeige/Nanbeige2-16B-Chat) is the latest 16B model developed by the Nanbeige Lab, trained on 4.5T tokens of high-quality data. During the alignment phase, we first trained the model with Supervised Fine-Tuning (SFT) on 1 million samples, then applied curriculum learning on 400,000 higher-difficulty, high-quality samples, and finally incorporated human feedback through Direct Preference Optimization (DPO), culminating in Nanbeige2-16B-Chat. It has achieved superior performance across various authoritative benchmark datasets.
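
Since the removed README line notes that this checkpoint can be loaded with LlamaForCausalLM, a minimal loading sketch follows. The model id below is an assumption (this commit appears to live in duoqi's conversion repo); substitute the actual repository name, and adjust dtype/device settings to your hardware.

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

# Hypothetical repo id for this Llama-format conversion; replace with the real one.
model_id = "duoqi/Nanbeige2-16B-Chat-Llama"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 16B model around 32 GB
    device_map="auto",           # shard across available GPUs / offload as needed
)

# Chat-style generation; assumes the tokenizer ships a chat template.
messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```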