Chinese Chat Patch for Llama2-base-13B

Introduction

Since the LLama2-chat model struggles to keep its responses in Chinese when prompted with Chinese questions, the primary objective of this project is to provide a LLama2-based 13B chat model that can carry out question-and-answer interactions in Chinese.

This project provides a Chinese dialogue/chat LoRA patch for Llama2-base-13B. The LoRA parameters are stored in the sft_lora_model directory, and the merged model is released as the top-level pytorch.bin file.
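
Since the merged weights ship with this repository, the model can be loaded directly with transformers. Below is a minimal, illustrative usage sketch: the directory path is a placeholder, the SFT prompt template is not documented in this card (a bare Chinese instruction is used as an assumption), and the generation settings are likewise only examples.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_dir = "."  # placeholder: a local checkout of this repo with the merged weights and tokenizer

tokenizer = LlamaTokenizer.from_pretrained(model_dir)
model = LlamaForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

# The exact SFT prompt template is not documented here; a plain instruction is an assumption.
prompt = "请用中文简单介绍一下大语言模型。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Strip the prompt tokens and print only the generated continuation.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```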

The model uses LLama2-base 13B as its base model and is trained with LoRA, with the embedding and LM head also trained. The LoRA parameters have already been merged into the released weights, so the model can be used directly; alternatively, ./sft_lora_model can be merged manually with LLama2-base 13B to obtain the combined model.
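
For the manual route, a minimal merge sketch using the PEFT library is shown below. The base-model identifier, the local paths, and the assumption that the extended tokenizer is shipped inside ./sft_lora_model are placeholders; adjust them to your setup.

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Placeholder identifiers/paths for the base model and this repo's LoRA adapter.
base = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", torch_dtype=torch.float16
)

# The LoRA was trained with an extended Chinese vocabulary plus trainable embedding
# and LM head, so the base embeddings must be resized to the adapter's tokenizer
# before the adapter can be loaded.
tokenizer = LlamaTokenizer.from_pretrained("./sft_lora_model")
base.resize_token_embeddings(len(tokenizer))

# Load the adapter and fold its weights back into the base model.
model = PeftModel.from_pretrained(base, "./sft_lora_model")
merged = model.merge_and_unload()

merged.save_pretrained("./merged-model")
tokenizer.save_pretrained("./merged-model")
```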

The training data consists of 500,000 SFT samples drawn from the BELLE project.

Training Details

Some training details:

  1. Training Framework: The model was trained with a modified version of the Chinese-LLaMA-Alpaca framework.
  2. Tokenizer: The model uses the tokenizer.model from the Chinese-Alpaca-Plus model. This works because LLama2's tokenizer.model is identical to LLama1's, so the tokenizer from the Chinese-LLaMA project can in principle be reused wholesale without any token-misalignment issues (a vocabulary-resize sketch follows this list).
  3. Training Parameters: Because the vocabulary is extended, the embeddings must be resized, and the newly added rows are randomly initialized. As a result, during the early stage of training DeepSpeed tends to reduce the loss scale repeatedly due to "OVERFLOW"; frequent reductions can shrink the scale until training crashes. In this situation, do not lower the learning rate, warmup, or other hyperparameters; instead, scale them up to pretraining levels so that the randomly initialized embeddings get on track quickly (an illustrative DeepSpeed loss-scale config also follows this list).
  4. Training Resources: 8×V100 GPUs, 21 hours.
  5. Initial Loss: 8.4258
  6. Final Loss: 1.4241
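
To illustrate points 2 and 3, here is a minimal sketch of how the extended tokenizer enlarges the vocabulary and why the embedding matrix has to be resized; the newly added rows start out randomly initialized. The paths are placeholders, not the project's actual configuration.

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Placeholder paths: the Chinese-Alpaca-Plus tokenizer extends the original
# 32,000-token LLaMA vocabulary with Chinese tokens.
tokenizer = LlamaTokenizer.from_pretrained("path/to/chinese-alpaca-plus-tokenizer")
model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")

print("tokenizer vocab size:", len(tokenizer))
print("model embedding rows:", model.get_input_embeddings().weight.shape[0])  # 32000 for LLama2

# Resizing adds randomly initialized rows (embedding and LM head) for the new
# Chinese tokens; these untrained rows are what destabilize early training.
model.resize_token_embeddings(len(tokenizer))
```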
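
The overflow behaviour in point 3 is governed by DeepSpeed's dynamic loss scaling. The snippet below is an illustrative fp16 section (not the project's actual config), annotated with how the scale reacts to overflows; the remedy recommended above is to keep the learning rate and warmup at pretraining scale rather than shrinking them.

```python
# Illustrative DeepSpeed fp16 settings (not the project's actual config).
ds_config = {
    "fp16": {
        "enabled": True,
        "loss_scale": 0,            # 0 = dynamic loss scaling
        "initial_scale_power": 16,  # start at a loss scale of 2**16
        "loss_scale_window": 1000,  # overflow-free steps before the scale is raised again
        "hysteresis": 2,            # overflows tolerated before the scale is lowered
        "min_loss_scale": 1,        # floor; repeated OVERFLOW steps push the scale toward it
    }
}
```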

Licence

The models in this repository are open-sourced under the Apache-2.0 license; use of the model weights must additionally comply with the LLama2 MODEL LICENCE.

Future Work

I will gradually release the following models in the near future:

  1. Models trained on a larger scale of SFT data.
  2. Models trained from LLama2 and LLama2-chat (13B and smaller, since I only have V100s), for comparison.