#!/bin/bash

<< 'COMMENT'
In the previous section, we went through how to construct training and testing data properly. In this tutorial, we will actually fine-tune the model.

Fine-tune
Below are the arguments for fine-tuning:

The following arguments are for the model:

model_name_or_path: The model checkpoint for initialization.
config_name: Pretrained config name or path, if not the same as model_name.
tokenizer_name: Pretrained tokenizer name or path, if not the same as model_name.
cache_dir: Where to store the pretrained models downloaded from s3.
trust_remote_code: Trust remote code.
token: The token to use when accessing the model.

The following arguments are for the data:

train_data: One or more paths to training data. query: str, pos: List[str], and neg: List[str] are required in the training data. Argument type: multiple.
cache_path: Where to store the cached data.
train_group_size: (No metadata provided.)
query_max_len: The maximum total input sequence length for the query after tokenization. Longer sequences will be truncated.
passage_max_len: The maximum total input sequence length for the passage after tokenization. Longer sequences will be truncated.
pad_to_multiple_of: If set, pad the sequence to a multiple of the provided value.
max_example_num_per_dataset: The maximum number of examples for each dataset.
query_instruction_for_retrieval: Instruction for the query.
query_instruction_format: Format for the query instruction.
knowledge_distillation: Use knowledge distillation when pos_scores: List[float] and neg_scores: List[float] are present in the features of the training data.
passage_instruction_for_retrieval: Instruction for the passage.
passage_instruction_format: Format for the passage instruction.
shuffle_ratio: The ratio of shuffling the text.
same_dataset_within_batch: All samples in the same batch come from the same dataset.
small_threshold: The threshold for a small dataset. All small datasets in the same directory will be merged into one dataset.
drop_threshold: The threshold for dropping a merged small dataset. If the number of examples in the merged small dataset is less than this threshold, it will be dropped.
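To make the train_data format concrete, here is a minimal sketch of one JSONL training example, built only from the field names documented above (the query and passage texts are made up for illustration):

```python
import json

# One training example per line in a JSONL file.
# query: str, pos: List[str], neg: List[str] are required;
# pos_scores / neg_scores (List[float]) are only needed when
# knowledge_distillation is enabled.
example = {
    "query": "what is dense retrieval",
    "pos": ["Dense retrieval encodes queries and passages into vectors and matches them by similarity."],
    "neg": ["The museum opens at nine in the morning."],
}

# Serialize as a single JSONL line (one example per line in the file).
line = json.dumps(example, ensure_ascii=False)
```

With knowledge distillation enabled, the same example would additionally carry pos_scores and neg_scores lists aligned with pos and neg.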

And the following extra arguments:

negatives_cross_device: Share negatives across devices.
temperature: Temperature used for the similarity score.
fix_position_embedding: Freeze the parameters of the position embeddings.
sentence_pooling_method: The pooling method. Available options: cls, mean, last_token. Default: cls.
normalize_embeddings: Whether to normalize the embeddings.
sub_batch_size: Sub-batch size for training.
kd_loss_type: The loss type for knowledge distillation. Available options: kl_div, m3_kd_loss. Default: kl_div.
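To see how temperature and normalize_embeddings interact, here is a hedged pure-Python sketch (not FlagEmbedding's actual implementation) of a normalized, temperature-scaled similarity score as typically used in contrastive training:

```python
import math

def normalize(v):
    # L2-normalize a vector, as normalize_embeddings=True would do.
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def scaled_similarity(q, p, temperature=0.02):
    # Dot product of L2-normalized embeddings (i.e. cosine similarity),
    # divided by the temperature before it enters the contrastive loss.
    q, p = normalize(q), normalize(p)
    cos = sum(a * b for a, b in zip(q, p))
    return cos / temperature
```

A small temperature (0.02 in the script below) sharpens the softmax over the positive and the negatives, so the loss focuses more strongly on the hardest candidates.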
COMMENT

echo "Starting fine-tuning..."

torchrun --nproc_per_node 2 \
    -m FlagEmbedding.finetune.embedder.encoder_only.base \
    --model_name_or_path /data1/models/BAAI/bge-large-zh-v1.5 \
    --cache_dir /data1/mjb/flage-embedding-learing/7_Fine-tuning/cache/model \
    --train_data /data1/mjb/flage-embedding-learing/7_Fine-tuning/ft_data/training.json \
    --cache_path /data1/mjb/flage-embedding-learing/7_Fine-tuning/cache/data \
    --train_group_size 8 \
    --query_max_len 512 \
    --passage_max_len 512 \
    --pad_to_multiple_of 8 \
    --query_instruction_for_retrieval 'Represent this sentence for searching relevant passages: ' \
    --query_instruction_format '{}{}' \
    --knowledge_distillation False \
    --output_dir /data1/mjb/flage-embedding-learing/7_Fine-tuning/test_encoder_only_base_bge-large-en-v1.5 \
    --overwrite_output_dir \
    --learning_rate 1e-5 \
    --fp16 \
    --num_train_epochs 2 \
    --per_device_train_batch_size 2 \
    --dataloader_drop_last True \
    --warmup_ratio 0.1 \
    --gradient_checkpointing \
    --deepspeed /data1/mjb/flage-embedding-learing/7_Fine-tuning/config/ds_stage0.json \
    --logging_steps 1 \
    --save_steps 1000 \
    --negatives_cross_device \
    --temperature 0.02 \
    --sentence_pooling_method cls \
    --normalize_embeddings True \
    --kd_loss_type kl_div