---
license: llama3
datasets:
- REILX/extracted_tagengo_gpt4
- TigerResearch/sft_zh
- alexl83/AlpacaDataCleaned
- LooksJuicy/ruozhiba
- silk-road/alpaca-data-gpt4-chinese
- databricks/databricks-dolly-15k
- microsoft/orca-math-word-problems-200k
- Sao10K/Claude-3-Opus-Instruct-5K
language:
- zh
- en
tags:
- text-generation-inference
- llama
- chat
- sft
- lora
---

### Datasets
Llama-3-8B-Instruct was fine-tuned on the following eight datasets.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/636f54b95d2050767e4a6317/OkuVQ1lWXRAKyel2Ef0Fz.png)

### Base model
- https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

### Training tool
https://github.com/hiyouga/LLaMA-Factory

### Evaluation
The fine-tuned model and the original models were evaluated on CEval and MMLU using [OpenCompass](https://github.com/open-compass/OpenCompass/).
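An OpenCompass evaluation of this kind is typically driven by a Python config file. The sketch below is an assumption of how such a run could be set up, not the exact config used for this card: the dataset imports follow OpenCompass's stock `ceval_gen`/`mmlu_gen` configs, and `peft_path` (OpenCompass's hook for loading a LoRA adapter onto the base model) points at a placeholder path.

```python
# configs/eval_llama3_lora.py -- minimal OpenCompass config sketch.
# Place under configs/ in the OpenCompass repo; the adapter path is a placeholder.
from mmengine.config import read_base
from opencompass.models import HuggingFaceCausalLM

with read_base():
    from .datasets.ceval.ceval_gen import ceval_datasets
    from .datasets.mmlu.mmlu_gen import mmlu_datasets

# Evaluate on CEval and MMLU, as described above.
datasets = [*ceval_datasets, *mmlu_datasets]

models = [
    dict(
        type=HuggingFaceCausalLM,
        abbr='llama-3-8b-instruct-750mb-lora',
        path='meta-llama/Meta-Llama-3-8B-Instruct',
        tokenizer_path='meta-llama/Meta-Llama-3-8B-Instruct',
        peft_path='/path/to/llama3-8b-instruct-750mb-lora',  # placeholder adapter path
        max_seq_len=2048,
        max_out_len=100,
        batch_size=8,
        run_cfg=dict(num_gpus=1),
    ),
]
```

From the OpenCompass repo root this would run as `python run.py configs/eval_llama3_lora.py`; dropping `peft_path` evaluates the unmodified base model for comparison.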
The evaluated models are:
- Llama-3-8B
- Llama-3-8B-Instruct
- Llama-3-8B-Instruct-750Mb-lora, i.e. Llama-3-8B-Instruct fine-tuned with LoRA (SFT) on the 8DataSets collection (see the training sketch below)

### Test machine
8 * A800

### 8DataSets
Roughly 750 MB of fine-tuning data, combining:
- https://huggingface.co/datasets/REILX/extracted_tagengo_gpt4
- https://huggingface.co/datasets/TigerResearch/sft_zh
- https://huggingface.co/datasets/silk-road/alpaca-data-gpt4-chinese
- https://huggingface.co/datasets/LooksJuicy/ruozhiba
- https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k
- https://huggingface.co/datasets/alexl83/AlpacaDataCleaned
- https://huggingface.co/datasets/databricks/databricks-dolly-15k
- https://huggingface.co/datasets/Sao10K/Claude-3-Opus-Instruct-5K

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 300
- num_epochs: 1.0
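LLaMA-Factory trains through the Hugging Face `Trainer`, so the list above corresponds closely to `transformers.TrainingArguments`. A minimal sketch of the equivalent arguments follows; the output path and the `bf16` flag are assumptions, and LoRA-specific settings (rank, alpha, target modules) are not stated in this card:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above. Across 8 GPUs, a per-device
# train batch size of 4 with 4 gradient-accumulation steps gives the
# effective train batch size of 4 * 8 * 4 = 128 (eval: 8 * 8 = 64).
training_args = TrainingArguments(
    output_dir="llama3-8b-instruct-750mb-lora",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=1.0,
    lr_scheduler_type="cosine",
    warmup_steps=300,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    bf16=True,  # assumption: mixed precision on the A800s
)
```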
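### Usage (sketch)
Since the release is a LoRA adapter over Llama-3-8B-Instruct, inference follows the standard `transformers` + `peft` pattern. A minimal sketch; the adapter location below is a hypothetical placeholder and the generation settings are illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_path = "path/to/Llama-3-8B-Instruct-750Mb-lora"  # hypothetical adapter location

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_path)  # attach the LoRA adapter

messages = [{"role": "user", "content": "Why is the sky blue?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```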