## Setup Notes

For this model, a VM with 2 T4 GPUs was used. Compared to a VM with a single T4 GPU, training was roughly 1.5x faster, possibly more. To get training to utilize both GPUs simultaneously, the run was launched with the following command:

```bash
WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'yahma/alpaca-cleaned' \
    --output_dir './lora-alpaca' \
    --num_epochs 1 \
    --micro_batch_size 8
```

Note 1. The micro batch size was increased from the default of 4 to 8. Increasing it further is likely possible, based on other training runs that have been performed; this was a first attempt.

Note 2. The output directory was initially `lora-alpaca`; its contents were moved to a new folder when the git repository was initialized.
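As an aside (not part of the original notes): with these flags, the effective batch size of 128 is reached through gradient accumulation split across the two DDP ranks. A minimal sketch of that arithmetic, assuming the usual `batch_size // micro_batch_size` derivation in `finetune.py`:

```python
# Sketch of how an effective batch size of 128 is reached on 2 GPUs
# (assumes the script derives accumulation as batch_size // micro_batch_size,
#  then splits it across DDP ranks).
batch_size = 128       # effective batch size (script default)
micro_batch_size = 8   # per-GPU micro batch passed on the command line
world_size = 2         # WORLD_SIZE, i.e. number of DDP processes

grad_accum_steps = batch_size // micro_batch_size  # 16 micro-batches per optimizer step
grad_accum_steps //= world_size                    # 8 per rank under DDP

# 2 ranks * 8 accumulation steps * 8 examples = 128 examples per optimizer step
assert world_size * grad_accum_steps * micro_batch_size == batch_size
```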
## Log

```
(sqltest) chrisdono@deep-learning-duo-t4-3:~/alpaca-lora$ WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model 'decapoda-research/llama-7b-hf' --data_path 'yahma/alpaca-cleaned' --output_dir './lora-alpaca' --num_epochs 1 --micro_batch_size 8
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
/opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /opt/conda/envs/sqltest did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
/opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /opt/conda/envs/sqltest did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 113
CUDA SETUP: Loading binary /opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda113.so...
CUDA SETUP: Detected CUDA version 113
CUDA SETUP: Loading binary /opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda113.so...

Training Alpaca-LoRA model with params:
base_model: decapoda-research/llama-7b-hf
data_path: yahma/alpaca-cleaned
output_dir: ./lora-alpaca
batch_size: 128
micro_batch_size: 8
num_epochs: 1
learning_rate: 0.0003
cutoff_len: 256
val_set_size: 2000
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: ['q_proj', 'v_proj']
train_on_inputs: True
add_eos_token: False
group_by_length: False
wandb_project:
wandb_run_name:
wandb_watch:
wandb_log_model:
resume_from_checkpoint: False
prompt template: alpaca

Loading checkpoint shards: 100%|████████████████████████████| 33/33 [01:24<00:00,  2.57s/it]
Loading checkpoint shards: 100%|████████████████████████████| 33/33 [01:24<00:00,  2.57s/it]
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
Found cached dataset json (/home/chrisdono/.cache/huggingface/datasets/yahma___json/yahma--alpaca-cleaned-5d24553f76c14acc/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e)
100%|████████████████████████████| 1/1 [00:00<00:00, 13.91it/s]
trainable params: 4194304 || all params: 6742609920 || trainable%: 0.06220594176090199
Loading cached split indices for dataset at /home/chrisdono/.cache/huggingface/datasets/yahma___json/yahma--alpaca-cleaned-5d24553f76c14acc/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e/cache-45a7f72cdaee9ff3.arrow and /home/chrisdono/.cache/huggingface/datasets/yahma___json/yahma--alpaca-cleaned-5d24553f76c14acc/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e/cache-c14794386159bdb7.arrow
Found cached dataset json (/home/chrisdono/.cache/huggingface/datasets/yahma___json/yahma--alpaca-cleaned-5d24553f76c14acc/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e)
100%|████████████████████████████| 1/1 [00:00<00:00, 330.68it/s]
trainable params: 4194304 || all params: 6742609920 || trainable%: 0.06220594176090199
Loading cached split indices for dataset at /home/chrisdono/.cache/huggingface/datasets/yahma___json/yahma--alpaca-cleaned-5d24553f76c14acc/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e/cache-45a7f72cdaee9ff3.arrow and /home/chrisdono/.cache/huggingface/datasets/yahma___json/yahma--alpaca-cleaned-5d24553f76c14acc/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e/cache-c14794386159bdb7.arrow
```
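The `trainable params: 4194304` lines above follow directly from the LoRA hyperparameters printed at the top of the log. As an illustrative sketch (not the actual code in `finetune.py`), the same settings expressed as a `peft` `LoraConfig`, plus a back-of-the-envelope check of the parameter count assuming LLaMA-7B's 32 decoder layers and 4096-wide attention projections:

```python
from peft import LoraConfig

# The logged hyperparameters expressed as a peft LoraConfig (sketch only).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Rough check of the logged trainable-parameter count, assuming LLaMA-7B:
# each adapted 4096x4096 projection gets an r x 4096 "A" and a 4096 x r "B" matrix.
layers, hidden, r = 32, 4096, lora_config.r
adapted = len(lora_config.target_modules)
print(layers * adapted * (hidden * r + r * hidden))  # 4194304, matching the log
```

The loss progression from the same run continues below.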
```
{'loss': 1.8867, 'learning_rate': 2.9999999999999997e-05, 'epoch': 0.03}
{'loss': 1.8339, 'learning_rate': 5.6999999999999996e-05, 'epoch': 0.05}
{'loss': 1.6664, 'learning_rate': 8.699999999999999e-05, 'epoch': 0.08}
{'loss': 1.3046, 'learning_rate': 0.000117, 'epoch': 0.1}
{'loss': 1.115, 'learning_rate': 0.000147, 'epoch': 0.13}
{'loss': 1.0706, 'learning_rate': 0.00017399999999999997, 'epoch': 0.15}
{'loss': 1.0269, 'learning_rate': 0.000204, 'epoch': 0.18}
{'loss': 1.0012, 'learning_rate': 0.000234, 'epoch': 0.21}
{'loss': 0.9608, 'learning_rate': 0.00026399999999999997, 'epoch': 0.23}
{'loss': 0.9563, 'learning_rate': 0.000294, 'epoch': 0.26}
{'loss': 0.9512, 'learning_rate': 0.00029166666666666664, 'epoch': 0.28}
{'loss': 0.9505, 'learning_rate': 0.00028125, 'epoch': 0.31}
{'loss': 0.9326, 'learning_rate': 0.0002708333333333333, 'epoch': 0.33}
{'loss': 0.9229, 'learning_rate': 0.00026041666666666666, 'epoch': 0.36}
 37%|█████████████████████▎                                   | 145/388 [1:44:04<2:54:41, 43.14s/it]
{'loss': 0.918, 'learning_rate': 0.00025, 'epoch': 0.39}
{'loss': 0.9128, 'learning_rate': 0.00023958333333333332, 'epoch': 0.41}
{'loss': 0.9021, 'learning_rate': 0.00022916666666666664, 'epoch': 0.44}
{'loss': 0.9115, 'learning_rate': 0.00021874999999999998, 'epoch': 0.46}
{'loss': 0.8915, 'learning_rate': 0.00020833333333333332, 'epoch': 0.49}
{'loss': 0.8993, 'learning_rate': 0.00019791666666666663, 'epoch': 0.51}
{'eval_loss': 0.9055714011192322, 'eval_runtime': 179.4765, 'eval_samples_per_second': 11.144, 'eval_steps_per_second': 0.696, 'epoch': 0.51}
{'loss': 0.9015, 'learning_rate': 0.00018749999999999998, 'epoch': 0.54}
{'loss': 0.9008, 'learning_rate': 0.00017708333333333332, 'epoch': 0.57}
{'loss': 0.8846, 'learning_rate': 0.00016666666666666666, 'epoch': 0.59}
{'loss': 0.8976, 'learning_rate': 0.00015625, 'epoch': 0.62}
{'loss': 0.8936, 'learning_rate': 0.00014583333333333332, 'epoch': 0.64}
{'loss': 0.8883, 'learning_rate': 0.00013541666666666666, 'epoch': 0.67}
{'loss': 0.8839, 'learning_rate': 0.000125, 'epoch': 0.69}
{'loss': 0.8922, 'learning_rate': 0.00011458333333333332, 'epoch': 0.72}
 73%|█████████████████████████████████████████▊              | 285/388 [3:27:30<1:13:45, 42.96s/it]
{'loss': 0.8916, 'learning_rate': 0.00010416666666666666, 'epoch': 0.75}
{'loss': 0.8845, 'learning_rate': 9.374999999999999e-05, 'epoch': 0.77}
{'loss': 0.8804, 'learning_rate': 8.333333333333333e-05, 'epoch': 0.8}
{'loss': 0.8831, 'learning_rate': 7.291666666666666e-05, 'epoch': 0.82}
{'loss': 0.8753, 'learning_rate': 6.25e-05, 'epoch': 0.85}
{'loss': 0.8818, 'learning_rate': 5.208333333333333e-05, 'epoch': 0.87}
{'loss': 0.8935, 'learning_rate': 4.1666666666666665e-05, 'epoch': 0.9}
{'loss': 0.8688, 'learning_rate': 3.125e-05, 'epoch': 0.93}
{'loss': 0.8873, 'learning_rate': 2.0833333333333333e-05, 'epoch': 0.95}
{'loss': 0.8869, 'learning_rate': 1.0416666666666666e-05, 'epoch': 0.98}
 98%|████████████████████████████████████████████████████████▏| 382/388 [4:36:54<04:16, 42.78s/it]
100%|███████████████████████████████████████████████████████████| 388/388 [4:41:13<00:00, 43.06s/it]
{'train_runtime': 16873.8448, 'train_samples_per_second': 2.949, 'train_steps_per_second': 0.023, 'train_loss': 0.9972113518370795, 'epoch': 1.0}
100%|███████████████████████████████████████████████████████████| 388/388 [4:41:13<00:00, 43.49s/it]
```
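The 388 optimizer steps are consistent with the roughly 50k training examples left after the 2,000-example validation split, at an effective batch size of 128. After training, the adapter weights end up in the output directory (`./lora-alpaca` at the time of this run; see Note 2 above about the folder being moved). A minimal sketch of loading the adapter for inference, assuming the standard `transformers` + `peft` loading path along the lines of alpaca-lora's `generate.py`:

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = "decapoda-research/llama-7b-hf"
lora_weights = "./lora-alpaca"  # adjust if the adapter folder was renamed

# Load the frozen base model in 8-bit to fit on a single T4, then attach
# the LoRA adapter produced by the training run above.
tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, lora_weights, torch_dtype=torch.float16)
model.eval()
```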