{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "#### Step 3: Model Fine-tuning\n", "In this notebook, you'll fine-tune the Meta Llama 2 7B large language model, deploy the fine-tuned model, and test it's text generation and domain knowledge capabilities. \n", "\n", "Fine-tuning refers to the process of taking a pre-trained language model and retraining it for a different but related task using specific data. This approach is also known as transfer learning, which involves transferring the knowledge learned from one task to another. Large language models (LLMs) like Llama 2 7B are trained on massive amounts of unlabeled data and can be fine-tuned on domain domain datasets, making the model perform better on that specific domain.\n", "\n", "Input: A train and an optional validation directory. Each directory contains a CSV/JSON/TXT file.\n", "For CSV/JSON files, the train or validation data is used from the column called 'text' or the first column if no column called 'text' is found.\n", "The number of files under train and validation should equal to one.\n", "\n", "- **You'll choose your dataset below based on the domain you've chosen**\n", "\n", "Output: A trained model that can be deployed for inference.\\\n", "After you've fine-tuned the model, you'll evaluate it with the same input you used in project step 2: model evaluation. \n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Set up\n", "\n", "---\n", "Install and import the necessary packages. Restart the kernel after executing the cell below. \n", "\n", "---" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: sagemaker in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (2.219.0)\n", "Collecting sagemaker\n", " Using cached sagemaker-2.221.0-py3-none-any.whl.metadata (14 kB)\n", "Collecting datasets\n", " Using cached datasets-2.19.1-py3-none-any.whl.metadata (19 kB)\n", "Requirement already satisfied: attrs<24,>=23.1.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (23.2.0)\n", "Requirement already satisfied: boto3<2.0,>=1.33.3 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (1.34.101)\n", "Requirement already satisfied: cloudpickle==2.2.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (2.2.1)\n", "Requirement already satisfied: google-pasta in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (0.2.0)\n", "Requirement already satisfied: numpy<2.0,>=1.9.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (1.22.4)\n", "Requirement already satisfied: protobuf<5.0,>=3.12 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (4.25.3)\n", "Requirement already satisfied: smdebug-rulesconfig==1.0.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (1.0.1)\n", "Requirement already satisfied: importlib-metadata<7.0,>=1.4.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (6.11.0)\n", "Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (21.3)\n", "Requirement already satisfied: pandas in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) 
(2.2.1)\n", "Requirement already satisfied: pathos in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (0.3.2)\n", "Requirement already satisfied: schema in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (0.7.7)\n", "Requirement already satisfied: PyYAML~=6.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (6.0.1)\n", "Requirement already satisfied: jsonschema in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (4.21.1)\n", "Requirement already satisfied: platformdirs in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (4.2.0)\n", "Requirement already satisfied: tblib<4,>=1.7.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (3.0.0)\n", "Requirement already satisfied: urllib3<3.0.0,>=1.26.8 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (2.2.1)\n", "Requirement already satisfied: requests in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (2.31.0)\n", "Requirement already satisfied: docker in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (6.1.3)\n", "Requirement already satisfied: tqdm in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (4.66.2)\n", "Requirement already satisfied: psutil in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from sagemaker) (5.9.8)\n", "Requirement already satisfied: filelock in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from datasets) (3.13.3)\n", "Requirement already satisfied: pyarrow>=12.0.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from datasets) (15.0.2)\n", "Requirement already satisfied: pyarrow-hotfix in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from datasets) (0.6)\n", "Requirement already satisfied: dill<0.3.9,>=0.3.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from datasets) (0.3.8)\n", "Collecting xxhash (from datasets)\n", " Using cached xxhash-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (12 kB)\n", "Requirement already satisfied: multiprocess in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from datasets) (0.70.16)\n", "Requirement already satisfied: fsspec<=2024.3.1,>=2023.1.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from fsspec[http]<=2024.3.1,>=2023.1.0->datasets) (2024.3.1)\n", "Requirement already satisfied: aiohttp in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from datasets) (3.9.3)\n", "Collecting huggingface-hub>=0.21.2 (from datasets)\n", " Using cached huggingface_hub-0.23.1-py3-none-any.whl.metadata (12 kB)\n", "Requirement already satisfied: botocore<1.35.0,>=1.34.101 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from boto3<2.0,>=1.33.3->sagemaker) (1.34.101)\n", "Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from boto3<2.0,>=1.33.3->sagemaker) (1.0.1)\n", "Requirement already satisfied: s3transfer<0.11.0,>=0.10.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from boto3<2.0,>=1.33.3->sagemaker) (0.10.1)\n", "Requirement already satisfied: aiosignal>=1.1.2 in 
/home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from aiohttp->datasets) (1.3.1)\n", "Requirement already satisfied: frozenlist>=1.1.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from aiohttp->datasets) (1.4.1)\n", "Requirement already satisfied: multidict<7.0,>=4.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from aiohttp->datasets) (6.0.5)\n", "Requirement already satisfied: yarl<2.0,>=1.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from aiohttp->datasets) (1.9.4)\n", "Requirement already satisfied: async-timeout<5.0,>=4.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from aiohttp->datasets) (4.0.3)\n", "Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from huggingface-hub>=0.21.2->datasets) (4.10.0)\n", "Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from importlib-metadata<7.0,>=1.4.0->sagemaker) (3.17.0)\n", "Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from packaging>=20.0->sagemaker) (3.1.2)\n", "Requirement already satisfied: charset-normalizer<4,>=2 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from requests->sagemaker) (3.3.2)\n", "Requirement already satisfied: idna<4,>=2.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from requests->sagemaker) (3.6)\n", "Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from requests->sagemaker) (2024.2.2)\n", "Requirement already satisfied: websocket-client>=0.32.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from docker->sagemaker) (1.7.0)\n", "Requirement already satisfied: six in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from google-pasta->sagemaker) (1.16.0)\n", "Requirement already satisfied: jsonschema-specifications>=2023.03.6 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from jsonschema->sagemaker) (2023.12.1)\n", "Requirement already satisfied: referencing>=0.28.4 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from jsonschema->sagemaker) (0.34.0)\n", "Requirement already satisfied: rpds-py>=0.7.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from jsonschema->sagemaker) (0.18.0)\n", "Requirement already satisfied: python-dateutil>=2.8.2 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from pandas->sagemaker) (2.9.0)\n", "Requirement already satisfied: pytz>=2020.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from pandas->sagemaker) (2024.1)\n", "Requirement already satisfied: tzdata>=2022.7 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from pandas->sagemaker) (2024.1)\n", "Requirement already satisfied: ppft>=1.7.6.8 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from pathos->sagemaker) (1.7.6.8)\n", "Requirement already satisfied: pox>=0.3.4 in /home/ec2-user/anaconda3/envs/python3/lib/python3.10/site-packages (from pathos->sagemaker) (0.3.4)\n", "Using cached sagemaker-2.221.0-py3-none-any.whl (1.5 MB)\n", "Using cached datasets-2.19.1-py3-none-any.whl (542 kB)\n", "Using cached huggingface_hub-0.23.1-py3-none-any.whl (401 kB)\n", "Using cached xxhash-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (194 kB)\n", "Installing collected packages: xxhash, huggingface-hub, datasets, sagemaker\n", " Attempting uninstall: sagemaker\n", " Found existing installation: sagemaker 2.219.0\n", " Uninstalling sagemaker-2.219.0:\n", " Successfully uninstalled sagemaker-2.219.0\n", "Successfully installed datasets-2.19.1 huggingface-hub-0.23.1 sagemaker-2.221.0 xxhash-3.4.1\n" ] } ], "source": [ "!pip install --upgrade sagemaker datasets" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Select the model to fine-tune." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "tags": [] }, "outputs": [], "source": [ "model_id, model_version = \"meta-textgeneration-llama-2-7b\", \"2.*\"" ] },
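{ "cell_type": "markdown", "metadata": {}, "source": [ "Optionally, you can browse the other text-generation model IDs available through SageMaker JumpStart before committing to Llama 2 7B. The cell below is a minimal sketch that assumes the `list_jumpstart_models` helper from the sagemaker SDK installed above." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Optional sketch: list JumpStart models tagged for text generation.\n", "# Assumes the sagemaker SDK installed above; the listing may take a moment.\n", "from sagemaker.jumpstart.notebook_utils import list_jumpstart_models\n", "\n", "text_generation_models = list_jumpstart_models(filter=\"task == textgeneration\")\n", "print(*text_generation_models, sep=\"\\n\")" ] },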
{ "cell_type": "markdown", "metadata": {}, "source": [ "In the cell below, choose the training dataset for the domain you've selected and update the code accordingly: \n", "\n", "To create a finance domain expert model: \n", "\n", "- `\"training\": f\"s3://genaiwithawsproject2024/training-datasets/finance\"`\n", "\n", "To create a medical domain expert model: \n", "\n", "- `\"training\": f\"s3://genaiwithawsproject2024/training-datasets/medical\"`\n", "\n", "To create an IT domain expert model: \n", "\n", "- `\"training\": f\"s3://genaiwithawsproject2024/training-datasets/it\"`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sagemaker.config INFO - Not applying SDK defaults from location: /etc/xdg/sagemaker/config.yaml\n", "sagemaker.config INFO - Not applying SDK defaults from location: /home/ec2-user/.config/sagemaker/config.yaml\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Using model 'meta-textgeneration-llama-2-7b' with wildcard version identifier '*'. You can pin to version '4.1.0' for more stable results. Note that models may have different input/output signatures after a major version upgrade.\n", "INFO:sagemaker:Creating training-job with name: meta-textgeneration-llama-2-7b-2024-05-22-11-21-47-115\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "2024-05-22 11:21:47 Starting - Starting the training job...\n", "2024-05-22 11:22:06 Pending - Training job waiting for capacity...\n", "2024-05-22 11:22:20 Pending - Preparing the instances for training...\n", "2024-05-22 11:22:51 Downloading - Downloading input data.....................\n", "2024-05-22 11:27:42 Training - Training image download completed. 
Training in progress..\u001b[34mbash: cannot set terminal process group (-1): Inappropriate ioctl for device\u001b[0m\n", "\u001b[34mbash: no job control in this shell\u001b[0m\n", "\u001b[34m2024-05-22 11:27:44,151 sagemaker-training-toolkit INFO Imported framework sagemaker_pytorch_container.training\u001b[0m\n", "\u001b[34m2024-05-22 11:27:44,169 sagemaker-training-toolkit INFO No Neurons detected (normal if no neurons installed)\u001b[0m\n", "\u001b[34m2024-05-22 11:27:44,178 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.\u001b[0m\n", "\u001b[34m2024-05-22 11:27:44,181 sagemaker_pytorch_container.training INFO Invoking user training script.\u001b[0m\n", "\u001b[34m2024-05-22 11:27:53,654 sagemaker-training-toolkit INFO Installing dependencies from requirements.txt:\u001b[0m\n", "\u001b[34m/opt/conda/bin/python3.10 -m pip install -r requirements.txt\u001b[0m\n", "\u001b[34mProcessing ./lib/accelerate/accelerate-0.21.0-py3-none-any.whl (from -r requirements.txt (line 1))\u001b[0m\n", "\u001b[34mProcessing ./lib/bitsandbytes/bitsandbytes-0.39.1-py3-none-any.whl (from -r requirements.txt (line 2))\u001b[0m\n", "\u001b[34mProcessing ./lib/black/black-23.7.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (from -r requirements.txt (line 3))\u001b[0m\n", "\u001b[34mProcessing ./lib/brotli/Brotli-1.0.9-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (from -r requirements.txt (line 4))\u001b[0m\n", "\u001b[34mProcessing ./lib/datasets/datasets-2.14.1-py3-none-any.whl (from -r requirements.txt (line 5))\u001b[0m\n", "\u001b[34mProcessing ./lib/docstring-parser/docstring_parser-0.16-py3-none-any.whl (from -r requirements.txt (line 6))\u001b[0m\n", "\u001b[34mProcessing ./lib/fire/fire-0.5.0.tar.gz\u001b[0m\n", "\u001b[34mPreparing metadata (setup.py): started\u001b[0m\n", "\u001b[34mPreparing metadata (setup.py): finished with status 'done'\u001b[0m\n", "\u001b[34mProcessing ./lib/huggingface-hub/huggingface_hub-0.20.3-py3-none-any.whl (from -r requirements.txt (line 8))\u001b[0m\n", "\u001b[34mProcessing ./lib/inflate64/inflate64-0.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (from -r requirements.txt (line 9))\u001b[0m\n", "\u001b[34mProcessing ./lib/loralib/loralib-0.1.1-py3-none-any.whl (from -r requirements.txt (line 10))\u001b[0m\n", "\u001b[34mProcessing ./lib/multivolumefile/multivolumefile-0.2.3-py3-none-any.whl (from -r requirements.txt (line 11))\u001b[0m\n", "\u001b[34mProcessing ./lib/mypy-extensions/mypy_extensions-1.0.0-py3-none-any.whl (from -r requirements.txt (line 12))\u001b[0m\n", "\u001b[34mProcessing ./lib/nvidia-cublas-cu12/nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (from -r requirements.txt (line 13))\u001b[0m\n", "\u001b[34mProcessing ./lib/nvidia-cuda-cupti-cu12/nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (from -r requirements.txt (line 14))\u001b[0m\n", "\u001b[34mProcessing ./lib/nvidia-cuda-nvrtc-cu12/nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (from -r requirements.txt (line 15))\u001b[0m\n", "\u001b[34mProcessing ./lib/nvidia-cuda-runtime-cu12/nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (from -r requirements.txt (line 16))\u001b[0m\n", "\u001b[34mProcessing ./lib/nvidia-cudnn-cu12/nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (from -r requirements.txt (line 17))\u001b[0m\n", "\u001b[34mProcessing 
./lib/nvidia-cufft-cu12/nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (from -r requirements.txt (line 18))\u001b[0m\n", "\u001b[34mProcessing ./lib/nvidia-curand-cu12/nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (from -r requirements.txt (line 19))\u001b[0m\n", "\u001b[34mProcessing ./lib/nvidia-cusolver-cu12/nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (from -r requirements.txt (line 20))\u001b[0m\n", "\u001b[34mProcessing ./lib/nvidia-cusparse-cu12/nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (from -r requirements.txt (line 21))\u001b[0m\n", "\u001b[34mProcessing ./lib/nvidia-nccl-cu12/nvidia_nccl_cu12-2.19.3-py3-none-manylinux1_x86_64.whl (from -r requirements.txt (line 22))\u001b[0m\n", "\u001b[34mProcessing ./lib/nvidia-nvjitlink-cu12/nvidia_nvjitlink_cu12-12.3.101-py3-none-manylinux1_x86_64.whl (from -r requirements.txt (line 23))\u001b[0m\n", "\u001b[34mProcessing ./lib/nvidia-nvtx-cu12/nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (from -r requirements.txt (line 24))\u001b[0m\n", "\u001b[34mProcessing ./lib/pathspec/pathspec-0.11.1-py3-none-any.whl (from -r requirements.txt (line 25))\u001b[0m\n", "\u001b[34mProcessing ./lib/peft/peft-0.4.0-py3-none-any.whl (from -r requirements.txt (line 26))\u001b[0m\n", "\u001b[34mProcessing ./lib/py7zr/py7zr-0.20.5-py3-none-any.whl (from -r requirements.txt (line 27))\u001b[0m\n", "\u001b[34mProcessing ./lib/pybcj/pybcj-1.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (from -r requirements.txt (line 28))\u001b[0m\n", "\u001b[34mProcessing ./lib/pycryptodomex/pycryptodomex-3.18.0-cp35-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (from -r requirements.txt (line 29))\u001b[0m\n", "\u001b[34mProcessing ./lib/pyppmd/pyppmd-1.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (from -r requirements.txt (line 30))\u001b[0m\n", "\u001b[34mProcessing ./lib/pyzstd/pyzstd-0.15.9-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (from -r requirements.txt (line 31))\u001b[0m\n", "\u001b[34mProcessing ./lib/safetensors/safetensors-0.4.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (from -r requirements.txt (line 32))\u001b[0m\n", "\u001b[34mProcessing ./lib/scipy/scipy-1.11.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (from -r requirements.txt (line 33))\u001b[0m\n", "\u001b[34mProcessing ./lib/shtab/shtab-1.7.1-py3-none-any.whl (from -r requirements.txt (line 34))\u001b[0m\n", "\u001b[34mProcessing ./lib/termcolor/termcolor-2.3.0-py3-none-any.whl (from -r requirements.txt (line 35))\u001b[0m\n", "\u001b[34mProcessing ./lib/texttable/texttable-1.6.7-py2.py3-none-any.whl (from -r requirements.txt (line 36))\u001b[0m\n", "\u001b[34mProcessing ./lib/tokenize-rt/tokenize_rt-5.1.0-py2.py3-none-any.whl (from -r requirements.txt (line 37))\u001b[0m\n", "\u001b[34mProcessing ./lib/tokenizers/tokenizers-0.15.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (from -r requirements.txt (line 38))\u001b[0m\n", "\u001b[34mProcessing ./lib/torch/torch-2.2.0-cp310-cp310-manylinux1_x86_64.whl (from -r requirements.txt (line 39))\u001b[0m\n", "\u001b[34mProcessing ./lib/transformers/transformers-4.38.0-py3-none-any.whl (from -r requirements.txt (line 40))\u001b[0m\n", "\u001b[34mProcessing ./lib/triton/triton-2.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (from -r requirements.txt (line 41))\u001b[0m\n", "\u001b[34mProcessing ./lib/trl/trl-0.8.1-py3-none-any.whl (from -r requirements.txt (line 
42))\u001b[0m\n", "\u001b[34mProcessing ./lib/typing-extensions/typing_extensions-4.8.0-py3-none-any.whl (from -r requirements.txt (line 43))\u001b[0m\n", "\u001b[34mProcessing ./lib/tyro/tyro-0.7.3-py3-none-any.whl (from -r requirements.txt (line 44))\u001b[0m\n", "\u001b[34mProcessing ./lib/sagemaker_jumpstart_script_utilities/sagemaker_jumpstart_script_utilities-1.1.9-py2.py3-none-any.whl (from -r requirements.txt (line 45))\u001b[0m\n", "\u001b[34mProcessing ./lib/sagemaker_jumpstart_huggingface_script_utilities/sagemaker_jumpstart_huggingface_script_utilities-1.2.3-py2.py3-none-any.whl (from -r requirements.txt (line 46))\u001b[0m\n", "\u001b[34mRequirement already satisfied: numpy>=1.17 in /opt/conda/lib/python3.10/site-packages (from accelerate==0.21.0->-r requirements.txt (line 1)) (1.24.4)\u001b[0m\n", "\u001b[34mRequirement already satisfied: packaging>=20.0 in /opt/conda/lib/python3.10/site-packages (from accelerate==0.21.0->-r requirements.txt (line 1)) (23.1)\u001b[0m\n", "\u001b[34mRequirement already satisfied: psutil in /opt/conda/lib/python3.10/site-packages (from accelerate==0.21.0->-r requirements.txt (line 1)) (5.9.5)\u001b[0m\n", "\u001b[34mRequirement already satisfied: pyyaml in /opt/conda/lib/python3.10/site-packages (from accelerate==0.21.0->-r requirements.txt (line 1)) (6.0)\u001b[0m\n", "\u001b[34mRequirement already satisfied: click>=8.0.0 in /opt/conda/lib/python3.10/site-packages (from black==23.7.0->-r requirements.txt (line 3)) (8.1.4)\u001b[0m\n", "\u001b[34mRequirement already satisfied: platformdirs>=2 in /opt/conda/lib/python3.10/site-packages (from black==23.7.0->-r requirements.txt (line 3)) (3.8.1)\u001b[0m\n", "\u001b[34mRequirement already satisfied: tomli>=1.1.0 in /opt/conda/lib/python3.10/site-packages (from black==23.7.0->-r requirements.txt (line 3)) (2.0.1)\u001b[0m\n", "\u001b[34mRequirement already satisfied: pyarrow>=8.0.0 in /opt/conda/lib/python3.10/site-packages (from datasets==2.14.1->-r requirements.txt (line 5)) (14.0.2)\u001b[0m\n", "\u001b[34mRequirement already satisfied: dill<0.3.8,>=0.3.0 in /opt/conda/lib/python3.10/site-packages (from datasets==2.14.1->-r requirements.txt (line 5)) (0.3.6)\u001b[0m\n", "\u001b[34mRequirement already satisfied: pandas in /opt/conda/lib/python3.10/site-packages (from datasets==2.14.1->-r requirements.txt (line 5)) (2.0.3)\u001b[0m\n", "\u001b[34mRequirement already satisfied: requests>=2.19.0 in /opt/conda/lib/python3.10/site-packages (from datasets==2.14.1->-r requirements.txt (line 5)) (2.31.0)\u001b[0m\n", "\u001b[34mRequirement already satisfied: tqdm>=4.62.1 in /opt/conda/lib/python3.10/site-packages (from datasets==2.14.1->-r requirements.txt (line 5)) (4.65.0)\u001b[0m\n", "\u001b[34mRequirement already satisfied: xxhash in /opt/conda/lib/python3.10/site-packages (from datasets==2.14.1->-r requirements.txt (line 5)) (3.4.1)\u001b[0m\n", "\u001b[34mRequirement already satisfied: multiprocess in /opt/conda/lib/python3.10/site-packages (from datasets==2.14.1->-r requirements.txt (line 5)) (0.70.14)\u001b[0m\n", "\u001b[34mRequirement already satisfied: fsspec>=2021.11.1 in /opt/conda/lib/python3.10/site-packages (from fsspec[http]>=2021.11.1->datasets==2.14.1->-r requirements.txt (line 5)) (2023.6.0)\u001b[0m\n", "\u001b[34mRequirement already satisfied: aiohttp in /opt/conda/lib/python3.10/site-packages (from datasets==2.14.1->-r requirements.txt (line 5)) (3.9.3)\u001b[0m\n", "\u001b[34mRequirement already satisfied: six in /opt/conda/lib/python3.10/site-packages (from fire==0.5.0->-r 
requirements.txt (line 7)) (1.16.0)\u001b[0m\n", "\u001b[34mRequirement already satisfied: filelock in /opt/conda/lib/python3.10/site-packages (from huggingface-hub==0.20.3->-r requirements.txt (line 8)) (3.12.2)\u001b[0m\n", "\u001b[34mRequirement already satisfied: sympy in /opt/conda/lib/python3.10/site-packages (from torch==2.2.0->-r requirements.txt (line 39)) (1.12)\u001b[0m\n", "\u001b[34mRequirement already satisfied: networkx in /opt/conda/lib/python3.10/site-packages (from torch==2.2.0->-r requirements.txt (line 39)) (3.1)\u001b[0m\n", "\u001b[34mRequirement already satisfied: jinja2 in /opt/conda/lib/python3.10/site-packages (from torch==2.2.0->-r requirements.txt (line 39)) (3.1.2)\u001b[0m\n", "\u001b[34mRequirement already satisfied: regex!=2019.12.17 in /opt/conda/lib/python3.10/site-packages (from transformers==4.38.0->-r requirements.txt (line 40)) (2023.12.25)\u001b[0m\n", "\u001b[34mRequirement already satisfied: rich>=11.1.0 in /opt/conda/lib/python3.10/site-packages (from tyro==0.7.3->-r requirements.txt (line 44)) (13.4.2)\u001b[0m\n", "\u001b[34mRequirement already satisfied: aiosignal>=1.1.2 in /opt/conda/lib/python3.10/site-packages (from aiohttp->datasets==2.14.1->-r requirements.txt (line 5)) (1.3.1)\u001b[0m\n", "\u001b[34mRequirement already satisfied: attrs>=17.3.0 in /opt/conda/lib/python3.10/site-packages (from aiohttp->datasets==2.14.1->-r requirements.txt (line 5)) (23.1.0)\u001b[0m\n", "\u001b[34mRequirement already satisfied: frozenlist>=1.1.1 in /opt/conda/lib/python3.10/site-packages (from aiohttp->datasets==2.14.1->-r requirements.txt (line 5)) (1.4.1)\u001b[0m\n", "\u001b[34mRequirement already satisfied: multidict<7.0,>=4.5 in /opt/conda/lib/python3.10/site-packages (from aiohttp->datasets==2.14.1->-r requirements.txt (line 5)) (6.0.5)\u001b[0m\n", "\u001b[34mRequirement already satisfied: yarl<2.0,>=1.0 in /opt/conda/lib/python3.10/site-packages (from aiohttp->datasets==2.14.1->-r requirements.txt (line 5)) (1.9.4)\u001b[0m\n", "\u001b[34mRequirement already satisfied: async-timeout<5.0,>=4.0 in /opt/conda/lib/python3.10/site-packages (from aiohttp->datasets==2.14.1->-r requirements.txt (line 5)) (4.0.3)\u001b[0m\n", "\u001b[34mRequirement already satisfied: charset-normalizer<4,>=2 in /opt/conda/lib/python3.10/site-packages (from requests>=2.19.0->datasets==2.14.1->-r requirements.txt (line 5)) (3.1.0)\u001b[0m\n", "\u001b[34mRequirement already satisfied: idna<4,>=2.5 in /opt/conda/lib/python3.10/site-packages (from requests>=2.19.0->datasets==2.14.1->-r requirements.txt (line 5)) (3.4)\u001b[0m\n", "\u001b[34mRequirement already satisfied: urllib3<3,>=1.21.1 in /opt/conda/lib/python3.10/site-packages (from requests>=2.19.0->datasets==2.14.1->-r requirements.txt (line 5)) (1.26.15)\u001b[0m\n", "\u001b[34mRequirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.10/site-packages (from requests>=2.19.0->datasets==2.14.1->-r requirements.txt (line 5)) (2024.2.2)\u001b[0m\n", "\u001b[34mRequirement already satisfied: markdown-it-py>=2.2.0 in /opt/conda/lib/python3.10/site-packages (from rich>=11.1.0->tyro==0.7.3->-r requirements.txt (line 44)) (3.0.0)\u001b[0m\n", "\u001b[34mRequirement already satisfied: pygments<3.0.0,>=2.13.0 in /opt/conda/lib/python3.10/site-packages (from rich>=11.1.0->tyro==0.7.3->-r requirements.txt (line 44)) (2.15.1)\u001b[0m\n", "\u001b[34mRequirement already satisfied: MarkupSafe>=2.0 in /opt/conda/lib/python3.10/site-packages (from jinja2->torch==2.2.0->-r requirements.txt (line 39)) 
(2.1.3)\u001b[0m\n", "\u001b[34mRequirement already satisfied: python-dateutil>=2.8.2 in /opt/conda/lib/python3.10/site-packages (from pandas->datasets==2.14.1->-r requirements.txt (line 5)) (2.8.2)\u001b[0m\n", "\u001b[34mRequirement already satisfied: pytz>=2020.1 in /opt/conda/lib/python3.10/site-packages (from pandas->datasets==2.14.1->-r requirements.txt (line 5)) (2023.3)\u001b[0m\n", "\u001b[34mRequirement already satisfied: tzdata>=2022.1 in /opt/conda/lib/python3.10/site-packages (from pandas->datasets==2.14.1->-r requirements.txt (line 5)) (2023.3)\u001b[0m\n", "\u001b[34mRequirement already satisfied: mpmath>=0.19 in /opt/conda/lib/python3.10/site-packages (from sympy->torch==2.2.0->-r requirements.txt (line 39)) (1.3.0)\u001b[0m\n", "\u001b[34mRequirement already satisfied: mdurl~=0.1 in /opt/conda/lib/python3.10/site-packages (from markdown-it-py>=2.2.0->rich>=11.1.0->tyro==0.7.3->-r requirements.txt (line 44)) (0.1.0)\u001b[0m\n", "\u001b[34mhuggingface-hub is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel.\u001b[0m\n", "\u001b[34mscipy is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel.\u001b[0m\n", "\u001b[34mBuilding wheels for collected packages: fire\u001b[0m\n", "\u001b[34mBuilding wheel for fire (setup.py): started\u001b[0m\n", "\u001b[34mBuilding wheel for fire (setup.py): finished with status 'done'\u001b[0m\n", "\u001b[34mCreated wheel for fire: filename=fire-0.5.0-py2.py3-none-any.whl size=116932 sha256=2a5173559197e576b1fab65e23e1fe5c01dd107705a98910c658104e3f10f8da\u001b[0m\n", "\u001b[34mStored in directory: /root/.cache/pip/wheels/db/3d/41/7e69dca5f61e37d109a4457082ffc5c6edb55ab633bafded38\u001b[0m\n", "\u001b[34mSuccessfully built fire\u001b[0m\n", "\u001b[34mInstalling collected packages: texttable, Brotli, bitsandbytes, typing-extensions, triton, tokenize-rt, termcolor, shtab, sagemaker-jumpstart-script-utilities, sagemaker-jumpstart-huggingface-script-utilities, safetensors, pyzstd, pyppmd, pycryptodomex, pybcj, pathspec, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, mypy-extensions, multivolumefile, loralib, inflate64, docstring-parser, py7zr, nvidia-cusparse-cu12, nvidia-cudnn-cu12, fire, black, tyro, tokenizers, nvidia-cusolver-cu12, transformers, torch, datasets, accelerate, trl, peft\u001b[0m\n", "\u001b[34mAttempting uninstall: typing-extensions\u001b[0m\n", "\u001b[34mFound existing installation: typing_extensions 4.7.1\u001b[0m\n", "\u001b[34mUninstalling typing_extensions-4.7.1:\u001b[0m\n", "\u001b[34mSuccessfully uninstalled typing_extensions-4.7.1\u001b[0m\n", "\u001b[34mAttempting uninstall: triton\u001b[0m\n", "\u001b[34mFound existing installation: triton 2.0.0.dev20221202\u001b[0m\n", "\u001b[34mUninstalling triton-2.0.0.dev20221202:\u001b[0m\n", "\u001b[34mSuccessfully uninstalled triton-2.0.0.dev20221202\u001b[0m\n", "\u001b[34mAttempting uninstall: tokenizers\u001b[0m\n", "\u001b[34mFound existing installation: tokenizers 0.13.3\u001b[0m\n", "\u001b[34mUninstalling tokenizers-0.13.3:\u001b[0m\n", "\u001b[34mSuccessfully uninstalled tokenizers-0.13.3\u001b[0m\n", "\u001b[34mAttempting uninstall: transformers\u001b[0m\n", "\u001b[34mFound existing installation: transformers 4.28.1\u001b[0m\n", "\u001b[34mUninstalling 
transformers-4.28.1:\u001b[0m\n", "\u001b[34mSuccessfully uninstalled transformers-4.28.1\u001b[0m\n", "\u001b[34mAttempting uninstall: torch\u001b[0m\n", "\u001b[34mFound existing installation: torch 2.0.0\u001b[0m\n", "\u001b[34mUninstalling torch-2.0.0:\u001b[0m\n", "\u001b[34mSuccessfully uninstalled torch-2.0.0\u001b[0m\n", "\u001b[34mAttempting uninstall: datasets\u001b[0m\n", "\u001b[34mFound existing installation: datasets 2.16.1\u001b[0m\n", "\u001b[34mUninstalling datasets-2.16.1:\u001b[0m\n", "\u001b[34mSuccessfully uninstalled datasets-2.16.1\u001b[0m\n", "\u001b[34mAttempting uninstall: accelerate\u001b[0m\n", "\u001b[34mFound existing installation: accelerate 0.19.0\u001b[0m\n", "\u001b[34mUninstalling accelerate-0.19.0:\u001b[0m\n", "\u001b[34mSuccessfully uninstalled accelerate-0.19.0\u001b[0m\n", "\u001b[34mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\u001b[0m\n", "\u001b[34mfastai 2.7.12 requires torch<2.1,>=1.7, but you have torch 2.2.0 which is incompatible.\u001b[0m\n", "\u001b[34mSuccessfully installed Brotli-1.0.9 accelerate-0.21.0 bitsandbytes-0.39.1 black-23.7.0 datasets-2.14.1 docstring-parser-0.16 fire-0.5.0 inflate64-0.3.1 loralib-0.1.1 multivolumefile-0.2.3 mypy-extensions-1.0.0 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.19.3 nvidia-nvjitlink-cu12-12.3.101 nvidia-nvtx-cu12-12.1.105 pathspec-0.11.1 peft-0.4.0 py7zr-0.20.5 pybcj-1.0.1 pycryptodomex-3.18.0 pyppmd-1.0.0 pyzstd-0.15.9 safetensors-0.4.2 sagemaker-jumpstart-huggingface-script-utilities-1.2.3 sagemaker-jumpstart-script-utilities-1.1.9 shtab-1.7.1 termcolor-2.3.0 texttable-1.6.7 tokenize-rt-5.1.0 tokenizers-0.15.2 torch-2.2.0 transformers-4.38.0 triton-2.2.0 trl-0.8.1 typing-extensions-4.8.0 tyro-0.7.3\u001b[0m\n", "\u001b[34mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\n", "\u001b[34m2024-05-22 11:29:09,303 sagemaker-training-toolkit INFO Waiting for the process to finish and give a return code.\u001b[0m\n", "\u001b[34m2024-05-22 11:29:09,303 sagemaker-training-toolkit INFO Done waiting for a return code. 
Received 0 from exiting process.\u001b[0m\n", "\u001b[34m2024-05-22 11:29:09,342 sagemaker-training-toolkit INFO No Neurons detected (normal if no neurons installed)\u001b[0m\n", "\u001b[34m2024-05-22 11:29:09,372 sagemaker-training-toolkit INFO No Neurons detected (normal if no neurons installed)\u001b[0m\n", "\u001b[34m2024-05-22 11:29:09,401 sagemaker-training-toolkit INFO No Neurons detected (normal if no neurons installed)\u001b[0m\n", "\u001b[34m2024-05-22 11:29:09,412 sagemaker-training-toolkit INFO Invoking user script\u001b[0m\n", "\u001b[34mTraining Env:\u001b[0m\n", "\u001b[34m{\n", " \"additional_framework_parameters\": {},\n", " \"channel_input_dirs\": {\n", " \"code\": \"/opt/ml/input/data/code\",\n", " \"training\": \"/opt/ml/input/data/training\"\n", " },\n", " \"current_host\": \"algo-1\",\n", " \"current_instance_group\": \"homogeneousCluster\",\n", " \"current_instance_group_hosts\": [\n", " \"algo-1\"\n", " ],\n", " \"current_instance_type\": \"ml.g5.2xlarge\",\n", " \"distribution_hosts\": [],\n", " \"distribution_instance_groups\": [],\n", " \"framework_module\": \"sagemaker_pytorch_container.training:main\",\n", " \"hosts\": [\n", " \"algo-1\"\n", " ],\n", " \"hyperparameters\": {\n", " \"add_input_output_demarcation_key\": \"True\",\n", " \"chat_dataset\": \"False\",\n", " \"enable_fsdp\": \"True\",\n", " \"epoch\": \"5\",\n", " \"instruction_tuned\": \"False\",\n", " \"int8_quantization\": \"False\",\n", " \"learning_rate\": \"0.0001\",\n", " \"lora_alpha\": \"32\",\n", " \"lora_dropout\": \"0.05\",\n", " \"lora_r\": \"8\",\n", " \"max_input_length\": \"-1\",\n", " \"max_train_samples\": \"-1\",\n", " \"max_val_samples\": \"-1\",\n", " \"per_device_eval_batch_size\": \"1\",\n", " \"per_device_train_batch_size\": \"4\",\n", " \"preprocessing_num_workers\": \"None\",\n", " \"seed\": \"10\",\n", " \"target_modules\": \"q_proj,v_proj\",\n", " \"train_data_split_seed\": \"0\",\n", " \"validation_split_ratio\": \"0.2\"\n", " },\n", " \"input_config_dir\": \"/opt/ml/input/config\",\n", " \"input_data_config\": {\n", " \"code\": {\n", " \"TrainingInputMode\": \"File\",\n", " \"S3DistributionType\": \"FullyReplicated\",\n", " \"RecordWrapperType\": \"None\"\n", " },\n", " \"training\": {\n", " \"TrainingInputMode\": \"File\",\n", " \"S3DistributionType\": \"FullyReplicated\",\n", " \"RecordWrapperType\": \"None\"\n", " }\n", " },\n", " \"input_dir\": \"/opt/ml/input\",\n", " \"instance_groups\": [\n", " \"homogeneousCluster\"\n", " ],\n", " \"instance_groups_dict\": {\n", " \"homogeneousCluster\": {\n", " \"instance_group_name\": \"homogeneousCluster\",\n", " \"instance_type\": \"ml.g5.2xlarge\",\n", " \"hosts\": [\n", " \"algo-1\"\n", " ]\n", " }\n", " },\n", " \"is_hetero\": false,\n", " \"is_master\": true,\n", " \"is_modelparallel_enabled\": null,\n", " \"is_smddpmprun_installed\": true,\n", " \"job_name\": \"meta-textgeneration-llama-2-7b-2024-05-22-11-21-47-115\",\n", " \"log_level\": 20,\n", " \"master_hostname\": \"algo-1\",\n", " \"model_dir\": \"/opt/ml/model\",\n", " \"module_dir\": \"/opt/ml/input/data/code/sourcedir.tar.gz\",\n", " \"module_name\": \"transfer_learning\",\n", " \"network_interface_name\": \"eth0\",\n", " \"num_cpus\": 8,\n", " \"num_gpus\": 1,\n", " \"num_neurons\": 0,\n", " \"output_data_dir\": \"/opt/ml/output/data\",\n", " \"output_dir\": \"/opt/ml/output\",\n", " \"output_intermediate_dir\": \"/opt/ml/output/intermediate\",\n", " \"resource_config\": {\n", " \"current_host\": \"algo-1\",\n", " \"current_instance_type\": 
\"ml.g5.2xlarge\",\n", " \"current_group_name\": \"homogeneousCluster\",\n", " \"hosts\": [\n", " \"algo-1\"\n", " ],\n", " \"instance_groups\": [\n", " {\n", " \"instance_group_name\": \"homogeneousCluster\",\n", " \"instance_type\": \"ml.g5.2xlarge\",\n", " \"hosts\": [\n", " \"algo-1\"\n", " ]\n", " }\n", " ],\n", " \"network_interface_name\": \"eth0\"\n", " },\n", " \"user_entry_point\": \"transfer_learning.py\"\u001b[0m\n", "\u001b[34m}\u001b[0m\n", "\u001b[34mEnvironment variables:\u001b[0m\n", "\u001b[34mSM_HOSTS=[\"algo-1\"]\u001b[0m\n", "\u001b[34mSM_NETWORK_INTERFACE_NAME=eth0\u001b[0m\n", "\u001b[34mSM_HPS={\"add_input_output_demarcation_key\":\"True\",\"chat_dataset\":\"False\",\"enable_fsdp\":\"True\",\"epoch\":\"5\",\"instruction_tuned\":\"False\",\"int8_quantization\":\"False\",\"learning_rate\":\"0.0001\",\"lora_alpha\":\"32\",\"lora_dropout\":\"0.05\",\"lora_r\":\"8\",\"max_input_length\":\"-1\",\"max_train_samples\":\"-1\",\"max_val_samples\":\"-1\",\"per_device_eval_batch_size\":\"1\",\"per_device_train_batch_size\":\"4\",\"preprocessing_num_workers\":\"None\",\"seed\":\"10\",\"target_modules\":\"q_proj,v_proj\",\"train_data_split_seed\":\"0\",\"validation_split_ratio\":\"0.2\"}\u001b[0m\n", "\u001b[34mSM_USER_ENTRY_POINT=transfer_learning.py\u001b[0m\n", "\u001b[34mSM_FRAMEWORK_PARAMS={}\u001b[0m\n", "\u001b[34mSM_RESOURCE_CONFIG={\"current_group_name\":\"homogeneousCluster\",\"current_host\":\"algo-1\",\"current_instance_type\":\"ml.g5.2xlarge\",\"hosts\":[\"algo-1\"],\"instance_groups\":[{\"hosts\":[\"algo-1\"],\"instance_group_name\":\"homogeneousCluster\",\"instance_type\":\"ml.g5.2xlarge\"}],\"network_interface_name\":\"eth0\"}\u001b[0m\n", "\u001b[34mSM_INPUT_DATA_CONFIG={\"code\":{\"RecordWrapperType\":\"None\",\"S3DistributionType\":\"FullyReplicated\",\"TrainingInputMode\":\"File\"},\"training\":{\"RecordWrapperType\":\"None\",\"S3DistributionType\":\"FullyReplicated\",\"TrainingInputMode\":\"File\"}}\u001b[0m\n", "\u001b[34mSM_OUTPUT_DATA_DIR=/opt/ml/output/data\u001b[0m\n", "\u001b[34mSM_CHANNELS=[\"code\",\"training\"]\u001b[0m\n", "\u001b[34mSM_CURRENT_HOST=algo-1\u001b[0m\n", "\u001b[34mSM_CURRENT_INSTANCE_TYPE=ml.g5.2xlarge\u001b[0m\n", "\u001b[34mSM_CURRENT_INSTANCE_GROUP=homogeneousCluster\u001b[0m\n", "\u001b[34mSM_CURRENT_INSTANCE_GROUP_HOSTS=[\"algo-1\"]\u001b[0m\n", "\u001b[34mSM_INSTANCE_GROUPS=[\"homogeneousCluster\"]\u001b[0m\n", "\u001b[34mSM_INSTANCE_GROUPS_DICT={\"homogeneousCluster\":{\"hosts\":[\"algo-1\"],\"instance_group_name\":\"homogeneousCluster\",\"instance_type\":\"ml.g5.2xlarge\"}}\u001b[0m\n", "\u001b[34mSM_DISTRIBUTION_INSTANCE_GROUPS=[]\u001b[0m\n", "\u001b[34mSM_IS_HETERO=false\u001b[0m\n", "\u001b[34mSM_MODULE_NAME=transfer_learning\u001b[0m\n", "\u001b[34mSM_LOG_LEVEL=20\u001b[0m\n", "\u001b[34mSM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main\u001b[0m\n", "\u001b[34mSM_INPUT_DIR=/opt/ml/input\u001b[0m\n", "\u001b[34mSM_INPUT_CONFIG_DIR=/opt/ml/input/config\u001b[0m\n", "\u001b[34mSM_OUTPUT_DIR=/opt/ml/output\u001b[0m\n", "\u001b[34mSM_NUM_CPUS=8\u001b[0m\n", "\u001b[34mSM_NUM_GPUS=1\u001b[0m\n", "\u001b[34mSM_NUM_NEURONS=0\u001b[0m\n", "\u001b[34mSM_MODEL_DIR=/opt/ml/model\u001b[0m\n", "\u001b[34mSM_MODULE_DIR=/opt/ml/input/data/code/sourcedir.tar.gz\u001b[0m\n", 
"\u001b[34mSM_TRAINING_ENV={\"additional_framework_parameters\":{},\"channel_input_dirs\":{\"code\":\"/opt/ml/input/data/code\",\"training\":\"/opt/ml/input/data/training\"},\"current_host\":\"algo-1\",\"current_instance_group\":\"homogeneousCluster\",\"current_instance_group_hosts\":[\"algo-1\"],\"current_instance_type\":\"ml.g5.2xlarge\",\"distribution_hosts\":[],\"distribution_instance_groups\":[],\"framework_module\":\"sagemaker_pytorch_container.training:main\",\"hosts\":[\"algo-1\"],\"hyperparameters\":{\"add_input_output_demarcation_key\":\"True\",\"chat_dataset\":\"False\",\"enable_fsdp\":\"True\",\"epoch\":\"5\",\"instruction_tuned\":\"False\",\"int8_quantization\":\"False\",\"learning_rate\":\"0.0001\",\"lora_alpha\":\"32\",\"lora_dropout\":\"0.05\",\"lora_r\":\"8\",\"max_input_length\":\"-1\",\"max_train_samples\":\"-1\",\"max_val_samples\":\"-1\",\"per_device_eval_batch_size\":\"1\",\"per_device_train_batch_size\":\"4\",\"preprocessing_num_workers\":\"None\",\"seed\":\"10\",\"target_modules\":\"q_proj,v_proj\",\"train_data_split_seed\":\"0\",\"validation_split_ratio\":\"0.2\"},\"input_config_dir\":\"/opt/ml/input/config\",\"input_data_config\":{\"code\":{\"RecordWrapperType\":\"None\",\"S3DistributionType\":\"FullyReplicated\",\"TrainingInputMode\":\"File\"},\"training\":{\"RecordWrapperType\":\"None\",\"S3DistributionType\":\"FullyReplicated\",\"TrainingInputMode\":\"File\"}},\"input_dir\":\"/opt/ml/input\",\"instance_groups\":[\"homogeneousCluster\"],\"instance_groups_dict\":{\"homogeneousCluster\":{\"hosts\":[\"algo-1\"],\"instance_group_name\":\"homogeneousCluster\",\"instance_type\":\"ml.g5.2xlarge\"}},\"is_hetero\":false,\"is_master\":true,\"is_modelparallel_enabled\":null,\"is_smddpmprun_installed\":true,\"job_name\":\"meta-textgeneration-llama-2-7b-2024-05-22-11-21-47-115\",\"log_level\":20,\"master_hostname\":\"algo-1\",\"model_dir\":\"/opt/ml/model\",\"module_dir\":\"/opt/ml/input/data/code/sourcedir.tar.gz\",\"module_name\":\"transfer_learning\",\"network_interface_name\":\"eth0\",\"num_cpus\":8,\"num_gpus\":1,\"num_neurons\":0,\"output_data_dir\":\"/opt/ml/output/data\",\"output_dir\":\"/opt/ml/output\",\"output_intermediate_dir\":\"/opt/ml/output/intermediate\",\"resource_config\":{\"current_group_name\":\"homogeneousCluster\",\"current_host\":\"algo-1\",\"current_instance_type\":\"ml.g5.2xlarge\",\"hosts\":[\"algo-1\"],\"instance_groups\":[{\"hosts\":[\"algo-1\"],\"instance_group_name\":\"homogeneousCluster\",\"instance_type\":\"ml.g5.2xlarge\"}],\"network_interface_name\":\"eth0\"},\"user_entry_point\":\"transfer_learning.py\"}\u001b[0m\n", "\u001b[34mSM_USER_ARGS=[\"--add_input_output_demarcation_key\",\"True\",\"--chat_dataset\",\"False\",\"--enable_fsdp\",\"True\",\"--epoch\",\"5\",\"--instruction_tuned\",\"False\",\"--int8_quantization\",\"False\",\"--learning_rate\",\"0.0001\",\"--lora_alpha\",\"32\",\"--lora_dropout\",\"0.05\",\"--lora_r\",\"8\",\"--max_input_length\",\"-1\",\"--max_train_samples\",\"-1\",\"--max_val_samples\",\"-1\",\"--per_device_eval_batch_size\",\"1\",\"--per_device_train_batch_size\",\"4\",\"--preprocessing_num_workers\",\"None\",\"--seed\",\"10\",\"--target_modules\",\"q_proj,v_proj\",\"--train_data_split_seed\",\"0\",\"--validation_split_ratio\",\"0.2\"]\u001b[0m\n", "\u001b[34mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate\u001b[0m\n", "\u001b[34mSM_CHANNEL_CODE=/opt/ml/input/data/code\u001b[0m\n", "\u001b[34mSM_CHANNEL_TRAINING=/opt/ml/input/data/training\u001b[0m\n", 
"\u001b[34mSM_HP_ADD_INPUT_OUTPUT_DEMARCATION_KEY=True\u001b[0m\n", "\u001b[34mSM_HP_CHAT_DATASET=False\u001b[0m\n", "\u001b[34mSM_HP_ENABLE_FSDP=True\u001b[0m\n", "\u001b[34mSM_HP_EPOCH=5\u001b[0m\n", "\u001b[34mSM_HP_INSTRUCTION_TUNED=False\u001b[0m\n", "\u001b[34mSM_HP_INT8_QUANTIZATION=False\u001b[0m\n", "\u001b[34mSM_HP_LEARNING_RATE=0.0001\u001b[0m\n", "\u001b[34mSM_HP_LORA_ALPHA=32\u001b[0m\n", "\u001b[34mSM_HP_LORA_DROPOUT=0.05\u001b[0m\n", "\u001b[34mSM_HP_LORA_R=8\u001b[0m\n", "\u001b[34mSM_HP_MAX_INPUT_LENGTH=-1\u001b[0m\n", "\u001b[34mSM_HP_MAX_TRAIN_SAMPLES=-1\u001b[0m\n", "\u001b[34mSM_HP_MAX_VAL_SAMPLES=-1\u001b[0m\n", "\u001b[34mSM_HP_PER_DEVICE_EVAL_BATCH_SIZE=1\u001b[0m\n", "\u001b[34mSM_HP_PER_DEVICE_TRAIN_BATCH_SIZE=4\u001b[0m\n", "\u001b[34mSM_HP_PREPROCESSING_NUM_WORKERS=None\u001b[0m\n", "\u001b[34mSM_HP_SEED=10\u001b[0m\n", "\u001b[34mSM_HP_TARGET_MODULES=q_proj,v_proj\u001b[0m\n", "\u001b[34mSM_HP_TRAIN_DATA_SPLIT_SEED=0\u001b[0m\n", "\u001b[34mSM_HP_VALIDATION_SPLIT_RATIO=0.2\u001b[0m\n", "\u001b[34mPYTHONPATH=/opt/ml/code:/opt/conda/bin:/opt/conda/lib/python310.zip:/opt/conda/lib/python3.10:/opt/conda/lib/python3.10/lib-dynload:/opt/conda/lib/python3.10/site-packages\u001b[0m\n", "\u001b[34mInvoking script with the following command:\u001b[0m\n", "\u001b[34m/opt/conda/bin/python3.10 transfer_learning.py --add_input_output_demarcation_key True --chat_dataset False --enable_fsdp True --epoch 5 --instruction_tuned False --int8_quantization False --learning_rate 0.0001 --lora_alpha 32 --lora_dropout 0.05 --lora_r 8 --max_input_length -1 --max_train_samples -1 --max_val_samples -1 --per_device_eval_batch_size 1 --per_device_train_batch_size 4 --preprocessing_num_workers None --seed 10 --target_modules q_proj,v_proj --train_data_split_seed 0 --validation_split_ratio 0.2\u001b[0m\n", "\u001b[34m2024-05-22 11:29:09,452 sagemaker-training-toolkit INFO Exceptions not imported for SageMaker TF as Tensorflow is not installed.\u001b[0m\n", "\u001b[34m===================================BUG REPORT===================================\u001b[0m\n", "\u001b[34mWelcome to bitsandbytes. For bug reports, please run\u001b[0m\n", "\u001b[34mpython -m bitsandbytes\n", " and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues\u001b[0m\n", "\u001b[34m================================================================================\u001b[0m\n", "\u001b[34mbin /opt/conda/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda118.so\u001b[0m\n", "\u001b[34m/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/nvidia/lib'), PosixPath('/usr/local/nvidia/lib64')}\n", " warn(msg)\u001b[0m\n", "\u001b[34mCUDA SETUP: CUDA runtime path found: /opt/conda/lib/libcudart.so\u001b[0m\n", "\u001b[34mCUDA SETUP: Highest compute capability among GPUs detected: 8.6\u001b[0m\n", "\u001b[34mCUDA SETUP: Detected CUDA version 118\u001b[0m\n", "\u001b[34mCUDA SETUP: Loading binary /opt/conda/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda118.so...\u001b[0m\n", "\u001b[34mINFO:root:Using pre-trained artifacts in SAGEMAKER_ADDITIONAL_S3_DATA_PATH=/opt/ml/additonals3data\u001b[0m\n", "\u001b[34mINFO:root:Identify file serving.properties in the un-tar directory /opt/ml/additonals3data. 
Copying it over to /opt/ml/model for model deployment after training is finished.\u001b[0m\n", "\u001b[34mINFO:root:Invoking the training command ['torchrun', '--nnodes', '1', '--nproc_per_node', '1', 'llama_finetuning.py', '--model_name', '/opt/ml/additonals3data', '--num_gpus', '1', '--pure_bf16', '--dist_checkpoint_root_folder', 'model_checkpoints', '--dist_checkpoint_folder', 'fine-tuned', '--batch_size_training', '4', '--micro_batch_size', '4', '--train_file', '/opt/ml/input/data/training', '--lr', '0.0001', '--do_train', '--output_dir', 'saved_peft_model', '--num_epochs', '5', '--use_peft', '--peft_method', 'lora', '--max_train_samples', '-1', '--max_val_samples', '-1', '--seed', '10', '--per_device_eval_batch_size', '1', '--max_input_length', '-1', '--preprocessing_num_workers', '--None', '--validation_split_ratio', '0.2', '--train_data_split_seed', '0', '--num_workers_dataloader', '0', '--weight_decay', '0.1', '--lora_r', '8', '--lora_alpha', '32', '--lora_dropout', '0.05', '--target_modules', 'q_proj,v_proj', '--enable_fsdp', '--add_input_output_demarcation_key'].\u001b[0m\n", "\u001b[34m===================================BUG REPORT===================================\u001b[0m\n", "\u001b[34mWelcome to bitsandbytes. For bug reports, please run\u001b[0m\n", "\u001b[34mpython -m bitsandbytes\n", " and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues\u001b[0m\n", "\u001b[34m================================================================================\u001b[0m\n", "\u001b[34mbin /opt/conda/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda118.so\u001b[0m\n", "\u001b[34m/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/nvidia/lib'), PosixPath('/usr/local/nvidia/lib64')}\n", " warn(msg)\u001b[0m\n", "\u001b[34mCUDA SETUP: CUDA runtime path found: /opt/conda/lib/libcudart.so.11.0\u001b[0m\n", "\u001b[34mCUDA SETUP: Highest compute capability among GPUs detected: 8.6\u001b[0m\n", "\u001b[34mCUDA SETUP: Detected CUDA version 118\u001b[0m\n", "\u001b[34mCUDA SETUP: Loading binary /opt/conda/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda118.so...\u001b[0m\n", "\u001b[34mINFO:root:Local rank is 0. 
Rank is 0\u001b[0m\n", "\u001b[34mINFO:root:Setting torch device = 0\u001b[0m\n", "\u001b[34mINFO:root:Loading the tokenizer.\u001b[0m\n", "\u001b[34m--> Running with torch dist debug set to detail\u001b[0m\n", "\u001b[34mINFO:root:Loading the data.\u001b[0m\n", "\u001b[34mINFO:root:Both instruction_tuned and chat_dataset are set to False.Assuming domain adaptation dataset format.\u001b[0m\n", "\u001b[34mDownloading data files: 0%| | 0/1 [00:00<?, ?it/s]\u001b[0m\n", "\u001b[34m--> Model /opt/ml/additonals3data\u001b[0m\n", "\u001b[34m--> /opt/ml/additonals3data has 6738.415616 Million params\u001b[0m\n", "\u001b[34mtrainable params: 4,194,304 || all params: 6,742,609,920 || trainable%: 0.06220594176090199\u001b[0m\n", "\u001b[34mbFloat16 enabled for mixed precision - using bfSixteen policy\u001b[0m\n", "\u001b[34m--> applying fsdp activation checkpointing...\u001b[0m\n", "\u001b[34mINFO:root:--> Training Set Length = 4\u001b[0m\n", "\u001b[34mINFO:root:--> Validation Set Length = 1\u001b[0m\n", "\u001b[34m/opt/conda/lib/python3.10/site-packages/torch/cuda/memory.py:330: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.\n", " warnings.warn(\u001b[0m\n", "\u001b[34mTraining Epoch0: 0%|#033[34m #033[0m| 0/1 [00:00<?, ?it/s]\u001b[0m\n" ] } ], "source": [ "# Fine-tune the source model on the chosen domain dataset with SageMaker JumpStart.\n", "from sagemaker.jumpstart.estimator import JumpStartEstimator\n", "\n", "estimator = JumpStartEstimator(\n", "    model_id=model_id,\n", "    environment={\"accept_eula\": \"true\"},  # accept the Llama 2 EULA\n", "    instance_type=\"ml.g5.2xlarge\",\n", ")\n", "estimator.set_hyperparameters(instruction_tuned=\"False\", epoch=\"5\")\n", "\n", "# Update the S3 path for your chosen domain: finance, medical, or it.\n", "estimator.fit({\"training\": f\"s3://genaiwithawsproject2024/training-datasets/it\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Deploy the fine-tuned model so you can query it for inference." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "finetuned_predictor = estimator.deploy()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "def print_response(payload, response):\n", "    print(payload[\"inputs\"])\n", "    print(f\"> {response}\")\n", "    print(\"\\n==================================\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can run the same prompts on the fine-tuned model to evaluate its domain knowledge. \n", "\n", "**Replace \"inputs\"** in the next cell with the input to send to the model based on the domain you've chosen. \n", "\n", "**For financial domain:**\n", "\n", " \"inputs\": \"Replace with sentence below from text\" \n", "- \"The investment tests performed indicate\"\n", "- \"the relative volume for the long out of the money options, indicates\"\n", "- \"The results for the short in the money options\"\n", "- \"The results are encouraging for aggressive investors\"\n", "\n", "**For medical domain:** \n", "\n", " \"inputs\": \"Replace with sentence below from text\" \n", "- \"Myeloid neoplasms and acute leukemias derive from\"\n", "- \"Genomic characterization is essential for\"\n", "- \"Certain germline disorders may be associated with\"\n", "- \"In contrast to targeted approaches, genome-wide sequencing\"\n", "\n", "**For IT domain:** \n", "\n", " \"inputs\": \"Replace with sentence below from text\" \n", "- \"Traditional approaches to data management such as\"\n", "- \"A second important aspect of ubiquitous computing environments is\"\n", "- \"because ubiquitous computing is intended to\" \n", "- \"outline the key aspects of ubiquitous computing from a data management perspective.\"" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "outline the key aspects of ubiquitous computing from a data management perspective.\n", "> [{'generated_text': '\\nUbiquitous computing is a vision for the future in which computers are embedded in everyday objects and become invisible. As a result, users will be able to interact with their environment in a natural and seamless way.\\nThis book provides an overview of the key aspects of ubiquitous computing'}]\n", "\n", "==================================\n", "\n" ] } ], "source": [ "payload = {\n", " \"inputs\": \"outline the key aspects of ubiquitous computing from a data management perspective.\",\n", " \"parameters\": {\n", " \"max_new_tokens\": 64,\n", " \"top_p\": 0.9,\n", " \"temperature\": 0.6,\n", " \"return_full_text\": False,\n", " },\n", "}\n", "try:\n", " response = finetuned_predictor.predict(payload, custom_attributes=\"accept_eula=true\")\n", " print_response(payload, response)\n", "except Exception as e:\n", " print(e)" ] },
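{ "cell_type": "markdown", "metadata": {}, "source": [ "Rather than editing \"inputs\" by hand for each sentence, you could loop over all of the example prompts for your domain. The cell below is a minimal sketch that assumes the IT domain and reuses the `finetuned_predictor` and `print_response` defined above; swap in the prompt list for your chosen domain." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Optional sketch: send each example prompt for your domain to the fine-tuned endpoint.\n", "# IT-domain prompts shown; replace with the list for your chosen domain.\n", "domain_prompts = [\n", "    \"Traditional approaches to data management such as\",\n", "    \"A second important aspect of ubiquitous computing environments is\",\n", "    \"because ubiquitous computing is intended to\",\n", "    \"outline the key aspects of ubiquitous computing from a data management perspective.\",\n", "]\n", "\n", "for prompt in domain_prompts:\n", "    payload = {\n", "        \"inputs\": prompt,\n", "        \"parameters\": {\"max_new_tokens\": 64, \"top_p\": 0.9, \"temperature\": 0.6, \"return_full_text\": False},\n", "    }\n", "    try:\n", "        response = finetuned_predictor.predict(payload, custom_attributes=\"accept_eula=true\")\n", "        print_response(payload, response)\n", "    except Exception as e:\n", "        print(e)" ] },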
{ "cell_type": "markdown", "metadata": {}, "source": [ "Do the outputs from the fine-tuned model provide insightful, relevant, domain-specific content? You can continue experimenting with the model's inputs to test its domain knowledge. \n", "\n", "**Use the output from this notebook to fill out the \"model fine-tuning\" section of the project documentation report**\n", "\n", "**After you've filled out the report, run the cells below to delete the model deployment** \n", "\n", "`IF YOU FAIL TO RUN THE CELLS BELOW YOU WILL RUN OUT OF BUDGET TO COMPLETE THE PROJECT`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "finetuned_predictor.delete_model()\n", "finetuned_predictor.delete_endpoint()" ] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.14" } }, "nbformat": 4, "nbformat_minor": 4 }