{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "faa97138-7ee4-4aef-942f-961b321f05d7",
   "metadata": {},
   "source": [
    "# Evaluating a NeMo checkpoint on an arbitrary task"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5a4ea639-7a9d-4f06-90cf-be4e9476dacc",
   "metadata": {},
   "source": [
    "This notebook demonstrates how to extend the evaluation capabilities within the NeMo Framework container.\n",
    "It guides you through the advanced configuration of an evaluation job.\n",
    "\n",
    "For an introduction to evaluation with NVIDIA Evals Factory and the NeMo Framework, please refer to the tutorial [\"Evaluating a NeMo checkpoint with lm-eval\"](mmlu.ipynb) first.\n",
    "\n",
    "In this tutorial, we will evaluate an LLM on the [WikiText-2](https://arxiv.org/abs/1609.07843) benchmark, which is available in the [NVIDIA Evals Factory lm-eval](https://pypi.org/project/nvidia-lm-eval/) package.\n",
    "The evaluation utilizes the log-probabilities of the context tokens to assess how likely the input text is, according to the model.\n",
    "\n",
    "> Note: It is recommended to run this notebook within a [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo), as it includes all necessary dependencies."
   ]
  },
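  {
   "cell_type": "markdown",
   "id": "3f7c9a12-5b8e-4d06-9a41-6c2e8f0b1d23",
   "metadata": {},
   "source": [
    "To make the metric concrete, the cell below sketches how a word-level perplexity can be derived from summed token log-probabilities.\n",
    "This is an illustration with made-up numbers; the actual computation, including the byte-level variants, happens inside the evaluation harness."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d2e4b90-1a3c-4f58-b6d7-9e0a2c4f6811",
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "# hypothetical values for illustration only\n",
    "total_logprob = -180.0  # sum of per-token log-probabilities over a passage\n",
    "num_words = 100  # number of words in the scored text\n",
    "\n",
    "# word-level perplexity: exp of the average negative log-likelihood per word\n",
    "word_perplexity = math.exp(-total_logprob / num_words)\n",
    "print(f\"word perplexity: {word_perplexity:.2f}\")"
   ]
  },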
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "23282cea-9b37-465f-a3f9-7e8caf25ce34",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import signal\n",
    "import subprocess\n",
    "\n",
    "from nemo.collections.llm import api\n",
    "from nemo.collections.llm.evaluation.api import EvaluationConfig, EvaluationTarget\n",
    "from nemo.collections.llm.evaluation.base import list_available_evaluations\n",
    "from nemo.utils import logging\n",
    "\n",
    "logging.setLevel(logging.INFO)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "60de9e72-96a0-477e-93e3-7e1eb75b4c93",
   "metadata": {},
   "source": [
    "## 1. Deploying the model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6eecf707-d1ec-4ae4-b8d7-1d248647b520",
   "metadata": {},
   "source": [
    "We will start from deploying the model.\n",
    "\n",
    "First, you need to prepare a NeMo 2 checkpoint of the model you would like to evaluate.\n",
    "For the purpose of this tutorial, we will use the Llama 3.2 1B Instruct checkpoint, which you can download from the [NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/llama-3_2-1b-instruct).\n",
    "Ensure that you mount the directory containing the checkpoint when starting the container.\n",
    "In this tutorial, we assume that the checkpoint is accessible under the path `\"/checkpoints/llama-3_2-1b-instruct_v2.0\"`.\n",
    "\n",
    "> Note: You can learn more about deployment and available server endpoints from the [\"Evaluating a NeMo checkpoint with lm-eval\"](mmlu.ipynb) tutorial. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cf964980-69ba-447d-a6d8-1412726c768a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# modify this variable to point to your checkpoint\n",
    "# this notebook uses https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/llama-3_2-1b-instruct\n",
    "CHECKPOINT_PATH = \"/checkpoints/llama-3_2-1b-instruct_v2.0\"\n",
    "\n",
    "# if you are not using NeMo FW container, modify this path to point to scripts directory\n",
    "SCRIPTS_PATH = \"/opt/NeMo/scripts\"\n",
    "\n",
    "# modify this path if you would like to save results in a different directory\n",
    "WORKSPACE = \"/workspace\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dca87531-5e91-4857-a06f-b2cac4b6f61c",
   "metadata": {},
   "outputs": [],
   "source": [
    "deploy_script = f\"{SCRIPTS_PATH}/deploy/nlp/deploy_in_fw_oai_server_eval.py\""
   ]
  },
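  {
   "cell_type": "markdown",
   "id": "b4e8d6f2-7c19-4a35-8e02-5d1f3a9c7b46",
   "metadata": {},
   "source": [
    "As a quick sanity check, verify that both the deployment script and the checkpoint are visible inside the container before launching the server:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9a0c1e53-2d47-4b86-a1f9-8c3b5e7d2604",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "\n",
    "# fail early with a clear message if either path was not mounted correctly\n",
    "assert Path(deploy_script).exists(), f\"deploy script not found: {deploy_script}\"\n",
    "assert Path(CHECKPOINT_PATH).exists(), f\"checkpoint not found: {CHECKPOINT_PATH}\""
   ]
  },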
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e1ae4669-6218-47d9-9a02-70c24fbb25d9",
   "metadata": {},
   "outputs": [],
   "source": [
    "deploy_process = subprocess.Popen(\n",
    "    [\"python\", deploy_script, \"--nemo_checkpoint\", CHECKPOINT_PATH], \n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "78f35cf1-3fa3-4ceb-8162-234edb2f2beb",
   "metadata": {},
   "outputs": [],
   "source": [
    "base_url = \"http://0.0.0.0:8886\"\n",
    "model_name = \"triton_model\"\n",
    "\n",
    "completions_url = f\"{base_url}/v1/completions/\"\n",
    "chat_url = f\"{base_url}/v1/chat/completions/\""
   ]
  },
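  {
   "cell_type": "markdown",
   "id": "c5f1a7d8-3e62-4098-b2c4-0d9e6f8a1357",
   "metadata": {},
   "source": [
    "Loading the checkpoint takes a while, so it is worth waiting until the endpoint starts responding before submitting the evaluation.\n",
    "The polling loop below is a minimal sketch: it repeatedly sends a one-token completions request and proceeds once the server answers.\n",
    "It assumes the `requests` package is available in the environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e2b6c8a4-9f05-4d71-8a3e-1c7d5b9f0482",
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "import requests\n",
    "\n",
    "# poll the completions endpoint until the model server responds\n",
    "for _ in range(60):\n",
    "    try:\n",
    "        response = requests.post(\n",
    "            completions_url,\n",
    "            json={\"model\": model_name, \"prompt\": \"Hello\", \"max_tokens\": 1},\n",
    "            timeout=10,\n",
    "        )\n",
    "        if response.status_code == 200:\n",
    "            print(\"Server is ready.\")\n",
    "            break\n",
    "    except requests.exceptions.ConnectionError:\n",
    "        pass\n",
    "    time.sleep(10)\n",
    "else:\n",
    "    raise RuntimeError(\"Server did not become ready in time.\")"
   ]
  },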
  {
   "cell_type": "markdown",
   "id": "54c72002-36ce-4984-9d68-51a94c949195",
   "metadata": {},
   "source": [
    "## 2. Defining a custom evaluation workflow"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0efea168-8c76-4291-b850-7ec05dd6b7b2",
   "metadata": {},
   "source": [
    "NVIDIA Evals Factory packages include pre-defined evaluation configurations.\n",
    "These configurations represent some of the most commonly used evaluation settings and simplify running the most frequently used benchmarks.\n",
    "\n",
    "They can be listed using the `list_available_evaluations` function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "03d30f41-069c-4156-86f7-010319896bba",
   "metadata": {},
   "outputs": [],
   "source": [
    "list_available_evaluations()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1f721bf5-36ae-41c3-b42f-7494b1651b2b",
   "metadata": {},
   "source": [
    "However, users are not limited to this short list of benchmarks.\n",
    "If you would like to evaluate a model on a different task from the underlying evaluation harness, you simply need to specify the full configuration.\n",
    "\n",
    "For this tutorial, we will use the `wikitext` task from `lm-evaluation-harness`.\n",
    "Note that for tasks without a predefined configuration you must specify the type in the `\"<evaluation harness>.<task name>\"` format.\n",
    "\n",
    "Since this task uses the log-likelihoods of the input texts, we need to specify parameters for loading the tokenizer: `\"tokenizer_backend\"` and `\"tokenizer\"`.\n",
    "For the model used in this example these are `\"huggingface\"` and `\"/checkpoints/llama-3_2-1b-instruct_v2.0/context/nemo_tokenizer\"`, respectively."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "250733b3-fbe2-4da3-bb18-6f357331c241",
   "metadata": {},
   "outputs": [],
   "source": [
    "target_config = EvaluationTarget(api_endpoint={\"url\": completions_url, \"type\": \"completions\"})\n",
    "eval_config = EvaluationConfig(\n",
    "    type=\"lm-evaluation-harness.wikitext\",\n",
    "    params={\"extra\": {\n",
    "                \"tokenizer_backend\": \"huggingface\",\n",
    "                \"tokenizer\": f\"{CHECKPOINT_PATH}/context/nemo_tokenizer\"},\n",
    "           },\n",
    "    output_dir=f\"{WORKSPACE}/wikitext_results\",\n",
    ")\n",
    "\n",
    "results = api.evaluate(target_cfg=target_config, eval_cfg=eval_config)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "757e6d42",
   "metadata": {},
   "source": [
    "Finally, we can shut the model server down and inspect evaluation results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e55c94d0-f302-48d1-bb40-8cbdea323e32",
   "metadata": {},
   "outputs": [],
   "source": [
    "deploy_process.send_signal(signal.SIGINT)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5ac073b9-f581-4df3-8225-92086a2a0962",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(json.dumps(results['tasks'], indent=4))"
   ]
  },
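  {
   "cell_type": "markdown",
   "id": "f8d3b2e6-4a17-4c59-9b08-2e6f0a4c8d15",
   "metadata": {},
   "source": [
    "Beyond the returned summary, the evaluation also writes its artifacts to the `output_dir` configured above.\n",
    "As a quick check, you can list what was produced (the exact layout depends on the harness version):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0a9e7c31-6b58-4f24-a0d1-3c5e8b2f7946",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "\n",
    "# list the artifacts written by the evaluation run\n",
    "for path in sorted(Path(f\"{WORKSPACE}/wikitext_results\").rglob(\"*\")):\n",
    "    print(path)"
   ]
  },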
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "78efa6f2-4dc2-49bd-afa1-9d89e5f09fdf",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}