{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# 使用 DeepSpeed 和 Hugging Face Transformer 微调 FLAN-T5 XL/XXL\n",
"\n",
"[Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) 论文发布了 FLAN-T5 模型,它是 T5 模型的增强版。FLAN-T5 由很多各种各样的任务微调而得,因此,简单来讲,它就是个方方面面都更优的 T5 模型。相同参数量的条件下,FLAN-T5 的性能相比 T5 而言有两位数的提高。Google 在 Hugging Face 上开源了 [5 个 FLAN-T5 的 checkpoints](https://huggingface.co/models?other=arxiv:2210.11416),参数量范围从 8000 万 到 110 亿。\n",
"\n",
"在之前的一篇博文中,我们已经学习了如何 [针对聊天对话数据摘要生成任务微调 FLAN-T5](https://www.philschmid.de/fine-tune-flan-t5),那时我们使用的是 [Base (250M参数)](https://huggingface.co/google/flan-t5-base)模型。本文,我们将研究如何将训练从 Base 扩展到 [XL (30 亿参数)](https://huggingface.co/google/flan-t5-xl) 或 [XXL (110 亿参数)](https://huggingface.co/google/flan-t5-xxl)。\n",
"\n",
"这意味着我们将学习如何利用模型并行、多 GPU 以及 [DeepSpeed ZeRO](https://www.deepspeed.ai/tutorials/zero/) 来微调 FLAN-T5 XL 和 XXL。\n",
"\n",
"除了作为教程的部分之外,我们还跑了一系列实验,这些实验数据可以帮助你选择正确的硬件设置。你可以在*结果和实验*部分找到详细信息。"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# install git lfs for pushing artifacts\n",
"!sudo apt install git-lfs\n",
"# install torch with the correct cuda version, check nvcc --version\n",
"!pip install torch --extra-index-url https://download.pytorch.org/whl/cu116 --upgrade\n",
"# install Hugging Face Libraries\n",
"!pip install \"transformers==4.26.0\" \"datasets==2.9.0\" \"accelerate==0.16.0\" \"evaluate==0.4.0\" --upgrade\n",
"# install deepspeed and ninja for jit compilations of kernels\n",
"!pip install \"deepspeed==0.8.0\" ninja --upgrade\n",
"# install additional dependencies needed for training\n",
"!pip install rouge-score nltk py7zr tensorboard"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# 处理数据集"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"与[针对聊天对话的摘要生成任务微调 FLAN-T5](https://www.philschmid.de/fine-tune-flan-t5)一文中类似,我们需要先准备一个用于微调的数据集。本文,我们将在 [CNN Dailymail 数据集](https://huggingface.co/datasets/cnn_dailymail) 上微调 [FLAN-T5-XXL](https://huggingface.co/google/flan-t5-xxl)。我们不会赘述如何生成数据集,如果你想了解数据集生成的详细步骤,请参阅[上一篇文章](https://www.philschmid.de/fine-tune-flan-t5)。\n",
"\n",
"我们定义了一些参数,本文的示例都会基于这些参数,但你可以根据实际需要进行调整。"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# 实验配置\n",
"model_id = \"google/flan-t5-xxl\" # Hugging Face 模型 Id\n",
"dataset_id = \"cnn_dailymail\" # Hugging Face 数据集 Id\n",
"dataset_config = \"3.0.0\" # 数据集版本\n",
"save_dataset_path = \"data\" # 存放处理后数据的本地路径\n",
"text_column = \"article\" # 输入文本所属列\n",
"summary_column = \"highlights\" # 输出文本所属列\n",
"# 定制指令提示格式\n",
"prompt_template = f\"Summarize the following news article:\\n{{input}}\\nSummary:\\n\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"与 [之前的示例](https://www.philschmid.de/fine-tune-flan-t5) 不同,这次我们把预处理和训练分开。这样我们就可以在非 GPU 实例上运行预处理。我们先对数据集进行预处理(即分词)并将其保存到磁盘,然后训练脚本再从磁盘中加载预处理后的数据集。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datasets import load_dataset\n",
"from transformers import AutoTokenizer\n",
"import numpy as np \n",
"\n",
"# Load dataset from the hub\n",
"dataset = load_dataset(dataset_id,name=dataset_config)\n",
"# Load tokenizer of FLAN-t5-base\n",
"tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
"\n",
"print(f\"Train dataset size: {len(dataset['train'])}\")\n",
"print(f\"Test dataset size: {len(dataset['test'])}\")\n",
"\n",
"# Train dataset size: 287113\n",
"# Test dataset size: 11490"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"我们在配置文件中定义了一个 `prompt_template`,其可用于来构建指令提示,以提高我们模型的性能。 `prompt_template` 有“固定”的开始词和结束词,文档放在中间。这意味着我们需要确保 *“固定”模板词 + 文档* 总长不超过模型支持的最大序列长度。因此我们需要计算模型支持的最大文档长度,稍后我们会根据它来填充或截断模板中的文档。"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt length: 12\n",
"Max input length: 500\n"
]
}
],
"source": [
"prompt_length = len(tokenizer(prompt_template.format(input=\"\"))[\"input_ids\"])\n",
"max_sample_length = tokenizer.model_max_length - prompt_length\n",
"print(f\"Prompt length: {prompt_length}\")\n",
"print(f\"Max input length: {max_sample_length}\")\n",
"\n",
"# Prompt length: 12\n",
"# Max input length: 500"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"现在我们知道,模型支持的最大输入文档长度为 500。除了输入之外,我们还需要知道最大“目标”序列长度,我们可以通过遍历数据集中的摘要长度来得到。(代码需要运行几分钟)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"application/json": {
"ascii": false,
"bar_format": null,
"colour": null,
"elapsed": 0.012465238571166992,
"initial": 0,
"n": 0,
"ncols": null,
"nrows": null,
"postfix": null,
"prefix": "",
"rate": null,
"total": 299,
"unit": "ba",
"unit_divisor": 1000,
"unit_scale": false
},
"application/vnd.jupyter.widget-view+json": {
"model_id": "32577879b38640f898e798ea8f88a801",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/299 [00:00<?, ?ba/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Max source length: 500\n"
]
},
{
"data": {
"application/json": {
"ascii": false,
"bar_format": null,
"colour": null,
"elapsed": 0.011892318725585938,
"initial": 0,
"n": 0,
"ncols": null,
"nrows": null,
"postfix": null,
"prefix": "",
"rate": null,
"total": 299,
"unit": "ba",
"unit_divisor": 1000,
"unit_scale": false
},
"application/vnd.jupyter.widget-view+json": {
"model_id": "724cc7afe0ba49a3b8a6a763a189e380",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/299 [00:00<?, ?ba/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Max target length: 129\n"
]
}
],
"source": [
"from datasets import concatenate_datasets\n",
"import numpy as np\n",
"\n",
"\n",
"# The maximum total input sequence length after tokenization. \n",
"# Sequences longer than this will be truncated, sequences shorter will be padded.\n",
"tokenized_inputs = concatenate_datasets([dataset[\"train\"], dataset[\"test\"]]).map(lambda x: tokenizer(x[text_column], truncation=True), batched=True, remove_columns=[text_column, summary_column])\n",
"max_source_length = max([len(x) for x in tokenized_inputs[\"input_ids\"]])\n",
"max_source_length = min(max_source_length, max_sample_length)\n",
"print(f\"Max source length: {max_source_length}\")\n",
"\n",
"# The maximum total sequence length for target text after tokenization. \n",
"# Sequences longer than this will be truncated, sequences shorter will be padded.\"\n",
"tokenized_targets = concatenate_datasets([dataset[\"train\"], dataset[\"test\"]]).map(lambda x: tokenizer(x[summary_column], truncation=True), batched=True, remove_columns=[text_column, summary_column])\n",
"target_lenghts = [len(x) for x in tokenized_targets[\"input_ids\"]]\n",
"# use 95th percentile as max target length\n",
"max_target_length = int(np.percentile(target_lenghts, 95))\n",
"print(f\"Max target length: {max_target_length}\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"现在一切准备就绪,可以处理数据集了。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"def preprocess_function(sample, padding=\"max_length\"):\n",
" # created prompted input\n",
" inputs = [prompt_template.format(input=item) for item in sample[text_column]]\n",
"\n",
" # tokenize inputs\n",
" model_inputs = tokenizer(inputs, max_length=tokenizer.model_max_length, padding=padding, truncation=True)\n",
"\n",
" # Tokenize targets with the `text_target` keyword argument\n",
" labels = tokenizer(text_target=sample[summary_column], max_length=max_target_length, padding=padding, truncation=True)\n",
"\n",
" # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore\n",
" # padding in the loss.\n",
" if padding == \"max_length\":\n",
" labels[\"input_ids\"] = [\n",
" [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels[\"input_ids\"]\n",
" ]\n",
"\n",
" model_inputs[\"labels\"] = labels[\"input_ids\"]\n",
" return model_inputs\n",
"\n",
"# process dataset\n",
"tokenized_dataset = dataset.map(preprocess_function, batched=True, remove_columns=list(dataset[\"train\"].features))\n",
"\n",
"# save dataset to disk\n",
"tokenized_dataset[\"train\"].save_to_disk(os.path.join(save_dataset_path,\"train\"))\n",
"tokenized_dataset[\"test\"].save_to_disk(os.path.join(save_dataset_path,\"eval\"))"
]
},
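{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The training script later reloads these processed splits from disk. Here is a minimal sketch of that loading step (the paths match the `save_to_disk` calls above; no GPU is needed for this):\n",
"\n",
"```\n",
"import os\n",
"from datasets import load_from_disk\n",
"\n",
"# reload the tokenized splits that were saved above\n",
"train_dataset = load_from_disk(os.path.join(\"data\", \"train\"))\n",
"eval_dataset = load_from_disk(os.path.join(\"data\", \"eval\"))\n",
"print(train_dataset)  # features: input_ids, attention_mask, labels\n",
"```"
]
},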
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 使用 `deepspeed` 微调模型\n",
"\n",
"准备完毕!我们现在可以开始训练模型了!如前所述,我们将使用集成了 DeepSpeed 的 Hugging Face Trainer。因此我们需要创建一个 `deespeed_config.json`。[DeepSpeed 配置](https://www.deepspeed.ai/docs/config-json/) 定义了要使用的 ZeRO 策略以及是否要使用混合精度训练等配置项。 Hugging Face Trainer 允许我们从 `deepspeed_config.json` 中的 `TrainingArguments` 继承相关配置以避免重复设置,查看[文档了解更多信息](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/deepspeed#configuration)。\n",
"\n",
"我们创建了 4 组 deepspeed 配置文件用于实验,包括 `CPU 卸载`和`混合精度`:\n",
"\n",
"- [ds_flan_t5_z3_config.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_config.json)\n",
"- [ds_flan_t5_z3_config_bf16.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_config_bf16.json)\n",
"- [ds_flan_t5_z3_offload.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_offload.json)\n",
"- [ds_flan_t5_z3_offload_bf16.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_offload_bf16.json)\n",
"\n",
"你可以根据你的运行环境选择,例如如果在 NVIDIA V100s 上运行,你就不能使用带 `bf16` 的配置,因为 V100 不支持 `bfloat16` 数据类型。\n",
"\n",
"> 在微调 `T5` 模型时,不能使用 `fp16`,因为它会导致精度溢出问题,参见:[#4586](https://github.com/huggingface/transformers/issues/4586),[#10830](https://github.com/huggingface/transformers/issues/10830), [#10956](https://github.com/huggingface/transformers/pull/10956)\n",
">\n",
"\n",
"如开头所述,我们使用的是 p4dn.24xlarge AWS EC2 实例,该实例包含 8 张显存为 40GB 的 NVIDIA A100。这意味着我们可以使用 `bf16`,它将减少近一半的模型显存占用,使我们能够在不卸载的情况下高效训练。\n",
"\n",
"我们将使用 [ds_flan_t5_z3_config_bf16.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_config_bf16.json)。如果你不想用 `auto` 值,可以查看 [文档](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/deepspeed#configuration)。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"```\n",
"{\n",
" \"bf16\": {\n",
" \"enabled\": \"auto\"\n",
" },\n",
" \"optimizer\": {\n",
" \"type\": \"AdamW\",\n",
" \"params\": {\n",
" \"lr\": \"auto\",\n",
" \"betas\": \"auto\",\n",
" \"eps\": \"auto\",\n",
" \"weight_decay\": \"auto\"\n",
" }\n",
" },\n",
" \"scheduler\": {\n",
" \"type\": \"WarmupLR\",\n",
" \"params\": {\n",
" \"warmup_min_lr\": \"auto\",\n",
" \"warmup_max_lr\": \"auto\",\n",
" \"warmup_num_steps\": \"auto\"\n",
" }\n",
" },\n",
" \"zero_optimization\": {\n",
" \"stage\": 3,\n",
" \"overlap_comm\": true,\n",
" \"contiguous_gradients\": true,\n",
" \"sub_group_size\": 1e9,\n",
" \"reduce_bucket_size\": \"auto\",\n",
" \"stage3_prefetch_bucket_size\": \"auto\",\n",
" \"stage3_param_persistence_threshold\": \"auto\",\n",
" \"stage3_max_live_parameters\": 1e9,\n",
" \"stage3_max_reuse_distance\": 1e9,\n",
" \"stage3_gather_16bit_weights_on_model_save\": false\n",
" },\n",
" \"gradient_accumulation_steps\": \"auto\",\n",
" \"gradient_clipping\": \"auto\",\n",
" \"steps_per_print\": 2000,\n",
" \"train_batch_size\": \"auto\",\n",
" \"train_micro_batch_size_per_gpu\": \"auto\",\n",
" \"wall_clock_breakdown\": false\n",
"}\n",
"```"
]
},
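{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"To see where the `auto` values come from, here is a minimal sketch (not the full training script) of how such a config is passed to the Hugging Face Trainer; the hyperparameter values and `output_dir` below are illustrative:\n",
"\n",
"```\n",
"from datasets import load_from_disk\n",
"from transformers import (\n",
"    AutoModelForSeq2SeqLM,\n",
"    Seq2SeqTrainer,\n",
"    Seq2SeqTrainingArguments,\n",
")\n",
"\n",
"model = AutoModelForSeq2SeqLM.from_pretrained(\"google/flan-t5-xxl\")\n",
"train_dataset = load_from_disk(\"data/train\")  # tokenized dataset from the preprocessing step\n",
"\n",
"# every \"auto\" in the deepspeed config is resolved from these arguments, e.g.\n",
"# lr -> learning_rate, train_micro_batch_size_per_gpu -> per_device_train_batch_size\n",
"training_args = Seq2SeqTrainingArguments(\n",
"    output_dir=\"flan-t5-xxl-cnn\",\n",
"    per_device_train_batch_size=8,\n",
"    learning_rate=1e-4,\n",
"    num_train_epochs=3,\n",
"    bf16=True,  # matches the \"bf16\" block in the config\n",
"    deepspeed=\"configs/ds_flan_t5_z3_config_bf16.json\",\n",
")\n",
"\n",
"trainer = Seq2SeqTrainer(model=model, args=training_args, train_dataset=train_dataset)\n",
"trainer.train()\n",
"```"
]
},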
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"现在,该训练脚本上场了。我们根据[之前的博文](https://www.philschmid.de/fine-tune-flan-t5)准备了一个 [run_seq2seq_deepspeed.py](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/scripts/run_seq2seq_deepspeed.py) 训练脚本,它支持我们配置 deepspeed 和其他超参数,包括 `google/flan-t5-xxl` 的模型 ID。\n",
"\n",
"我们使用 `deepspeed` 启动器触发训练,输入给启动器的参数包括 GPU 数量、deepspeed 配置及其它超参数(如 `google/flan-t5-xxl` 的模型 ID)。"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
"To disable this warning, you can either:\n",
"\t- Avoid using `tokenizers` before the fork if possible\n",
"\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n",
"deepspeed --num_gpus=8 scripts/run_seq2seq_deepspeed.py --model_id google/flan-t5-xxl --dataset_path data --epochs 3 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --generation_max_length 129 --lr 1e-4 --deepspeed configs/ds_flan_t5_z3_config_bf16.json\n"
]
}
],
"source": [
"!deepspeed --num_gpus=8 scripts/run_seq2seq_deepspeed.py \\\n",
" --model_id $model_id \\\n",
" --dataset_path $save_dataset_path \\\n",
" --epochs 3 \\\n",
" --per_device_train_batch_size 8 \\\n",
" --per_device_eval_batch_size 8 \\\n",
" --generation_max_length $max_target_length \\\n",
" --lr 1e-4 \\\n",
" --deepspeed configs/ds_flan_t5_z3_config_bf16.json "
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"DeepSpeed 先将模型加载到 CPU 上,然后将其拆分到 8 张 A100 上然后开始训练。使用 [CNN Dailymail 数据集](https://huggingface.co/datasets/cnn_dailymail)进行训练大约需要 10 个小时,费用约为 `322 美元`。"
]
},
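{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Note: since `stage3_gather_16bit_weights_on_model_save` is `false` in our config, DeepSpeed saves sharded ZeRO-3 checkpoints instead of a single consolidated weights file. DeepSpeed places a `zero_to_fp32.py` helper script inside the checkpoint directory, which can consolidate the shards into a regular fp32 `pytorch_model.bin`. A sketch, assuming the usual checkpoint layout (run it from inside your saved checkpoint folder):\n",
"\n",
"```\n",
"# run inside the checkpoint directory, e.g. <output_dir>/checkpoint-XXXX\n",
"!python zero_to_fp32.py . pytorch_model.bin\n",
"```"
]
},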
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# 结果与实验\n",
"\n",
"为了更好地了解硬件要求,我们对 FLAN-T5 XL 和 XXL 进行了一系列实验,以帮助我们评估和了解硬件需求以及训练这些模型的成本。\n",
"\n",
"下表列出了实验和相关设置的详细信息。\n",
"\n",
"数据集: `\"cnn_dailymail\"`\n",
"- 训练样本数: `287113`\n",
"- 验证样本数: `13368`\n",
"\n",
"超参:\n",
"- epochs: `3`\n",
"- 学习率: `1e-4`\n",
"\n",
"运行环境设置: \n",
"- 4x V100 16GB: p3.8xlarge\n",
"- 4x A10G 24GB: g5.24xlarge\n",
"- 8x V100 16GB: p3.16xlarge\n",
"- 8x A100 40GB: p4dn.24xlarge\n",
"\n",
"\n",
"| 模型 | DeepSpeed 卸载 | 硬件 | GPU每卡batch size | 精度 | 时长 | 费用 |\n",
"|-------------------|------------|--------------|--------------------|-----------|----------|--------|\n",
"| FLAN-T5-XL (3B) | No | 4x V100 16GB | OOM | fp32 | - | - |\n",
"| FLAN-T5-XL (3B) | No | 8x V100 16GB | 1 | fp32 | 105h | ~$2570 |\n",
"| FLAN-T5-XL (3B) | No | 8x A100 40GB | 72 | bf16 | 2.5h | ~$81 |\n",
"| FLAN-T5-XL (3B) | Yes | 4x V100 16GB | 8 | fp32 | 69h | ~$828 |\n",
"| FLAN-T5-XL (3B) | Yes | 8x V100 16GB | 8 | fp32 | 32h | ~$768 |\n",
"| FLAN-T5-XXL (11B) | No | 8x A100 40GB | 8 | bf16 | 10h | ~$322 |\n",
"| FLAN-T5-XXL (11B) | Yes | 4x V100 16GB | OOM | fp32 | - | - |\n",
"| FLAN-T5-XXL (11B) | Yes | 8x V100 16GB | OOM | fp32 | - | - |\n",
"| FLAN-T5-XXL (11B) | Yes | 4x A10G 24GB | 24 | bf16 | 90h | ~$732 |\n",
"| FLAN-T5-XXL (11B) | Yes | 8x A100 40GB | 48 | bf16 | 19h | ~$613 |\n",
"\n",
"我们可以看到 `bf16` 与 `fp32` 相比具有显著优势。FLAN-T5-XXL 能放进 4 张 A10G (24GB),但放不进 8 张 V100 16GB。\n",
"\n",
"我们的实验还表明,如果模型可以无需卸载同时以 batch size 大于 4 的配置跑在 GPU 上,其速度将比卸载模型和减小 batch size 的配置快约 2 倍且更具成本效益。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"> 英文原文: <url> https://www.philschmid.de/fine-tune-flan-t5-deepspeed </url>\n",
"> 原文作者:Philipp Schmid\n",
"> 译者: Matrix Yao (姚伟峰),英特尔深度学习工程师,工作方向为 transformer-family 模型在各模态数据上的应用及大规模模型的训练推理。"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}