{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "gpuType": "T4"
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# Evaluating Python Code Generation with CodeGen 350M mono Using the ReCode Approach\n",
        "This notebook is a companion to chapter 6 of the \"Domain Specific LLMs in Action\" book, author Guglielmo Iozzia, [Manning Publications](https://www.manning.com/), 2024.  \n",
        "The code in this notebook shows an approach to evaluating the quality of the Python code generated by a [CodeGen 350M mono](https://huggingface.co/Salesforce/codegen-350M-mono) model; it has been derived and adapted from the ReCode paper. While the CodeGen 350M mono model is evaluated here, the exact same approach applies to other models for Python code generation available in the Hugging Face Hub. Python is the only programming language that can be evaluated using the code in this notebook. Execution of the code cells of this notebook requires hardware acceleration (GPU).  \n",
        "More details about the code can be found in the related book's chapter."
      ],
      "metadata": {
        "id": "zhc_SPyDWS3C"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "The correctness evaluation steps depend on OpenAI's HumanEval package. It isn't available through any Python package manager, so it needs to be installed from source."
      ],
      "metadata": {
        "id": "c76kVLv2_Mj6"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!git clone https://github.com/openai/human-eval\n",
        "%cd human-eval\n",
        "!pip install .\n",
        "%cd .."
      ],
      "metadata": {
        "id": "8J0-30nDWMwY"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Download the preprocessed HumanEval dataset from the ReCode GitHub repo."
      ],
      "metadata": {
        "id": "_lrUbBzkZSQS"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!mkdir -p datasets/nominal/\n",
        "%cd ./datasets/nominal/\n",
        "!wget https://raw.githubusercontent.com/amazon-science/recode/refs/heads/main/datasets/nominal/HumanEval.jsonl\n",
        "%cd ../.."
      ],
      "metadata": {
        "id": "vrOK6F4HZ65i"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Load the CodeGen 350M mono model and tokenizer from the Hugging Face Hub. The model weights are then loaded into GPU memory."
      ],
      "metadata": {
        "id": "p3fCaZV6ZbDI"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from transformers import AutoTokenizer\n",
        "\n",
        "model_id = \"Salesforce/codegen-350M-mono\"\n",
        "tokenizer = AutoTokenizer.from_pretrained(model_id)"
      ],
      "metadata": {
        "id": "qMnzXk5QU__6"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "import torch\n",
        "from transformers import AutoModelForCausalLM\n",
        "\n",
        "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
        "model = AutoModelForCausalLM.from_pretrained(model_id).to(device)\n",
        "model.eval()"
      ],
      "metadata": {
        "id": "16m1oyC7amcb"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Set up the configuration to be used by the model at code generation time."
      ],
      "metadata": {
        "id": "SB6MdcmNae38"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from transformers import GenerationConfig\n",
        "\n",
        "generation_config = GenerationConfig(\n",
        "    pad_token_id=50256,\n",
        "    truncation=True,\n",
        "    max_length=1000,\n",
        "    max_context_length=1000,\n",
        "    use_cache=True,\n",
        "    return_dict_in_generate=True,\n",
        "    output_scores=True\n",
        ")"
      ],
      "metadata": {
        "id": "uX_syvspwnCF"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Define two functions to load the prompts from the JSONL file downloaded from the ReCode repo."
      ],
      "metadata": {
        "id": "zZUGpvkOcCTD"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import json\n",
        "\n",
        "def stream_jsonl(filename):\n",
        "    with open(filename, \"r\") as fp:\n",
        "        for line in fp:\n",
        "            if any(not x.isspace() for x in line):\n",
        "                yield json.loads(line)\n",
        "\n",
        "def read_problems(eval_file):\n",
        "    return {str(task[\"task_id\"]): task for task in stream_jsonl(eval_file)}"
      ],
      "metadata": {
        "id": "YU42G0VlVkW7"
      },
      "execution_count": null,
      "outputs": []
    },
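    {
      "cell_type": "markdown",
      "source": [
        "A quick illustrative check of the two functions above (not part of the original evaluation flow; the file name *demo.jsonl* is made up for the example)."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Write a tiny JSONL file and read it back with read_problems\n",
        "with open(\"demo.jsonl\", \"w\") as f:\n",
        "    f.write('{\"task_id\": \"Demo/0\", \"prompt\": \"def f():\"}\\n')\n",
        "print(read_problems(\"demo.jsonl\"))  # {'Demo/0': {'task_id': 'Demo/0', 'prompt': 'def f():'}}"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },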
    {
      "cell_type": "markdown",
      "source": [
        "Load the prompt list from the JSONL file, then set up the path template used to save a single output file for each generated completion."
      ],
      "metadata": {
        "id": "92kkdNxZeH4Q"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "problems_file_path = 'datasets/nominal/HumanEval.jsonl'\n",
        "problems = read_problems(problems_file_path)\n",
        "num_samples = 1\n",
        "batch_size = 1"
      ],
      "metadata": {
        "id": "9ISqIl9V2QCs"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "\n",
        "output_dir = 'output_dir'\n",
        "fpath_format = os.path.join(\n",
        "        output_dir, \"output\", \"taskid-{task_idx}-gen{completion_idx}.json\"\n",
        "    )"
      ],
      "metadata": {
        "id": "UCgNzbz9VnLJ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Define a function that counts how many of the given files exist and are non-empty. It is used to skip tasks whose completions have already been generated."
      ],
      "metadata": {
        "id": "GMvlvNvkgr1X"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def count_files_present_nonemtpy(list_fname):\n",
        "    count = 0\n",
        "    for fname in list_fname:\n",
        "        if os.path.isfile(fname):\n",
        "            with open(fname, \"r\", encoding=\"utf8\") as f:\n",
        "                s = f.read()\n",
        "            if s != \"\":\n",
        "                count += 1\n",
        "    return count, len(list_fname)"
      ],
      "metadata": {
        "id": "hw5R4u4vANur"
      },
      "execution_count": null,
      "outputs": []
    },
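    {
      "cell_type": "markdown",
      "source": [
        "A quick illustrative check of the helper above (not part of the original evaluation flow): only files that exist and are non-empty should be counted."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import tempfile\n",
        "\n",
        "# Create one non-empty file, one empty file, and reference one missing file\n",
        "tmp_dir = tempfile.mkdtemp()\n",
        "non_empty = os.path.join(tmp_dir, \"a.json\")\n",
        "empty = os.path.join(tmp_dir, \"b.json\")\n",
        "missing = os.path.join(tmp_dir, \"c.json\")\n",
        "with open(non_empty, \"w\", encoding=\"utf8\") as f:\n",
        "    f.write(\"{}\")\n",
        "open(empty, \"w\").close()\n",
        "count, total = count_files_present_nonemtpy([non_empty, empty, missing])\n",
        "print(count, total)  # 1 3"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },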
    {
      "cell_type": "markdown",
      "source": [
        "Define a function that finds the position in a generated token sequence whose decoded prefix matches a target string."
      ],
      "metadata": {
        "id": "OpxEwj40gzZS"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def get_token_position_by_string(target_str, outputs, tokenizer, skip_special_tokens):\n",
        "    for position in range(1, len(outputs) + 1):\n",
        "        gen_str = tokenizer.decode(\n",
        "            outputs[:position],\n",
        "            skip_special_tokens=skip_special_tokens,\n",
        "            clean_up_tokenization_spaces=False,\n",
        "        )\n",
        "        if gen_str.rstrip() == target_str.rstrip():\n",
        "            return position  # not including outputs[position]\n",
        "        if gen_str.startswith(target_str) and target_str != \"\":\n",
        "            print(\"Cannot find an exact match, use approx!\")\n",
        "            print(f\"output length: {len(outputs)}\")\n",
        "            print(target_str)\n",
        "            print(\"-----------------------\")\n",
        "            print(gen_str)\n",
        "            return position\n",
        "    if target_str.rstrip() == \"\":\n",
        "        if target_str == \"\":\n",
        "            print(\"generated empty string!\")\n",
        "        else:\n",
        "            print(\"generated only white space!\")\n",
        "        return 0\n",
        "    print(f\"output length: {len(outputs)}\")\n",
        "    print(target_str)\n",
        "    print(\"-----------------------\")\n",
        "    print(gen_str)\n",
        "    raise RuntimeError(\"Cannot match prefix returned by AST.\")"
      ],
      "metadata": {
        "id": "ejxSz-w8X_4I"
      },
      "execution_count": null,
      "outputs": []
    },
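    {
      "cell_type": "markdown",
      "source": [
        "An illustrative check of the function above (assumes the tokenizer cell earlier in the notebook has been run; the demo snippet is made up for the example). Asking for the position of the fully decoded sequence should return its full token length."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Encode a short snippet, decode it back, and locate its position in the token sequence\n",
        "demo_ids = tokenizer(\"def add(a, b):\\n    return a + b\", return_tensors=\"pt\").input_ids[0]\n",
        "full_str = tokenizer.decode(demo_ids, skip_special_tokens=True)\n",
        "pos = get_token_position_by_string(full_str, demo_ids, tokenizer, True)\n",
        "print(pos, len(demo_ids))  # pos should equal the full sequence length here"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },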
    {
      "cell_type": "markdown",
      "source": [
        "Define a function to assess whether the generated code is valid Python. It uses Python's native *ast* module."
      ],
      "metadata": {
        "id": "2aXy4BmeiM2b"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import ast\n",
        "\n",
        "def is_valid_python(code):\n",
        "    try:\n",
        "        parsed_code = ast.parse(code)\n",
        "    except SyntaxError:\n",
        "        return False\n",
        "    except Exception as e:\n",
        "        print(\"Exception: \", e)\n",
        "        return False\n",
        "    return parsed_code"
      ],
      "metadata": {
        "id": "c9rikM2rYNho"
      },
      "execution_count": null,
      "outputs": []
    },
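    {
      "cell_type": "markdown",
      "source": [
        "A quick illustrative check (not part of the original evaluation flow): valid code parses to a truthy *ast.Module*, while a syntax error yields *False*."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Valid code parses to an ast.Module (truthy); a syntax error yields False\n",
        "print(bool(is_valid_python(\"def f(x):\\n    return x + 1\\n\")))  # True\n",
        "print(is_valid_python(\"def f(x):\\n    return x +\\n\"))  # False"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },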
    {
      "cell_type": "markdown",
      "source": [
        "Define a function that grabs the entire function code from the AST."
      ],
      "metadata": {
        "id": "qoomkpk-jih-"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def get_function_from_ast(parsed_code, code, option=\"func_ast_last\"):\n",
        "    assert option in [\n",
        "        \"func_ast_first\",\n",
        "        \"func_ast_last\",\n",
        "    ], f\"Invalid post process option {option}\"\n",
        "    for i in range(len(parsed_code.body)):\n",
        "        idx = -i - 1 if option == \"func_ast_last\" else i\n",
        "        if type(parsed_code.body[idx]) == ast.FunctionDef:\n",
        "            break\n",
        "        idx = None\n",
        "    assert idx is not None, \"No function found\"\n",
        "    function_segment = ast.get_source_segment(code, parsed_code.body[idx])\n",
        "    position = code.find(function_segment)\n",
        "    function_segment_plus_previous = code[: position + len(function_segment)]\n",
        "    return function_segment_plus_previous"
      ],
      "metadata": {
        "id": "lvmLaMC-YVit"
      },
      "execution_count": null,
      "outputs": []
    },
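    {
      "cell_type": "markdown",
      "source": [
        "An illustrative check of the function above on a made-up snippet (assumes the *ast* import cell earlier in the notebook has been run): with *func_ast_last*, the code up to and including the last function definition is returned."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "demo_code = \"import math\\n\\ndef first():\\n    return 1\\n\\ndef second():\\n    return 2\\n\"\n",
        "# Prints everything up to the end of second(), i.e. the snippet minus the trailing newline\n",
        "print(get_function_from_ast(ast.parse(demo_code), demo_code, option=\"func_ast_last\"))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },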
    {
      "cell_type": "markdown",
      "source": [
        "Define the function that decodes the generated sequences and keeps only completions that are valid Python code."
      ],
      "metadata": {
        "id": "XaPqxZC2kAKa"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def filter_valid_code(\n",
        "    true_str_input,\n",
        "    execution_prompt,\n",
        "    inputs,\n",
        "    sequences,\n",
        "    initial_context_length,\n",
        "    tokenizer,\n",
        "    task_id=None,\n",
        "    has_special_tokens=False,\n",
        "    post_process=\"greedy\",\n",
        "    replace_unk=False,\n",
        "    skip_special_tokens=True,\n",
        "    mean_logp=None,\n",
        "    use_language_tag=0,\n",
        "):\n",
        "    \"\"\"\n",
        "    Due to tokenizer non lossless-ness, the decoded original prompt and\n",
        "    the real original prompt are not the same.\n",
        "\n",
        "    Due to constrained generation, the original input tokens do not\n",
        "    necessarily match the new input tokens (but match by characters instead)\n",
        "    \"\"\"\n",
        "    samples = []\n",
        "    # need both to handle CG / non losslessness of tokenizer\n",
        "    decoded_context_string = tokenizer.batch_decode(\n",
        "        inputs[:, use_language_tag:initial_context_length],\n",
        "        skip_special_tokens=skip_special_tokens,\n",
        "        clean_up_tokenization_spaces=False,\n",
        "    )[0]\n",
        "    decoded_original_prompt = tokenizer.batch_decode(\n",
        "        inputs[:, use_language_tag:],\n",
        "        skip_special_tokens=skip_special_tokens,\n",
        "        clean_up_tokenization_spaces=False,\n",
        "    )[0]\n",
        "    processed_prompt = decoded_context_string\n",
        "\n",
        "    assert execution_prompt is None, \"only support execution_prompt is None here\"\n",
        "    processed_execution_prompt = processed_prompt\n",
        "\n",
        "    output_lists = sequences[:, initial_context_length:]\n",
        "    for sample_id, outputs in enumerate(output_lists):\n",
        "        is_valid = False\n",
        "        for position in range(len(outputs), 0, -1):\n",
        "            gen_up_to_pos_toks = outputs[:position]\n",
        "            gen_up_to_pos_str = tokenizer.decode(\n",
        "                gen_up_to_pos_toks,\n",
        "                skip_special_tokens=skip_special_tokens,\n",
        "                clean_up_tokenization_spaces=False,\n",
        "            )\n",
        "            origin_pred = gen_up_to_pos_str\n",
        "            code = (\n",
        "                processed_execution_prompt + gen_up_to_pos_str\n",
        "            )  # something is off for python\n",
        "            parsed_code = is_valid_python(code)\n",
        "            if parsed_code:\n",
        "                is_valid = True\n",
        "                # print(f\"valid at position {position} / {len(outputs) - 1}. \")\n",
        "                if post_process in (\"func_ast_first\", \"func_ast_last\"):\n",
        "                    try:\n",
        "                        function_segment_plus_previous = get_function_from_ast(\n",
        "                            parsed_code,\n",
        "                            code,\n",
        "                            option=post_process,\n",
        "                        )\n",
        "                        generated_part = function_segment_plus_previous[\n",
        "                            len(processed_execution_prompt) :\n",
        "                        ]\n",
        "                    except Exception as e:\n",
        "                        print(\"Something went wrong...\", e)\n",
        "                        generated_part = gen_up_to_pos_str\n",
        "                elif post_process == \"greedy\":\n",
        "                    generated_part = gen_up_to_pos_str\n",
        "                else:\n",
        "                    assert False, f\"post processing method {post_process} not supported\"\n",
        "\n",
        "                if task_id is None:\n",
        "                    return generated_part\n",
        "                if mean_logp is None:\n",
        "                    score = None\n",
        "                else:\n",
        "                    if post_process != \"greedy\":\n",
        "                        position = get_token_position_by_string(\n",
        "                            generated_part,\n",
        "                            outputs,\n",
        "                            tokenizer,\n",
        "                            skip_special_tokens,\n",
        "                        )\n",
        "                    if position == 0:\n",
        "                        score = -1e8\n",
        "                    else:\n",
        "                        score = mean_logp[sample_id][position - 1]\n",
        "\n",
        "                samples.append(\n",
        "                    dict(\n",
        "                        task_id=task_id,\n",
        "                        completion=(processed_prompt + generated_part)[\n",
        "                            len(decoded_original_prompt) :\n",
        "                        ],\n",
        "                        ori_pred=(processed_prompt + origin_pred)[\n",
        "                            len(decoded_original_prompt) :\n",
        "                        ],\n",
        "                        input=true_str_input,\n",
        "                        mean_logp=score,\n",
        "                    )\n",
        "                )\n",
        "                break\n",
        "        if not is_valid:\n",
        "            predictions = tokenizer.decode(\n",
        "                outputs,\n",
        "                skip_special_tokens=skip_special_tokens,\n",
        "                clean_up_tokenization_spaces=False,\n",
        "            )\n",
        "            origin_pred = predictions\n",
        "            print(\"Warning - no valid substring\")\n",
        "            if task_id is None:\n",
        "                return predictions\n",
        "            samples.append(\n",
        "                dict(\n",
        "                    task_id=task_id,\n",
        "                    completion=(processed_prompt + predictions)[\n",
        "                        len(decoded_original_prompt) :\n",
        "                    ],\n",
        "                    ori_pred=(processed_prompt + origin_pred)[\n",
        "                        len(decoded_original_prompt) :\n",
        "                    ],\n",
        "                    input=true_str_input,\n",
        "                    mean_logp=-1e8,\n",
        "                )\n",
        "            )\n",
        "\n",
        "    return samples"
      ],
      "metadata": {
        "id": "UjQdAWG7VMRd"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Create the output directory. It will be used to store both the generated code and the evaluation results."
      ],
      "metadata": {
        "id": "K7KM6agTkGio"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!mkdir -p output_dir/output"
      ],
      "metadata": {
        "id": "G0160WQcah7C"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Start the code generation process. Each sample prompt in the HumanEval dataset is tokenized and sent to the model for Python code generation. Each generated completion is then checked to verify that it really is valid Python code."
      ],
      "metadata": {
        "id": "sAHwnvj4jP84"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from tqdm import tqdm\n",
        "\n",
        "override_previous_results = False\n",
        "for enum_idx, task_id in enumerate(tqdm(problems)):\n",
        "        # assume TaskName/ID format. The id part need not be an integer.\n",
        "        task_idx = task_id.split(\"/\")[1]\n",
        "        if not override_previous_results:\n",
        "            fnames = [\n",
        "                fpath_format.format(task_idx=task_idx, completion_idx=_idx)\n",
        "                for _idx in range(num_samples)\n",
        "            ]\n",
        "            count, all_count = count_files_present_nonemtpy(fnames)\n",
        "            if count == all_count:\n",
        "                print(\n",
        "                    f\"Result caching mode: Skipping case {task_id}. Generated all {all_count}\"\n",
        "                )\n",
        "                continue\n",
        "            else:\n",
        "                print(\n",
        "                    f\"Result caching mode: Only {count} out of {all_count} were generated. Regenerating task {task_id}\"\n",
        "                )\n",
        "        execution_prompt = None\n",
        "        prompt = problems[task_id][\"prompt\"]\n",
        "        completion_idx = -1\n",
        "        for i in range(0, num_samples, batch_size):\n",
        "          num_return_sequences = min(num_samples - i, batch_size)\n",
        "          inputs = tokenizer(prompt, return_tensors=\"pt\").to(device)\n",
        "          input_ids = inputs.input_ids\n",
        "          output_dict = model.generate(input_ids,\n",
        "                                        generation_config=generation_config)\n",
        "\n",
        "          sequences = output_dict.sequences\n",
        "          initial_context_length = len(sequences[0]) - len(output_dict.scores)\n",
        "\n",
        "          predictions_post_eos = filter_valid_code(\n",
        "                        true_str_input=prompt,\n",
        "                        execution_prompt=execution_prompt,\n",
        "                        inputs=input_ids,\n",
        "                        sequences=sequences,\n",
        "                        initial_context_length=initial_context_length,\n",
        "                        tokenizer=tokenizer,\n",
        "                        task_id=task_id,\n",
        "                        post_process='func_ast_first',\n",
        "                        skip_special_tokens=True,\n",
        "                        mean_logp=None,\n",
        "                    )\n",
        "\n",
        "\n",
        "          for prediction in predictions_post_eos:\n",
        "              completion_idx += 1\n",
        "              fpath = fpath_format.format(\n",
        "                  task_idx=task_idx, completion_idx=completion_idx\n",
        "              )\n",
        "              prediction[\"language\"] = 'python'\n",
        "              with open(fpath, \"w\", encoding=\"utf8\") as _f:\n",
        "                  json.dump(prediction, _f)"
      ],
      "metadata": {
        "id": "atjknT5l7V1d"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Correctness Evaluation Process"
      ],
      "metadata": {
        "id": "yTrzmeWtq2N3"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "The code in the cell below overrides the corresponding function from the HumanEval package, because the line responsible for the execution of the generated code is commented out in the original for safety reasons. Please be careful when evaluating Python code generated by LLMs."
      ],
      "metadata": {
        "id": "qaDYxdSUBOVJ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from typing import Optional, Callable, Dict\n",
        "import ast\n",
        "import contextlib\n",
        "import faulthandler\n",
        "import io\n",
        "import os\n",
        "import multiprocessing\n",
        "import platform\n",
        "import signal\n",
        "import tempfile\n",
        "\n",
        "\n",
        "def custom_check_correctness(problem: Dict, completion: str, timeout: float,\n",
        "                      completion_id: Optional[int] = None) -> Dict:\n",
        "    \"\"\"\n",
        "    Evaluates the functional correctness of a completion by running the test\n",
        "    suite provided in the problem.\n",
        "\n",
        "    :param completion_id: an optional completion ID so we can match\n",
        "        the results later even if execution finishes asynchronously.\n",
        "    \"\"\"\n",
        "\n",
        "    def unsafe_execute():\n",
        "\n",
        "        with create_tempdir():\n",
        "\n",
        "            # These system calls are needed when cleaning up tempdir.\n",
        "            import os\n",
        "            import shutil\n",
        "            rmtree = shutil.rmtree\n",
        "            rmdir = os.rmdir\n",
        "            chdir = os.chdir\n",
        "\n",
        "            # Disable functionalities that can make destructive changes to the test.\n",
        "            reliability_guard()\n",
        "\n",
        "            # Construct the check program and run it.\n",
        "            check_program = (\n",
        "                problem[\"prompt\"] + completion + \"\\n\" +\n",
        "                problem[\"test\"] + \"\\n\" +\n",
        "                f\"check({problem['entry_point']})\"\n",
        "            )\n",
        "\n",
        "            try:\n",
        "                exec_globals = {}\n",
        "                with swallow_io():\n",
        "                    with time_limit(timeout):\n",
        "                      exec(check_program, exec_globals)\n",
        "                      result.append(\"passed\")\n",
        "            except TimeoutException:\n",
        "                result.append(\"timed out\")\n",
        "            except BaseException as e:\n",
        "                result.append(f\"failed: {e}\")\n",
        "\n",
        "            # Needed for cleaning up.\n",
        "            shutil.rmtree = rmtree\n",
        "            os.rmdir = rmdir\n",
        "            os.chdir = chdir\n",
        "\n",
        "    manager = multiprocessing.Manager()\n",
        "    result = manager.list()\n",
        "\n",
        "    p = multiprocessing.Process(target=unsafe_execute)\n",
        "    p.start()\n",
        "    p.join(timeout=timeout + 1)\n",
        "    if p.is_alive():\n",
        "        p.kill()\n",
        "\n",
        "    if not result:\n",
        "        result.append(\"timed out\")\n",
        "\n",
        "    return dict(\n",
        "        task_id=problem[\"task_id\"],\n",
        "        passed=result[0] == \"passed\",\n",
        "        result=result[0],\n",
        "        completion_id=completion_id,\n",
        "    )\n",
        "\n",
        "\n",
        "@contextlib.contextmanager\n",
        "def time_limit(seconds: float):\n",
        "    def signal_handler(signum, frame):\n",
        "        raise TimeoutException(\"Timed out!\")\n",
        "    signal.setitimer(signal.ITIMER_REAL, seconds)\n",
        "    signal.signal(signal.SIGALRM, signal_handler)\n",
        "    try:\n",
        "        yield\n",
        "    finally:\n",
        "        signal.setitimer(signal.ITIMER_REAL, 0)\n",
        "\n",
        "\n",
        "@contextlib.contextmanager\n",
        "def swallow_io():\n",
        "    stream = WriteOnlyStringIO()\n",
        "    with contextlib.redirect_stdout(stream):\n",
        "        with contextlib.redirect_stderr(stream):\n",
        "            with redirect_stdin(stream):\n",
        "                yield\n",
        "\n",
        "\n",
        "@contextlib.contextmanager\n",
        "def create_tempdir():\n",
        "    with tempfile.TemporaryDirectory() as dirname:\n",
        "        with chdir(dirname):\n",
        "            yield dirname\n",
        "\n",
        "\n",
        "class TimeoutException(Exception):\n",
        "    pass\n",
        "\n",
        "\n",
        "class WriteOnlyStringIO(io.StringIO):\n",
        "    \"\"\" StringIO that throws an exception when it's read from \"\"\"\n",
        "\n",
        "    def read(self, *args, **kwargs):\n",
        "        raise IOError\n",
        "\n",
        "    def readline(self, *args, **kwargs):\n",
        "        raise IOError\n",
        "\n",
        "    def readlines(self, *args, **kwargs):\n",
        "        raise IOError\n",
        "\n",
        "    def readable(self, *args, **kwargs):\n",
        "        \"\"\" Returns True if the IO object can be read. \"\"\"\n",
        "        return False\n",
        "\n",
        "\n",
        "class redirect_stdin(contextlib._RedirectStream):  # type: ignore\n",
        "    _stream = 'stdin'\n",
        "\n",
        "\n",
        "@contextlib.contextmanager\n",
        "def chdir(root):\n",
        "    if root == \".\":\n",
        "        yield\n",
        "        return\n",
        "    cwd = os.getcwd()\n",
        "    os.chdir(root)\n",
        "    try:\n",
        "        yield\n",
        "    except BaseException as exc:\n",
        "        raise exc\n",
        "    finally:\n",
        "        os.chdir(cwd)\n",
        "\n",
        "\n",
        "def reliability_guard(maximum_memory_bytes: Optional[int] = None):\n",
        "    \"\"\"\n",
        "    This disables various destructive functions and prevents the generated code\n",
        "    from interfering with the test (e.g. fork bomb, killing other processes,\n",
        "    removing filesystem files, etc.)\n",
        "\n",
        "    WARNING\n",
        "    This function is NOT a security sandbox. Untrusted code, including\n",
        "    model-generated code, should not be blindly executed outside of one. See the\n",
        "    Codex paper for more information about OpenAI's code sandbox, and proceed\n",
        "    with caution.\n",
        "    \"\"\"\n",
        "\n",
        "    if maximum_memory_bytes is not None:\n",
        "        import resource\n",
        "        resource.setrlimit(resource.RLIMIT_AS, (maximum_memory_bytes, maximum_memory_bytes))\n",
        "        resource.setrlimit(resource.RLIMIT_DATA, (maximum_memory_bytes, maximum_memory_bytes))\n",
        "        if not platform.uname().system == 'Darwin':\n",
        "            resource.setrlimit(resource.RLIMIT_STACK, (maximum_memory_bytes, maximum_memory_bytes))\n",
        "\n",
        "    faulthandler.disable()\n",
        "\n",
        "    import builtins\n",
        "    builtins.exit = None\n",
        "    builtins.quit = None\n",
        "\n",
        "    import os\n",
        "    os.environ['OMP_NUM_THREADS'] = '1'\n",
        "\n",
        "    os.kill = None\n",
        "    os.system = None\n",
        "    os.putenv = None\n",
        "    os.remove = None\n",
        "    os.removedirs = None\n",
        "    os.rmdir = None\n",
        "    os.fchdir = None\n",
        "    os.setuid = None\n",
        "    os.fork = None\n",
        "    os.forkpty = None\n",
        "    os.killpg = None\n",
        "    os.rename = None\n",
        "    os.renames = None\n",
        "    os.truncate = None\n",
        "    os.replace = None\n",
        "    os.unlink = None\n",
        "    os.fchmod = None\n",
        "    os.fchown = None\n",
        "    os.chmod = None\n",
        "    os.chown = None\n",
        "    os.chroot = None\n",
        "    os.fchdir = None\n",
        "    os.lchflags = None\n",
        "    os.lchmod = None\n",
        "    os.lchown = None\n",
        "    os.getcwd = None\n",
        "    os.chdir = None\n",
        "\n",
        "    import shutil\n",
        "    shutil.rmtree = None\n",
        "    shutil.move = None\n",
        "    shutil.chown = None\n",
        "\n",
        "    import subprocess\n",
        "    subprocess.Popen = None  # type: ignore\n",
        "\n",
        "    builtins.help = None\n",
        "\n",
        "    import sys\n",
        "    sys.modules['ipdb'] = None\n",
        "    sys.modules['joblib'] = None\n",
        "    sys.modules['resource'] = None\n",
        "    sys.modules['psutil'] = None\n",
        "    sys.modules['tkinter'] = None\n"
      ],
      "metadata": {
        "id": "Eg0BAwYnmkyx"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "The cell below implements a custom function to run the code correctness checks, working around the limitation of the HumanEval library explained above."
      ],
      "metadata": {
        "id": "B37JuADJBv4Q"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from collections import defaultdict, Counter\n",
        "from concurrent.futures import ThreadPoolExecutor, as_completed\n",
        "from typing import List, Union, Iterable, Dict\n",
        "import itertools\n",
        "\n",
        "import numpy as np\n",
        "import tqdm\n",
        "\n",
        "from human_eval.data import HUMAN_EVAL, read_problems, stream_jsonl, write_jsonl\n",
        "\n",
        "def estimate_pass_at_k(\n",
        "    num_samples: Union[int, List[int], np.ndarray],\n",
        "    num_correct: Union[List[int], np.ndarray],\n",
        "    k: int\n",
        ") -> np.ndarray:\n",
        "    \"\"\"\n",
        "    Estimates pass@k of each problem and returns them in an array.\n",
        "    \"\"\"\n",
        "\n",
        "    def estimator(n: int, c: int, k: int) -> float:\n",
        "        \"\"\"\n",
        "        Calculates 1 - comb(n - c, k) / comb(n, k).\n",
        "        \"\"\"\n",
        "        if n - c < k:\n",
        "            return 1.0\n",
        "        return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))\n",
        "\n",
        "    if isinstance(num_samples, int):\n",
        "        num_samples_it = itertools.repeat(num_samples, len(num_correct))\n",
        "    else:\n",
        "        assert len(num_samples) == len(num_correct)\n",
        "        num_samples_it = iter(num_samples)\n",
        "\n",
        "    return np.array([estimator(int(n), int(c), k) for n, c in zip(num_samples_it, num_correct)])\n",
        "\n",
        "\n",
        "def custom_evaluate_functional_correctness(\n",
        "    sample_file: str,\n",
        "    k: List[int] = [1, 10, 100],\n",
        "    n_workers: int = 4,\n",
        "    timeout: float = 3.0,\n",
        "    problem_file: str = HUMAN_EVAL,\n",
        "):\n",
        "    \"\"\"\n",
        "    Evaluates the functional correctness of generated samples, and writes\n",
        "    results to f\"{sample_file}_results.jsonl\"\n",
        "    \"\"\"\n",
        "\n",
        "    problems = read_problems(problem_file)\n",
        "\n",
        "    # Check the generated samples against test suites.\n",
        "    with ThreadPoolExecutor(max_workers=n_workers) as executor:\n",
        "\n",
        "        futures = []\n",
        "        completion_id = Counter()\n",
        "        n_samples = 0\n",
        "        results = defaultdict(list)\n",
        "\n",
        "        print(\"Reading samples...\")\n",
        "        for sample in tqdm.tqdm(stream_jsonl(sample_file)):\n",
        "            task_id = sample[\"task_id\"]\n",
        "            completion = sample[\"completion\"]\n",
        "            args = (problems[task_id], completion, timeout, completion_id[task_id])\n",
        "            future = executor.submit(custom_check_correctness, *args)\n",
        "            futures.append(future)\n",
        "            completion_id[task_id] += 1\n",
        "            n_samples += 1\n",
        "\n",
        "        assert len(completion_id) == len(problems), \"Some problems are not attempted.\"\n",
        "\n",
        "        print(\"Running test suites...\")\n",
        "        for future in tqdm.tqdm(as_completed(futures), total=len(futures)):\n",
        "            result = future.result()\n",
        "            results[result[\"task_id\"]].append((result[\"completion_id\"], result))\n",
        "\n",
        "    # Calculate pass@k.\n",
        "    total, correct = [], []\n",
        "    for result in results.values():\n",
        "        result.sort()\n",
        "        passed = [r[1][\"passed\"] for r in result]\n",
        "        total.append(len(passed))\n",
        "        correct.append(sum(passed))\n",
        "    total = np.array(total)\n",
        "    correct = np.array(correct)\n",
        "\n",
        "    ks = k\n",
        "    pass_at_k = {f\"pass@{k}\": estimate_pass_at_k(total, correct, k).mean()\n",
        "                 for k in ks if (total >= k).all()}\n",
        "\n",
        "    # Finally, save the results in one file:\n",
        "    def combine_results():\n",
        "        for sample in stream_jsonl(sample_file):\n",
        "            task_id = sample[\"task_id\"]\n",
        "            result = results[task_id].pop(0)\n",
        "            sample[\"result\"] = result[1][\"result\"]\n",
        "            sample[\"passed\"] = result[1][\"passed\"]\n",
        "            yield sample\n",
        "\n",
        "    out_file = sample_file + \"_results.jsonl\"\n",
        "    print(f\"Writing results to {out_file}...\")\n",
        "    write_jsonl(out_file, tqdm.tqdm(combine_results(), total=n_samples))\n",
        "\n",
        "    return pass_at_k\n"
      ],
      "metadata": {
        "id": "8uLDejbelvpN"
      },
      "execution_count": null,
      "outputs": []
    },
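    {
      "cell_type": "markdown",
      "source": [
        "As a sanity check on the `estimate_pass_at_k` function above, the unbiased pass@k formula 1 - C(n - c, k) / C(n, k) can also be computed directly with `math.comb` (the toy counts below are made up for illustration, not real evaluation results): with k = 1 it reduces to the raw pass rate c/n, and it saturates at 1.0 once k exceeds the number of failing samples."
      ],
      "metadata": {
        "id": "passAtKSanityMd"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Direct binomial form of pass@k, for comparison with the product-based\n",
        "# estimator defined above. Toy counts only, not real evaluation results.\n",
        "import math\n",
        "\n",
        "def pass_at_k_comb(n: int, c: int, k: int) -> float:\n",
        "    if n - c < k:\n",
        "        return 1.0\n",
        "    return 1.0 - math.comb(n - c, k) / math.comb(n, k)\n",
        "\n",
        "# With k=1 the formula reduces to the raw pass rate c/n.\n",
        "print(pass_at_k_comb(10, 3, 1))\n",
        "# With fewer than k failing samples, pass@k saturates at 1.0.\n",
        "print(pass_at_k_comb(10, 3, 8))\n"
      ],
      "metadata": {
        "id": "passAtKSanityCode"
      },
      "execution_count": null,
      "outputs": []
    },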
    {
      "cell_type": "markdown",
      "source": [
        "Execute the code correctness evaluation on the 164 samples. The results are aggregated and saved in a single JSONL file, where the *passed* attribute of each record reports whether that sample passed its test suite (*true* and *false* are the only possible values)."
      ],
      "metadata": {
        "id": "g-j--npQCCMC"
      }
    },
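    {
      "cell_type": "markdown",
      "source": [
        "The evaluator streams records from a JSONL file in the HumanEval sample format: each line is a JSON object with a `task_id` matching a problem in the dataset and a `completion` holding the generated code. The sketch below builds one such record with a made-up placeholder completion (illustrative only); after evaluation, each record additionally carries the *result* and *passed* fields."
      ],
      "metadata": {
        "id": "sampleRecordMd"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Illustrative only: a single record in the samples JSONL format expected\n",
        "# by the evaluator. The completion below is a made-up placeholder.\n",
        "import json\n",
        "\n",
        "sample = {\n",
        "    \"task_id\": \"HumanEval/0\",\n",
        "    \"completion\": \"    return result\\n\",\n",
        "}\n",
        "print(json.dumps(sample))\n"
      ],
      "metadata": {
        "id": "sampleRecordCode"
      },
      "execution_count": null,
      "outputs": []
    },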
    {
      "cell_type": "code",
      "source": [
        "import fire\n",
        "import glob\n",
        "import json\n",
        "import os\n",
        "import sys\n",
        "\n",
        "def entry_point(\n",
        "    sample_dir: str = '/content/output_dir/output/',\n",
        "    k: str = \"1,10,100\",\n",
        "    n_workers: int = 2,\n",
        "    timeout: float = 3.0,\n",
        "    problem_file: str = '/content/datasets/nominal/HumanEval.jsonl',\n",
        "):\n",
        "    \"\"\"\n",
        "    Evaluates the functional correctness of the generated samples found\n",
        "    in sample_dir, and writes results to f\"{sample_file}_results.jsonl\"\n",
        "    \"\"\"\n",
        "    k = list(map(int, k.split(\",\")))\n",
        "\n",
        "    # Create a list of all generated sample files\n",
        "    sample_files = glob.glob(os.path.join(sample_dir, '*.json'))\n",
        "\n",
        "    # Read samples from all generated files and combine them into a list\n",
        "    samples = []\n",
        "    for sample_file in sample_files:\n",
        "        with open(sample_file, \"r\", encoding=\"utf8\") as f:\n",
        "            try:\n",
        "                samples.append(json.load(f))\n",
        "            except json.JSONDecodeError:\n",
        "                print(f\"Error decoding JSON from {sample_file}\")\n",
        "                continue\n",
        "\n",
        "    # Create a dummy samples.jsonl file for the evaluation function\n",
        "    # In a real scenario, you might want to process the samples directly\n",
        "    # without writing to an intermediate file.\n",
        "    temp_samples_file = '/content/output_dir/temp_samples.jsonl'\n",
        "    with open(temp_samples_file, 'w', encoding='utf8') as f:\n",
        "        for sample in samples:\n",
        "            json.dump(sample, f)\n",
        "            f.write('\\n')\n",
        "\n",
        "\n",
        "    results = custom_evaluate_functional_correctness(temp_samples_file, k, n_workers, timeout, problem_file)\n",
        "    print(results)\n",
        "\n",
        "    # Clean up the temporary file\n",
        "    os.remove(temp_samples_file)\n",
        "\n",
        "\n",
        "def main():\n",
        "    fire.Fire(entry_point)\n",
        "\n",
        "\n",
        "main()"
      ],
      "metadata": {
        "id": "3dVBQIj-HUtH"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}