# Evaluation

This repository contains code for evaluating the performance of LLMs in both single-turn and multi-turn scenarios.

To set up the environment, install the required dependencies:
```bash
pip install -r evaluation/requirements.txt
```

## Single-turn Evaluation

For single-turn evaluation, inference, post-processing, and metric computation are run as separate steps:

1. Execute `bash evaluation/evaluate/scripts/01_gen_single.sh` to generate model results.
2. Perform post-processing on the model output by executing `bash evaluation/evaluate/scripts/02_sanitize_single.sh`.
3. Finally, compute evaluation metrics by executing `bash evaluation/evaluate/scripts/03_eval_single.sh`.

## Multi-turn Evaluation

### Multi-turn Evaluation with Execution Feedback

Evaluate the performance of the models with execution feedback using the provided scripts:

- For OpenCodeInterpreter:
  ```bash
  bash evaluation/evaluate/scripts/04_execution_feedback_multiround_OpenCodeInterpreter.sh
  ```

- For OpenAI's GPT Models:
  Before running the evaluation, implement the `get_predict` function in `chat_with_gpt.py` so the script can query the GPT models (a hedged sketch follows this list). Then execute:
  ```bash
  bash evaluation/evaluate/scripts/05_execution_feedback_multiround_gpt.sh
  ```
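
The exact signature of `get_predict` is defined by the repository's own code, but a minimal sketch of one possible implementation, assuming the function receives the conversation as a list of chat messages and should return the model's reply as a string, might look like the following. The model name, temperature, and client setup are placeholders, not part of the original:

```python
# Hypothetical sketch of get_predict in chat_with_gpt.py.
# Assumes: `messages` is a list of {"role": ..., "content": ...} dicts and the
# function returns the assistant's reply as a string. Adjust the signature to
# match what the evaluation script actually passes in.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def get_predict(messages, model="gpt-4", temperature=0.0):
    """Send the conversation so far to the OpenAI API and return the reply."""
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,
    )
    return response.choices[0].message.content
```

Whatever signature the script expects, the key requirement is the same: given the accumulated dialogue, return the model's next response so the multi-turn loop can continue.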

### Multi-turn Evaluation with GPT-4 Simulated Human Feedback

Execute either of the following scripts to evaluate the models with simulated human feedback:

- For OpenCodeInterpreter:
  ```bash
  bash evaluation/evaluate/scripts/06_human_feedback_multiround_OpenCodeInterpreter.sh
  ```

- For Oracle OpenCodeInterpreter:
  ```bash
  bash evaluation/evaluate/scripts/07_human_feedback_multiround_Oracle_OpenCodeInterpreter.sh
  ```

These scripts facilitate the multi-turn evaluation with simulated human feedback.

This evaluation code is based on [EvalPlus](https://github.com/evalplus/evalplus), adapted here for the single-turn and multi-turn evaluation described above. We extend our gratitude to the contributors of EvalPlus for their foundational work.