
PyBench: Evaluate LLM Agent on Real World Tasks

📃 Paper • 🤗 Data (PyInstruct) • 🤗 Model (PyLlama3)

PyBench is a comprehensive benchmark for evaluating LLMs on real-world coding tasks, including chart analysis, text analysis, image/audio editing, complex math, and software/website development.
We collect files from Kaggle, arXiv, and other sources and automatically generate queries according to the type and content of each file.

Overview

Why PyBench?

The LLM Agent, equipped with a code interpreter, is capable of automatically solving real-world coding tasks, such as data analysis and image processing. However, existing benchmarks primarily focus either on simplistic tasks, such as completing a few lines of code, or on extremely complex and specific tasks at the repository level, neither of which is representative of everyday coding work. To address this gap, we introduce PyBench, a benchmark that encompasses 6 main categories of real-world tasks, covering more than 10 types of files.

How PyBench Works

πŸ“ PyInstruct

To enhance the model's ability on PyBench, we generate a homologous dataset: PyInstruct. PyInstruct contains multi-turn interactions between the model and files, stimulating the model's capabilities in coding, debugging, and multi-turn complex task solving. Compared to other datasets that focus on multi-turn coding ability, PyInstruct has longer turns and more tokens per trajectory.

Dataset statistics. Token statistics are computed using the Llama-2 tokenizer.

🪄 PyLlama

We trained Llama3-8B-base on PyInstruct, CodeActInstruct, CodeFeedback, and the Jupyter Notebook Corpus to obtain PyLlama3, which achieves outstanding performance on PyBench.

🚀 Model Evaluation with PyBench!

Demonstration of the chat interface.

Environment Setup:

Begin by establishing the required environment:

conda env create -f environment.yml
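
Then activate it. A minimal sketch, assuming the environment defined in environment.yml is named pybench (substitute whatever the name: field in that file declares):

conda activate pybench   # env name taken from environment.yml (assumed)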

Model Configuration

Initialize a local server using the vLLM framework, which defaults to port 8001:

bash SetUpModel.sh

A Jinja chat template is required to launch a vLLM server; commonly used templates can be found in the ./jinja/ directory.
Before starting the vLLM server, specify the model path and Jinja template path in SetUpModel.sh.
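
For reference, a minimal sketch of what SetUpModel.sh typically boils down to; the model path and template name below are placeholders, not the script's actual contents:

python -m vllm.entrypoints.openai.api_server \
    --model /path/to/PyLlama3 \
    --chat-template ./jinja/<your template>.jinja \
    --port 8001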

Configuration Adjustments

Specify your model's path and the server port in ./config/model.yaml. This configuration file also allows for customization of the system prompts.
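
A minimal sketch of such a config, written from the shell with illustrative key names only (the real schema is whatever the example files under ./config/ use; the file name my_model.yaml is hypothetical):

cat > ./config/my_model.yaml <<'EOF'
# illustrative keys -- check the shipped configs in ./config/ for the actual schema
model_path: /path/to/PyLlama3
port: 8001
system_prompt: "You are an AI assistant with access to a Python code interpreter."
EOF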

Execution on PyBench

Update the output trajectory file path before running the inference script (from the repository root):

python inference.py --config_path ./config/<your config>.yaml --task_path ./data/meta/task.json --output_path <your trajectory.jsonl path>
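
For example, using the hypothetical config sketched above and writing the trajectory under ./output (all paths are placeholders):

python inference.py --config_path ./config/my_model.yaml --task_path ./data/meta/task.json --output_path ./output/trajectory.jsonl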

Unit Testing Procedure

  • Step 1: Store the output files in ./output.
  • Step 2: Define the trajectory file path in ./data/unit_test/enter_point.py.
  • Step 3: Execute the unit test script:
    python data/unit_test/enter_point.py  
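
Putting the three steps together, a minimal sketch, assuming the trajectory from the previous step is ./output/trajectory.jsonl and that enter_point.py is edited by hand to point at it:

mkdir -p ./output                      # step 1: generated files and the trajectory live here
# step 2: edit ./data/unit_test/enter_point.py so it reads ./output/trajectory.jsonl
python data/unit_test/enter_point.py   # step 3: run the unit tests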
    

📊 LeaderBoard

LLM Leaderboard

📚 Citation

@misc{zhang2024pybenchevaluatingllmagent,
      title={PyBench: Evaluating LLM Agent on various real-world coding tasks}, 
      author={Yaolun Zhang and Yinxu Pan and Yudong Wang and Jie Cai and Zhi Zheng and Guoyang Zeng and Zhiyuan Liu},
      year={2024},
      eprint={2407.16732},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2407.16732}, 
}