---
dataset_info:
  features:
    - name: repo_name
      dtype: string
    - name: repo_commit
      dtype: string
    - name: repo_content
      dtype: string
    - name: repo_readme
      dtype: string
  splits:
    - name: train
      num_bytes: 29227644
      num_examples: 158
    - name: test
      num_bytes: 8765331
      num_examples: 40
  download_size: 12307532
  dataset_size: 37992975
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
license: apache-2.0
task_categories:
  - summarization
tags:
  - code
size_categories:
  - n<1K
---

# Generate README Eval

The generate-readme-eval dataset (train split) and benchmark (test split) evaluate how effectively LLMs can summarize an entire GitHub repository in the form of a README.md file. The dataset is curated from the top 400 real Python repositories on GitHub with at least 1000 stars and 100 forks. The script used to generate the dataset can be found here. We restrict ourselves to repositories that are less than 100k tokens in size, so that the entire repo fits into the context of an LLM in a single call. The train split can be used to fine-tune your own model; the results reported here are for the test split.
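For reference, the splits can be loaded with the Hugging Face `datasets` library. The dataset id below is assumed from this card; adjust it if your copy lives under a different path.

```python
# Minimal sketch for loading the dataset with the Hugging Face `datasets` library.
# The dataset id is an assumption based on this card.
from datasets import load_dataset

dataset = load_dataset("codelion/generate-readme-eval")

train = dataset["train"]  # 158 repositories, usable for fine-tuning
test = dataset["test"]    # 40 repositories, used for the reported benchmark results

example = test[0]
print(example["repo_name"], example["repo_commit"])
print(len(example["repo_content"]), "characters of repository content")
```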

To evaluate an LLM on the benchmark, use the evaluation script given here. During evaluation we prompt the LLM to generate a structured README.md file from the entire contents of the repository (`repo_content`). We then evaluate the generated response by comparing it with the repository's actual README file across several different metrics.
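As an illustration of the evaluation flow, a single benchmark item can be scored by prompting a model with `repo_content` and comparing the output against `repo_readme`. The prompt wording, the OpenAI-compatible client, and the `compute_metrics` helper below are assumptions for the sketch, not the exact setup of the official evaluation script.

```python
# Sketch of scoring one item: generate a README from repo_content, then compare it
# against the repository's actual README (repo_readme).
# The prompt text and the use of an OpenAI-compatible client are assumptions.
from openai import OpenAI

client = OpenAI()

def generate_readme(repo_content: str, model: str = "gpt-4o-mini") -> str:
    prompt = (
        "You are given the full contents of a GitHub repository.\n"
        "Write a well-structured README.md for it.\n\n"
        f"{repo_content}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# candidate = generate_readme(example["repo_content"])
# scores = compute_metrics(candidate, example["repo_readme"])  # hypothetical helper
```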

In addition to traditional NLP metrics like BLEU, ROUGE scores and cosine similarity, we also compute custom metrics that capture structural similarity, code consistency, readability (FRES) and information retrieval (from code to README). The final score is a weighted average of these metrics. The weights used for the final score are shown below.

```python
weights = {
    'bleu': 0.1,
    'rouge-1': 0.033,
    'rouge-2': 0.033,
    'rouge-l': 0.034,
    'cosine_similarity': 0.1,
    'structural_similarity': 0.1,
    'information_retrieval': 0.2,
    'code_consistency': 0.2,
    'readability': 0.2
}
```
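For illustration, the final score is just the dot product of the per-metric scores (in the 0-1 range, as in the raw metric dumps shown later) and these weights. The snippet below is a sketch, not the evaluation script itself:

```python
# Sketch (not the official script): combine per-metric scores, assumed to be in the
# 0-1 range, into the final weighted score using the weights listed above.
weights = {
    'bleu': 0.1, 'rouge-1': 0.033, 'rouge-2': 0.033, 'rouge-l': 0.034,
    'cosine_similarity': 0.1, 'structural_similarity': 0.1,
    'information_retrieval': 0.2, 'code_consistency': 0.2, 'readability': 0.2,
}

def weighted_score(metrics: dict) -> float:
    """Weighted average of the individual metric scores (weights sum to 1.0)."""
    return sum(weights[name] * metrics[name] for name in weights)

# Example using the zero-shot gemini-1.5-flash-exp-0827 row from the leaderboard:
metrics = {
    'bleu': 0.0166, 'rouge-1': 0.1600, 'rouge-2': 0.0388, 'rouge-l': 0.1533,
    'cosine_similarity': 0.4187, 'structural_similarity': 0.2359,
    'information_retrieval': 0.7650, 'code_consistency': 0.0786, 'readability': 0.4334,
}
print(round(weighted_score(metrics) * 100, 2))  # -> 33.43
```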

At the end of evaluation the script will print the metrics and store the entire run in a log file. If you want to add your model to the leaderboard please create a PR with the log file of the run and details about the model.

If we use the existing README.md files in the repositories as the golden output, we get a score of 56.79 on this benchmark. You can validate this by running the evaluation script with the `--oracle` flag. The oracle run log is available here.

## Leaderboard

The current SOTA model on this benchmark in the zero-shot setting is Gemini-1.5-Flash-Exp-0827. It scores the highest across a number of different metrics.

```
bleu: 0.0072 rouge-1: 0.1196 rouge-2: 0.0169 rouge-l: 0.1151 cosine_similarity: 0.3029 structural_similarity: 0.2416 information_retrieval: 0.4450 code_consistency: 0.0796 readability: 0.3790 weighted_score: 0.2443
```

| Model | Score | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-l | Cosine-Sim | Structural-Sim | Info-Ret | Code-Consistency | Readability | Logs |
|---|---|---|---|---|---|---|---|---|---|---|---|
| llama3.1-8b-instruct | 24.43 | 0.72 | 11.96 | 1.69 | 11.51 | 30.29 | 24.16 | 44.50 | 7.96 | 37.90 | link |
| mistral-nemo-instruct-2407 | 25.62 | 1.09 | 11.24 | 1.70 | 10.94 | 26.62 | 24.26 | 52.00 | 8.80 | 37.30 | link |
| gpt-4o-mini-2024-07-18 | 32.16 | 1.64 | 15.46 | 3.85 | 14.84 | 40.57 | 23.81 | 72.50 | 4.77 | 44.81 | link |
| gpt-4o-2024-08-06 | 33.13 | 1.68 | 15.36 | 3.59 | 14.81 | 40.00 | 23.91 | 74.50 | 8.36 | 44.33 | link |
| gemini-1.5-flash-8b-exp-0827 | 32.12 | 1.36 | 14.66 | 3.31 | 14.14 | 38.31 | 23.00 | 70.00 | 7.43 | 46.47 | link |
| gemini-1.5-flash-exp-0827 | 33.43 | 1.66 | 16.00 | 3.88 | 15.33 | 41.87 | 23.59 | 76.50 | 7.86 | 43.34 | link |
| gemini-1.5-pro-exp-0827 | 32.51 | 2.55 | 15.27 | 4.97 | 14.86 | 41.09 | 23.94 | 72.82 | 6.73 | 43.34 | link |
| oracle-score | 56.79 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 98.24 | 59.00 | 11.01 | 14.84 | link |

## Few-Shot

This benchmark is interesting because it is not easy to few-shot your way to better performance. There are a couple of reasons for that:

  1. The average context length required for each item can be up to 100k tokens, which puts it out of reach of most models except Google Gemini, which has a context length of up to 2 million tokens (see the sketch after this list).

  2. There is a trade-off in accuracy inherent in the benchmark, as adding more examples makes some of the metrics, like information_retrieval and readability, worse. At larger context lengths models do not have perfect recall and may miss important information.
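To make reason 1 concrete, here is a rough sketch of how a k-shot prompt could be assembled from the train split. Each shot prepends a full (`repo_content`, `repo_readme`) pair, so the prompt grows by up to roughly 100k tokens per example. The prompt layout and dataset id are assumptions, not the exact construction used for the runs below.

```python
# Sketch: build a k-shot prompt from the train split. Each shot adds a full
# (repo_content, repo_readme) pair, which is why the context blows up quickly.
# Assumed dataset id and prompt layout, not the official evaluation setup.
from datasets import load_dataset

dataset = load_dataset("codelion/generate-readme-eval")

def build_few_shot_prompt(target_repo_content: str, k: int = 1) -> str:
    shots = []
    for item in dataset["train"].select(range(k)):
        shots.append(
            "Repository:\n" + item["repo_content"] +
            "\n\nREADME.md:\n" + item["repo_readme"]
        )
    shots.append("Repository:\n" + target_repo_content + "\n\nREADME.md:\n")
    return "\n\n---\n\n".join(shots)
```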

Our experiments with few-shot prompts confirm this: the 1-shot run improves the weighted score only slightly over zero-shot (35.40 vs. 33.43), with gains in BLEU, ROUGE and structural similarity offset by drops in information_retrieval and readability, and adding more shots does not help further.

```
bleu: 0.1924 rouge-1: 0.3231 rouge-2: 0.2148 rouge-l: 0.3174 cosine_similarity: 0.6149 structural_similarity: 0.3317 information_retrieval: 0.5950 code_consistency: 0.1148 readability: 0.2765 weighted_score: 0.3397
```

| Model | Score | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-l | Cosine-Sim | Structural-Sim | Info-Ret | Code-Consistency | Readability | Logs |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0-shot-gemini-1.5-flash-exp-0827 | 33.43 | 1.66 | 16.00 | 3.88 | 15.33 | 41.87 | 23.59 | 76.50 | 7.86 | 43.34 | link |
| 1-shot-gemini-1.5-flash-exp-0827 | 35.40 | 21.81 | 34.00 | 24.97 | 33.61 | 61.53 | 37.60 | 61.00 | 12.89 | 27.22 | link |
| 3-shot-gemini-1.5-flash-exp-0827 | 33.43 | 1.66 | 16.00 | 3.88 | 15.33 | 41.87 | 23.59 | 76.50 | 7.86 | 43.34 | link |
| 5-shot-gemini-1.5-flash-exp-0827 | 33.97 | 19.24 | 32.31 | 21.48 | 31.74 | 61.49 | 33.17 | 59.50 | 11.48 | 27.65 | link |