---
title: codebleu
tags:
  - evaluate
  - metric
  - code
  - codebleu
description: Unofficial `CodeBLEU` implementation that supports Linux, MacOS and Windows.
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
---

# Metric Card for codebleu

This repository contains an unofficial `CodeBLEU` implementation that supports Linux, macOS and Windows. It is available through PyPI and the `evaluate` library.

Available for: Python, C, C#, C++, Java, JavaScript, PHP, Go, Ruby, Rust.


The code is based on the original CodeXGLUE/CodeBLEU and the updated version from XLCoST/CodeBLEU. It has been refactored, tested, built for macOS and Windows, and improved in multiple ways to enhance usability.

## Metric Description

> An ideal evaluation metric should consider the grammatical correctness and the logic correctness. We propose weighted n-gram match and syntactic AST match to measure grammatical correctness, and introduce semantic data-flow match to calculate logic correctness.

(from the CodeXGLUE CodeBLEU repo)

In a nutshell, CodeBLEU is a weighted combination of n-gram match (BLEU), weighted n-gram match (BLEU-weighted), AST match and data-flow match scores.
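
Concretely, the CodeBLEU paper defines the combination as

$$\text{CodeBLEU} = \alpha \cdot \text{BLEU} + \beta \cdot \text{BLEU}_{weight} + \gamma \cdot \text{Match}_{ast} + \delta \cdot \text{Match}_{df}$$

where $(\alpha, \beta, \gamma, \delta)$ correspond to the `weights` argument described below and default to $(0.25, 0.25, 0.25, 0.25)$.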

The metric has shown higher correlation with human evaluation than BLEU and accuracy metrics.

## How to Use

### Inputs

- `references` (`list[str]` or `list[list[str]]`): reference code
- `predictions` (`list[str]`): predicted code
- `lang` (`str`): code language, see `codebleu.AVAILABLE_LANGS` for available languages (`python`, `c_sharp`, `c`, `cpp`, `javascript`, `java`, `php`, `go` and `ruby` at the moment)
- `weights` (`tuple[float, float, float, float]`): weights of the `ngram_match`, `weighted_ngram_match`, `syntax_match` and `dataflow_match` scores respectively, defaults to `(0.25, 0.25, 0.25, 0.25)`
- `tokenizer` (`callable`): function used to split a code string into tokens, defaults to `s.split()` (see the sketch below)
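
As a quick illustration, here is a minimal sketch of overriding the optional parameters. The weight values are arbitrary examples, not recommendations, and the exact contents of `AVAILABLE_LANGS` depend on the installed version:

```python
from codebleu import AVAILABLE_LANGS, calc_codebleu

print(AVAILABLE_LANGS)  # supported language identifiers for the installed version

result = calc_codebleu(
    ["def f ( x ) :\n return x"],   # references
    ["def g ( x ) :\n return x"],   # predictions
    lang="python",
    weights=(0.1, 0.1, 0.4, 0.4),   # example: emphasise syntax and data-flow matches
    tokenizer=str.split,            # behaves like the default s.split()
)
print(result["codebleu"])
```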

### Output Values

The metric outputs a `dict[str, float]` with the following fields:

- `codebleu`: the final CodeBLEU score
- `ngram_match_score`: n-gram match score (BLEU)
- `weighted_ngram_match_score`: weighted n-gram match score (BLEU-weighted)
- `syntax_match_score`: syntax match score (AST match)
- `dataflow_match_score`: data-flow match score

Each of the scores is in the range [0, 1], where 1 is the best score.

### Examples

Using the pip package (`pip install codebleu`); you also have to install the tree-sitter grammar for the language you need (e.g. `pip install tree-sitter-python`, or `pip install codebleu[all]` to install all languages):

```python
from codebleu import calc_codebleu

prediction = "def add ( a , b ) :\n return a + b"
reference = "def sum ( first , second ) :\n return second + first"

result = calc_codebleu([reference], [prediction], lang="python", weights=(0.25, 0.25, 0.25, 0.25), tokenizer=None)
print(result)
# {
#   'codebleu': 0.5537,
#   'ngram_match_score': 0.1041,
#   'weighted_ngram_match_score': 0.1109,
#   'syntax_match_score': 1.0,
#   'dataflow_match_score': 1.0
# }
```
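
As a sanity check, with the default weights the final `codebleu` value is simply the weighted sum of the four component scores, which you can verify against the output above:

```python
# With default weights (0.25 each), codebleu = weighted sum of the components;
# recomputing from the rounded values printed above:
print(0.25 * (0.1041 + 0.1109 + 1.0 + 1.0))  # ≈ 0.5537
```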

Or using the `evaluate` library (the `codebleu` package is still required):

```python
import evaluate

metric = evaluate.load("k4black/codebleu")

prediction = "def add ( a , b ) :\n return a + b"
reference = "def sum ( first , second ) :\n return second + first"

# evaluate's compute() takes predictions and references as keyword-only arguments
result = metric.compute(references=[reference], predictions=[prediction], lang="python", weights=(0.25, 0.25, 0.25, 0.25), tokenizer=None)
```

Note: `lang` is required; `weights` and `tokenizer` are optional (their defaults are listed in Inputs above).

## Limitations and Bias

This library requires compiling tree-sitter shared libraries (`.so` files), so it is platform dependent.
It is currently available for Linux (manylinux), macOS and Windows with Python 3.8+.

## Citation

```bibtex
@misc{ren2020codebleu,
      title={CodeBLEU: a Method for Automatic Evaluation of Code Synthesis},
      author={Shuo Ren and Daya Guo and Shuai Lu and Long Zhou and Shujie Liu and Duyu Tang and Neel Sundaresan and Ming Zhou and Ambrosio Blanco and Shuai Ma},
      year={2020},
      eprint={2009.10297},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}
```

## Further References

The source code is available in the [k4black/codebleu](https://github.com/k4black/codebleu) GitHub repository.