InfiBench (Data Part)

Note: For the full description, please visit our main website: https://infi-coder.github.io/infibench.

This repo contains all data of our code LLM evaluation dataset, InfiBench. suite_v2.1.yaml lists the benchmark cases, and suite_v2.1_data.csv records the data for each case (prompt, reference answer, and evaluation metric). The data can be directly consumed by our automatic evaluation tool to evaluate any model's responses.
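
For a quick look at the raw data, the snippet below is a minimal sketch of downloading and inspecting suite_v2.1_data.csv with the huggingface_hub and pandas libraries; the column names mentioned in the comments are assumptions, so check the CSV header itself, and our automatic evaluation tool remains the supported way to score model responses.

# Minimal sketch (not the official workflow): download and inspect the
# benchmark CSV from the Hugging Face Hub.
# Requires: pip install huggingface_hub pandas
from huggingface_hub import hf_hub_download
import pandas as pd

csv_path = hf_hub_download(
    repo_id="llylly001/InfiBench",
    filename="suite_v2.1_data.csv",   # file name as listed in this card
    repo_type="dataset",
)

df = pd.read_csv(csv_path)
print(df.shape)                 # number of cases x number of columns
print(df.columns.tolist())      # check the actual column names here
print(str(df.iloc[0].get("prompt", ""))[:300])  # 'prompt' is an assumed column name; verify first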

Dataset Card


Name: InfiBench

Description: Evaluation Dataset for the Question-Answering Capabilities of Code Large Language Models

URL: https://infi-coder.github.io/infibench (all info) / https://huggingface.co/datasets/llylly001/InfiBench (data part)

Version: 2.1

License: Creative Commons Attribution Share Alike 4.0

Citation:

@misc{infibench,
    title={InfiBench: Evaluating the Question-Answering Capabilities of Code Large Language Models},
    author={InfiBench},
    howpublished={\url{https://infi-coder.github.io/infibench}},
    year={2024}
}

DOI: doi:10.57967/hf/2474

Data Collection:

  • The data source is downloaded from the publicly available StackExchange archive (https://archive.org/download/stackexchange, https://ia904700.us.archive.org/view_archive.php?archive=/6/items/stackexchange/stackoverflow.com-Posts.7z). In particular, we use the preprocessed version from https://huggingface.co/datasets/mikex86/stackoverflow-posts, where all posts are formatted in Markdown text.

  • We choose to keep only the questions that have at least three positively voted answers and an officially accepted answer, which yields 1,090,238 questions. From these roughly one million questions, we further keep only those that are frequently viewed and relatively new, resulting in 17,402 questions (see the filtering sketch after this list).

  • Utilizing the mandatory question tags of these questions, we then manually construct a tag tree that covers the 200 most frequent tags, enabling us to identify the top programming languages and areas for 14,330 out of these 17,402 questions. We exclude 6 languages that either describe data or are domain-specific: JSON, regex, Markdown, YAML, CSV, and SQL. As a result, we compile 13,854 questions that serve as the initial seed set.

  • We randomly sample from the initial seed set. We then recruit five domain experts inside our company to create the benchmark from the sampled seed set, each in charge of one area. The annotation process consists of three steps: (1) Question Selection and Type Annotation; (2) Prompt Paraphrasing; and (3) Correctness Criterion Annotation.
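
For concreteness, the following is a rough, hypothetical sketch of the question-filtering steps described above; the file name, column names (accepted_answer_id, num_positive_answers, view_count, creation_date, primary_language), and thresholds are placeholders rather than the actual schema of the preprocessed dump.

# Hypothetical sketch of the seed-question filtering; column names,
# file name, and thresholds are placeholders, not the actual schema.
import pandas as pd

posts = pd.read_parquet("stackoverflow_posts.parquet")  # assumed local copy of the dump

# Keep questions with an officially accepted answer and at least three
# positively voted answers (1,090,238 questions in our run).
seed = posts[posts["accepted_answer_id"].notna() & (posts["num_positive_answers"] >= 3)]

# Keep questions that are frequently viewed and relatively new
# (illustrative thresholds only; 17,402 questions in our run).
seed = seed[(seed["view_count"] >= 10_000) & (seed["creation_date"] >= pd.Timestamp("2018-01-01"))]

# Exclude the 6 data-description / domain-specific languages via the tag tree.
EXCLUDED = {"json", "regex", "markdown", "yaml", "csv", "sql"}
seed = seed[~seed["primary_language"].isin(EXCLUDED)]
print(len(seed))  # initial seed set (13,854 questions in our run)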

Data Biases:

The data essentially serves as an evaluation benchmark. We foresee data biases in the following aspects:

(1) Non-standard evaluation. Alongside the data is a comprehensive benchmark of existing code LLMs. The benchmark scores are evaluated under a specific set of hyperparameters (e.g., temperature 0.2, top-p 0.9, best@10 at the question level; a sampling sketch under these settings follows this list). Using the data under different evaluation conditions may lead to misleading comparisons and conclusions.

(2) Usage misinterpretation. The benchmark focuses on evaluating the response correctness of code LLMs on a set of real-world developers' questions. Our evaluation standard does not specifically take other aspects (naturalness, conciseness, fairness, politeness, etc.) into consideration. Hence, there is a risk of overinterpreting the evaluation results. When evaluating a code LLM, we recommend combining this benchmark score with other evaluations for a more comprehensive assessment.

(3) Potential data contamination. Though we have made efforts to reduce the impact of data contamination, future code LLMs may train or fine-tune on this benchmark dataset to improve their scores on InfiBench. This is challenging to prevent, as a cost of the benchmark being fully public. On the other hand, as responsible LLM developers, we hope future practitioners will report how they use the benchmark data if it goes beyond the original scope (evaluation use).
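
To make the hyperparameter setting in point (1) concrete, here is a minimal sketch of sampling ten responses per question at temperature 0.2 and top-p 0.9 with the transformers library; the model identifier and prompt are placeholders, and this is not our official evaluation harness.

# Hedged sketch (not the official harness): sampling responses under the
# reported benchmark settings -- temperature 0.2, top-p 0.9, and
# 10 samples per question for best@10 scoring.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-code-llm"  # placeholder model identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "..."  # one InfiBench question prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.2,
    top_p=0.9,
    num_return_sequences=10,   # best@10 at the question level
    max_new_tokens=1024,
)
responses = tokenizer.batch_decode(outputs, skip_special_tokens=True)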

Personal Sensitive Information:

During the data construction process, our domain experts paraphrased the question prompts to remove personally identifiable and sensitive information (PII), and a cross-validation stage was introduced to further ensure PII removal.
