
Dataset Card for "code_x_glue_cc_code_refinement"

Dataset Summary

CodeXGLUE code-refinement dataset, available at

We use the dataset released by this paper. The source side is a Java function with bugs and the target side is the refined one. All function and variable names are normalized. The dataset contains two subsets (i.e. small and medium) based on function length.

Supported Tasks and Leaderboards

  • text2text-generation-other-debugging: The dataset can be used to train a model for automatically fixing buggy code.


Languages

  • Java programming language
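The buggy→fixed pairs lend themselves to a text2text-generation setup. Below is a minimal sketch of mapping one record to an (input, target) pair; the `fix bug:` task prefix and the `to_text2text` helper are illustrative assumptions, not part of the dataset, while the field names (`id`, `buggy`, `fixed`) follow the schema described under Data Fields.

```python
# Sketch: turn a buggy/fixed record into an (input, target) pair for a
# text2text model. The "fix bug: " prefix is an illustrative assumption.

def to_text2text(record, task_prefix="fix bug: "):
    """Map one dataset record to an (input, target) string pair."""
    return task_prefix + record["buggy"], record["fixed"]

# A record shaped like the dataset's schema (id: int32, buggy/fixed: string).
sample = {
    "id": 0,
    "buggy": "public int METHOD_1 ( ) { return VAR_1 ; }",
    "fixed": "public int METHOD_1 ( ) { return VAR_2 ; }",
}

source, target = to_text2text(sample)
print(source)  # fix bug: public int METHOD_1 ( ) { return VAR_1 ; }
print(target)
```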

Dataset Structure

Data Instances


An example of 'train' looks as follows.

    {
        "buggy": "public static TYPE_1 init ( java.lang.String name , java.util.Date date ) { TYPE_1 VAR_1 = new TYPE_1 ( ) ; VAR_1 . METHOD_1 ( name ) ; java.util.Calendar VAR_2 = java.util.Calendar.getInstance ( ) ; VAR_2 . METHOD_2 ( date ) ; VAR_1 . METHOD_3 ( VAR_2 ) ; return VAR_1 ; }\n",
        "fixed": "public static TYPE_1 init ( java.lang.String name , java.util.Date date ) { TYPE_1 VAR_1 = new TYPE_1 ( ) ; VAR_1 . METHOD_1 ( name ) ; java.util.Calendar VAR_2 = null ; if ( date != null ) { VAR_2 = java.util.Calendar.getInstance ( ) ; VAR_2 . METHOD_2 ( date ) ; } VAR_1 . METHOD_3 ( VAR_2 ) ; return VAR_1 ; }\n",
        "id": 0
    }


An example of 'validation' looks as follows.

    {
        "buggy": "public java.util.List < TYPE_1 > METHOD_1 ( ) { java.util.ArrayList < TYPE_1 > VAR_1 = new java.util.ArrayList < TYPE_1 > ( ) ; for ( TYPE_2 VAR_2 : VAR_3 ) { VAR_1 . METHOD_2 ( VAR_2 . METHOD_1 ( ) ) ; } return VAR_1 ; } \n",
        "fixed": "public java.util.List < TYPE_1 > METHOD_1 ( ) { return VAR_1 ; } \n",
        "id": 0
    }
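Because both fields are whitespace-tokenized Java, the actual fix in a pair can be located with a plain token diff. A sketch using Python's `difflib` on the 'train' example above (the fix wraps the Calendar setup in a null check on `date`):

```python
import difflib

buggy = ("public static TYPE_1 init ( java.lang.String name , java.util.Date date ) "
         "{ TYPE_1 VAR_1 = new TYPE_1 ( ) ; VAR_1 . METHOD_1 ( name ) ; "
         "java.util.Calendar VAR_2 = java.util.Calendar.getInstance ( ) ; "
         "VAR_2 . METHOD_2 ( date ) ; VAR_1 . METHOD_3 ( VAR_2 ) ; return VAR_1 ; }")
fixed = ("public static TYPE_1 init ( java.lang.String name , java.util.Date date ) "
         "{ TYPE_1 VAR_1 = new TYPE_1 ( ) ; VAR_1 . METHOD_1 ( name ) ; "
         "java.util.Calendar VAR_2 = null ; if ( date != null ) "
         "{ VAR_2 = java.util.Calendar.getInstance ( ) ; VAR_2 . METHOD_2 ( date ) ; } "
         "VAR_1 . METHOD_3 ( VAR_2 ) ; return VAR_1 ; }")

# Tokens are separated by single spaces, so split() is a sufficient tokenizer.
fixed_tokens = fixed.split()
matcher = difflib.SequenceMatcher(a=buggy.split(), b=fixed_tokens)

# Collect tokens that appear only on the fixed side.
inserted = [tok
            for op, a1, a2, b1, b2 in matcher.get_opcodes()
            if op in ("insert", "replace")
            for tok in fixed_tokens[b1:b2]]
print(inserted)  # contains the added null check, e.g. "if", "null", "date"
```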

Data Fields

In the following, each data field is explained for each config. The data fields are the same among all splits.

medium, small

field name  type    description
id          int32   Index of the sample
buggy       string  The buggy version of the code
fixed       string  The correct version of the code

Data Splits

name    train  validation  test
medium  52364  6546        6545
small   46680  5835        5835

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

Every public GitHub event between March 2011 and October 2017 was downloaded from GitHub Archive and mined using the Google BigQuery APIs. [More Information Needed]

Who are the source language producers?

Software Engineering developers.


Annotation process

Automatically annotated by filtering commit messages containing the pattern: ("fix" or "solve") and ("bug" or "issue" or "problem" or "error"). A statistically significant sample (95% confidence level with a 5% confidence interval) was manually evaluated by two authors to check whether the filtered bug/fix pairs were correct. After all disagreements were settled, the authors concluded that 97.6% were true positives.
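The filtering heuristic above can be sketched as a small predicate. The exact matching rules of the original pipeline (case handling, word boundaries, stemming) are not specified in this card, so the case-insensitive prefix matching below is only an approximation:

```python
import re

# Approximation of the filter: the commit message must contain
# ("fix" or "solve") and ("bug" or "issue" or "problem" or "error").
# Case-insensitive prefix matching (so "Fixed"/"solving" also match) is an
# assumption; the original pipeline's exact rules are not reproduced here.
_ACTION = re.compile(r"\b(fix|solve)", re.IGNORECASE)
_TOPIC = re.compile(r"\b(bug|issue|problem|error)", re.IGNORECASE)

def looks_like_bugfix(commit_message: str) -> bool:
    """True if the message matches both halves of the heuristic."""
    return bool(_ACTION.search(commit_message)) and bool(_TOPIC.search(commit_message))

print(looks_like_bugfix("Fix null pointer bug in parser"))  # True
print(looks_like_bugfix("Add caching layer"))               # False
```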

Who are the annotators?

Heuristics and the authors of the paper.

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

Licensing Information

Computational Use of Data Agreement (C-UDA) License.

Citation Information

         title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},


Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
