---
language:
  - en
  - py
tags:
  - code
  - documentation
  - python
  - docstring
  - dataset
license: mit
---

# DocuMint Dataset

The DocuMint Dataset is a collection of 100,000 Python functions and their corresponding docstrings, extracted from popular open-source repositories in the free/libre and open-source software (FLOSS) ecosystem. The dataset was created to train the DocuMint model, a fine-tuned variant of Google's CodeGemma-2B that generates high-quality docstrings for Python functions.

## Dataset Description

The dataset consists of JSON-formatted entries, each containing a Python function definition (the `instruction` field) and its associated docstring (the `response` field). The functions were sourced from well-established and actively maintained projects, filtered on repository metrics such as the number of contributors (> 50), commits (> 5k), stars (> 35k), and forks (> 10k).
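
As a rough sketch, these selection criteria amount to a simple predicate over repository metadata. The repository names, numbers, and field names below are hypothetical and only illustrate the thresholds:

```python
# Hypothetical repository metadata; names, numbers, and field names are illustrative only.
candidate_repos = [
    {"name": "org/project-a", "contributors": 120, "commits": 48_000, "stars": 62_000, "forks": 14_500},
    {"name": "org/project-b", "contributors": 30, "commits": 1_200, "stars": 4_000, "forks": 300},
]

def meets_criteria(repo: dict) -> bool:
    """Return True if a repository clears every popularity/maturity threshold."""
    return (
        repo["contributors"] > 50
        and repo["commits"] > 5_000
        and repo["stars"] > 35_000
        and repo["forks"] > 10_000
    )

selected = [r["name"] for r in candidate_repos if meets_criteria(r)]
print(selected)  # ['org/project-a']
```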

An abstract syntax tree (AST) based parser was used to extract the functions and docstrings. Challenges in the data sampling process included syntactic errors, multi-language repositories, computational expense, repository size discrepancies, and the need to ensure diversity while avoiding repetition.
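
The extraction tooling itself is not reproduced here, but a minimal sketch of the idea, using only Python's built-in `ast` module, could look like the following (function and variable names are illustrative, not the project's actual code):

```python
import ast

def extract_function_docstring_pairs(source: str) -> list[dict]:
    """Walk a module's AST and collect (function, docstring) pairs.

    Functions without a docstring are skipped, mirroring the fact that the
    dataset only contains documented functions.
    """
    pairs = []
    tree = ast.parse(source)  # raises SyntaxError on files that do not parse
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            docstring = ast.get_docstring(node)
            if docstring:
                pairs.append({
                    "instruction": ast.get_source_segment(source, node),
                    "response": docstring,
                })
    return pairs

example = '''
def add(a, b):
    """Return the sum of a and b."""
    return a + b
'''
print(extract_function_docstring_pairs(example)[0]["response"])  # Return the sum of a and b.
```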

## Data Sources

- **Repository:** TODO
- **Paper:** *DocuMint: Docstring Generation for Python using Small Language Models* (link TODO)

## Dataset Structure

Each entry in the dataset follows this structure:

```json
{
  "instruction": "def get_dataloaders(accelerator: Accelerator, batch_size: int = 16):\n    \"\"\"\n    Creates a set of `DataLoader`s for the `glue` dataset,\n    using \"bert-base-cased\" as the tokenizer.\n\n    Args:\n        accelerator (`Accelerator`):\n            An `Accelerator` object\n        batch_size (`int`, *optional*):\n            The batch size for the train and validation DataLoaders.\n    \"\"\"\n    tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\n    datasets = load_dataset(\"glue\", \"mrpc\")\n\n    def tokenize_function(examples):\n        # max_length=None => use the model max length (it's actually the default)\n        outputs = tokenizer(examples[\"sentence1\"], examples[\"sentence2\"], truncation=True, max_length=None)\n        return outputs\n\n    # Apply the method we just defined to all the examples in all the splits of the dataset\n    # starting with the main process first:\n    with accelerator.main_process_first():\n        tokenized_datasets = datasets.map(\n            tokenize_function,\n            batched=True,\n            remove_columns=[\"idx\", \"sentence1\", \"sentence2\"],\n        )\n\n    # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the\n    # transformers library\n    tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\n\n    def collate_fn(examples):\n        # For Torchxla, it's best to pad everything to the same length or training will be very slow.\n        max_length = 128 if accelerator.distributed_type == DistributedType.XLA else None\n        # When using mixed precision we want round multiples of 8/16\n        if accelerator.mixed_precision == \"fp8\":\n            pad_to_multiple_of = 16\n        elif accelerator.mixed_precision != \"no\":\n            pad_to_multiple_of = 8\n        else:\n            pad_to_multiple_of = None\n\n        return tokenizer.pad(\n            examples,\n            padding=\"longest\",\n            max_length=max_length,\n            pad_to_multiple_of=pad_to_multiple_of,\n            return_tensors=\"pt\",\n        )\n\n    # Instantiate dataloaders.\n    train_dataloader = DataLoader(\n        tokenized_datasets[\"train\"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True\n    )\n    eval_dataloader = DataLoader(\n        tokenized_datasets[\"validation\"],\n        shuffle=False,\n        collate_fn=collate_fn,\n        batch_size=EVAL_BATCH_SIZE,\n        drop_last=(accelerator.mixed_precision == \"fp8\"),\n    )\n\n    return train_dataloader, eval_dataloader",
  "response": "Creates a set of `DataLoader`s for the `glue` dataset,\nusing \"bert-base-cased\" as the tokenizer.\n\nArgs:\n    accelerator (`Accelerator`):\n        An `Accelerator` object\n    batch_size (`int`, *optional*):\n        The batch size for the train and validation DataLoaders."
}
```
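
Assuming the entries are stored one JSON object per line (the file name and on-disk layout below are assumptions, not the published format), they can be read like this:

```python
import json

# "documint.jsonl" is a hypothetical file name; adjust to the actual dataset file.
with open("documint.jsonl", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        function_source = entry["instruction"]  # full Python function definition
        docstring = entry["response"]           # the docstring alone
        print(docstring.splitlines()[0])        # first line of each docstring
```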

## Usage

This dataset was used to train the DocuMint model, a fine-tuned variant of Google's CodeGemma-2B that generates high-quality docstrings for Python functions. For more information on the model and its training procedure, please refer to the model card.
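
As a minimal sketch of how an entry might be turned into a supervised fine-tuning example (the prompt template below is an assumption, not the one used to train DocuMint):

```python
def to_training_example(entry: dict) -> dict:
    """Format one dataset entry as a prompt/target pair for supervised fine-tuning.

    The wording of the prompt is illustrative; the actual template used to
    train DocuMint may differ.
    """
    prompt = (
        "Write a docstring for the following Python function.\n\n"
        f"{entry['instruction']}\n\nDocstring:\n"
    )
    return {"prompt": prompt, "target": entry["response"]}

example = {
    "instruction": "def add(a, b):\n    return a + b",
    "response": "Return the sum of a and b.",
}
print(to_training_example(example)["prompt"])
```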

## Dataset Use Cases

The DocuMint dataset can be used for various purposes related to code documentation and natural language processing. Some potential use cases include:

- Training and evaluating models for automatic docstring generation
- Studying the characteristics and patterns of high-quality docstrings
- Analyzing the relationship between code structure and its corresponding documentation
- Developing tools for assisting developers in writing effective docstrings
- Conducting research on the challenges and best practices in code documentation

Researchers, developers, and organizations interested in improving code documentation quality and automating the process of docstring generation can benefit from this dataset.

## Citation

BibTeX (TODO):

```bibtex
@misc{poudel2024documint,
      title={DocuMint: Docstring Generation for Python using Small Language Models},
      author={Bibek Poudel* and Adam Cook* and Sekou Traore* and Shelah Ameli*},
      year={2024},
}
```

## Dataset Card Contact

- For questions or more information, please contact: {bpoudel3,acook46,staore1,oameli}@vols.utk.edu