
Solidity Dataset

Dataset Description

This dataset is collected from public GitHub repositories written in the Solidity programming language. The list of repositories is available in the repositories.json file.

It contains smart contracts written in Solidity, along with the test cases (unit tests) written to test those contracts.

Dataset Summary

The dataset contains 355,540 rows in total. Each row includes the following features:

  • hash (string): The sha256 hash value of the file content before any pre-processing.
  • size (integer): File size in bytes.
  • ext (string): File extension.
  • lang (string): The name of the programming language the file is written in (Solidity, Python, or JavaScript).
  • is_test (bool): Indicates whether the file is a test file or the main smart-contract code.
  • repo_id (string): GitHub repository identifier, fetched from the GitHub API.
  • repo_name (string): GitHub repository name.
  • repo_head (string): The head commit of the repository from which the file was fetched.
  • repo_path (string): Relative file path.
  • content_tokens (integer): Number of tokens in the file content.
  • content_chars (integer): Number of characters in the file content.
  • content (string): File content.
  • __index_level_0__ (integer): An internal index left over from dataset creation; please ignore this field.
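The schema above can be worked with directly once rows are loaded as dictionaries. The sketch below separates smart-contract sources from test files using the is_test flag; the example rows and the helper function are illustrative, not part of the dataset's tooling.

```python
# Sketch: splitting rows into contracts and tests by the is_test flag.
# The rows below are made-up examples that follow the schema above.
def split_contracts_and_tests(rows):
    contracts = [r for r in rows if not r["is_test"]]
    tests = [r for r in rows if r["is_test"]]
    return contracts, tests

rows = [
    {"repo_path": "contracts/Token.sol", "lang": "Solidity", "is_test": False},
    {"repo_path": "test/Token.t.sol", "lang": "Solidity", "is_test": True},
    {"repo_path": "test/token_test.py", "lang": "Python", "is_test": True},
]
contracts, tests = split_contracts_and_tests(rows)
```

The same filter can be expressed with the `datasets` library's `Dataset.filter` once the dataset is loaded.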

Supported Tasks and Leaderboards

This dataset can be used for tasks related to analyzing smart contracts and their test cases, and for improving language models on the Solidity language. As of now, there are no specific leaderboards associated with this dataset.

Languages

  • The dataset is in the English language (en).
  • Smart contracts (is_test=false) are in the Solidity programming language.
  • Test cases (is_test=true) are in the Solidity, Python, or JavaScript programming languages.

Data Splits

The dataset is split into three parts:

  • train: 284,112 rows (80% of the dataset)
  • test: 35,514 rows (10% of the dataset)
  • eval: 35,514 rows (10% of the dataset)

Dataset Creation

The content_tokens field is generated with the StarCoderBase tokenizer using the following code snippet:

from transformers import AutoTokenizer

# Tokenizer used to compute the content_tokens field.
checkpoint = "bigcode/starcoderbase"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def count_tokens(code: str) -> int:
    tokens = tokenizer.tokenize(code)
    return len(tokens)

The is_test field is computed by matching regex patterns against the file content. More details will be published soon.

License

This dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.

Citation

Please use the following citation when referencing this dataset:

@misc {seyyed_ali_ayati_2023,
    author       = { {Seyyed Ali Ayati} },
    title        = { solidity-dataset (Revision 77e80ad) },
    year         = 2023,
    url          = { https://huggingface.co/datasets/seyyedaliayati/solidity-dataset },
    doi          = { 10.57967/hf/0808 },
    publisher    = { Hugging Face }
}