---
language:
- en
license: mit
size_categories:
- 1M<n<10M
task_categories:
- text-generation
paperswithcode_id: disl
pretty_name: 'DISL: Fueling Research with A Large Dataset of Solidity Smart Contracts'
configs:
- config_name: decomposed
  data_files: data/decomposed/*
- config_name: raw
  data_files: data/raw/*
tags:
- code
- solidity
- smart contracts
- webdataset
---

# DISL

The DISL dataset features a collection of 514,506 unique Solidity files that have been deployed to Ethereum mainnet. It caters to the need for a large and diverse dataset of real-world smart contracts. DISL serves as a resource for developing machine learning systems and for benchmarking software engineering tools designed for smart contracts.

## Content

It contains two subsets:

* the *raw* subset contains the full source code of 3,298,271 smart contracts; it is not deduplicated
* the *decomposed* subset contains 514,506 individual Solidity files; it is derived from *raw* and deduplicated using Jaccard similarity with a threshold of 0.9
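For reference, Jaccard similarity compares two files by the overlap of their token sets. The sketch below is a minimal illustration of the deduplication criterion, using naive whitespace tokenization; the exact tokenizer used to build DISL is not specified here.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Jaccard similarity between the whitespace-token sets of two sources."""
    tokens_a, tokens_b = set(a.split()), set(b.split())
    if not tokens_a and not tokens_b:
        return 1.0  # two empty files are identical
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

# Under the DISL threshold, two files count as duplicates
# when their similarity exceeds 0.9.
sim = jaccard_similarity("contract A { uint x; }",
                         "contract A { uint y; }")
is_duplicate = sim > 0.9
```

With a threshold of 0.9, near-identical files (e.g. differing only in a constant or a variable name in a long contract) collapse into one entry, while structurally different contracts are kept.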

If you use DISL, please cite the following tech report:

```bibtex
@techreport{disl2403.16861,
 title = {DISL: Fueling Research with A Large Dataset of Solidity Smart Contracts},
 year = {2024},
 author = {Gabriele Morello and Mojtaba Eshghie and Sofia Bobadilla and Martin Monperrus},
 url = {http://arxiv.org/pdf/2403.16861},
 number = {2403.16861},
 institution = {arXiv},
}
```

- **Curated by:** Gabriele Morello



## Instructions to explore the dataset

```python
from datasets import load_dataset

# Load the raw dataset
dataset = load_dataset("ASSERT-KTH/DISL", "raw")

# OR

# Load the decomposed dataset
dataset = load_dataset("ASSERT-KTH/DISL", "decomposed")

# Number of rows and columns in the train split
num_rows = len(dataset["train"])
num_columns = len(dataset["train"].column_names)

# Pick a random row
import random
random_row = random.choice(dataset["train"])

# Print the source code of a random row
random_sc = random.choice(dataset["train"])['source_code']
print(random_sc)
```