---
license: mit
pipeline_tag: text-generation
tags:
- code
model-index:
- name: VeriGen
  results:
  - task:
      type: text-generation
    dataset:
      type:
      name:

extra_gated_prompt: >-
  ## Model License Agreement

  Please read the BigCode [OpenRAIL-M
  license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
  agreement before accepting it.

extra_gated_fields:
  I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---

# VeriGen

## Table of Contents

1. [Dataset Summary](#dataset-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [License](#license)
5. [Citation](#citation)

## Dataset Summary

- The dataset comprises Verilog modules as entries, retrieved from the GitHub dataset on BigQuery.
- For training [models](https://huggingface.co/shailja/fine-tuned-codegen-2B-Verilog), we filtered out entries exceeding 20,000 characters and removed exact duplicates (ignoring whitespace).

- **Paper:** [Benchmarking Large Language Models for Automated Verilog RTL Code Generation](https://arxiv.org/abs/2212.11140)
- **Point of Contact:** [contact@shailja](mailto:shailja.thakur90@gmail.com)
- **Languages:** Verilog (Hardware Description Language)

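The length and de-duplication filtering described above can be sketched as follows; `filter_modules` and `normalize` are illustrative names, not part of the released tooling:

```python
import re

MAX_CHARS = 20_000  # length cutoff used for the training filter

def normalize(source: str) -> str:
    """Collapse all whitespace so formatting differences don't matter."""
    return re.sub(r"\s+", "", source)

def filter_modules(modules):
    """Drop oversized entries and whitespace-insensitive exact duplicates."""
    seen = set()
    kept = []
    for mod in modules:
        if len(mod) > MAX_CHARS:
            continue  # too long for training
        key = normalize(mod)
        if key in seen:
            continue  # exact duplicate up to whitespace
        seen.add(key)
        kept.append(mod)
    return kept

modules = [
    "module top(); endmodule",
    "module  top();  endmodule",   # duplicate up to whitespace
    "x" * 30_000,                  # exceeds the length cutoff
]
print(len(filter_modules(modules)))  # 1
```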
### Data Splits

The dataset contains only a train split.

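Because only a train split is published, a held-out set has to be carved out by the user. One deterministic, content-hash-based sketch (the 10% validation fraction and the `assign_split` name are arbitrary illustrative choices, not part of the dataset):

```python
import hashlib

def assign_split(text: str, valid_fraction: float = 0.1) -> str:
    """Deterministically route an entry to 'train' or 'validation' by hashing its content."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    bucket = digest[0] / 256.0  # roughly uniform in [0, 1)
    return "validation" if bucket < valid_fraction else "train"

# The same entry always lands in the same split, even across runs
entries = [f"module m{i}(input a, output b); endmodule" for i in range(1000)]
counts = {"train": 0, "validation": 0}
for e in entries:
    counts[assign_split(e)] += 1
print(counts)
```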
### Use

```python
# pip install datasets
from datasets import load_dataset

# Stream the dataset so the full corpus is not downloaded up front
ds = load_dataset("shailja/Verilog_GitHub", streaming=True, split="train")
print(next(iter(ds)))  # prints the first entry
```

### Intended Use

The dataset consists of source code from a range of GitHub repositories. As such, it can potentially include non-compilable, low-quality, and vulnerable code.

### Attribution & Other Requirements

The pretraining dataset of the model was not filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected.

# License

The dataset is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).

# Citation

```bibtex
@misc{https://doi.org/10.48550/arxiv.2212.11140,
  doi = {10.48550/ARXIV.2212.11140},
  url = {https://arxiv.org/abs/2212.11140},
  author = {Thakur, Shailja and Ahmad, Baleegh and Fan, Zhenxing and Pearce, Hammond and Tan, Benjamin and Karri, Ramesh and Dolan-Gavitt, Brendan and Garg, Siddharth},
  title = {Benchmarking Large Language Models for Automated Verilog RTL Code Generation},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```