---
license: apache-2.0
---

<div align="center">
<img src="./assets/logo.png" style="zoom:25%;" />
</div>

# CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization

<img src="assets/overview.png" style="zoom:50%;" />

CodeV is a series of open-source, instruction-tuned large language models (LLMs) designed to generate high-quality Verilog code, addressing the shortcomings that existing code LLMs show in this domain. **(This repo is under development.)**

## Models and Datasets

| Size | Base Model | CodeV |
| ---- | ---------- | ----- |
| 6.7B | [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | [zyyy1023399127/CodeV-DS-6.7B](https://huggingface.co/zyyy1023399127/CodeV-DS-6.7B) |
| 7B   | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [zyyy1023399127/CodeV-CL-7B](https://huggingface.co/zyyy1023399127/CodeV-CL-7B) |
| 7B   | [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) | [zyyy1023399127/CodeV-QW-7B](https://huggingface.co/zyyy1023399127/CodeV-QW-7B) |

## Test

To evaluate the Verilog generation capability of these models, install the [VerilogEval](https://github.com/NVlabs/verilog-eval) and [RTLLM](https://github.com/hkust-zhiyao/rtllm) benchmark environments.

## Quick Start

```python
import torch
from transformers import pipeline

prompt = "FILL IN THE QUESTION"

# Replace "CODEV" with a CodeV checkpoint from the table above,
# e.g. "zyyy1023399127/CodeV-DS-6.7B".
generator = pipeline(
    model="CODEV",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Greedy decoding (the deterministic equivalent of temperature 0).
result = generator(prompt, max_length=2048, num_return_sequences=1, do_sample=False)
response = result[0]["generated_text"]
print("Response:", response)
```
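By default, the `text-generation` pipeline's `generated_text` contains the prompt followed by the completion (passing `return_full_text=False` returns only the completion). A minimal post-processing sketch, using dummy strings in place of a real model call:

```python
# Strip the prompt prefix so only the generated Verilog remains.
def extract_completion(prompt: str, generated_text: str) -> str:
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].lstrip("\n")
    return generated_text

# Dummy strings standing in for an actual pipeline call.
prompt = "// Implement a 2-to-1 multiplexer\n"
generated = prompt + "module mux2(input a, b, sel, output y);\n  assign y = sel ? b : a;\nendmodule\n"
print(extract_completion(prompt, generated))
```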

## Acknowledgements

* [Magicoder](https://github.com/ise-uiuc/magicoder): Training code, original datasets, and data decontamination
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for CodeV-DeepSeek
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for CodeV-CodeLlama
* [CodeQwen](https://github.com/QwenLM/CodeQwen1.5): Base model for CodeV-CodeQwen