Commit 0fcb857 · 1 Parent(s): 35a3aa3
Files changed (1)
  1. README.md +29 -3
README.md CHANGED
@@ -103,7 +103,6 @@ This repository contains all fine-tuned models for experiments with ComBack.
  | Code-LLaMA-34B | 0.41 | 19.07 | 0.85 | 16.77 | 0.56 | 18.22 | 1.58 | 13.54 | 2.66 | 17.95 | 2.47 | 16.59 | 9.38 | 35.53 | 11.06 | 37.15 | 8.24 | 33.00 |
  | CodeT5+-220m | **51.16** | **75.32** | **52.45** | **74.57** | **50.56** | **75.52** | **49.11** | **67.84** | **38.26** | **59.21** | **38.33** | **56.31** | **32.56** | **58.67** | **19.94** | **50.27** | **25.47** | **52.60** |
 
- 49.11 67.84 38.26 59.21 38.33 56.3
 
  - LLVM
 
@@ -128,7 +127,27 @@ This repository contains all fine-tuned models for experiments with ComBack.
  | Next-Statement Sugg. | 113,684(10.65M Token) | 20,063(1.87M Token) | 4,029(0.38M Token) |
  | Code Generation. | 21,184(3.14M Token) | 3,739(0.55M Token) | 1,372(0.18M Token) |
 
- We only fine-tuned CodeT5+ across three tasks, and compare it with accuracy in `New_Targets/All_Types/*`.
+ We fine-tuned only CodeT5+ on the three tasks, and compare its accuracy on ARC(MPU) and NVPTX(GPU) with the results in `New_Targets/All_Types/*`.
+
+ - GCC
+
+ | | Stmt. Comp. | Stmt. Comp. | Stmt. Comp. | Stmt. Comp. | Next. Sugg. | Next. Sugg. | Next. Sugg. | Next. Sugg. | Code. Gen. | Code. Gen. | Code. Gen. | Code. Gen. |
+ |------------------ |:-----------: |:-----------: |:-----------: |:-----------: |:-----------: |:-----------: |:-----------: |:-----------: |:----------: |:----------: |:----------: |:----------: |
+ | | ARC(MPU) | ARC(MPU) | NVPTX(GPU) | NVPTX(GPU) | ARC(MPU) | ARC(MPU) | NVPTX(GPU) | NVPTX(GPU) | ARC(MPU) | ARC(MPU) | NVPTX(GPU) | NVPTX(GPU) |
+ | Dataset | EM | ED | EM | ED | EM | ED | EM | ED | BLEU4 | ED | BLEU4 | ED |
+ | -w GPU and MPU | 52.45 | 74.57 | 50.56 | 75.52 | 38.26 | 59.21 | 38.33 | 56.31 | 19.94 | 50.27 | 25.47 | 52.60 |
+ | -w/o GPU and MPU | xxx | xxx | xxx | xxx | xxx | xxx | xxx | xxx | xxx | xxx | xxx | xxx |
+ | **Decrease** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** |
+
+ - LLVM
+
+ | | Stmt. Comp. | Stmt. Comp. | Stmt. Comp. | Stmt. Comp. | Next. Sugg. | Next. Sugg. | Next. Sugg. | Next. Sugg. | Code. Gen. | Code. Gen. | Code. Gen. | Code. Gen. |
+ |------------------ |:-----------: |:-----------: |:-----------: |:-----------: |:-----------: |:-----------: |:-----------: |:-----------: |:----------: |:----------: |:----------: |:----------: |
+ | | ARC(MPU) | ARC(MPU) | NVPTX(GPU) | NVPTX(GPU) | ARC(MPU) | ARC(MPU) | NVPTX(GPU) | NVPTX(GPU) | ARC(MPU) | ARC(MPU) | NVPTX(GPU) | NVPTX(GPU) |
+ | Dataset | EM | ED | EM | ED | EM | ED | EM | ED | BLEU4 | ED | BLEU4 | ED |
+ | -w GPU and MPU | 71.34 | 85.98 | 64.45 | 81.53 | 58.68 | 74.57 | 47.81 | 65.50 | 55.38 | 74.41 | 44.33 | 66.36 |
+ | -w/o GPU and MPU | xxx | xxx | xxx | xxx | xxx | xxx | xxx | xxx | xxx | xxx | xxx | xxx |
+ | **Decrease** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** |
 
  - `New_Targets/Itr_Expansion/*`: **Take the data of RI5CY in LLVM as the test set; split the train/valid set in a ratio of 85%:15% over CPU targets, both excluding and including RISC-V**
 
@@ -149,4 +168,11 @@ We only fine-tuned CodeT5+ across three tasks, and compare it with accuracy in `
  | Next-Statement Sugg. | 118,175(11.04M Token) | 20,856(1.94M Token) | 1,035(0.06M Token) |
  | Code Generation. | 22,413(3.30M Token) | 3,957(0.58M Token) | 219(0.02M Token) |
 
- We only fine-tuned CodeT5+ across three tasks, and compare it with accuracy in `New_Targets/All_Types/*`.
+ We fine-tuned only CodeT5+ on the three tasks, and compare its accuracy on RI5CY between the datasets excluding and including RISC-V.
+
+ | Dataset | Stmt. Comp. | Stmt. Comp. | Next. Sugg. | Next. Sugg. | Code. Gen. | Code. Gen. |
+ |:-----------: |:-----------: |:-----------: |:-----------: |:-----------: |:----------: |:----------: |
+ | | EM | ED | EM | ED | BLEU4 | ED |
+ | -w RISC-V | xxx | xxx | xxx | xxx | xxx | xxx |
+ | -w/o RISC-V | xxx | xxx | xxx | xxx | xxx | xxx |
+ | **Diff** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** | **xxx** |
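As a reading aid for the EM/ED columns in the tables above, here is a minimal sketch of how exact match and edit distance are commonly scored in this kind of evaluation. This is an assumption, not ComBack's actual scoring script: the helper names `edit_distance` and `em_and_ed`, and the 0–100 normalization of ED, are illustrative only.

```python
# Sketch (assumption): EM = percentage of predictions identical to the
# reference; ED = average normalized character-level edit similarity,
# scaled to 0-100. The repository's real scoring code may differ.
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def em_and_ed(preds: list[str], refs: list[str]) -> tuple[float, float]:
    """Return (exact-match %, average edit similarity in 0-100)."""
    em = 100.0 * sum(p == r for p, r in zip(preds, refs)) / len(refs)
    sims = [100.0 * (1 - edit_distance(p, r) / max(len(p), len(r), 1))
            for p, r in zip(preds, refs)]
    return em, sum(sims) / len(sims)
```

For example, `em_and_ed(["mov r0, r1"], ["mov r0, r1"])` gives `(100.0, 100.0)`; a prediction that differs in a few characters lowers ED only slightly but drops EM to 0 for that sample, which is why the ED columns sit well above EM in the tables.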