DDIDU committed on
Commit
526ccf0
1 Parent(s): 060754b

Update README.md

Files changed (1)
  1. README.md +3 -6
README.md CHANGED
@@ -20,7 +20,7 @@ model-index:
 
 We used LoRa to further pre-train Meta's CodeLLaMA-7B-hf model with high-quality C++ code tokens.
 
-Furthermore, we was fine-tuned on CodeM's C++ instruction data.
+Furthermore, we fine-tuned on CodeM's C++ instruction data.
 
 ## Model Details
 
@@ -35,10 +35,7 @@ We pre-trained CodeLLaMA-7B further using 543 GB of C++ code collected online, a
 ## Requirements
 
 ```
-peft==0.3.0.dev0
-tokenizers==0.13.3
-transformers==4.33.0
-bitsandbytes==0.41.1
+pip install torch transformers accelerate
 ```
 
 ## How to reproduce HumanEval-X results
@@ -86,7 +83,7 @@ pipeline = transformers.pipeline(
 )
 
 sequences = pipeline(
-    'import socket\n\ndef ping_exponential_backoff(host: str):',
+    '#include <iostream>\n#include <vector>\n\nusing namespace std;\n\nvoid quickSort(int *data, int start, int end) {',
     do_sample=True,
     top_k=10,
     temperature=0.1,
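
For readers reconstructing the context: the final hunk swaps CodeLlama's stock Python prompt for a C++ quicksort prompt inside a `transformers.pipeline` text-generation call. Below is a minimal sketch of what the surrounding snippet presumably looks like; the checkpoint id `codellama/CodeLlama-7b-hf` is an assumption (the diff never names the model), while the sampling arguments mirror those shown in the hunk.

```python
# Sketch of the generation snippet the diff's final hunk appears to come from.
# The model id is an assumption; the pipeline arguments follow the hunk.

# The C++ prompt introduced by this commit (replacing the Python ping example).
prompt = (
    "#include <iostream>\n"
    "#include <vector>\n"
    "\n"
    "using namespace std;\n"
    "\n"
    "void quickSort(int *data, int start, int end) {"
)


def generate(model_id: str = "codellama/CodeLlama-7b-hf"):
    """Run the text-generation pipeline; downloads weights on first call."""
    # Heavy imports are deferred so the prompt can be inspected without
    # torch/transformers installed.
    import torch
    import transformers

    pipe = transformers.pipeline(
        "text-generation",
        model=model_id,
        torch_dtype=torch.float16,
        device_map="auto",
    )
    sequences = pipe(
        prompt,
        do_sample=True,
        top_k=10,
        temperature=0.1,
        num_return_sequences=1,
        max_new_tokens=128,
    )
    return [seq["generated_text"] for seq in sequences]
```

Calling `generate()` requires the packages from the updated Requirements section (`torch`, `transformers`, `accelerate`) and enough GPU or CPU memory for a 7B checkpoint.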