HPC-Coder-v2-6.7b is the best-performing LLM under 30B parameters on the [ParEval](https://github.com/parallelcodefoundry/ParEval) parallel code generation benchmark in terms of _correctness_ and _performance_.
It scores similarly to 34B and commercial models like Phind-V2 and GPT-4 on parallel code generation.
## Using HPC-Coder-v2
The model is provided as a standard Hugging Face model with safetensors weights.
It can be used with [transformers pipelines](https://huggingface.co/docs/transformers/en/main_classes/pipelines), [vllm](https://github.com/vllm-project/vllm), or any other standard model inference framework.
HPC-Coder-v2 is an instruct model, so prompts should be formatted as instructions for best results.
It was trained with the following instruct template:
```md
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:

```
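The template above can be applied programmatically before handing the prompt to an inference framework. Below is a minimal sketch using the `transformers` text-generation pipeline; the model ID `hpcgroup/hpc-coder-v2-6.7b` and the generation parameters are illustrative assumptions, not confirmed by this README — substitute the actual repository ID.

```python
# Sketch: format an instruction with the instruct template, then generate.
# Assumption: the Hugging Face repo id below is illustrative only.

# The instruct template the model was trained with.
TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)


def format_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the instruct template."""
    return TEMPLATE.format(instruction=instruction)


def generate(instruction: str, model_id: str = "hpcgroup/hpc-coder-v2-6.7b") -> str:
    """Run one generation. Requires `transformers` and `torch`,
    and downloads the model weights on first use."""
    from transformers import pipeline  # imported lazily: heavy dependency

    generator = pipeline("text-generation", model=model_id, device_map="auto")
    out = generator(
        format_prompt(instruction),
        max_new_tokens=256,          # illustrative setting
        return_full_text=False,      # return only the completion
    )
    return out[0]["generated_text"]
```

The same formatted prompt string can be passed to vllm or any other inference framework; only the generation call changes.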