Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

hpc-coder-v2-1.3b - AWQ
- Model creator: https://huggingface.co/hpcgroup/
- Original model: https://huggingface.co/hpcgroup/hpc-coder-v2-1.3b/
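
As an illustration, here is a minimal sketch of loading an AWQ checkpoint through standard transformers. It assumes the `autoawq` package is installed, and the repo id below is a placeholder, since this card does not spell out the quantized repository's id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the id of this AWQ repository.
model_id = "RichardErkhov/hpc-coder-v2-1.3b-awq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# With autoawq installed, transformers reads the AWQ quantization
# config from the checkpoint and loads the 4-bit weights directly.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```
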
Original model description:
---
library_name: transformers
tags:
- code
- hpc
- parallel
- axonn
datasets:
- hpcgroup/hpc-instruct
- ise-uiuc/Magicoder-OSS-Instruct-75K
- nickrosh/Evol-Instruct-Code-80k-v1
language:
- en
pipeline_tag: text-generation
---

# HPC-Coder-v2

The HPC-Coder-v2-1.3b model is an HPC code LLM fine-tuned on an instruction dataset covering common HPC topics such as parallelism, optimization, and accelerator porting.
This version is a fine-tune of the [Deepseek Coder 1.3b](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) model.
It is fine-tuned on the [hpc-instruct](https://huggingface.co/datasets/hpcgroup/hpc-instruct), [oss-instruct](https://huggingface.co/datasets/ise-uiuc/Magicoder-OSS-Instruct-75K), and [evol-instruct](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) datasets.
We used the distributed training library [AxoNN](https://github.com/axonn-ai/axonn) to fine-tune in parallel across many GPUs.

[HPC-Coder-v2-1.3b](https://huggingface.co/hpcgroup/hpc-coder-v2-1.3b), [HPC-Coder-v2-6.7b](https://huggingface.co/hpcgroup/hpc-coder-v2-6.7b), and [HPC-Coder-v2-16b](https://huggingface.co/hpcgroup/hpc-coder-v2-16b) are the most capable open-source LLMs for parallel and HPC code generation.
HPC-Coder-v2-16b is currently the best-performing open-source LLM on the [ParEval](https://github.com/parallelcodefoundry/ParEval) parallel code generation benchmark in terms of _correctness_ and _performance_.
It scores similarly to 34B and commercial models such as Phind-V2 and GPT-4 on parallel code generation.
HPC-Coder-v2-6.7b is not far behind the 16b model in performance.

## Using HPC-Coder-v2

The model is provided as a standard Hugging Face model with safetensors weights.
It can be used with [transformers pipelines](https://huggingface.co/docs/transformers/en/main_classes/pipelines), [vllm](https://github.com/vllm-project/vllm), or any other standard model inference framework.
HPC-Coder-v2 is an instruct model, so prompts should be formatted as instructions for best results.
It was trained with the following instruct template:

```md
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:

```
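
As a concrete example, here is a minimal sketch of generation with a transformers text-generation pipeline, wrapping the prompt in the template above. The instruction string and generation settings are illustrative choices, not values prescribed by the model card:

```python
from transformers import pipeline

# Load the original model; a quantized variant can be substituted.
generator = pipeline(
    "text-generation",
    model="hpcgroup/hpc-coder-v2-1.3b",
    device_map="auto",
)

# The instruct template the model was trained with (see above).
TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = TEMPLATE.format(
    instruction="Write an OpenMP parallel for loop that sums an array of doubles."
)

# Greedy decoding with an illustrative token budget.
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"][len(prompt):])
```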

## Quantized Models

4-bit and 8-bit quantized weights are available in the GGUF format for use with [llama.cpp](https://github.com/ggerganov/llama.cpp).
The 4-bit model requires ~0.8 GB of memory and can be found [here](https://huggingface.co/hpcgroup/hpc-coder-v2-1.3b-Q4_K_S-GGUF).
The 8-bit model requires ~1.4 GB of memory and can be found [here](https://huggingface.co/hpcgroup/hpc-coder-v2-1.3b-Q8_0-GGUF).
Further information on using them with llama.cpp can be found in [its documentation](https://github.com/ggerganov/llama.cpp).
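
For the GGUF weights, a minimal sketch using the llama-cpp-python bindings follows; the `filename` glob and generation settings are assumptions, and any other llama.cpp front end follows the same pattern:

```python
from llama_cpp import Llama

# Pull the 4-bit GGUF straight from the Hub repo linked above.
llm = Llama.from_pretrained(
    repo_id="hpcgroup/hpc-coder-v2-1.3b-Q4_K_S-GGUF",
    filename="*Q4_K_S.gguf",  # glob matching the quantized file (assumed name)
    n_ctx=4096,
)

# Same instruct template as above, with an illustrative instruction.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nParallelize this loop with MPI.\n\n### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.0)
print(out["choices"][0]["text"])
```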