Tags: Transformers · GGUF · llama · Not-For-All-Audiences
TheBloke committed
Commit 0848c0c
1 Parent(s): ef6eae4

Upload README.md

Files changed (1):
1. README.md +2 -0
README.md CHANGED
@@ -7,6 +7,7 @@ license: llama2
 model_creator: Jon Durbin
 model_name: Spicyboros 70B 2.2
 model_type: llama
+prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n"
 quantized_by: TheBloke
 tags:
 - not-for-all-audiences
@@ -60,6 +61,7 @@ Here is an incomplate list of clients and libraries that are known to support GG
 <!-- repositories-available start -->
 ## Repositories available
 
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-GGUF)
 * [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/spicyboros-70b-2.2)
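
The `prompt_template` field added in this commit is the string a client is expected to fill in before inference. Below is a minimal sketch of applying it with llama-cpp-python, one of the GGUF-capable clients; the quant filename, context size, and sampling parameters are illustrative assumptions and are not part of this commit.

```python
# A minimal sketch, assuming llama-cpp-python and huggingface_hub are
# installed (pip install llama-cpp-python huggingface_hub). The quant
# filename below follows TheBloke's usual naming and is an assumption,
# not something stated in this commit.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the GGUF quantisations listed in "Repositories available".
model_path = hf_hub_download(
    repo_id="TheBloke/Spicyboros-70B-2.2-GGUF",
    filename="spicyboros-70b-2.2.Q4_K_M.gguf",  # assumed filename
)

# The prompt_template string added in this commit, filled in per request.
PROMPT_TEMPLATE = "A chat.\nUSER: {prompt}\nASSISTANT: \n"

llm = Llama(model_path=model_path, n_ctx=4096)  # context size is illustrative
prompt = PROMPT_TEMPLATE.format(prompt="Explain GGUF quantisation in one sentence.")
output = llm(prompt, max_tokens=128, stop=["USER:"])
print(output["choices"][0]["text"])
```

The `stop=["USER:"]` argument keeps generation from running into a fabricated next user turn, matching the turn markers in the template.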