Upload README.md
README.md CHANGED
@@ -7,6 +7,7 @@ license: llama2
 model_creator: Jon Durbin
 model_name: Airoboros L2 70B 2.2
 model_type: llama
+prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n"
 quantized_by: TheBloke
 ---
 
@@ -42,6 +43,7 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 <!-- repositories-available start -->
 ## Repositories available
 
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GGUF)
 * [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2)
@@ -58,15 +60,8 @@ ASSISTANT:
 ```
 
 <!-- prompt-template end -->
-<!-- licensing start -->
-## Licensing
 
-The creator of the source model has listed its license as `llama2`, and this quantization has therefore used that same license.
 
-As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
-
-In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Jon Durbin's Airoboros L2 70B 2.2](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2).
-<!-- licensing end -->
 <!-- README_GPTQ.md-provided-files start -->
 ## Provided files and GPTQ parameters
 
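For reference, the `prompt_template` field added in this commit uses a `{prompt}` placeholder and literal `\n` escapes, matching the Airoboros 2.2 chat format shown in the README's prompt-template block. A minimal sketch of filling it in Python (the template string and model details come from the diff above; the helper function is purely illustrative):

```python
# Minimal sketch: fill the {prompt} placeholder of the prompt_template
# added in this commit. The helper name is illustrative, not part of the repo.
PROMPT_TEMPLATE = "A chat.\nUSER: {prompt}\nASSISTANT: \n"

def build_prompt(user_message: str) -> str:
    """Substitute the user's message into the Airoboros 2.2 chat format."""
    return PROMPT_TEMPLATE.format(prompt=user_message)

print(build_prompt("Summarise this commit in one sentence."), end="")
# A chat.
# USER: Summarise this commit in one sentence.
# ASSISTANT:
```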
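The commit also lists an AWQ repository alongside the existing GPTQ and GGUF options. As a rough, non-authoritative sketch of how the GPTQ repository linked above is commonly loaded (not taken from this diff; assumes `transformers` >= 4.33 with `optimum`, `auto-gptq` and `accelerate` installed):

```python
# Rough sketch, not from this commit: load the GPTQ repo linked above.
# Assumes transformers >= 4.33 plus optimum, auto-gptq and accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Airoboros-L2-70b-2.2-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" shards the quantised 70B weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Apply the same prompt template that this commit adds to the front matter.
prompt = "A chat.\nUSER: {prompt}\nASSISTANT: \n".format(
    prompt="Write a limerick about quantisation."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```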