Fix name typo (#1)
opened by Mikael110

README.md CHANGED
```diff
@@ -23,9 +23,9 @@ tags:
 </div>
 <!-- header end -->
 
-#
+# Mikael110's Llama2 7B Guanaco QLoRA GGML
 
-These files are GGML format model files for [
+These files are GGML format model files for [Mikael110's Llama2 7B Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-7b-guanaco-fp16).
 
 GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
 * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for story telling.
@@ -145,7 +145,7 @@ Thank you to all my generous patrons and donaters!
 
 <!-- footer end -->
 
-# Original model card:
+# Original model card: Mikael110's Llama2 7B Guanaco QLoRA
 
 This is a Llama-2 version of [Guanaco](https://huggingface.co/timdettmers/guanaco-7b). It was finetuned from the base [Llama-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) model using the official training scripts found in the [QLoRA repo](https://github.com/artidoro/qlora). I wanted it to be as faithful as possible and therefore changed nothing in the training script beyond the model it was pointing to. The model prompt is therefore also the same as the original Guanaco model.
```
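The original model card notes that the prompt format is the same as the original Guanaco model's. As a minimal sketch of what that means in practice (the `### Human:` / `### Assistant:` turn template is the widely used Guanaco convention and is assumed here, since the diff itself does not spell it out):

```python
def guanaco_prompt(user_message: str) -> str:
    """Build a single-turn Guanaco-style prompt.

    Guanaco models expect alternating "### Human:" and "### Assistant:"
    turns; generation continues after the trailing "### Assistant:".
    """
    return f"### Human: {user_message}\n### Assistant:"


print(guanaco_prompt("What is QLoRA?"))
```

The same template would be passed as the prompt string to whichever GGML runner (llama.cpp, KoboldCpp, etc.) loads the model.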