FPHam committed on
Commit
299e2ba
1 Parent(s): 0220be4

Update README.md

Files changed (1): README.md +11 -0
README.md CHANGED
@@ -12,6 +12,17 @@ tags:
 - gguf-my-repo
 ---
 
+<!-- header start -->
+<div style="display: flex; flex-direction: column; align-items: center;">
+</div>
+<div style="width: 100%;">
+<img src="https://huggingface.co/FPHam/OpenAutolycus-Mistral_7B/resolve/main/openautolycustitle.jpg" alt="Open Autolycus" style="width: 40%; min-width: 200px; display: block; margin: auto;">
+</div>
+<div style="display: flex; flex-direction: column; align-items: center;">
+<p><a href="https://ko-fi.com/Q5Q5MOB4M">Support me at Ko-fi</a></p>
+</div>
+<!-- header end -->
+
 # FPHam/Autolycus-Mistral_7B-Q6_K-GGUF
 This model was converted to GGUF format from [`FPHam/Autolycus-Mistral_7B`](https://huggingface.co/FPHam/Autolycus-Mistral_7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/FPHam/Autolycus-Mistral_7B) for more details on the model.