---
base_model: Monero/Manticore-13b-Chat-Pyg-Guanaco
tags:
- manticore
- llama-cpp
- llama
---

<h1 style="color: #FF0000;">!!! Archive of LLaMa-1-13B Model !!!</h1>

# May 27, 2023 - Monero/Manticore-13b-Chat-Pyg-Guanaco

v000000

This model was converted to GGUF format from [`Monero/Manticore-13b-Chat-Pyg-Guanaco`](https://huggingface.co/Monero/Manticore-13b-Chat-Pyg-Guanaco) using llama.cpp.
Refer to the [original model card](https://huggingface.co/Monero/Manticore-13b-Chat-Pyg-Guanaco) for more details on the model.

* Quants in repo: static Q5_K_M, static Q6_K, static Q8_0 (see the usage sketch below)
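
The GGUF files can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the quant filename and the prompt format shown are assumptions, so check the files in this repo and the original model card for the exact names and prompt template.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Manticore-13b-Chat-Pyg-Guanaco.Q5_K_M.gguf",  # assumed filename; use the actual GGUF file from this repo
    n_ctx=2048,        # LLaMA-1 13B context window
    n_gpu_layers=0,    # raise to offload layers if built with GPU support
)

# Prompt format is an assumption; see the original model card for the exact template.
output = llm(
    "USER: What is the capital of France?\nASSISTANT:",
    max_tokens=128,
    stop=["USER:"],
)
print(output["choices"][0]["text"])
```

The same files also work with other GGUF-aware runtimes such as the llama.cpp CLI.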


The base model is Manticore-13b-Chat-Pyg with the Guanaco 13B QLoRA from TimDettmers applied.