TheBloke committed
Commit 8ae2cf8
1 parent: 816b541

Initial GGML model commit

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 inference: false
 license: other
-model_creator: The-Face-Of-Goonery
+model_creator: Caleb Morgan
 model_link: https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16
 model_name: Chronos Beluga v2 13B
 model_type: llama
@@ -23,12 +23,12 @@ quantized_by: TheBloke
 <!-- header end -->

 # Chronos Beluga v2 13B - GGML
-- Model creator: [The-Face-Of-Goonery](https://huggingface.co/The-Face-Of-Goonery)
+- Model creator: [Caleb Morgan](https://huggingface.co/The-Face-Of-Goonery)
 - Original model: [Chronos Beluga v2 13B](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16)

 ## Description

-This repo contains GGML format model files for [The-Face-Of-Goonery's Chronos Beluga v2 13B](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16).
+This repo contains GGML format model files for [Caleb Morgan's Chronos Beluga v2 13B](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16).

 GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
@@ -42,7 +42,7 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger

 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGML)
-* [The-Face-Of-Goonery's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16)
+* [Caleb Morgan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16)

 ## Prompt template: Alpaca

@@ -149,7 +149,7 @@ Thank you to all my generous patrons and donaters!

 <!-- footer end -->

-# Original model card: The-Face-Of-Goonery's Chronos Beluga v2 13B
+# Original model card: Caleb Morgan's Chronos Beluga v2 13B

 merged 58% chronos v2 42% beluga 13b merge using LUNK(Large universal neural kombiner)
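The hunk context above references an "Alpaca" prompt template, but the template body itself falls outside the changed lines. As a point of reference, the widely used Alpaca instruction format (assumed wording, not quoted from this README) can be sketched as:

```python
# Common Alpaca-style instruction template (assumed wording; the exact
# text in the README is not shown in this diff).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Fill a user instruction into the Alpaca-style template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)
```

The model's completion is then generated after the trailing `### Response:` marker.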
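The original model card describes a 58% Chronos v2 / 42% Beluga 13B merge via LUNK. LUNK's internals are not documented here, but the basic idea of a fixed-ratio weight merge, a per-element weighted average of matching tensors, can be sketched as follows (an illustrative linear interpolation, not LUNK itself; `blend` and its weights are hypothetical names):

```python
def blend(a, b, wa=0.58, wb=0.42):
    """Element-wise weighted average of two same-shaped weight vectors.

    Illustrates a fixed-ratio model merge: each parameter of the result
    is 58% of model A's value plus 42% of model B's value.
    """
    assert len(a) == len(b), "tensors must have matching shapes"
    return [wa * x + wb * y for x, y in zip(a, b)]
```

In a real merge this averaging would be applied tensor-by-tensor across two checkpoints with identical architectures.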