jebcarter committed
Commit dd857b8
Parent: ea52a3a

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -12,7 +12,7 @@ tags:
 
 Presenting the FP16 files for Psyonic-Cetacean-20B! This is an experimental Llama2-based stack merge based on the models and recipe below:
 
-- [KoboldAI/PsyFighter-2-13b](https://huggingface.co/KoboldAI/Psyfighter-2-13B)
+- [KoboldAI/PsyFighter-2-13b](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2-GGUF)
 - [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
 
 ```yaml
@@ -21,13 +21,13 @@ Presenting the FP16 files for Psyonic-Cetacean-20B! This is an experimental Llam
   - model: Orca2flat
     layer_range: [0, 16]
 - sources:
-  - model: /KoboldAI/Psyfighter-2-13B (FP16 not yet available)
+  - model: LLaMA2-13B-Psyfighter2 (FP16 not yet available)
     layer_range: [8, 24]
 - sources:
   - model: Orca2flat
     layer_range: [17, 32]
 - sources:
-  - model: /KoboldAI/Psyfighter-2-13B (FP16 not yet available)
+  - model: LLaMA2-13B-Psyfighter2 (FP16 not yet available)
     layer_range: [25, 40]
 merge_method: passthrough
 dtype: float16
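
Since the README announces FP16 files for the resulting merge, a minimal sketch of loading them with Hugging Face Transformers may help orient readers. The repository id below is a placeholder for illustration only; it is not confirmed by this commit.

```python
# Minimal sketch: load the merged FP16 checkpoint and run a short generation.
# Assumption: the weights load like any Llama-2-family causal LM via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "jebcarter/Psyonic-Cetacean-20B"  # hypothetical repo id, substitute the real one

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # the release provides FP16 weights
    device_map="auto",
)

prompt = "Describe a whale dreaming in the deep."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```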