---
license: other
license_name: microsoft-research-license
license_link: LICENSE
language:
- en
---
# PsyOrca2-DARE-13b-GGUF
This is a public GGUF test release of PsyOrca2-DARE-13b. If the model is well received, the FP16 weights will be uploaded with more details.
The model details are below:
## Model
This is a [Llama 2](https://huggingface.co/meta-llama/Llama-2-13b-hf)-based model created by merging:
- [KoboldAI/PsyFighter-2-13b](https://huggingface.co/KoboldAI/Psyfighter-2-13B) (the FP16 weights are not yet public, but the merge config is)
- [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
The goal of this merge is to test the DARE merge algorithm and see how it behaves with these two models.
Mergekit config (inspired by Charles Goddard):
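For intuition, DARE works on the "task vector" (the delta between a fine-tuned model and its base): it randomly drops a fraction of the delta's parameters and rescales the survivors so the expected delta is preserved. A minimal single-model sketch with NumPy (`dare_merge` is an illustrative name, not mergekit's API, and the TIES sign-election step used by `dare_ties` is omitted here):

```python
import numpy as np

def dare_merge(base, finetuned, density, weight, rng):
    """Simplified DARE sketch: drop delta parameters at random,
    rescale the survivors by 1/density, and add the weighted
    result back onto the base weights."""
    delta = finetuned - base                      # task vector
    keep = rng.random(delta.shape) < density      # keep ~density fraction
    delta = np.where(keep, delta, 0.0) / density  # rescale so E[delta] is unchanged
    return base + weight * delta
```

With `density: 0.30` and `weight: 0.05` as in the config below, only ~30% of Orca-2's delta survives, scaled up by 1/0.3 and then blended in at 5% weight.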
```yml
models:
  - model: KoboldAI/Psyfighter-2-13B
    parameters:
      weight: 1
      density: 1
  - model: microsoft/Orca-2-13b
    parameters:
      weight: 0.05
      density: 0.30
merge_method: dare_ties
base_model: meta-llama/Llama-2-13b-hf
parameters:
  int8_mask: true
dtype: bfloat16
```
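Assuming mergekit is installed, a config like the one above can be run with its `mergekit-yaml` entry point (the file and output paths here are placeholders):

```shell
# Illustrative invocation; paths are placeholders
pip install mergekit
mergekit-yaml ./dare-config.yml ./PsyOrca2-DARE-13b --cuda
```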
## Usage
The intended prompt format was not specified, but both the Alpaca format and the ChatML format used by Orca-2 should be compatible.
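For reference, the two formats mentioned above can be built as follows (helper names are illustrative; the templates are the standard Alpaca and ChatML layouts):

```python
def alpaca_prompt(instruction: str) -> str:
    """Standard Alpaca template (variant without an input field)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def chatml_prompt(system: str, user: str) -> str:
    """ChatML template, as used by Orca-2."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```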
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, in addition to those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the linked repositories of the merged models for details.