---
license: llama2
library_name: transformers
pipeline_tag: text-generation
tags:
  - llama
---

# TerraMix_L2_13B_16K

Link to GGUF version: GGUF

Thanks to everyone who fine-tuned the base Llama 2 model, made (Q)LoRAs, and created the merge scripts: ties-merge, BlockMerge_Gradient, zaraki-tools.

## Model details

This is an experiment in merging models with a large context length. Use these RoPE scaling settings:

```
--rope-freq-base 10000 --rope-freq-scale 0.25 -c 16384 (llama.cpp)
--ropeconfig 0.25 10000 --contextsize 16384 (koboldcpp)
```
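
If you run the model through transformers instead of llama.cpp/koboldcpp, the same stretch can be expressed as linear RoPE scaling with a factor of 4 (1 / 0.25). A minimal sketch, assuming the model is available under the placeholder id `TerraMix_L2_13B_16K`:

```python
# Hedged sketch: equivalent RoPE settings for transformers.
# "TerraMix_L2_13B_16K" is a placeholder repo id / local path.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "TerraMix_L2_13B_16K"

config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {"type": "linear", "factor": 4.0}  # rope-freq-scale 0.25 -> factor 4
config.max_position_embeddings = 16384                   # matches -c 16384

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, torch_dtype=torch.float16, device_map="auto"
)
```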

You can use various instruct formats:

Alpaca instruct format (Recommended):

```
### Instruction:
(your instruct prompt is here)
### Response:
```

Vicuna 1.1 instruct format:

```
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```

Metharme instruct format:

```
<|system|> (your instruct prompt)
<|user|> (user's reply)<|model|> (for model's output)
```
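
For reference, here is a short sketch of building the recommended Alpaca-style prompt and generating with it, reusing the `tokenizer` and `model` from the loading sketch above (the helper name and sampling settings are illustrative, not part of this card):

```python
# Hedged sketch: Alpaca-style prompting; build_alpaca_prompt is a made-up helper.
def build_alpaca_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n### Response:\n"

prompt = build_alpaca_prompt("Write a short scene set on an abandoned space station.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```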

Models used for the merge:

Part1:

Airoboros L2 13B 2.1 + LLAMA2 13B - Holodeck merged with Creative and Reasoning Airoboros LMoE 13B 2.1

Part2:

- Chronos 13B V2 merged with Kimiko-v2-13B
- Nous-Hermes-Llama2-13b merged with limarp-llama2 and limarp-llama2-v2
- Synthia-13B merged with BluemoonRP-L2-13B and LLama-2-13b-chat-erp-lora-mk2
- WizardLM-1.0-Uncensored-Llama2-13b merged with Llama-2-13B-Storywriter-LORA

Part3:

Speechless Llama2 13B + Redmond Puffin 13B

Part4:

Tsukasa 13B 16K (repo is deleted) + EverythingLM-13b-V2-16k

Part5:

TerraMix_L2_13B (base) was merged with PIPPA ShareGPT Subset QLoRa 13B

Part6:

Three parts were merged into one, then TsuryLM-L2-16K was merged with TerraMix_L2_13B (base).
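
The merge scripts credited above handle the details (TIES merging, per-block gradients), but the core operation is blending checkpoint tensors. Below is a rough, assumed illustration of that idea with plain transformers/torch; it is not the actual recipe or scripts used for this mix, and the repo ids are placeholders:

```python
# Hedged sketch: naive linear blend of two Llama 2 13B checkpoints.
# "model-a" / "model-b" are placeholder ids; the real mix used ties-merge and
# BlockMerge_Gradient with per-layer ratios rather than one global alpha.
import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained("model-a", torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained("model-b", torch_dtype=torch.float16)

alpha = 0.5  # global blend ratio; gradient merges vary this per layer block
state_a, state_b = model_a.state_dict(), model_b.state_dict()
merged = {name: alpha * state_a[name] + (1.0 - alpha) * state_b[name] for name in state_a}

model_a.load_state_dict(merged)
model_a.save_pretrained("merged-model")
```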

The model is intended for creative purposes (roleplay). It can regularly break formatting and sometimes has a poor grasp of small details in the situation at hand.

At the same time, the model has almost no alignment and generates direct output; it is moderate at prose and good at internet RP style.

## Limitations and risks

Llama 2 and its derivatives (finetunes) are licensed under the Llama 2 Community License; individual finetunes and (Q)LoRAs carry their own licenses depending on the datasets used for fine-tuning or for training the (Quantized) Low-Rank Adaptations. This mix can generate heavily biased output that is not suitable for minors or a general audience.