---
license: llama2
datasets:
  - totally-not-an-llm/EverythingLM-data-V2
---

# EverythingLM-13b-16k

Introducing EverythingLM, a Llama-2-based, general-purpose 13b model with 16k context thanks to LlongMa. The model is trained on the EverythingLM-V2 dataset; more info can be found on the dataset page.

The model is completely uncensored.

### GGML quants:

soon

Make sure to use the correct RoPE scaling settings: `-c 16384 --rope-freq-base 10000 --rope-freq-scale 0.25`
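As a rough sketch, the same RoPE settings can be passed at load time through the llama-cpp-python bindings; the GGML filename below is hypothetical, since the quants are not released yet.

```python
from llama_cpp import Llama

# Hypothetical filename; the GGML quants are not published yet.
llm = Llama(
    model_path="everythinglm-13b-16k.ggmlv3.q4_K_M.bin",
    n_ctx=16384,           # matches -c 16384
    rope_freq_base=10000,  # matches --rope-freq-base 10000
    rope_freq_scale=0.25,  # matches --rope-freq-scale 0.25
)

prompt = (
    "You are a helpful AI assistant.\n\n"
    "USER: Summarize RoPE scaling in one sentence.\n"
    "ASSISTANT:"
)
print(llm(prompt, max_tokens=128, stop=["USER:"])["choices"][0]["text"])
```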

### GPTQ quants:

soon

### Notable features:

- Automatically triggered CoT reasoning.
- Verbose and detailed replies.
- Creative stories.
- Better prompt understanding.

### Prompt format:

It is a modified Vicuna format, the same as used in many of ehartford's models.

```
You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
```
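Since the format is a plain string rather than a chat template, it can be assembled directly for generation with transformers. This is a minimal sketch; the repo id is an assumption based on this card's title.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; adjust to the actual model page.
MODEL_ID = "totally-not-an-llm/EverythingLM-13b-16k"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")

# Modified Vicuna format: system line, then USER:/ASSISTANT: turns.
prompt = (
    "You are a helpful AI assistant.\n\n"
    "USER: Write a creative short story about a lighthouse keeper.\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```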

Training took about 2.5 hours using QLoRA on 1x A100, so this model can be recreated for about $4. The QLoRA adapter can be found here: https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V2-peft.
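To rebuild the full model from that adapter, the PEFT weights can be applied on top of a base checkpoint and merged. The base id below is an assumption: since the card credits LlongMa for the 16k context, the real base is a LlongMa-extended Llama-2-13b rather than vanilla Llama-2.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Assumed base checkpoint; substitute the LlongMa 16k Llama-2-13b actually used.
BASE_ID = "meta-llama/Llama-2-13b-hf"

base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.float16, device_map="auto")

# Apply the published QLoRA adapter, then fold it into the base weights.
model = PeftModel.from_pretrained(base, "totally-not-an-llm/EverythingLM-13b-V2-peft")
model = model.merge_and_unload()
model.save_pretrained("everythinglm-13b-16k-merged")
```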

### Future plans:

- Native finetune.
- Other model sizes.
- Test some model merges using this model (specifically with OpenOrca and Platypus models).