---
language:
  - en
tags:
  - mistral
  - sft
license:
  - mit
datasets:
  - LDJnr/LessWrong-Amplify-Instruct
  - LDJnr/Pure-Dove
  - LDJnr/Verified-Camel
---

## Nous-Capybara-7B V1.9 Preview (V2 COMING SOON)

This is the recommended version of Capybara to use until V2 releases.

It leverages novel de-alignment techniques, enhanced quality curation of the training data, and a significantly better foundation model!

This version of Capybara is trained on Mistral instead of Llama and uses an improved dataset distribution, so it should be even better at avoiding censorship.

The Capybara series is built around a novel synthesis method, Amplify-Instruct, which aims to synergistically combine the data seeds and techniques used for SOTA models such as Airoboros, Evol-Instruct, Orca, Vicuna, Know_Logic, Lamini, FLASK, and others into one lean, holistically formed dataset and model. The seed instructions used to start the synthesized conversations are largely drawn from highly acclaimed datasets like Airoboros, Know_Logic, EverythingLM, and GPTeacher, along with entirely new seed instructions derived from posts on the website LessWrong, and are supplemented with certain multi-turn datasets such as Dove (a successor to Puffin).

The entire dataset is under 20K training examples, mostly comprised of newly synthesized tokens never used for model training until now!
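Since the data is multi-turn conversational, prompts at inference time need to encode the dialogue history. The helper below is a minimal sketch of one way to do that; this preview card does not specify the model's actual prompt template, so the plain `USER:`/`ASSISTANT:` format here is an assumption to verify before use.

```python
# Hypothetical multi-turn prompt builder. The USER:/ASSISTANT: template
# is an assumption -- confirm the model's actual prompt format before
# running inference with it.
def build_prompt(turns):
    """turns: list of (role, text) pairs, where role is "user" or "assistant"."""
    parts = []
    for role, text in turns:
        tag = "USER" if role == "user" else "ASSISTANT"
        parts.append(f"{tag}: {text}")
    # Leave an open ASSISTANT: slot for the model to complete.
    parts.append("ASSISTANT:")
    return "\n".join(parts)

prompt = build_prompt([
    ("user", "What is Amplify-Instruct?"),
])
print(prompt)
```

The same helper extends naturally to longer histories by appending each completed exchange to `turns` before the next call.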