---
language:
- eng
tags:
- mistral
- sft
license:
- mit
datasets:
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
---

## **Nous-Capybara-7B V1.9 Preview (V2 Coming around Christmas!)**

**This is the recommended version of Capybara to use until V2 releases.**

*Leverages novel de-alignment techniques, enhanced quality curation for training, and a significantly better foundation model!*

This is a version of Capybara trained on Mistral instead of Llama, using an improved dataset distribution, and it should be even better at avoiding censorship.

The Capybara series is built around a novel synthesis method, Amplify-Instruct, with the goal of combining the data seeds and techniques used for SOTA models such as Airoboros, Evol-Instruct, Orca, Vicuna, Know_Logic, Lamini, FLASK, and others into one lean, holistically formed dataset and model. The seed instructions used to start the synthesized conversations are largely drawn from highly acclaimed datasets like Airoboros, Know_Logic, EverythingLM, and GPTeacher, along with entirely new seed instructions derived from posts on the website LessWrong, and are supplemented with certain multi-turn datasets like Dove (a successor to Puffin).

The entire dataset is contained in under 20K training examples, mostly composed of newly synthesized tokens that have never been used for model training until now!
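
Since this repository hosts GGUF builds, below is a minimal inference sketch using llama-cpp-python. The quant filename, context size, sampling settings, and the `USER:`/`ASSISTANT:` prompt format are assumptions, not confirmed details of this release; check the files and prompt format documented in the repository before use.

```python
# Minimal sketch of running a GGUF build of Nous-Capybara-7B V1.9 with llama-cpp-python.
# The filename, n_ctx, sampling settings, and USER:/ASSISTANT: prompt format below are
# assumptions -- adjust them to match the actual files and prompt format in this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="nous-capybara-7b-v1.9.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=4096,  # context window; adjust to the model's actual limit
)

prompt = "USER: What is the Amplify-Instruct synthesis method?\nASSISTANT:"

output = llm(
    prompt,
    max_tokens=256,   # cap on generated tokens
    temperature=0.7,  # sampling temperature
    stop=["USER:"],   # stop before the model begins a new user turn
)

print(output["choices"][0]["text"])
```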