Nous-Capybara-7B V1.9 Preview (V2 COMING SOON)

This is the recommended version of Capybara to use until V2 releases.

It leverages novel de-alignment techniques, enhanced quality curation of the training data, and a significantly better foundation model.

This version of Capybara is trained on Mistral instead of Llama, uses an improved dataset distribution, and should be even better at avoiding censorship.

The Capybara series is built with a novel synthesis method, Amplify-Instruct, which aims to synergistically combine the data seeds and techniques used for SOTA models such as Airoboros, Evol-Instruct, Orca, Vicuna, Know_Logic, Lamini, and FLASK into one lean, holistically formed dataset and model. The seed instructions used to start the synthesized conversations are largely drawn from highly acclaimed datasets like Airoboros, Know_Logic, EverythingLM, and GPTeacher, along with entirely new seed instructions derived from posts on the website LessWrong, supplemented with multi-turn datasets such as Dove (a successor to Puffin).
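The card does not document how Amplify-Instruct is implemented, but the core idea it describes, growing seed instructions into multi-turn synthesized conversations, can be sketched as follows. All function names below are hypothetical stand-ins, not the actual pipeline:

```python
# Minimal sketch of an Amplify-Instruct-style synthesis loop (hypothetical;
# the real Amplify-Instruct pipeline is not published in this card).
# A seed instruction is expanded into a multi-turn conversation by repeatedly
# querying a teacher model and then generating a deeper follow-up question.

def amplify_instruct(seed_instruction, teacher, follow_up, turns=3):
    """Expand one seed instruction into a multi-turn training example."""
    conversation = []
    prompt = seed_instruction
    for _ in range(turns):
        answer = teacher(prompt)              # teacher model answers the turn
        conversation.append({"input": prompt, "output": answer})
        prompt = follow_up(conversation)      # synthesize the next question
    return conversation

# Stub teacher / follow-up functions so the sketch runs without any API:
toy_teacher = lambda p: f"Answer to: {p}"
toy_follow_up = lambda conv: f"Elaborate on: {conv[-1]['output']}"

example = amplify_instruct("What is a capybara?", toy_teacher, toy_follow_up)
```

In a real pipeline the teacher would be a strong LLM and the follow-up generator would steer the conversation toward depth and diversity; the seeds would come from the datasets named above.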

The dataset is contained entirely within fewer than 20K training examples, mostly comprised of newly synthesized tokens never before used for model training.
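Capybara model cards typically document a plain USER/ASSISTANT prompt style. Assuming that format applies here (verify against the upstream NousResearch card before use), a minimal prompt-formatting helper might look like:

```python
# Hypothetical helper for the USER/ASSISTANT prompt style commonly shown in
# Capybara model cards; confirm the exact template in the upstream card.
def format_prompt(user_message: str) -> str:
    return f"USER: {user_message}\nASSISTANT: "

prompt = format_prompt("What is a capybara?")
# prompt == "USER: What is a capybara?\nASSISTANT: "
```

The resulting string would be passed to whatever inference backend loads the quantized weights.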
