---
license: apache-2.0
base_model: Qwen/Qwen2-1.5B
tags:
  - generated_from_trainer
  - axolotl
datasets:
  - cognitivecomputations/Dolphin-2.9
  - teknium/OpenHermes-2.5
  - m-a-p/CodeFeedback-Filtered-Instruction
  - cognitivecomputations/dolphin-coder
  - cognitivecomputations/samantha-data
  - microsoft/orca-math-word-problems-200k
  - Locutusque/function-calling-chatml
  - internlm/Agent-FLAN
---

# Dolphin 2.9.3 Qwen2 1.5B 🐬

Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations

Discord: https://discord.gg/cognitivecomputations

Our appreciation goes to the sponsors of Dolphin 2.9.3.

This model is based on Qwen2-1.5B and is governed by the Apache-2.0 license.

The base model has a 128k context window; the full-weight fine-tuning was done with a 16k sequence length.

Because fine-tuning smaller models on datasets created by and for larger models is difficult, we removed the coding, function-calling, and systemchat-multilingual datasets when tuning this model.

Example (ChatML prompt format):

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
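Below is a minimal sketch of running the model with Hugging Face transformers; the tokenizer's chat template renders the ChatML format shown above. The repo ID and generation settings here are assumptions, not taken from this card.

```python
# Minimal sketch (assumed repo ID and settings; adjust as needed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.9.3-qwen2-1.5b"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Summarize what a context window is."},
]

# apply_chat_template renders the ChatML prompt shown above, including
# the trailing <|im_start|>assistant generation prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```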

Dolphin-2.9.3 has a variety of instruction and conversational skills.

Dolphin is uncensored. We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service: it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
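As a sketch of what an alignment layer in front of the model could look like, here is a deliberately minimal, hypothetical example; the blocklist and `is_disallowed` check are placeholders, and a real service would use a dedicated moderation model or policy engine instead.

```python
# Hypothetical, deliberately minimal "alignment layer" sketch.
# BLOCKED_TERMS and is_disallowed() are placeholders, not a real policy.
BLOCKED_TERMS = {"build a weapon", "steal credentials"}  # illustrative only

def is_disallowed(user_message: str) -> bool:
    text = user_message.lower()
    return any(term in text for term in BLOCKED_TERMS)

def guarded_chat(user_message: str, generate_fn) -> str:
    """Refuse disallowed requests before they ever reach the model."""
    if is_disallowed(user_message):
        return "I can't help with that request."
    return generate_fn(user_message)

# Example: guarded_chat("hello", lambda msg: "hi there") -> "hi there"
```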

Dolphin is licensed under Apache-2.0. We grant permission for any use, including commercial, that complies with that license. Dolphin was trained on data generated from GPT-4, among other models.

Evals: