
Credit for the model card's description goes to ddh0, mergekit, and NeverSleep.

Inspired by ddh0/Starling-LM-10.7B-beta and ddh0/Mistral-10.7B-Instruct-v0.2

Noromaid-10.7B-0.4-DPO

This is Noromaid-10.7B-0.4-DPO, a depth-upscaled version of NeverSleep/Noromaid-7B-0.4-DPO.

This model is intended to be used as a basis for further fine-tuning, or as a drop-in upgrade from the original 7 billion parameter model.

Paper detailing how depth up-scaling works: SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling.

This is a merge of pre-trained language models created using mergekit.

The prompt format is the same as NeverSleep/Noromaid-7B-0.4-DPO.

Prompt format: ChatML

```
<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{input}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
```
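As a minimal sketch of how these turns are assembled in practice (assuming plain string formatting rather than a tokenizer's built-in chat template; the helper name is illustrative):

```python
def build_chatml_prompt(sysprompt: str, user_input: str) -> str:
    """Assemble a ChatML prompt string.

    The final assistant tag is left open so the model
    generates the assistant turn as its completion.
    """
    return (
        f"<|im_start|>system\n{sysprompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_input}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are Noromaid.", "Hello!")
```

Note that `<|im_start|>` and `<|im_end|>` are special tokens in the model's vocabulary, so the assembled string should be tokenized with special tokens enabled.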

Merge Details

Merge Method

This model was merged using the passthrough merge method.

Models Merged

The following models were included in the merge:

  • /Users/jsarnecki/opt/Workspace/NeverSleep-Noromaid-0.4-DPO/NeverSleep-Noromaid-7B-0.4-DPO

Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: /Users/jsarnecki/opt/Workspace/NeverSleep-Noromaid-0.4-DPO/NeverSleep-Noromaid-7B-0.4-DPO
- sources:
  - layer_range: [8, 32]
    model: /Users/jsarnecki/opt/Workspace/NeverSleep-Noromaid-0.4-DPO/NeverSleep-Noromaid-7B-0.4-DPO
```
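The passthrough method simply stacks the two slices, so the upscaled depth follows from quick arithmetic (assuming the `layer_range` bounds are half-open, i.e. `[0, 24]` selects layers 0 through 23, which matches how the SOLAR-style upscale of a 32-layer base yields 48 layers):

```python
# Each slice is a half-open range of decoder layers from the 32-layer 7B base.
slices = [(0, 24), (8, 32)]

# Passthrough concatenates the slices: 24 + 24 = 48 layers total.
total_layers = sum(end - start for start, end in slices)

# Layers 8-23 appear in both slices, so 16 layers are duplicated.
duplicated = range(8, 24)

print(total_layers, len(duplicated))
```

The duplicated middle layers are what grow the 7B base to roughly 10.7B parameters; fine-tuning afterwards (as the card suggests) lets the duplicated layers differentiate.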

license: cc-by-nc-4.0



This model is a collab between IkariDev and Undi!

Description

This repo contains fp16 files of Noromaid-7b-v0.4-DPO.

FP16 - by IkariDev and Undi

GGUF - by IkariDev and Undi

Ratings:

Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here!

No ratings yet!

If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. Our DC names are "ikaridev" and "undi".

Prompt format: ChatML

```
<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{input}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
```

Training data used:

  • no_robots dataset: lets the model behave in a more human way and enhances the output.
  • [Aesir Private RP dataset]: new data from a never-before-used dataset; adds fresh data with no LimaRP spam, this is 100% new. Thanks to the MinervaAI Team and, in particular, Gryphe for letting us use it!
  • [Another private Aesir dataset]
  • [Another private Aesir dataset]
  • limarp

DPO training data used:

This is a full finetune.

Others

Undi: If you want to support me, you can do so here.

IkariDev: Visit my retro/neocities style website please kek
