EVA LLaMA 3.33 70B v0.1

An RP/storywriting specialist model: a full-parameter finetune of Llama-3.3-70B-Instruct on a mixture of synthetic and natural data.
It uses the Celeste 70B 0.1 data mixture, greatly expanded to improve the versatility, creativity, and "flavor" of the resulting model.
This model was built with Llama by Meta.

Version notes for v0.1

A DELLA linear merge of v0.0 with an unreleased checkpoint from a different run. Compared to v0.0: reduced overfitting, better long-context comprehension and recall, less repetition, and more stability.

Prompt format is Llama 3 Instruct.
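
For reference, the standard Llama 3 Instruct template wraps each turn in header tokens:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>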


Recommended sampler values (applied in the example after the list):

  • Temperature: 1
  • Min-P: 0.05
  • Repetition Penalty: 1.03
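
A minimal sketch of passing these values through an OpenAI-compatible endpoint. It assumes a backend such as vLLM or TabbyAPI that accepts min_p and repetition_penalty as extra sampling parameters; the base_url and model id are placeholders for your own deployment.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # placeholder endpoint

response = client.chat.completions.create(
    model="EVA-LLaMA-3.33-70B-v0.1",  # placeholder: whatever id your server exposes
    messages=[{"role": "user", "content": "Write the opening scene of a heist story."}],
    temperature=1.0,
    extra_body={
        # backend-specific samplers, passed through verbatim
        "min_p": 0.05,
        "repetition_penalty": 1.03,
    },
)
print(response.choices[0].message.content)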

Recommended SillyTavern preset (via Virt-io):

Training data:

  • Celeste 70B 0.1 data mixture, minus the Opus Instruct subset. See that model's card for details.
  • Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.
  • A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe.
  • A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe.
  • Synthstruct and SynthRP datasets by Epiculous.
  • A subset of Dolphin-2.9.3, including a filtered version of not_samantha and a small subset of systemchat.

The model was created by Kearm, Auri, and Cahvay.

Special thanks:

  • to Cahvay for his work on dataset filtering.
  • to Gryphe, Lemmy, Kalomaze, Nopm, Epiculous and CognitiveComputations for the data,
  • and to Allura-org for support, feedback, beta-testing, and quality control of EVA models.

Licensing

Llama-3.3-70B-Instruct by Meta is licensed under the Llama 3.3 Community License Agreement (hereafter the L3.3 license) and is subject to the Acceptable Use Policy for Llama Materials.
This derivative is free for personal, research, and commercial use under the terms of the L3.3 license, with one extra clause:
- Infermatic Inc and any of its employees or paid associates cannot utilize, distribute, download, or otherwise make use of EVA models for any purpose.


This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear DELLA (della_linear) merge method, with meta-llama/Llama-3.1-70B as the base.
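
A much-simplified sketch of the idea behind della_linear, assuming flattened 1-D parameter vectors (this illustrates the technique, not mergekit's exact implementation): each finetune's task vector is stochastically pruned, with keep-probabilities that grow with parameter magnitude (density sets the average keep rate, epsilon the spread), survivors are rescaled and scaled by lambda, and the results are combined as a weighted linear sum on top of the base.

import numpy as np

def della_linear_sketch(base, finetunes, weights, densities, lambdas, epsilons):
    # Illustrative only; operates on flattened 1-D float vectors.
    merged = np.zeros_like(base)
    for ft, w, d, lam, eps in zip(finetunes, weights, densities, lambdas, epsilons):
        delta = ft - base                                      # task vector
        rank = np.abs(delta).argsort().argsort()               # magnitude rank, 0 = smallest
        frac = rank / max(delta.size - 1, 1)                   # rank as a fraction in [0, 1]
        p_keep = np.clip(d - eps / 2 + eps * frac, 0.0, 1.0)   # larger deltas kept more often
        mask = np.random.rand(delta.size) < p_keep
        pruned = np.where(mask, delta / np.maximum(p_keep, 1e-8), 0.0)  # rescale survivors
        merged += w * lam * pruned                             # weighted linear combination
    return base + merged

With the configuration below, this would be called with weights [0.3, 0.7], densities [0.6, 0.45], lambdas [1.1, 1.1], and epsilons [0.35, 0.4]; normalize: true additionally divides by the weight sum, a no-op here since the weights already sum to 1.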

Models Merged

The following models were included in the merge:

  • EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
  • EVA-UNIT-01/LLaMA-EVA-3.33-70B-v0.0

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
    parameters:
      density: 0.6
      weight: 0.3
      lambda: 1.1
      epsilon: 0.35
  - model: EVA-UNIT-01/LLaMA-EVA-3.33-70B-v0.0
    parameters:
      density: 0.45
      weight: 0.7
      lambda: 1.1
      epsilon: 0.4
merge_method: della_linear
base_model: meta-llama/Llama-3.1-70B
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
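
Assuming mergekit is installed (pip install mergekit), a config like this is typically materialized with the mergekit-yaml entry point; the config and output paths below are placeholders:

mergekit-yaml ./eva-llama-3.33-v0.1.yaml ./EVA-LLaMA-3.33-70B-v0.1 --cuda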