EXL2 quant (measurement.json in the main branch); check this repo's revisions for the individual quants.


A finetune on top of the original Francois-PE model that applies KTO to improve coherency and prose quality. The model aims for short and sweet prose.
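
For reference, KTO (Kahneman-Tversky Optimization; Ethayarajh et al., 2024) trains on unpaired feedback: each completion is labeled only desirable or undesirable rather than ranked against a paired alternative. A sketch of the objective as given in the KTO paper, where $\lambda_U$ appears to correspond to the kto_undesirable_weight: 1.0 set in the config below:

$$
r_\theta(x, y) = \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}, \qquad
v(x, y) =
\begin{cases}
\lambda_D \, \sigma\big(\beta (r_\theta(x, y) - z_0)\big) & \text{if } y \text{ is desirable} \\
\lambda_U \, \sigma\big(\beta (z_0 - r_\theta(x, y))\big) & \text{if } y \text{ is undesirable}
\end{cases}
$$

$$
\mathcal{L}_{\mathrm{KTO}}(\pi_\theta; \pi_{\mathrm{ref}}) = \mathbb{E}_{(x, y) \sim D}\big[\lambda_y - v(x, y)\big]
$$

with $z_0$ a batch-level estimate of $\mathrm{KL}(\pi_\theta \| \pi_{\mathrm{ref}})$ acting as the reference point.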

Quants

GGUF: https://huggingface.co/Delta-Vector/Francois-Huali-12B-gguf

EXL2: https://huggingface.co/Delta-Vector/Francois-Huali-12B-exl2
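
The EXL2 repo stores each quant in its own revision (see the note at the top of this card), so pass a revision when downloading. A minimal sketch with huggingface_hub; the branch name "4.0bpw" here is purely hypothetical, so check the repo's branch list for the real ones:

# Download one EXL2 quant revision to a local folder.
# NOTE: "4.0bpw" is a guessed branch name for illustration only;
# list the repo's branches on the Hub to see what actually exists.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Delta-Vector/Francois-Huali-12B-exl2",
    revision="4.0bpw",  # hypothetical revision name
)
print(local_dir)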

Prompting

The model has been tuned with ChatML formatting. A typical input would look like this (a minimal generation sketch follows the template):

"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""

System Prompting

I would highly recommend using either Euryale's system prompt or the EVA system prompt with the model.

See Sao10k's Euryale System Prompt
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.
<Guidelines>
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>

Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.

Thanks to Po
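
Frontends like SillyTavern fill the {{char}}/{{user}} macros automatically; if you call the model directly, substitute them yourself before placing the prompt in the system turn. A rough sketch (the character and user names below are placeholders for illustration):

# Substitute the {{char}}/{{user}} macros by hand when not using a frontend.
# system_prompt holds the full Euryale prompt from this section; "Huali"
# and "Anon" are placeholder names.
system_prompt = """Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.
<Guidelines>
...full guidelines and forbidden lists as above...
</Forbidden>"""

def fill_macros(template: str, char: str, user: str) -> str:
    return template.replace("{{char}}", char).replace("{{user}}", user)

messages = [
    {"role": "system", "content": fill_macros(system_prompt, "Huali", "Anon")},
    {"role": "user", "content": "Hi there!"},
]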

Axolotl config

See axolotl config

Axolotl version: 0.5.0

base_model: Delta-Vector_Francois-PE-12B

load_in_8bit: false
load_in_4bit: false
strict: false

rl: kto
kto_undesirable_weight: 1.0

#datasets:
#  - ds_type: json
#    data_files:
#      - NewEden/Ohashi-accepted-Hermes-rejected
#    split: train
#    type: chatml.argilla
datasets:
  - path: NewEden/Opus-accepted-hermes-rejected-shuffled
    split: train
    type: chatml.argilla
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: ./francois-PE-kto-r1

remove_unused_columns: false

adapter: lora
lora_model_dir:

sequence_len: 8192
pad_to_sequence_len: false

lora_r: 64
lora_alpha: 32
lora_dropout: 0.0
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

wandb_project: KTO-NeMo
wandb_entity: 
wandb_watch:
wandb_name: Ohashi-accepted-hermes-rejected-r1
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: constant_with_warmup
learning_rate: 1e-6
max_grad_norm: 0.01

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 25
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:  /workspace/axolotl/deepspeed_configs/zero3_bf16_cpuoffload_params.json
weight_decay: 0.0
fsdp:
fsdp_config:
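
The datasets block above points KTO at a set of accepted (Opus) and rejected (Hermes) completions, per the dataset's name. To sanity-check the fields the chatml.argilla loader will see, you can pull the dataset and inspect a row; the repo path and split come straight from the config:

# Inspect the KTO preference data referenced in the config above.
from datasets import load_dataset

ds = load_dataset("NewEden/Opus-accepted-hermes-rejected-shuffled", split="train")
print(ds.column_names)  # fields the chatml.argilla type maps from
print(ds[0])            # one accepted/rejected example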


Credits

Thank you to Lucy Knada, Intervitens, Cgato, Kubernetes Bad, and the rest of Anthracite.

Training

The training was done for 1 epoch. We used 4 x RTX 3090 GPUs graciously provided by Intervitens for the fine-tuning of the model.

Built with Axolotl

