## Tips

I haven't done any rigorous testing of this model, but be aware that it can become extremely unhinged depending on your character card.

SillyTavern presets are included.
# Nina-7B
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged with the task arithmetic merge method, using Test157t/Mika-Longtext-7b as the base model.
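For context, task arithmetic builds the merged weights by taking each model's task vector (its weights minus the base weights), scaling it by the configured weight, summing the scaled vectors, and adding the result back onto the base. The snippet below is a minimal per-tensor sketch of that idea, not mergekit's actual implementation; in particular, the handling of `normalize` (dividing the summed task vectors by the total weight) is an assumption.

```python
import torch

def task_arithmetic_merge(base: dict, finetunes: list[tuple[dict, float]],
                          normalize: bool = True) -> dict:
    """Toy task arithmetic: merged = base + sum(w_i * (model_i - base))."""
    merged = {}
    total_weight = sum(w for _, w in finetunes)
    for name, base_tensor in base.items():
        # Task vector for each fine-tune, scaled by its merge weight.
        delta = sum(w * (sd[name].float() - base_tensor.float()) for sd, w in finetunes)
        if normalize and total_weight != 0:
            # Assumed reading of `normalize: true`: rescale by the total weight.
            delta = delta / total_weight
        merged[name] = (base_tensor.float() + delta).to(base_tensor.dtype)
    return merged
```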
### Models Merged
The following models were included in the merge:
- Mergekit/SmartyPants-Cerebrum-FC
- Nitral-AI/Mika-Longtext-7b
- tavtav/eros-7b-test
- ChaoticNeutrals/BuRP_7B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Test157t/Mika-Longtext-7b
    parameters:
      weight: 1.0
  - model: ChaoticNeutrals/BuRP_7B
    parameters:
      weight: 0.75
  - model: Mergekit/SmartyPants-Cerebrum-FC
    parameters:
      weight: 0.70
  - model: tavtav/eros-7b-test
    parameters:
      weight: 0.20
merge_method: task_arithmetic
base_model: Test157t/Mika-Longtext-7b
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
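If you want to try the merge locally, a standard transformers loading snippet like the one below should work. The repository id is a placeholder for wherever the merged weights are hosted, and the prompt template and sampling settings are illustrative only; use the included SillyTavern presets for the intended setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Nina-7B"  # placeholder: point this at the actual merged repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the merge itself was produced in float16
    device_map="auto",
)

# Prompt format is illustrative; see the included SillyTavern presets for the intended template.
prompt = "Introduce yourself in character."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```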