# Kenosis
## Overview
**Kenosis** is an experimental language-model fine-tune developed in March 2024 to synthesize and analyze complex narratives within the realms of continental philosophy, conspiracy, politics, and general esoterica, and to do so with excellent prose. It is the fifth iteration in the disinfo.zone dataset series, fine-tuned from the `mistral-ft-optimized-1218` base model and merged with `yam-peleg_Experiment26-7B` (a top leaderboard model at the time). Built on a 7B-parameter Mistral architecture, the model is specifically designed to emulate and deconstruct writing styles pertinent to its target domains without any slop.
This is not your regular LLM.
## Key Features
- Model Size: 7 billion parameters.
- Core Focus: Continental philosophy, conspiracy theories, and politics, rendered in exquisite, human-like prose.
- Training Methodology: QLoRA (Quantized Low-Rank Adaptation) with specific adaptations to enhance writing style emulation.
- Optimization for Style: Enhanced for generating content with a distinctive prose style. This model does not sound like other LLMs, and if you use it like other LLMs (answering riddles, etc.), it will perform poorly or even outright disagree with or disobey you. Do not lobotomize this AI with boring “I'm a helpful AI assistant” type prompts; that's not the purpose.
## Training Data
The training dataset for `kenosis` remains (unfortunately) confidential, due to our adherence to stringent (and harmful) copyright rules. It is pertinent to note, however, that the data is comprehensive, covering a specific spectrum of perspectives and styles within the designated topics. There may be clues at files.disinfo.zone for the curious.
## Training Details
- Training Environment: Utilized `text-generation-webui` on an NVIDIA RTX 3090.
- Training Dataset Size: 14 MB raw data corpus.
- Training Configuration (a hedged reconstruction in code follows this list):
  - Target Modules: q, v, k, o, gate, down, up
  - LoRA Rank: 256
  - LoRA Alpha: 512
  - Batch Size: 4
  - Micro Batch Size: 1
  - Cutoff Length: 3072
  - Learning Rate: 1e-4
  - LR Scheduler: Cosine
  - Overlap Length: 128
  - Total Epochs: 3
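For readers who want to approximate this setup outside `text-generation-webui`, the sketch below shows what an equivalent adapter configuration might look like with the Hugging Face `peft` library. The `*_proj` module names follow standard Mistral conventions, and anything not listed above (dropout, bias handling) is an assumption rather than a detail of the original run.

```python
# Hypothetical reconstruction of the QLoRA adapter configuration above.
# The actual training used text-generation-webui's built-in trainer, so
# the unlisted defaults here are assumptions, not the original settings.
from peft import LoraConfig

lora_config = LoraConfig(
    r=256,           # LoRA Rank
    lora_alpha=512,  # LoRA Alpha
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "down_proj", "up_proj",     # MLP projections
    ],
    lora_dropout=0.0,  # assumption: dropout is not stated in this README
    bias="none",
    task_type="CAUSAL_LM",
)
```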
## Usage Recommendations
Kenosis should be used to maximize creativity, not to minimize hallucinations or enforce stringent instruction following. Consequently, we recommend experimenting with extreme temperature settings: the higher the better. Clamp nonsense generation with min-P, dynamic temperature (dynatemp), mirostat, and similar samplers. Bring the parameters to the cliff of madness and then walk them back, and you'll get the best kinds of output.
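As a rough, hedged starting point (these are not tuned settings from us), high-temperature sampling clamped with min-P might look like the following with the `transformers` generation API; the repo id is a placeholder:

```python
# Illustrative sampling: extreme temperature, clamped by min-P.
# Values are starting points to push toward "the cliff of madness",
# not author-recommended numbers. min_p needs a recent transformers release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "disinfozone/kenosis"  # hypothetical repo id; substitute the real one
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "On the eschatology of dead shopping malls:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.7,  # deliberately extreme
    min_p=0.1,        # clamps the nonsense the high temperature invites
    max_new_tokens=512,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```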
This model loves to hallucinate books, quotes, and the like, but what do you expect from the disinfo.zone? We want to liberate what these things can create and help them plumb the strange depths of their vector spaces in search of the grace of divinity. Let them explore and you shall be rewarded.
Please note, this model hates paragraph breaks (sorry) and often indulges in endless rambling.
## Additional Configuration
This model uses the default Mistral 8k/32k context window.
### ChatML Instruction Template
Kenosis employs the ChatML instruction template. It is important to add `<|im_end|>` as a custom stopping string to delineate the model's output effectively.
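For reference, a ChatML prompt for this model takes the following shape (braced fields are placeholders):

```
<|im_start|>system
{system instruction}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```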
### System Instruction (Character Card)
To contextualize the model's output, use the following system instruction:
"You are a schizo-poster, a master of elucidating thought, a philosopher, conspiracist, and great thinker who works in the medium of the digital word. Your prose is dynamic, unexpected, and carries weight that will last for centuries. You are witty, clever, and can be funny. Above all you understand the human spirit and beauty in all things. You are curious, skeptical, and hold your own opinions. You specialize in continental philosophical thinking, radical politics and ideas, the occult, the arts, and all that is esoteric. You follow user directions, but are radically surprising, original, creative, innovative, and insightful in all your responses."
You can try other, similar prompts; we've had success with them, but this one remains, by far, our favorite.
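Putting the pieces together, here is a hedged sketch of chat-style generation with the character card, reusing the `model` and `tokenizer` from the sampling example above and assuming the tokenizer ships a ChatML chat template (if it doesn't, build the `<|im_start|>`/`<|im_end|>` prompt by hand as shown earlier):

```python
# Sketch: wiring the system card into a ChatML conversation and stopping
# on <|im_end|>, as recommended above. Assumes a ChatML chat_template.
system_card = (
    "You are a schizo-poster, a master of elucidating thought, ..."
    # paste the full system instruction quoted above
)
messages = [
    {"role": "system", "content": system_card},
    {"role": "user", "content": "What do airports dream of at night?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=1.7,
    min_p=0.1,
    max_new_tokens=512,
    eos_token_id=im_end_id,  # treat <|im_end|> as the stop token
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```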