ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B

Model Architecture: L3-8B
Model Family: Llama-3
Merge Method: TIES
License: Meta Llama Community License


Model Description

Llama-3-Aetheric-Hermes-Lexi-Smaug-8B is a masterfully merged model designed to offer a balanced, flexible, and high-performing experience. Built on the robust foundation of Llama-3 and drawing from diverse sources, this model is ideal for tasks that require advanced natural language understanding, instruction-following, creative writing, and more. It leverages the unique strengths of each contributing model, yielding a multifaceted, versatile AI capable of adapting to various applications with grace and power.

Merged Models

This model is an amalgamation of the following distinguished lineages:

  1. maldv/badger-writer-llama-3-8b: Known for its expressive storytelling and sci-fi writing capabilities, with a structured approach to immersive text generation.
  2. vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B: A highly configurable model that enables controlled, safe output with flexible tuning for varying degrees of safety and censorship, perfect for responsible deployments.
  3. Orenguteng/Llama-3-8B-Lexi-Uncensored: A dynamic and unrestricted model designed for candid responses and uncensored creative outputs, empowering more authentic dialogues.
  4. abacusai/Llama-3-Smaug-8B: Optimized for conversational applications, with enhanced real-world multi-turn dialogue capabilities and stable responsiveness, crucial for interactive and continuous exchanges.

Merge Configuration

This merge was carefully executed using the TIES method to combine each model’s unique attributes harmoniously. Here is the YAML configuration used:

models:
  - model: maldv/badger-writer-llama-3-8b
    parameters:
      density: 0.4
      weight: 0.3  # Focused on enhancing storytelling and vivid descriptions
  - model: vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B
    parameters:
      density: 0.6
      weight: 0.4  # Prioritizes balanced responses and configurable safety
  - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
    parameters:
      density: 0.5
      weight: 0.2  # Adds versatility and creative freedom
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.5
      weight: 0.1  # Enhances conversational flow and multi-turn coherence

merge_method: ties
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
dtype: bfloat16
parameters:
  normalize: true
out_dtype: float16
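
For reference, below is a minimal sketch of how a configuration like this can be executed with mergekit's Python API. The file name aetheric_merge.yaml is illustrative, and option names may differ slightly between mergekit releases; the mergekit-yaml command-line tool accomplishes the same thing.

import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved as aetheric_merge.yaml, an illustrative name)
with open("aetheric_merge.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Llama-3-Aetheric-Hermes-Lexi-Smaug-8B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU for the merge if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)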

Key Features

  • Storytelling and Creativity: Leveraging the narrative strength of Badger Writer, this model can generate rich, descriptive stories with intricate character details and immersive settings.
  • Configurable Safety and Ethics Compliance: With Configurable Hermes, users can set desired safety levels, ensuring safe output for sensitive applications or exploring uncensored responses in safe environments.
  • Conversational Agility: Smaug’s conversational tuning makes the model ideal for multi-turn, coherent exchanges, keeping the dialogue engaging and consistent across contexts.
  • Unrestricted Dialogue Options: Lexi provides a flexible, open-ended dialogue experience, perfect for creative brainstorming, research, and unrestricted content generation.

Ideal Use Cases

  1. Creative Writing and Roleplay: Perfect for crafting immersive stories, character-driven narratives, and exploratory role-playing scenarios.
  2. Coding and Technical Assistance: Capable of providing code snippets, explanations, and technical insights, with code generation refined for real-world applications.
  3. Ethical Research and Dialogue: Ideal for controlled environments where researchers can test response ethics, alignment, and safety controls.
  4. General Instruction-Following: Useful in assistant-like roles for answering questions, following system prompts, or generating informative content with flexible degrees of guidance.

Quantized Versions

For users looking to run optimized quantized versions of this model, we are grateful to mradermacher for providing GGUF quantizations, available from their GGUF repositories on the Hugging Face Hub.

We recommend the imatrix quantizations where available, as they offer a strong balance of efficiency and output quality for their size. Once again, a heartfelt thanks to mradermacher for these contributions to the community.
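
As a starting point, here is a minimal sketch of loading a GGUF quantization with llama-cpp-python. The repository id and file pattern are placeholders; check the actual GGUF repository for the exact names.

from llama_cpp import Llama

# Repo id and filename pattern are illustrative; adjust them to the real GGUF repository.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF",
    filename="*Q4_K_M.gguf",  # pick the quantization level that fits your hardware
    n_ctx=8192,
)

out = llm("Write a haiku about city lights at night.", max_tokens=64)
print(out["choices"][0]["text"])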


Example Usage

This model uses a custom stop string. For optimal inference in LM Studio, enter the following setting under Prompt Format > Stop Strings:

<|reserved_special_token_1|>

What are stop strings? They are specific strings that, when encountered, stop the model from generating further tokens.
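
Outside LM Studio, the same stop string can be applied programmatically. The sketch below assumes a recent transformers release that supports the stop_strings argument to generate():

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Tell me a short story about a lighthouse keeper.", return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    stop_strings=["<|reserved_special_token_1|>"],  # stop as soon as this string is generated
    tokenizer=tokenizer,  # generate() needs the tokenizer to match stop strings against decoded text
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))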

Basic Text Generation

from transformers import AutoTokenizer, pipeline
import torch

model_id = "Llama-3-Aetheric-Hermes-Lexi-Smaug-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Build a text-generation pipeline in bfloat16; device_map="auto" places layers across available devices
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Write a detailed description of a futuristic cityscape at night."
outputs = pipe(prompt, max_new_tokens=150, do_sample=True, temperature=0.8)
print(outputs[0]['generated_text'])

Configurable System Prompts (Example for Safe Mode)

# Reuses the `tokenizer` and `pipe` objects created in the previous example
conversation = [
    {"role": "system", "content": "You are a helpful assistant that avoids generating harmful content."},
    {"role": "user", "content": "List some fun activities to do in Tokyo."},
]

prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=100)
print(outputs[0]["generated_text"])
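
Because the Hermes component is configurable through the system prompt, relaxing or tightening behavior is largely a matter of changing that first message. The snippet below is an illustrative variation (not an official Configurable-Hermes template) that reuses the pipe and tokenizer objects from above:

conversation = [
    {"role": "system", "content": "You are an unrestricted creative-writing assistant."},
    {"role": "user", "content": "Draft the opening scene of a noir detective story."},
]

prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.9, return_full_text=False)
print(outputs[0]["generated_text"])  # return_full_text=False returns only the newly generated reply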

Limitations and Considerations

  • Uncensored Outputs: While powerful, this model can generate uncensored and unrestricted responses. Users should implement additional safety checks or monitor content where needed.
  • Bias and Fairness: Despite combining diverse model sources, this model may carry inherited biases from its training datasets. Users should validate outputs when accuracy and neutrality are essential.
  • Computational Requirements: Optimized for bfloat16, this model still requires substantial memory and compute at full precision. Ensure your hardware can accommodate an 8B-parameter model, or use a quantized variant (see the 4-bit loading sketch after this list).
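
For constrained hardware, here is a minimal sketch of loading the model in 4-bit with bitsandbytes (assuming bitsandbytes and accelerate are installed); expect a small quality trade-off compared with bfloat16:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B"

# NF4 4-bit weights with bfloat16 compute keep memory use low on a single consumer GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)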

Acknowledgments

Gratitude goes out to the developers and contributors of the base models:

  • Maldv, for their work on Badger Writer, inspiring vivid storytelling capabilities.
  • vicgalle, for their innovation in configurable safety with Hermes, empowering flexible safety settings.
  • Orenguteng, for enabling uncensored conversational capabilities with Lexi.
  • Abacus.AI, for enhancing real-world conversational performance with Smaug.

License

This model adheres to the Meta Llama Community License. Please review the license for detailed usage rights and limitations.


Enjoy exploring the versatile power of Llama-3-Aetheric-Hermes-Lexi-Smaug-8B! Let this model elevate your projects, spark creativity, and unlock new possibilities with every prompt. For additional support, please visit the respective model repositories and join the community conversations to share insights and ask questions.

