---
datasets:
- vicgalle/worldsim-claude-opus
- macadeliccc/opus_samantha
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-9.5K-ShareGPT
- lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K
- QuietImpostor/Sao10K-Claude-3-Opus-Instruct-15K-ShareGPT
- ChaoticNeutrals/Luminous_Opus
- kalomaze/Opus_Instruct_3k
- kalomaze/Opus_Instruct_25k
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
pipeline_tag: text-generation
license: llama3.1
---

![L3.1-8B-Fabula](https://files.catbox.moe/blwlvb.jpeg)

# L3.1-8B-Fabula

L3.1-8B-Fabula is a fine-tuned version of Meta's Llama 3.1 8B model, optimized for roleplay and general-knowledge tasks.

## Model Details

- **Base Model**: [Llama-3.1-8B](https://hf.co/meta-llama/Llama-3.1-8B)
- **Chat Template**: ChatML
- **Max Input Tokens**: 32,768
- **Datasets Used In Fine-tuning:**
  * [vicgalle/worldsim-claude-opus](https://hf.co/datasets/vicgalle/worldsim-claude-opus)
  * [macadeliccc/opus_samantha](https://hf.co/datasets/macadeliccc/opus_samantha)
  * [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://hf.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
  * [lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-9.5K-ShareGPT](https://hf.co/datasets/lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-9.5K-ShareGPT)
  * [lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K](https://hf.co/datasets/lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K)
  * [QuietImpostor/Sao10K-Claude-3-Opus-Instruct-15K-ShareGPT](https://hf.co/datasets/QuietImpostor/Sao10K-Claude-3-Opus-Instruct-15K-ShareGPT)
  * [ChaoticNeutrals/Luminous_Opus](https://hf.co/datasets/ChaoticNeutrals/Luminous_Opus)
  * [kalomaze/Opus_Instruct_3k](https://hf.co/datasets/kalomaze/Opus_Instruct_3k)
  * [kalomaze/Opus_Instruct_25k](https://hf.co/datasets/kalomaze/Opus_Instruct_25k)

## Chat Template
- ChatML was used as the chat template during fine-tuning.
```js
/**
 * Formats a message history into the ChatML template.
 * @param {Array<{role: string, name?: string, content: string}>} messages
 * @returns {{prompt: string, stop: string}}
 */
function chatml2(messages) {
    const isLastMessageAssistant = messages[messages.length - 1]?.role === "assistant";

    return {
        prompt: messages.map((message, index) => {
            const nameStr = message.name ? ` [${message.name}]` : "";
            const isLast = index === messages.length - 1;
            // Leave a trailing assistant message open so the model continues it.
            const needsEndTag = !isLastMessageAssistant || !isLast;

            return `<|im_start|>${message.role.toLowerCase()}${nameStr}\n${message.content}${needsEndTag ? "<|im_end|>" : ""}`;
        }).join("\n") + (isLastMessageAssistant ? "" : "\n<|im_start|>assistant\n"),
        stop: "<|im_end|>"
    };
}
```
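As a quick sanity check, here is a small usage sketch showing what `chatml2` produces for a two-message history (the function is copied here so the snippet runs on its own; the message contents are made up):

```js
// chatml2 as defined above, repeated for self-containment.
function chatml2(messages) {
    const isLastMessageAssistant = messages[messages.length - 1]?.role === "assistant";
    return {
        prompt: messages.map((message, index) => {
            const nameStr = message.name ? ` [${message.name}]` : "";
            const isLast = index === messages.length - 1;
            const needsEndTag = !isLastMessageAssistant || !isLast;
            return `<|im_start|>${message.role.toLowerCase()}${nameStr}\n${message.content}${needsEndTag ? "<|im_end|>" : ""}`;
        }).join("\n") + (isLastMessageAssistant ? "" : "\n<|im_start|>assistant\n"),
        stop: "<|im_end|>"
    };
}

const { prompt, stop } = chatml2([
    { role: "system", content: "You are {{char}}." },
    { role: "user", content: "Hello!" }
]);
console.log(prompt);
// <|im_start|>system
// You are {{char}}.<|im_end|>
// <|im_start|>user
// Hello!<|im_end|>
// <|im_start|>assistant
```

Because the last message is not from the assistant, the prompt ends with an open `<|im_start|>assistant\n` header, which cues the model to generate the next turn.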

I would highly recommend adding a set of rules as an assistant-role message at the end of the chat history, like the example below:
```md
<rules for="{{char}}'s responses">
1. I will write {{char}}'s response concisely while keeping it detailed (I will try to keep it under 300 characters).

2. Response formatting:
   "This is for talking"
   *This is for doing an action, or self-reflection if I decide to write {{char}}'s response in first-person*
   ex: "Hello, there!" *{name} waves,* "How are you doing today?"

3. When I feel it is time for {{user}} to talk, I will not act as or for {{user}}; I will simply stop generating by emitting my EOS (end-of-sequence) token "<|im_end|>", letting the user write their response as {{user}}

4. I will use my past messages as an example of how {{char}} speaks
</rules>
**{{char}}'s response:**

```
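The `{{char}}` and `{{user}}` placeholders above are frontend macros; if your client does not expand them, you can substitute them yourself before formatting the prompt. A minimal sketch, assuming a hypothetical `fillMacros` helper (the character names are made up):

```js
// Hypothetical helper: expand {{char}}/{{user}} style macros in a prompt string.
// Unknown macros are left untouched rather than replaced with "undefined".
function fillMacros(text, vars) {
    return text.replace(/\{\{(\w+)\}\}/g, (match, key) => vars[key] ?? match);
}

const rules = "I will write a response as {{char}} and will not act for {{user}}.";
const filled = fillMacros(rules, { char: "Aria", user: "Alex" });
console.log(filled);
// I will write a response as Aria and will not act for Alex.
```

Run the substitution on every message (rules, system prompt, chat history) before passing the array to `chatml2`, so the model never sees raw macro syntax.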