Commit a64cc3d by BusRune (verified) · 1 parent: 6320361

Update README.md

Files changed (1): README.md (+82 -3)
---
datasets:
- vicgalle/worldsim-claude-opus
- macadeliccc/opus_samantha
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-9.5K-ShareGPT
- lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K
- QuietImpostor/Sao10K-Claude-3-Opus-Instruct-15K-ShareGPT
- ChaoticNeutrals/Luminous_Opus
- kalomaze/Opus_Instruct_3k
- kalomaze/Opus_Instruct_25k
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
pipeline_tag: text-generation
license: llama3.1
---

![L3.1-8B-Fabula](https://files.catbox.moe/blwlvb.jpeg)

# L3.1-8B-Fabula

L3.1-8B-Fabula is a fine-tuned version of Meta's Llama 3.1 8B model, optimized for roleplay and general-knowledge tasks.

## Model Details

- **Base Model**: [Llama-3.1-8B](https://hf.co/meta-llama/Llama-3.1-8B)
- **Chat Template**: ChatML
- **Max Input Tokens**: 32,768
- **Datasets Used in Fine-tuning:**
  * [vicgalle/worldsim-claude-opus](https://hf.co/datasets/vicgalle/worldsim-claude-opus)
  * [macadeliccc/opus_samantha](https://hf.co/datasets/macadeliccc/opus_samantha)
  * [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://hf.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
  * [lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-9.5K-ShareGPT](https://hf.co/datasets/lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-9.5K-ShareGPT)
  * [lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K](https://hf.co/datasets/lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K)
  * [QuietImpostor/Sao10K-Claude-3-Opus-Instruct-15K-ShareGPT](https://hf.co/datasets/QuietImpostor/Sao10K-Claude-3-Opus-Instruct-15K-ShareGPT)
  * [ChaoticNeutrals/Luminous_Opus](https://hf.co/datasets/ChaoticNeutrals/Luminous_Opus)
  * [kalomaze/Opus_Instruct_3k](https://hf.co/datasets/kalomaze/Opus_Instruct_3k)
  * [kalomaze/Opus_Instruct_25k](https://hf.co/datasets/kalomaze/Opus_Instruct_25k)

## Chat Template

ChatML was used during fine-tuning:
```js
/**
 * @param {Array<{role: string, name?: string, content: string}>} messages
 * @returns {{prompt: string, stop: string}}
 * @description Formats messages into the ChatML template format.
 */
function chatml2(messages) {
  const isLastMessageAssistant = messages[messages.length - 1]?.role === "assistant";

  return {
    prompt: messages.map((message, index) => {
      const nameStr = message.name ? ` [${message.name}]` : "";
      const isLast = index === messages.length - 1;
      const needsEndTag = !isLastMessageAssistant || !isLast;

      return `<|im_start|>${message.role.toLowerCase()}${nameStr}\n${message.content}${needsEndTag ? "<|im_end|>" : ""}`;
    }).join("\n") + (isLastMessageAssistant ? "" : "\n<|im_start|>assistant\n"),
    stop: "<|im_end|>"
  };
}
```
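A minimal, self-contained usage sketch (the `chatml2` function is copied from the snippet above so the example runs on its own; the message contents are illustrative):

```javascript
// Copy of chatml2 from above, so this sketch runs standalone.
function chatml2(messages) {
  const isLastMessageAssistant = messages[messages.length - 1]?.role === "assistant";
  return {
    prompt: messages.map((message, index) => {
      const nameStr = message.name ? ` [${message.name}]` : "";
      const isLast = index === messages.length - 1;
      const needsEndTag = !isLastMessageAssistant || !isLast;
      return `<|im_start|>${message.role.toLowerCase()}${nameStr}\n${message.content}${needsEndTag ? "<|im_end|>" : ""}`;
    }).join("\n") + (isLastMessageAssistant ? "" : "\n<|im_start|>assistant\n"),
    stop: "<|im_end|>"
  };
}

// Format a short conversation; the trailing assistant header is appended
// automatically because the last message is not from the assistant.
const { prompt, stop } = chatml2([
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", name: "Alice", content: "Hello!" },
]);

console.log(prompt);
// <|im_start|>system
// You are a helpful assistant.<|im_end|>
// <|im_start|>user [Alice]
// Hello!<|im_end|>
// <|im_start|>assistant
```

Passing `stop` (`<|im_end|>`) as the stop string to your backend ends generation at the close of the assistant turn.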

It is highly recommended to prepend the rules below as an assistant-role message before sending the conversation for generation:
```md
<rules for="{{char}}'s responses">
1. I will write a response as {{char}} in a short but detailed manner (I will try to keep it under 300 characters).

2. Response formatting:
"This is for talking"
*This is for doing an action, or for self-reflection if I decide to write {{char}}'s response in first person*
ex: "Hello, there!" *{name} waves,* "How are you doing today?"

3. When I feel it is time for {{user}} to talk, I will not act as or for {{user}}; I will simply stop generating text by emitting my EOS (end-of-sequence) token "<|im_end|>", letting the user write their response as {{user}}.

4. I will use my past messages as examples of how {{char}} speaks.
</rules>
**{{char}}'s response:**
```
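A sketch of how the rules message can be prepended to the chat history before formatting. The `RULES` string and the character name "Seraphina" are hypothetical stand-ins; in practice, substitute `{{char}}`/`{{user}}` with the actual names before sending:

```javascript
// Hypothetical rules text with placeholders already substituted.
const RULES = `<rules for="Seraphina's responses">
1. I will write a response as Seraphina in a short but detailed manner.
</rules>
**Seraphina's response:**`;

// Prepend the rules as an assistant-role message ahead of the chat history,
// so the model reads them as its own prior commitment.
function withRules(history) {
  return [{ role: "assistant", content: RULES }, ...history];
}

const messages = withRules([
  { role: "user", content: "Hello, Seraphina!" },
]);
// `messages` can now be passed to chatml2 before generation.
```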