Pankaj Mathur committed "Update README.md" (commit 3eae805, parent a9e3300)

license: cc-by-sa-4.0
<p>“Luna AI Llama2 Uncensored” is a Llama2-based chat model <br />fine-tuned on over 40,000 long-form chat discussions. <br />
This model was fine-tuned by Tap, the creator of Luna AI. <br />
The result is an enhanced Llama2 7B model that rivals ChatGPT in performance <br />across a variety of tasks.</p>

<p>This model stands out for its long responses, low hallucination rate, and absence of censorship mechanisms.</p>
<h2>Model Training</h2>
<p>The fine-tuning process was performed on an 8x A100 80GB machine.
<br />The model was trained almost entirely on synthetic outputs.
<br />The custom dataset combines data from diverse sources and includes multiple rounds of chats between Human &amp; AI.
</p>
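Multi-round chat data of the kind described above is typically flattened into single training strings. The sketch below is illustrative only: the <code>format_conversation</code> helper and the turn labels are assumptions, not the project's actual preprocessing pipeline.

```python
# Illustrative sketch: flatten alternating Human/AI chat rounds into one
# training string. The labels and helper are assumptions, not the actual
# Luna AI preprocessing code.

def format_conversation(turns):
    """Join alternating (speaker, text) turns into one training example."""
    return "\n".join(f"{speaker}: {text}" for speaker, text in turns)

chat = [
    ("USER", "What is fine-tuning?"),
    ("ASSISTANT", "Fine-tuning continues training a pretrained model on new data."),
    ("USER", "Why use synthetic outputs?"),
    ("ASSISTANT", "They are cheap to generate and easy to filter for quality."),
]

example = format_conversation(chat)
print(example)
```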
<h2>Prompt Format</h2>
<p>The model follows the Vicuna 1.1 / OpenChat format:</p>
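In the Vicuna 1.1 style, a single-turn prompt is the user message prefixed with <code>USER:</code> and terminated with <code>ASSISTANT:</code> for the model to complete. A minimal builder (the <code>build_prompt</code> helper name is mine, not part of the model's tooling):

```python
def build_prompt(user_message: str) -> str:
    """Assemble a Vicuna 1.1 / OpenChat style single-turn prompt.

    The model is expected to generate its reply after the trailing
    "ASSISTANT:" marker.
    """
    return f"USER: {user_message}\nASSISTANT:"

prompt = build_prompt("Tell me about llamas.")
print(prompt)
```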