---
license: cc-by-sa-4.0
---
<div style="width: 800px; margin: auto;">

<h2>Model Description</h2>
<p>“Luna AI Llama2 Uncensored” is a Llama2-based chat model fine-tuned on over 400,000 instructions. <br />
This model was fine-tuned by Tap, the creator of Luna AI. <br />
The result is an enhanced Llama2 7b model that rivals ChatGPT in performance across a variety of tasks.</p>
<p>This model stands out for its long responses, low hallucination rate, and absence of censorship mechanisms. <br />
The fine-tuning process was performed on an 8x A100 80GB machine.</p>

<h2>Model Training</h2>
<p>The model was trained almost entirely on synthetic outputs. <br />
This includes data from diverse sources such as Orca, GPT4-LLM, and a custom unreleased dataset. <br />
The total volume of data encompassed over 400,000 high-quality instructions.</p>
<p>A huge shoutout and acknowledgement goes to all the dataset creators who generously share their datasets openly.</p>

<h2>Prompt Format</h2>

<h2>Future Plans</h2>
<p>The model is currently being uploaded in FP16 format, <br />and there are plans to convert the model to GGML and GPTQ 4-bit quantizations.</p>
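As background for that plan, the idea behind a 4-bit quantization can be sketched with a toy absmax rounding scheme. This is an illustration only, not the actual GPTQ algorithm (which is a layer-wise, error-compensating method) and not GGML's block format; the function names below are hypothetical.

```python
# Toy sketch of 4-bit weight quantization (simplified absmax scheme,
# NOT the actual GPTQ algorithm): each group of float weights is mapped
# to 16 signed integer levels plus one shared scale, which illustrates
# the memory/precision trade-off behind 4-bit formats.

def quantize_4bit(weights):
    """Map floats to signed 4-bit codes (-8..7) with a per-group scale."""
    peak = max(abs(w) for w in weights)
    scale = peak / 7.0 if peak > 0 else 1.0
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_4bit(codes, scale):
    """Reconstruct approximate float weights from 4-bit codes."""
    return [c * scale for c in codes]

weights = [0.12, -0.53, 0.07, 0.91, -0.88, 0.33]
codes, scale = quantize_4bit(weights)
restored = dequantize_4bit(codes, scale)

# Each code fits in 4 bits; reconstruction error stays within half a step.
assert all(-8 <= c <= 7 for c in codes)
assert all(abs(w - r) <= scale / 2 + 1e-12 for w, r in zip(weights, restored))
```

Storing a 4-bit code per weight plus one FP16 scale per group is roughly a 4x size reduction over FP16, at the cost of the bounded rounding error shown above.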

<h2>Benchmark Results</h2>
<pre>
</pre>

<h2>Ethical considerations</h2>
<p>The data used to train the model is collected from various sources, mostly from the Web. <br />
As such, it contains offensive, harmful and biased content. <br />We thus expect the model to exhibit such biases from the training data.</p>

<h2>Human life</h2>
<p>The model is not intended to inform decisions about matters central to human life, <br />and should not be used in such a way.</p>

<h2>Risks and harms</h2>
<p>Risks and harms of large language models include the generation of harmful, offensive or biased content. <br />