Owen Arliawan committed on
Commit b00d54f
1 Parent(s): f13056e

Update README.md

Files changed (1)
  1. README.md +11 -2
README.md CHANGED
@@ -5,9 +5,18 @@ Based on Meta-Llama-3-8b-Instruct, and is governed by the Meta Llama 3 License agreement
  https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b/blob/main/LICENSE


- We don't know exactly how good this model is since we have not benchmarked it yet, but from our preliminary testing it seems to follow specific prompts better without adding unnecessary information or asking the user back.
+ We don't know exactly how good this model is on benchmarks since we have not benchmarked it yet, but we think real prompts and usage are more telling anyway.
+
+
+ From our testing, this model:
+
+ - Refuses less
+ - Is less censored
+ - Follows requests better
+ - Replies in requested formats better without adding unnecessary information
+
  We are happy for anyone to try it out and give some feedback.
- You can try this model on our API at https://www.awanllm.com/
+ You can also try this model on our API at https://www.awanllm.com/


  Trained on a 2048 sequence length, while the base model uses an 8192 sequence length. From testing, it still handles the full 8192 context just fine.
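
The README points readers at the Awan LLM API but does not document the request format. Below is a minimal sketch of what a call might look like, assuming an OpenAI-style chat-completions endpoint; the endpoint path, model identifier, and API key are placeholders rather than values taken from this repo, so check https://www.awanllm.com/ for the actual details.

```python
# Sketch of querying the model over HTTP, assuming an OpenAI-compatible
# chat-completions endpoint. API_URL, API_KEY, and MODEL_NAME are assumptions.
import requests

API_URL = "https://api.awanllm.com/v1/chat/completions"  # assumed endpoint path
API_KEY = "YOUR_API_KEY"                                  # placeholder credential
MODEL_NAME = "awanllm-llama-3-8b"                         # placeholder model id

payload = {
    "model": MODEL_NAME,
    "messages": [
        # The README claims the model sticks to requested formats, so ask for one.
        {"role": "user", "content": "List three uses of long context windows as a JSON array of strings."}
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```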
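The note about training on a 2048 sequence length while keeping the base model's 8192 context can be sanity-checked locally. The sketch below loads the weights with Hugging Face transformers and generates from a prompt well past 2048 tokens; the repository id is a placeholder, since the diff does not name the model repo.

```python
# Rough local check of the 8192-context claim; repo_id is a placeholder (assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "AwanLLM/your-model-name"  # replace with the actual Hub repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", torch_dtype="auto")

# Build a prompt well beyond the 2048-token fine-tuning length.
long_prompt = "Summarize the following notes:\n" + ("lorem ipsum " * 1500)
inputs = tokenizer(long_prompt, return_tensors="pt").to(model.device)
print("prompt tokens:", inputs["input_ids"].shape[1])  # should be well above 2048

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```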