Awan LLM committed
Commit 863d720
1 Parent(s): 9dce3f2

Update README.md

Files changed (1)
  1. README.md +3 -0
README.md CHANGED
@@ -14,6 +14,9 @@ In terms of reasoning and intelligence, this model is probably worse than the OG
 
 Will soon have quants uploaded here on HF and have it up on our site https://awanllm.com for anyone to try.
 
+OpenLLM Benchmark:
+![OpenLLM Leaderboard](https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Cumulus-v0.2-GGUF/blob/main/Screenshot%202024-05-02%20201231.png "OpenLLM Leaderboard")
+
 
 Training:
 - 4096 sequence length, while the base model is 8192 sequence length. From testing it still performs the same 8192 context just fine.