chargoddard committed
Commit b0921ef
1 Parent(s): 24a85a6

Create README.md

Files changed (1)
  1. README.md +10 -0
README.md ADDED
@@ -0,0 +1,10 @@
+ ---
+ datasets:
+ - EleutherAI/wikitext_document_level
+ tags:
+ - llama
+ ---
+ LLaMA 33b finetuned on `wikitext_document_level` with a linear RoPE scaling factor of 8, for a 16k token context length.
+ This is a merged version of [llama33b-16k-qlora](https://huggingface.co/chargoddard/llama33b-16k-qlora).
+
+ Note that this is *not* an instruct model; it is base LLaMA with an extended sequence length.
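For reference, here is a minimal sketch of how a checkpoint with this kind of linear RoPE scaling can be loaded through `transformers`. The repository id is a placeholder assumption (the merged model's exact repo id is not given in this README), and the `rope_scaling` argument simply mirrors the factor-8 linear scaling described above rather than anything read from the model's own config.

```python
# Sketch: load a merged LLaMA checkpoint with linear RoPE scaling via transformers.
# The repo id below is a placeholder assumption, not taken from this README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "chargoddard/llama33b-16k"  # hypothetical id for the merged model

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
    # Linear RoPE scaling with factor 8 stretches LLaMA's 2048-token base
    # context window to roughly 16k tokens, matching the description above.
    rope_scaling={"type": "linear", "factor": 8.0},
)

prompt = "Wikipedia is a free online encyclopedia that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a base model rather than an instruct model, plain text continuation like the above is the expected usage; chat-style prompting is not.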
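Similarly, a QLoRA adapter is typically merged back into its base weights along the lines below using `peft`. This is a sketch under assumptions: the base checkpoint id is a guess, and the actual merge procedure used to produce this model is not described in the README.

```python
# Sketch: merge a QLoRA adapter into its base model with peft, then save the result.
# The base model id is an assumption; only the adapter repo is linked in this README.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_id = "huggyllama/llama-30b"               # assumed base LLaMA 33b checkpoint
adapter_id = "chargoddard/llama33b-16k-qlora"  # adapter linked above

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
merged.save_pretrained("llama33b-16k-merged")
```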