---
datasets:
- EleutherAI/wikitext_document_level
tags:
- llama
---
LLaMA 33B fine-tuned on `wikitext_document_level` with combined linear and NTK-aware RoPE scaling (alpha=4, scale=2).
The model should remain coherent up to at least 8k context length, and may work beyond that.
This is a merged version of [llama33b-s2a4-qlora](https://huggingface.co/chargoddard/llama33b-s2a4-qlora).
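For reference, below is a minimal sketch of what the combined scaling looks like. This is not the exact training code; the function name, defaults, and head dimension are illustrative, assuming a standard LLaMA-style rotary embedding with base 10000.

```python
import torch

def scaled_rope_frequencies(dim: int, max_positions: int,
                            base: float = 10000.0,
                            scale: float = 2.0, alpha: float = 4.0) -> torch.Tensor:
    # NTK-aware part: stretch the RoPE base so higher-frequency dimensions
    # are interpolated less aggressively than lower-frequency ones.
    base = base * alpha ** (dim / (dim - 2))
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    # Linear part: divide position indices by the scale factor.
    t = torch.arange(max_positions).float() / scale
    freqs = torch.outer(t, inv_freq)          # (max_positions, dim // 2)
    return torch.cat((freqs, freqs), dim=-1)  # feeds the cos/sin caches

# e.g. cos/sin caches for 8k positions with 128-dim attention heads
freqs = scaled_rope_frequencies(dim=128, max_positions=8192)
cos, sin = freqs.cos(), freqs.sin()
```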

Note that this is *not* an instruct model; it is base LLaMA with an extended sequence length.