Update README.md
README.md
CHANGED
@@ -20,10 +20,10 @@ The easiest way is to use [oobabooga text-generation-webui](https://github.com/o
Recent advancements in extending context by RoPE scaling ([kaiokendev](https://kaiokendev.github.io/til#extending-context-to-8k) and [Meta AI](https://arxiv.org/abs/2306.15595)) demonstrate that the context window can be extended without (total) retraining. Finetuning has been shown to be necessary to properly leverage the longer context. The SuperHOT LoRA is an adapter finetuned on a longer context (8192 tokens); even when applied to models trained on dissimilar datasets, it successfully extends the context window to which the model can attend. While it is impressive that this adapter is so flexible, how much does performance suffer relative to a model finetuned with the scaled embeddings from the start? This experiment explores that question.

## Relative Performance (perplexity)
-| Model                                               | Context          | Perplexity |
+| Model                                               | Context (tokens) | Perplexity |
| ---------------------------------------------------- | ---------------- | ---------- |
| TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ     | 2048             | 5.15       |
-| TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ     |
+| TheBloke/airoboros-33B-gpt4-1-4-SuperHOT-8K-GPTQ     | 3072             | 5.04       |
| **bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ**  | **2048**         | **4.32**   |
| **bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-GPTQ**  | **3072**         | **4.26**   |
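
For context on the "PI" (position interpolation) in the model name, here is a minimal, illustrative sketch of RoPE position interpolation in the spirit of the linked Meta AI paper and kaiokendev post. The function names, head dimension, and scaling factor below are assumptions chosen for illustration, not code from this repo.

```python
import torch

def rope_angles(seq_len: int, dim: int, scale: float = 1.0, base: float = 10000.0) -> torch.Tensor:
    """Per-position rotation angles. scale=1.0 is vanilla RoPE; scale<1.0
    compresses ("interpolates") positions into the pretrained position range."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float() * scale  # scaled (interpolated) positions
    return torch.outer(positions, inv_freq)            # shape: (seq_len, dim/2)

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate query/key vectors x of shape (seq_len, dim), LLaMA-style 'rotate half'."""
    cos = torch.cat([angles.cos(), angles.cos()], dim=-1)
    sin = torch.cat([angles.sin(), angles.sin()], dim=-1)
    x1, x2 = x[..., : x.shape[-1] // 2], x[..., x.shape[-1] // 2:]
    return x * cos + torch.cat([-x2, x1], dim=-1) * sin

# Example: an 8192-token context on a model pretrained for 2048 positions.
# Scaling by 2048/8192 squeezes positions 0..8191 into the familiar 0..2047 range.
angles = rope_angles(seq_len=8192, dim=128, scale=2048 / 8192)
q = torch.randn(8192, 128)
q_rotated = apply_rope(q, angles)
```

The point of the scaling is that every position the attention must handle stays inside the range seen during pretraining, which is why comparatively light finetuning suffices to make the longer context usable.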