Update README.md
README.md CHANGED
@@ -4,7 +4,7 @@ license: cc-by-sa-3.0

## LLongMA-2-7b-dolly-15k

-This is an instruction fine-tuned adapter for LLongMA-2-7B, trained at **8k context length** using linear positional interpolation scaling.
+This is an instruction fine-tuned adapter for [LLongMA-2-7B](https://huggingface.co/conceptofmind/LLongMA-2-7b), trained at **8k context length** using linear positional interpolation scaling.
In order to run inference with this adapter, you'll need the base [LLongMA-2-7b model](https://huggingface.co/conceptofmind/LLongMA-2-7b) as well.

The adapter was instruction fine-tuned with PEFT, using the [dolly-15k dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
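
Since the README describes loading the base model plus a PEFT adapter for inference, here is a minimal sketch of that pattern using the standard `transformers` + `peft` loading flow. The adapter repo id is a placeholder (the diff does not name it), and the instruction-style prompt template is an assumption, not something this README specifies:

```python
# Minimal sketch: load the LLongMA-2-7b base model, then apply this PEFT adapter on top.
# Assumes the `transformers`, `peft`, and `accelerate` packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "conceptofmind/LLongMA-2-7b"  # base model named in the README
adapter_id = "<this-adapter-repo>"      # placeholder: this adapter's Hub repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Wrap the base model with the adapter weights (the adapter is not merged in place).
model = PeftModel.from_pretrained(base, adapter_id)

# Prompt template is a guess for a dolly-15k-style instruction model.
prompt = "### Instruction:\nExplain what an 8k context window allows.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Loading the adapter separately keeps the base weights untouched, which is why the README notes you need the base LLongMA-2-7b model as well.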