bhenrym14 committed
Commit c4316e0
1 Parent(s): 0343b8e

Update README.md

Files changed (1): README.md +3 -3

README.md CHANGED
@@ -15,10 +15,10 @@ fp16 weights can be found here: https://huggingface.co/bhenrym14/airophin-13b-pn
 
 ## Overview
 
 This is a finetune of Llama-2-13b, intended to extend the useful context window to 16384 tokens. There are two training phases:
-1. It is first trained on a long-context (7000-8192 tokens) subset of [dolphin](https://huggingface.co/datasets/ehartford/dolphin), an orca-like dataset (GPT4 split only). This amounts to roughly 110mm tokens. Airoboros-like training prompt was used, with partial NTK scaling applied. This took ~20 hours.
+1. It is first trained on a long-context (7000-8192 tokens) subset of [dolphin](https://huggingface.co/datasets/ehartford/dolphin), an orca-like dataset (GPT4 split only). This amounts to roughly 110M tokens. An Airoboros-like training prompt was used instead of the dolphin system prompt. Training was done with partial NTK scaling applied (scale factor of 4). This took ~20 hours.
-2. The model was then finetuned on [Jon Durbin's Airoboros GPT4 1.4.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1) for 3 epochs. This took ~17 hours.
+2. The model was then finetuned on [Jon Durbin's Airoboros GPT4 1.4.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1), with the same scaling approach, for 3 epochs. This took ~17 hours.
 
-**This is a QLoRA fine-tune**.
+**This is a QLoRA fine-tune (rank 64)**.
 
 All training was performed with 1x RTX 6000 Ada.
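
For readers unfamiliar with the method named in phase 1: partial NTK scaling (often called "NTK-by-parts") rescales only the low-frequency RoPE components toward position interpolation while leaving high-frequency components untouched. Below is a minimal PyTorch sketch of the idea; the head dimension (128) and original context (4096) are standard for Llama-2-13b, and the scale factor of 4 matches 4096 → 16384, but the ramp bounds `alpha`/`beta` are illustrative assumptions, not values taken from this repo.

```python
import torch

def partial_ntk_inv_freq(head_dim=128, base=10000.0, scale=4.0,
                         original_max_pos=4096, alpha=1.0, beta=32.0):
    # Standard RoPE inverse frequencies, one per pair of head dimensions.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # Wavelength (in tokens) of each component, and how many full periods
    # fit inside the original context window.
    wavelength = 2 * torch.pi / inv_freq
    r = original_max_pos / wavelength
    # Ramp from 0 (fully interpolate) to 1 (leave unscaled) as r goes
    # from alpha to beta; alpha/beta are illustrative assumptions.
    gamma = torch.clamp((r - alpha) / (beta - alpha), 0.0, 1.0)
    # Low-frequency components get position interpolation (inv_freq / scale);
    # high-frequency components keep their original frequency.
    return (inv_freq / scale) * (1.0 - gamma) + inv_freq * gamma

# Build a RoPE cos/sin cache for the extended 16384-token window.
inv_freq = partial_ntk_inv_freq()
t = torch.arange(16384).float()
angles = torch.outer(t, inv_freq)   # (seq_len, head_dim // 2)
cos_cache, sin_cache = angles.cos(), angles.sin()
```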
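
Likewise, "QLoRA fine-tune (rank 64)" means the base model is loaded in 4-bit and frozen while rank-64 LoRA adapters are trained on top. A sketch of such a setup with `peft` and `bitsandbytes`; only the rank comes from the card, while `lora_alpha`, dropout, and the target modules are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA-style base model: 4-bit NF4 weights with double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=64,                                   # rank 64, per the model card
    lora_alpha=16,                          # assumption, not from the card
    lora_dropout=0.05,                      # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the adapters are trainable
```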