---
library_name: peft
tags:
- PyTorch
- Transformers
- trl
- sft
- BitsAndBytes
- PEFT
- QLoRA
datasets:
- databricks/databricks-dolly-15k
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: llama2-7-dolly-query 
  results: []
license: mit
language:
- en
---

# llama2-7-dolly-query

This model is a fine-tuned version of [Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset.
It can be used in conjunction with [LukeOLuck/llama2-7-dolly-answer](https://huggingface.co/LukeOLuck/llama2-7-dolly-answer).

## Model description

A fine-tuned PEFT adapter for the Llama-2-7b-chat-hf model.
It leverages [FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness](https://arxiv.org/abs/2205.14135), [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314), and [PEFT](https://huggingface.co/blog/peft).
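
As a rough sketch of what that setup looks like in code (the exact configuration lives in the linked notebook; the 4-bit settings below are common QLoRA-style defaults, not values recorded on this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model_id = "meta-llama/Llama-2-7b-chat-hf"

# 4-bit NF4 quantization in the style of the QLoRA paper (illustrative values)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # requires the flash-attn package
    device_map="auto",
)
```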

## Intended uses & limitations

Generates a query based on a given context and input.
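
A minimal inference sketch, assuming the adapter repository loads with `peft`'s `AutoPeftModelForCausalLM`; the prompt layout shown is only a placeholder and should be matched to the template used in the linked training notebook:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "LukeOLuck/llama2-7-dolly-query"

# Loads the base model recorded in the adapter config and attaches the adapter
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Placeholder prompt layout; adjust to the format used during fine-tuning
prompt = "Context: ...\nInput: ...\nQuery:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```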

## Training and evaluation data

Trained with TRL's `SFTTrainer` on the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset; [check out the code](https://colab.research.google.com/drive/1sr0mUF8dwYKo6NNR3tkjk0Z-p5FFr1_6?usp=sharing).
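
For reference, the Dolly dataset listed in the card metadata can be loaded as follows (the column names in the comment are those of `databricks-dolly-15k`):

```python
from datasets import load_dataset

# Columns: instruction, context, response, category
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
print(dolly[0])
```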

## Training procedure

[Check out the code here](https://colab.research.google.com/drive/1sr0mUF8dwYKo6NNR3tkjk0Z-p5FFr1_6?usp=sharing)
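
A hedged sketch of an `SFTTrainer` setup along these lines; the LoRA rank, target modules, and prompt-formatting details below are assumptions for illustration, and the linked notebook remains the authoritative source:

```python
from peft import LoraConfig
from trl import SFTTrainer

# Illustrative LoRA settings; the values actually used are in the linked notebook
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

trainer = SFTTrainer(
    model=model,                 # quantized base model from the sketch above
    tokenizer=tokenizer,
    train_dataset=dolly,         # databricks-dolly-15k split loaded above
    peft_config=peft_config,
    dataset_text_field="text",   # assumes a pre-built prompt column
    max_seq_length=512,
    args=training_args,          # TrainingArguments sketched under "Training hyperparameters"
)
trainer.train()
trainer.save_model("llama2-7-dolly-query")
```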

### Training hyperparameters

The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
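
A `TrainingArguments` sketch that mirrors the values listed above; the output directory and precision flag are assumptions, not recorded values:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama2-7-dolly-query",   # assumption
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,       # effective train batch size 64
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
    seed=42,
    bf16=True,                           # assumption; common for QLoRA on recent GPUs
)
```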

### Training results

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65388a56a5ab055cf2d73676/FJ5p_wutu8o1z789Hd93g.png)

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2