---
license: apache-2.0
---

# Model Card for FalconAlpaca


FalconAlpaca is Falcon-7B fine-tuned on the [Stanford Alpaca Dataset](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json).

## Model Details

This model is an attempt to adapt the outputs of Falcon-7B to be more information-rich and focused.
It was trained with [Lit GPT](https://github.com/Lightning-AI/lit-gpt) and took 2 hours to train on a single 4xA6000 node.
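
For reference, an adapter_v2 fine-tune with Lit GPT is launched roughly as sketched below; the script path and flags follow the Lit GPT repository layout of the time and are assumptions, not a copy of the exact command used here:

```sh
# Assumed Lit GPT adapter_v2 fine-tuning invocation (paths are placeholders);
# check the Lit GPT docs for the exact flags of your version.
python finetune/adapter_v2.py \
    --data_dir data/alpaca \
    --checkpoint_dir checkpoints/tiiuae/falcon-7b \
    --out_dir out/adapter_v2/falcon-alpaca
```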


### Model Description

- **License:** Apache 2.0
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b)

### Model Sources

[Stanford Alpaca Dataset](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json)

### Out-of-Scope Use

This model is intended for testing purposes only. No attempt has been made to control or remove bias, toxicity, or any other form of
potentially dangerous or harmful output.

## Bias, Risks, and Limitations

No effort was made to remove incorrect or harmful information from Falcon-7B or the Alpaca dataset, so any risks and limitations of
that model and dataset carry over to this project as well.

## How to Get Started with the Model

Download [Lit GPT](https://github.com/Lightning-AI/lit-gpt) and install its dependencies.
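
A minimal setup sketch, assuming the Lit GPT repository layout at the time of writing (script names and paths may differ between versions):

```sh
# Clone Lit GPT and install its dependencies
git clone https://github.com/Lightning-AI/lit-gpt
cd lit-gpt
pip install -r requirements.txt

# Download the Falcon-7B base weights and convert them to Lit GPT format
# (helper script names are assumptions; they may differ between Lit GPT versions)
python scripts/download.py --repo_id tiiuae/falcon-7b
python scripts/convert_hf_checkpoint.py --checkpoint_dir checkpoints/tiiuae/falcon-7b
```

Then run generation with the fine-tuned adapter weights: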

```sh
python generate/adapter_v2.py \
    --adapter_path path/to/model/lit_model_adapter_finetuned.pth \
    --checkpoint_dir path/to/model \
    --prompt "What temperature should I cook pork at to ensure it is safe?"
```

This uses around 14 GB of VRAM. To reduce VRAM usage, you can add one of the following flags:
```sh
--quantize llm.int8
```
or
```sh
--quantize gptq.int4
```
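
For example, the same generation command with 8-bit quantization enabled (paths are placeholders, as above):

```sh
python generate/adapter_v2.py \
    --adapter_path path/to/model/lit_model_adapter_finetuned.pth \
    --checkpoint_dir path/to/model \
    --quantize llm.int8 \
    --prompt "What temperature should I cook pork at to ensure it is safe?"
```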

### Training Data

[Stanford Alpaca Dataset](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json)


#### Training Hyperparameters

Training used the default Lit GPT hyperparameters:
```
learning_rate = 9e-3
batch_size = 32                   # effective batch size, reached via gradient accumulation
micro_batch_size = 2              # samples per forward/backward pass
gradient_accumulation_iters = 16  # batch_size / micro_batch_size
epoch_size = 50000                # train dataset size
num_epochs = 5
max_iters = 125000
weight_decay = 0.02
warmup_iters = 50000
```



## More Information

[HeitechSoft](https://heitechsoft.com/blog/heitechsoft-s-falcon-7b-fine-tuned-model-paves-the-way-for-advanced-ai-chatbots)