---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
- falcon
- transformers
base_model: tiiuae/falcon-7b
model-index:
- name: falcon7b-linear-equations
results: []
datasets:
- Menouar/LinearEquations
language:
- en
---
# falcon7b-linear-equations
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on a simple dataset of [linear equations](https://huggingface.co/datasets/Menouar/LinearEquations).
For this task it would be easier to fine-tune [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct), which has already been fine-tuned on a mixture of chat/instruct datasets, but we start from the raw model because it is more challenging.
A version of this model with the QLoRA adapter merged into the base weights is available at [falcon7b-linear-equations-merged](https://huggingface.co/Menouar/falcon7b-linear-equations-merged).
## Model description
The objective of this model is to test Falcon7B's ability to solve mathematical linear equations after fine-tuning. The linear equations are in the form:
```
Ay + ay + b + B = Dy + dy + c + C
```
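For reference, this equation family has a simple closed-form solution, which can be sketched in a few lines of Python (the helper name and argument order are illustrative, not part of the dataset):

```python
from fractions import Fraction

def solve_linear(A, a, b, B, D, d, c, C):
    """Exact solution of Ay + ay + b + B = Dy + dy + c + C, or None if degenerate."""
    if A + a == D + d:
        return None  # the y terms cancel: no unique solution
    return Fraction((c + C) - (b + B), (A + a) - (D + d))

# Example: 10 + 4y - 9y + 5 = 4 + 8y - 2y + 8
print(solve_linear(4, -9, 10, 5, 8, -2, 4, 8))  # 3/11
```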
This model was trained using TRL, QLoRA (LoRA adapters on a quantized base model), and Flash Attention.
Due to limited GPU resources, I only considered 20,000 samples for training.
For more information, check my [**Notebook**](https://colab.research.google.com/drive/1e8t5Cj6ZDAOc-z3bweWuBxF8mQZ9IPsH?usp=sharing).
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline
# Specify the model ID
peft_model_id = "Menouar/falcon7b-linear-equations"
# Load Model with PEFT adapter
model = AutoPeftModelForCausalLM.from_pretrained(
peft_model_id,
device_map="auto",
torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
equation = "Solve for y: 10 + 4y -9y +5 = 4 +8y - 2y + 8 ."
outputs = pipe(
    equation,
    max_new_tokens=172,
    do_sample=True,
    temperature=0.1,
    top_k=50,
    top_p=0.1,
    eos_token_id=pipe.tokenizer.eos_token_id,
    pad_token_id=pipe.tokenizer.pad_token_id,
)
for seq in outputs:
    print(seq["generated_text"])
"""
Solve for y: 10 + 4y -9y +5 = 4 +8y - 2y + 8 .
The equation is in the form of ay + b = dy + c where:
a = 4 - 9 = -5
b = 10 + 5 = 15
d = 8 - 2 = 6
c = 4 + 8 = 12
The solution is y = (c - b)/(a - d) if a ≠ d
12 - 15 = -3
-5 - 6 = -11
y = -3 / -11
The fraction -3 / -11 = 3 / 11.
The solution is y = 3 / 11.
"""
```
## Intended uses & limitations
The model can solve any equation of the form ```Ay + ay + b + B = Dy + dy + c + C``` with integer coefficients ranging from -10 to 10. It cannot solve linear equations with more terms than A, a, b, B, D, d, c, C, nor equations whose coefficients fall outside the range -10 to 10. These limitations stem from the nature of the samples in the dataset and from the limited ability of Large Language Models (LLMs) to perform arithmetic. The goal of this work is to demonstrate that fine-tuning an LLM on a specific dataset can yield excellent results on a narrow task, as our new model shows compared to the original one.
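The constraints above suggest how training samples can be drawn; the following is an illustrative sketch (the dataset's actual prompt template may differ):

```python
import random
from fractions import Fraction

def random_sample(rng):
    """Draw an equation with integer coefficients in [-10, 10] and its exact solution."""
    while True:
        A, a, b, B, D, d, c, C = (rng.randint(-10, 10) for _ in range(8))
        if A + a != D + d:  # skip degenerate equations with no unique solution
            break
    prompt = f"Solve for y: {b} + {A}y + {a}y + {B} = {c} + {D}y + {d}y + {C} ."
    answer = Fraction((c + C) - (b + B), (A + a) - (D + d))
    return prompt, answer

prompt, answer = random_sample(random.Random(0))
```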
## Evaluation
A full evaluation will be compiled later. For now, here is an example of a linear equation that this model, Bard, and BingChat solve correctly, while other models, including ChatGPT 3.5, Llama 70B, Mixtral 8x7B, and Falcon-7b-instruct, do not arrive at the correct solution.
```
Solve for y: 10 + 4y -9y +5 = 4 +8y - 2y + 8 .
```
<strong>Falcon-7b-instruct</strong><br>
Solve for y: 10 + 4y -9y +5 = 4 +8y - 2y + 8 .<br>
First, we can solve for y.<br>
y = -2 + 8x<br>
We add 8 to x and subtract -2, giving:<br>
x = -2<br>
so 2 = +2<br>
y = 4<br>
we add 2 to y, so y = +6<br>
We can now plug these values into the original equation:<br>
10 + 4x -9x + 5 = -2 + 4x<br>
10 + 4(-2) = -6<br>
we end up with -1 = -1 + 4<br>
4 = 1<br>
y = 1<br>
Therefore, the solution is <strong><span style="color:red">
y = 1
</span></strong>. The equation has been solved.
<strong>Mixtral 8x7B</strong><br>
Let's first simplify the equation by combining like terms:<br>
10 - 5y + 5 = 4 + 6y<br>
Now, let's get all the y terms on one side and constants on the other side. Subtract 6y from both sides:<br>
10 - 5y - 6y + 5 = 4 + 6y - 6y<br>
This simplifies to:<br>
15 - 11y = 4<br>
Next, subtract 15 from both sides to isolate the y term:<br>
15 - 15 - 11y = 4 - 15<br>
This simplifies to:<br>
-11y = -11<br>
Finally, divide both sides by -11 to solve for y:<br>
y = -11 / -11<br>
So,<br>
<strong><span style="color:red">
y = 1
</span></strong>.
<strong>Llama 70B</strong><br>
To solve for y, we need to isolate the y term on one side of the equation.<br>
First, we can combine the like terms:<br>
10 + 4y - 9y + 5 = 4 + 8y - 2y + 8<br>
This simplifies to:<br>
10 - 5y + 5 = 4 + 6y<br>
Next, we can subtract 4 from both sides:<br>
6 - 5y + 5 = 6y<br>
Now, we can isolate the y term by subtracting 5 from both sides:<br>
-5y + 5 = 6y - 5<br>
This simplifies to:<br>
-5y = 11<br>
Finally, we can divide both sides by -5 to solve for y:<br>
y = -11/5<br>
Therefore, the value of y is <strong><span style="color:red">
-11/5
</span></strong>.
<strong>ChatGPT 3.5</strong><br>
To solve for y in the given equation 10 + 4y - 9y + 5 = 4 + 8y - 2y + 8, follow these steps:<br>
Combine like terms on each side: (4y - 9y + 5) + 10 = (6y + 12 + 8y - 2y)<br>
Combine the y terms and constants separately: -5y + 15 = 12y + 12<br>
Move all terms involving y to one side and the constants to the other side: -5y - 12y = 12 - 15<br>
Combine like terms: -17y = -3<br>
Divide by -17: y = 3/17<br>
Therefore, the solution to the equation 10 + 4y - 9y + 5 = 4 + 8y - 2y + 8 is <strong><span style="color:red">
y = 3/17
</span></strong>.
<strong>Bard</strong><br>
Sure, I'd be happy to help you solve for y. Here is the solution:<br>
10 + 4y - 9y +5 = 4 +8y - 2y + 8<br>
First, combine like terms on each side of the equation.<br>
-5y + 15 = 12 + 6y<br>
Next, subtract 6y from both sides of the equation.<br>
-11y + 15 = 12<br>
Finally, subtract 15 from both sides of the equation to isolate y.<br>
-11y = -3<br>
Divide both sides of the equation by -11 to find y.<br>
<strong><span style="color:green">
y = 0.2727
</span></strong>.
<strong>BingChat</strong><br>
Combine like terms on both sides:<br>
10 + 4y - 9y + 5 = 15 - 5y<br>
4 + 8y - 2y + 8 = 12 + 6y<br>
This gives us:<br>
15 - 5y = 12 + 6y<br>
Solving for y in one step:<br>
<strong><span style="color:green">
y = 3 / 11
</span></strong>.
<strong>Falcon7B-linear-equation</strong><br>
The equation is in the form of ay + b = dy + c where:<br>
a = 4 - 9 = -5<br>
b = 10 + 5 = 15<br>
d = 8 - 2 = 6<br>
c = 4 + 8 = 12<br>
The solution is y = (c - b)/(a - d) if a ≠ d <br>
12 - 15 = -3<br>
-5 - 6 = -11<br>
y = -3 / -11<br>
The fraction -3 / -11 = 3 / 11.<br>
The solution is <strong><span style="color:green">
y = 3 / 11
</span></strong>.
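The answers above are easy to verify by substituting each candidate back into the original equation; a quick sketch:

```python
from fractions import Fraction

def is_solution(y):
    """Substitute y into both sides of 10 + 4y - 9y + 5 = 4 + 8y - 2y + 8."""
    return 10 + 4*y - 9*y + 5 == 4 + 8*y - 2*y + 8

print(is_solution(Fraction(3, 11)))   # this model, Bard, BingChat -> True
print(is_solution(Fraction(3, 17)))   # ChatGPT 3.5 -> False
print(is_solution(Fraction(-11, 5)))  # Llama 70B -> False
```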
## Training procedure
For more information, check my [**Notebook**](https://colab.research.google.com/drive/1e8t5Cj6ZDAOc-z3bweWuBxF8mQZ9IPsH?usp=sharing).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 42
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 84
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
The training results can be found on [**TensorBoard**](https://huggingface.co/Menouar/falcon7b-linear-equations/tensorboard).
### Framework versions
- PEFT 0.8.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1