Update README.md
README.md CHANGED
@@ -7,7 +7,7 @@ datasets:
 This model experiment was inspired by the work published in [Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks](https://arxiv.org/pdf/2305.14201.pdf), which found success in fine-tuning Llama models on math.

-Fine tuning of [philschmid/Llama-2-7b-hf](https://huggingface.co/philschmid/Llama-2-7b-hf) was conducted with
+Fine tuning of [philschmid/Llama-2-7b-hf](https://huggingface.co/philschmid/Llama-2-7b-hf) was conducted with 2.8M math problems from the [AtlasUnified/atlas-math-sets](https://huggingface.co/datasets/AtlasUnified/atlas-math-sets) dataset.

 Training was conducted on a trn1.32xlarge instance. The model here was compiled for 2 Neuron cores, which will run on AWS inf2.8xlarge and larger instances.
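As a usage note, the sketch below shows one way a pre-compiled 2-core Neuron checkpoint like this could be loaded and run on an inf2.8xlarge instance. It is a minimal sketch assuming the optimum-neuron library's NeuronModelForCausalLM class; the repository id is a placeholder, not this model's actual Hub path.

```python
# Minimal sketch: run a pre-compiled (2 Neuron cores) Llama checkpoint on an inf2 instance.
# Assumes optimum-neuron is installed; the repo id below is a placeholder, not the real model path.
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

model_id = "your-org/llama-2-7b-math-neuron"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Loads the already-compiled Neuron graph, so no recompilation is needed on the inf2 host.
model = NeuronModelForCausalLM.from_pretrained(model_id)

prompt = "What is 37 * 49?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```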