Piotr Zalewski committed
Commit aa1299e
Parent: 4661fb3

Latest version

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -60,6 +60,6 @@ print("ANSWER: " + response_output)
 - **Finetuned from model:** [chuanli11/Llama-3.2-3B-Instruct-uncensored](https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored)
 - **Dataset used:** [KingNish/reasoning-base-20k](https://huggingface.co/datasets/KingNish/reasoning-base-20k)
 
-This Llama model was trained faster than [Unsloth](https://github.com/unslothai/unsloth) using [custom training code](https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4?scriptVersionId=200492023).
+This Llama model was trained faster than [Unsloth](https://github.com/unslothai/unsloth) using [custom training code](https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4).
 
-Visit https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4?scriptVersionId=200492023 to find out how you can finetune your models using BOTH of the Kaggle provided GPUs.
+Visit https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4 to find out how you can finetune your models using BOTH of the Kaggle provided GPUs.
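For context on the lines changed above: the linked Kaggle notebook demonstrates finetuning across both of Kaggle's T4 GPUs. Below is a minimal sketch of that kind of dual-GPU run, not the notebook's actual code, using the Hugging Face Trainer launched with torchrun so each GPU gets one process. The dataset column name and all hyperparameters are assumptions, and fully finetuning a 3B model on 16 GB T4s would in practice likely also need memory savings such as parameter-efficient finetuning.

```python
# Minimal sketch (assumed, not the notebook's code) of dual-GPU finetuning.
# Launch with one process per Kaggle T4:
#   torchrun --nproc_per_node=2 finetune.py
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "chuanli11/Llama-3.2-3B-Instruct-uncensored"  # base model from the README

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

dataset = load_dataset("KingNish/reasoning-base-20k", split="train")

def tokenize(batch):
    # "text" is an assumed column name; adjust to the dataset's real schema.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        per_device_train_batch_size=1,   # one micro-batch per T4
        gradient_accumulation_steps=8,
        fp16=True,                       # T4s have no bf16 support
        gradient_checkpointing=True,     # trade compute for memory
        num_train_epochs=1,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # under torchrun, Trainer wraps the model in DDP itself
```

Under torchrun each process owns one GPU and gradients are synchronized by DistributedDataParallel, so the effective batch size is the per-device batch times the accumulation steps times the two GPUs.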