Tags: PEFT · Portuguese · Llama · LoRA · Stanford-Alpaca
dominguesm committed
Commit 491d563 · 1 parent: d527604

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -15,6 +15,8 @@ inference: false
 
 <a target="_blank" href="https://colab.research.google.com/github/DominguesM/alpaca-lora-ptbr-7b/blob/main/notebooks/03%20-%20Evaluate.ipynb">
   <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
+</a><a target="_blank" href="https://github.com/DominguesM/alpaca-lora-ptbr-7b">
+  <img src="https://img.shields.io/badge/-Github-blue?style=social&logo=github&link=https://github.com/DominguesM/alpaca-lora-ptbr-7b" alt="Github Project Page"/>
 </a>
 
 </br>
@@ -169,7 +171,7 @@ Alpaca-LoRA-PTBR: 'Este documento é um contrato entre duas partes, rotulado com
 
 ## Training procedure
 
-Fine-tuning was done via the Trainer API. Here is the Jupyter notebook with the training code. (**coming soon**)
+Fine-tuning was done via the Trainer API. Here is the [Jupyter notebook](https://colab.research.google.com/github/DominguesM/alpaca-lora-ptbr-7b/blob/main/notebooks/02%20-%20Train%20Model%20LoRa.ipynb) with the training code.
 
 ### Training hyperparameters
 
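For readers following the diff, here is a minimal sketch of what LoRA fine-tuning via the Trainer API typically looks like with the PEFT library. It is an illustration under stated assumptions, not the contents of the linked notebook: the base checkpoint name, dataset file, prompt formatting, and every hyperparameter below are placeholders.

```python
# Minimal sketch: LoRA fine-tuning of a Llama 7B base model with PEFT and
# the Hugging Face Trainer API. Checkpoint name, dataset file, and all
# hyperparameters are illustrative assumptions, not taken from this repo.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "decapoda-research/llama-7b-hf"  # assumption: any Llama 7B checkpoint works

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE)

# Wrap the base model with LoRA adapters; only the low-rank matrices are trained.
model = get_peft_model(
    model,
    LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    ),
)

# Hypothetical JSON file whose "text" column already holds prompts in the
# Stanford-Alpaca instruction template, translated to Portuguese.
data = load_dataset("json", data_files="alpaca_data_ptbr.json")["train"]
data = data.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(
        output_dir="alpaca-lora-ptbr-7b",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=3e-4,
        fp16=True,
        logging_steps=10,
    ),
    # mlm=False gives causal-LM labels (inputs shifted by one position).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("alpaca-lora-ptbr-7b")  # writes only the small adapter weights
```

At evaluation time (as in the Colab badge above), the usual PEFT pattern is to load the published adapter back onto the base model with `PeftModel.from_pretrained(base_model, "dominguesm/alpaca-lora-ptbr-7b")`; the Hub repo id here is presumed from the author and project names.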