---
license: apache-2.0
datasets:
  - Gustrd/dolly-15k-libretranslate-pt
library_name: peft
---

This adapter model, created with PEFT, was trained on top of openlm-research/open_llama_3b_v2 (https://huggingface.co/openlm-research/open_llama_3b_v2).

It is not perfect in Portuguese, but it is a good starting point for further fine-tuning on a specific task in this language.

Consider checking the Jupyter notebooks in the Files section for more information.

These notebooks were obtained from the web and are very similar to those used for the "cabrita" model, which was built on top of LLaMA 1.

It was trained for only 120 steps, with some results comparable to VMware/open-llama-13b-open-instruct.

It may be necessary to adjust the inference parameters to make it work better.
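As a starting point for that tuning, here is a hedged sketch of generation parameters that are commonly adjusted for small instruction-tuned models; the specific values are illustrative, not recommendations from the model author:

```python
# Illustrative generation settings to tweak; pass them to model.generate().
gen_kwargs = {
    "max_new_tokens": 256,       # cap the length of the reply
    "do_sample": True,           # sampling instead of greedy decoding
    "temperature": 0.7,          # lower = more deterministic output
    "top_p": 0.9,                # nucleus sampling cutoff
    "repetition_penalty": 1.15,  # discourages loops, common for 3B models
}

# usage (with a loaded tokenizer/model):
#   inputs = tokenizer(prompt, return_tensors="pt")
#   output = model.generate(**inputs, **gen_kwargs)
```

Lowering the temperature or raising the repetition penalty tends to help when the model drifts out of Portuguese or repeats itself.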