|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- Gustrd/dolly-15k-libretranslate-pt |
|
library_name: peft |
|
language: |
|
- pt |
|
--- |
|
This adapter model, created with PEFT, was trained on top of [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2).
|
|
|
Its Portuguese is not perfect, but it is a good starting point for further fine-tuning on specific tasks in this language.
|
|
|
Consider checking the Jupyter notebooks in the files section for more information.
|
|
|
These notebooks were taken from the web and are very similar to those used for the "cabrita" model, which was built on top of LLaMA 1.
|
|
|
It was trained for only 120 steps, yet produces results quite similar to VMware/open-llama-13b-open-instruct.
|
|
|
It may be necessary to adjust the inference parameters to make it work better.