recogna committed
Commit 393c656
1 Parent(s): 665e14b

Update README.md

Files changed (1)
  1. README.md +20 -4
README.md CHANGED
@@ -148,10 +148,20 @@ model-index:
  url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/gembode-7b-it
  name: Open Portuguese LLM Leaderboard
  ---
- ## Training procedure

- The following `bitsandbytes` quantization config was used during training:
  - quant_method: bitsandbytes
  - _load_in_8bit: False
  - _load_in_4bit: True
@@ -165,13 +175,19 @@ The following `bitsandbytes` quantization config was used during training:
  - bnb_4bit_quant_storage: uint8
  - load_in_4bit: True
  - load_in_8bit: False
- ### Framework versions

  - PEFT 0.5.0

- # Open Portuguese LLM Leaderboard Evaluation Results

  Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/recogna-nlp/gembode-7b-it) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
+ # gembode-7b-it

+ <!--- PROJECT LOGO -->
+ <p align="center">
+ <img src="https://huggingface.co/recogna-nlp/GemBode-2b-it/resolve/main/gembode.jpg" alt="GemBode Logo" width="400" style="margin-left: auto; margin-right: auto; display: block;"/>
+ </p>
+
+ GemBode is a language model fine-tuned for the Portuguese language, developed from the Gemma-7b-it model provided by [Google](https://huggingface.co/google/gemma-7b-it).
+
+ # Training
+
+ The following `bitsandbytes` quantization config was used during training:

  - quant_method: bitsandbytes
  - _load_in_8bit: False
  - _load_in_4bit: True

  - bnb_4bit_quant_storage: uint8
  - load_in_4bit: True
  - load_in_8bit: False
+ ### Framework versions

  - PEFT 0.5.0
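The quantization fields above can be expressed with the standard `BitsAndBytesConfig` class from `transformers`. The snippet below is an illustrative sketch, not the authors' training code: it only mirrors the fields that appear in the list, and it assumes `google/gemma-7b-it` (the base checkpoint named in this card) as the model being quantized.

```python
# Illustrative sketch -- not the authors' original training script.
# It mirrors the quantization fields listed above using the public BitsAndBytesConfig API.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,               # load_in_4bit: True
    load_in_8bit=False,              # load_in_8bit: False
    bnb_4bit_quant_storage="uint8",  # bnb_4bit_quant_storage: uint8 (needs a recent transformers release)
)

# Load the base model in 4 bits before any fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b-it",
    quantization_config=bnb_config,
    device_map="auto",  # requires `accelerate`
)
```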

+ ## Main Features
+
+ - **Base Model:** Gemma-7b-it, created by Google, with 7 billion parameters.
+ - **Fine-tuning Dataset:** [UltraAlpaca](https://huggingface.co/datasets/recogna-nlp/ultra-alpaca-ptbr)
+ - **Training:** Training was performed by fine-tuning gemma-7b-it with QLoRA (illustrated below).
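The bullet above only states that gemma-7b-it was fine-tuned with QLoRA using PEFT; no LoRA hyperparameters are given in the card. The sketch below shows one plausible way to attach adapters to the 4-bit model from the previous snippet; the rank, alpha, dropout, and target modules are assumptions for illustration, not documented values.

```python
# Illustrative QLoRA setup -- the hyperparameters below are assumptions,
# not values documented in this model card.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# `model` is the 4-bit google/gemma-7b-it instance from the previous sketch.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                 # assumed LoRA rank
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # assumed; the card does not list target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Only the low-rank adapter weights are updated in this setup, which is what makes fine-tuning a 7B model loaded in 4-bit precision practical on a single GPU.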

+ # Open Portuguese LLM Leaderboard Evaluation Results

  Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/recogna-nlp/gembode-7b-it) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)