Tags: Text Generation · Transformers · PyTorch · PEFT · English · llama · text-generation-inference
chainyo committed
Commit 2e4a4de · 1 Parent(s): b3349e0

add training hardware

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -176,3 +176,7 @@ The performance degradation is due to the fact we load the model in 8bit and we
Thanks to the 8bit quantization, the model is 4 times faster than the original model and the results are still decent.

Some complex tasks like WinoGrande and OpenBookQA are more difficult to solve with the adapters.
+
+ ## Training Hardware
+
+ This model was trained on a single NVIDIA RTX 3090 GPU.