muellerzr HF staff committed on
Commit
caa57eb
β€’
1 Parent(s): 050e47f

More notes

Files changed (1)
  1. app.py +3 -0
app.py CHANGED
@@ -121,6 +121,9 @@ with gr.Blocks() as demo:
     on a model hosted on the 🤗 Hugging Face Hub. The minimum recommended vRAM needed for a model
     is denoted as the size of the "largest layer", and training of a model is roughly 4x its size (for Adam).
 
+    When performing inference, expect to add up to an additional 20% to this as found by [EleutherAI](https://blog.eleuther.ai/transformer-math/).
+    More tests will be performed in the future to get a more accurate benchmark for each model.
+
     Currently this tool supports all models hosted that use `transformers` and `timm`.
 
     To use this tool pass in the URL or model name of the model you want to calculate the memory usage for,
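The arithmetic the docstring describes can be sketched as a small helper. This is a back-of-envelope illustration only, not the tool's actual implementation: the `estimate_vram_gb` function name and its inputs are hypothetical, and the multipliers (roughly 4x for Adam training, up to ~20% inference overhead) are the figures stated in the docstring and the linked EleutherAI post.

```python
# Back-of-envelope vRAM estimates, assuming the multipliers from the
# docstring above: ~4x model size for Adam training, up to ~20% extra
# for inference. Hypothetical helper; not the app's real code.

def estimate_vram_gb(model_size_gb: float) -> dict:
    """Rough vRAM needs for a model whose weights occupy `model_size_gb`."""
    return {
        # Inference: weights plus up to ~20% overhead for activations/buffers.
        "inference": model_size_gb * 1.20,
        # Training with Adam: weights + gradients + optimizer states, ~4x.
        "training_adam": model_size_gb * 4,
    }

# Example: a model whose weights take 7 GB of memory.
print(estimate_vram_gb(7.0))
```

Note this treats the whole-model size as the input; the tool's "largest layer" figure is a separate, smaller minimum for offloaded inference.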