muellerzr and pcuenq committed
Commit 27e2fc4
1 Parent(s): 542542e

Fix Space URL in Discussion Report (#23)


- Fix Space URL (c174643e466d6ba2c981c3c99da6f3f7f8a79830)


Co-authored-by: Pedro Cuenca <pcuenq@users.noreply.huggingface.co>

Files changed (1)
  1. src/hub_utils.py +1 -1
src/hub_utils.py CHANGED
@@ -28,7 +28,7 @@ def report_results(model_name, library, access_token):
 
  You will need about {data[1]} VRAM to load this model for inference, and {data[3]} VRAM to train it using Adam.
 
- These calculations were measured from the [Model Memory Utility Space](https://hf.co/spaces/hf-accelerate/model-memory-utility) on the Hub.
+ These calculations were measured from the [Model Memory Utility Space](https://huggingface.co/spaces/hf-accelerate/model-memory-usage) on the Hub.
 
  The minimum recommended vRAM needed for this model assumes using [Accelerate or `device_map="auto"`](https://huggingface.co/docs/accelerate/usage_guides/big_modeling) and is denoted by the size of the "largest layer".
  When performing inference, expect to add up to an additional 20% to this, as found by [EleutherAI](https://blog.eleuther.ai/transformer-math/). More tests will be performed in the future to get a more accurate benchmark for each model.
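For context, the changed line sits inside a markdown report template that interpolates memory figures via `{data[1]}` and `{data[3]}`. A minimal sketch of how such a report body might be assembled with the corrected Space URL follows; the `build_report` helper name and the meaning of the `data` indices (index 1 = inference VRAM, index 3 = training VRAM) are assumptions for illustration, not the actual repository code:

```python
# Hypothetical sketch of the report template touched by this diff.
# `data` is assumed to be a list of human-readable sizes, where data[1]
# is the inference VRAM and data[3] the Adam training VRAM, matching the
# {data[1]} / {data[3]} placeholders shown in the diff.

def build_report(data: list) -> str:
    """Assemble the discussion-report markdown with the corrected Space URL."""
    return (
        f"You will need about {data[1]} VRAM to load this model for inference, "
        f"and {data[3]} VRAM to train it using Adam.\n\n"
        "These calculations were measured from the "
        "[Model Memory Utility Space]"
        "(https://huggingface.co/spaces/hf-accelerate/model-memory-usage) "
        "on the Hub.\n"
    )

# Example with placeholder sizes:
report = build_report(["", "1.7 GB", "", "6.8 GB"])
```

The fix itself is just the URL string: the old `hf.co/spaces/hf-accelerate/model-memory-utility` link pointed at a Space slug that does not match the deployed Space, so reports linked to a dead page; the replacement uses the full `huggingface.co` hostname and the `model-memory-usage` slug.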