maderix committed
Commit 7b1be53
1 Parent(s): b97fdb6

Update README.md

Files changed (1):
README.md +1 -0
README.md CHANGED
@@ -5,6 +5,7 @@ library_name: transformers
 ---
 Converted with https://github.com/qwopqwop200/GPTQ-for-LLaMa
 All models tested on A100-80G
+*Conversion may require a lot of RAM: LLaMA-7b takes ~12 GB, 13b around 21 GB, 30b around 62 GB, and 65b more than 120 GB.
 
 Installation instructions as mentioned in above repo:
 1. Install Anaconda and create a venv with python 3.8
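For reference, step 1 of the installation can be done roughly as follows; the environment name `gptq` is only an illustrative assumption, not something specified by the repo:

```bash
# Create and activate a Python 3.8 environment with conda (env name is illustrative)
conda create -n gptq python=3.8
conda activate gptq
```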