justheuristic committed · Commit e066edc · Parent(s): 2664145
Update README.md

README.md CHANGED
@@ -1,8 +1,8 @@
 ### Quantized EleutherAI/gpt-j-6b with 8-bit weights
 
-This is a version of EleutherAI's GPT-J with 6 billion parameters that is modified so you can generate **and fine-tune
+This is a version of EleutherAI's GPT-J with 6 billion parameters that is modified so you can generate **and fine-tune the model in colab or equivalent desktop gpu (e.g. single 1080Ti)**.
 
-
+__The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can inference it [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is way more expensive.
 
 Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:
 - large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
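The "22+ GB" figure in the added paragraph follows from simple arithmetic over the parameter count. A back-of-the-envelope check, assuming roughly 6 billion float32 parameters (the exact GPT-J count differs slightly; this is a sketch, not a measurement):

```python
params = 6_000_000_000        # ~6B parameters (approximate)
fp32_bytes = params * 4       # 4 bytes per float32 weight
print(fp32_bytes / 2**30)     # ~22.4 GiB for the weights alone
# Adam-style fine-tuning also keeps gradients plus two optimizer moments,
# roughly quadrupling the footprint before activations are even counted.
print(fp32_bytes * 4 / 2**30) # ~89 GiB
```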
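The bullet about quantizing large weight tensors to 8 bits and de-quantizing them just-in-time for multiplication can be illustrated with a minimal PyTorch sketch. Note this uses simple per-row absmax int8 scaling as a stand-in: the function names and scaling scheme are illustrative assumptions, not the dynamic quantization scheme the model actually uses.

```python
import torch

def quantize_rowwise(w: torch.Tensor):
    # Map each row of a float32 weight matrix to int8 using its own absmax scale.
    scale = w.abs().max(dim=1, keepdim=True).values / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale  # int8 weights (~4x smaller) plus per-row float scales

def dequantize_rowwise(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # De-quantize just-in-time: reconstruct approximate float32 weights.
    return q.to(torch.float32) * scale

w = torch.randn(4096, 4096)               # a "large weight tensor"
q, scale = quantize_rowwise(w)             # stored in 8-bit between uses
x = torch.randn(8, 4096)
y = x @ dequantize_rowwise(q, scale).t()   # floats exist only for this matmul
```

The memory saving comes from keeping weights in int8 at rest; the float32 copy is materialized transiently, one layer at a time, right before the multiplication that needs it.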