Update README.md #87
opened by FelixMildon

README.md CHANGED
```diff
@@ -75,7 +75,7 @@ for seq in sequences:

 💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

-For fast inference with Falcon, check-out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](
+For fast inference with Falcon, check-out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).

 You will need **at least 85-100GB of memory** to swiftly run inference with Falcon-40B.
```