Commit 5810172
Parent: 18884a6

Update license (#1)

- Update license (bfed21418898c8f705c92b09d04144b020f0cb1a)

Co-authored-by: Sunitha Ravi <sunitha-ravi@users.noreply.huggingface.co>
README.md CHANGED

@@ -3,7 +3,7 @@ base_model: PatronusAI/Llama-3-Patronus-Lynx-70B-Instruct
 language:
 - en
 library_name: transformers
-license:
+license: cc
 pipeline_tag: text-generation
 tags:
 - text-generation
@@ -111,5 +111,4 @@ These I-quants can also be used on CPU and Apple Metal, but will be slower than
 
 The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
 
-Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
-
+Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
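For reference, applying the metadata hunk above should leave the model card's YAML front matter looking roughly like this. This is a sketch assembled only from the diff context (the `base_model` line comes from the hunk header; any fields outside the hunk's context are not shown):

```yaml
base_model: PatronusAI/Llama-3-Patronus-Lynx-70B-Instruct
language:
- en
library_name: transformers
license: cc          # the field this commit fills in; it was previously empty
pipeline_tag: text-generation
tags:
- text-generation
```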