byroneverson committed
Commit • 524859e
1 Parent(s): b45331b

Update README.md
README.md CHANGED

@@ -5,12 +5,14 @@ license: other
 license_name: glm-4
 license_link: https://huggingface.co/THUDM/glm-4-9b-chat/blob/main/LICENSE
 language:
-
-
+- zh
+- en
 tags:
-
-
-
+- glm
+- chatglm
+- thudm
+- chat
+- abliterated
 library_name: transformers
 inference: false
 ---
@@ -22,4 +24,4 @@ Check out the <a href="https://huggingface.co/byroneverson/glm-4-9b-chat-abliter
 
 The python package "tiktoken" is required to quantize the model into gguf format. So I had to create <a href="https://huggingface.co/spaces/byroneverson/gguf-my-repo-plus-tiktoken">a fork of GGUF My Repo (+tiktoken)</a>.
 
-![Logo](https://huggingface.co/byroneverson/internlm2_5-7b-chat-abliterated/resolve/main/logo.png "Logo")
+![Logo](https://huggingface.co/byroneverson/internlm2_5-7b-chat-abliterated/resolve/main/logo.png "Logo")
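For anyone reproducing the quantization locally instead of through the Space, here is a minimal sketch of the step the tiktoken note describes. It assumes a local clone of the model repo and of llama.cpp (whose convert_hf_to_gguf.py converter the Space presumably builds on); the paths, output filename, and q8_0 output type are illustrative and not taken from the original README.

```python
# Minimal sketch (not part of the original README): convert the model to GGUF locally.
# Assumes local clones of the model repo and llama.cpp; names and flags are illustrative.
import importlib.util
import subprocess

# GLM-4's tokenizer is loaded via the "tiktoken" package, so check for it up front.
if importlib.util.find_spec("tiktoken") is None:
    raise SystemExit('Missing dependency: run "pip install tiktoken" first.')

# Run llama.cpp's HF-to-GGUF converter on the local model directory.
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py",
        "glm-4-9b-chat-abliterated",                        # path to the downloaded model repo
        "--outfile", "glm-4-9b-chat-abliterated-q8_0.gguf", # where to write the GGUF file
        "--outtype", "q8_0",                                # one of the converter's output types
    ],
    check=True,
)
```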