---
base_model: THUDM/glm-4-9b-chat
pipeline_tag: text-generation
license: other
license_name: glm-4
license_link: https://huggingface.co/THUDM/glm-4-9b-chat/blob/main/LICENSE
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
- chat
- abliterated
library_name: transformers
inference: false
---

# GLM 4 9B Chat - Abliterated

Check out the Jupyter notebook for details of how this model was abliterated from glm-4-9b-chat.

The Python package `tiktoken` is required to quantize the model into GGUF format, so I had to create a fork of GGUF My Repo (+tiktoken).

![Logo](https://huggingface.co/byroneverson/internlm2_5-7b-chat-abliterated/resolve/main/logo.png "Logo")
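The card does not include a usage snippet; a minimal sketch for chatting with the model via `transformers` might look like the following. The repo id and the manual GLM-4 prompt template shown in `format_glm4_prompt` are assumptions, not taken from this card — `tokenizer.apply_chat_template` with the repo's shipped template is the safer path:

```python
def format_glm4_prompt(messages):
    """Render chat messages into GLM-4's prompt format (assumed layout:
    a [gMASK]<sop> prefix, <|role|> headers, and a trailing <|assistant|>
    tag to cue generation). Verify against the tokenizer's chat template."""
    prompt = "[gMASK]<sop>"
    for m in messages:
        prompt += f"<|{m['role']}|>\n{m['content']}"
    return prompt + "<|assistant|>"


if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumed repo id for this abliterated model; check the actual card URL.
    repo = "byroneverson/glm-4-9b-chat-abliterated"
    tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        repo,
        torch_dtype=torch.bfloat16,
        device_map="auto",
        trust_remote_code=True,
    )

    # Preferred route: let the repo's own chat template build the prompt.
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": "Hello!"}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))
```

The model-loading section requires downloading the ~18 GB checkpoint, so it is guarded behind `__main__`; the pure `format_glm4_prompt` helper only illustrates what the chat template expands to.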