QuantFactory/glm-4-9b-chat-abliterated-GGUF
This is a quantized version of byroneverson/glm-4-9b-chat-abliterated, created using llama.cpp.
Original Model Card
GLM 4 9B Chat - Abliterated
Check out the Jupyter notebook for details on how this model was abliterated from glm-4-9b-chat.
The Python package tiktoken is required to quantize the model into GGUF format, so I had to create a fork of GGUF My Repo (+tiktoken).
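For reference, the standard llama.cpp conversion pipeline looks roughly like the sketch below. This is an assumption about the workflow the fork automates, not its exact code; the model path and the Q4_K_M quantization type are illustrative placeholders.

```shell
# Required by the GLM-4 tokenizer during conversion (the reason for the fork)
pip install tiktoken

# Get llama.cpp and its conversion dependencies
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the Hugging Face checkpoint to a full-precision GGUF file,
# then quantize it (Q4_K_M chosen here as an example)
python convert_hf_to_gguf.py /path/to/glm-4-9b-chat-abliterated \
    --outfile glm-4-9b-chat-abliterated.gguf
./llama-quantize glm-4-9b-chat-abliterated.gguf \
    glm-4-9b-chat-abliterated.Q4_K_M.gguf Q4_K_M
```

The resulting .gguf files can then be loaded directly by llama.cpp or compatible runtimes.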
Model tree for QuantFactory/glm-4-9b-chat-abliterated-GGUF
- Base model: THUDM/glm-4-9b-chat