---
license: cc-by-sa-4.0
---
# StableLM-3B-4E1T
- Model Creator: Stability AI
- Original Model: StableLM-3B-4E1T
## Description
This repository contains the most relevant quantizations of Stability AI's StableLM-3B-4E1T model in GGUF format - ready to be used with llama.cpp and similar applications.
## About StableLM-3B-4E1T
Stability AI claims: "StableLM-3B-4E1T achieves state-of-the-art performance (September 2023) at the 3B parameter scale for open-source models and is competitive with many of the popular contemporary 7B models, even outperforming our most recent 7B StableLM-Base-Alpha-v2."
According to Stability AI, "The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications."
## Files
Right now, the following quantizations are available:
(tell me if you need more)
These files are presented here with the written permission of Stability AI (although access to the model itself is still "gated").
## Usage Details
All technical details can be found on the original model card and in a paper on StableLM-3B-4E1T. The most important ones for using this model are:
- the context length is 4096 tokens
- there does not seem to be a specific prompt template - just provide the text you want to be completed
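The two points above can be sketched as a plain llama.cpp completion run. This is an illustration, not an exact recipe: the binary name varies between llama.cpp builds, and the model file name used here (`stablelm-3b-4e1t-Q4_K_M.gguf`) is a placeholder for whichever quantization you actually downloaded.

```shell
# Plain text completion with llama.cpp - no chat/prompt template needed.
# -c 4096 matches the model's context length; -n limits generated tokens.
# The model path below is an assumption - substitute your downloaded file.
./llama-cli \
  -m stablelm-3b-4e1t-Q4_K_M.gguf \
  -c 4096 \
  -n 128 \
  -p "The capital of France is"
```

Older llama.cpp builds ship the same functionality under the binary name `./main`; the flags (`-m`, `-c`, `-n`, `-p`) work the same way there.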
## License
The original "Model checkpoints are licensed under the Creative Commons license (CC BY-SA-4.0). Under this license, you must give credit to Stability AI, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests that Stability AI endorses you or your use."
So, to be fair and give credit where it is due:
- the original model was created and published by Stability AI
- besides quantization, no changes were made to the model itself