|
--- |
|
license: cc-by-nc-sa-4.0
|
datasets: |
|
- allenai/prosocial-dialog |
|
- benjaminbeilharz/empathetic_dialogues_for_lm |
|
language: |
|
- en |
|
metrics: |
|
- accuracy |
|
library_name: transformers |
|
pipeline_tag: conversational |
|
tags: |
|
- autogptq |
|
- conversation |
|
- dialogue |
|
- stabilityai
|
- intellibridge |
|
--- |
|
## Model Card: stablelm-tuned-alpha-7b-4bit-128g |
|
|
|
### Description |
|
|
|
The stablelm-tuned-alpha-7b-4bit-128g model is a quantized version of the stablelm-tuned-alpha-7b language model. It is based on the GPTNeoX architecture and was quantized with the AutoGPTQ framework. The base model is fine-tuned for generating conversational responses.
|
|
|
Quantization reduces the model's memory footprint and improves inference efficiency while retaining most of the original model's quality. Weights are stored in 4-bit precision with a group size of 128, and the dampening factor (damp_percent) is set to 0.01, which adds a small amount of dampening during the GPTQ procedure to keep quantization error in check.
|
|
|
### Model Details |
|
|
|
- Model Name: stablelm-tuned-alpha-7b-4bit-128g |
|
- Base Model: stablelm-tuned-alpha-7b |
|
- Quantization Configuration: |
|
- Bits: 4 |
|
- Group Size: 128 |
|
- Damp Percent: 0.01 |
|
- Activation-Order Quantization (desc_act): Enabled (weight columns are quantized in order of decreasing activation magnitude)
|
- Symmetric Quantization (sym): Enabled |
|
- True Sequential Quantization (true_sequential): Enabled |
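
For reference, the configuration above corresponds to AutoGPTQ's `BaseQuantizeConfig` roughly as in the sketch below. This is an illustrative mapping only; the exact AutoGPTQ version and calibration setup used for this export are not specified in this card.

```py
# Illustrative sketch: the quantization settings listed above expressed as an
# AutoGPTQ BaseQuantizeConfig. Calibration data and the AutoGPTQ version are
# assumptions, not part of this model card.
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,                # 4-bit weight quantization
    group_size=128,        # quantization group size of 128
    damp_percent=0.01,     # dampening factor applied during GPTQ
    desc_act=True,         # quantize columns in order of decreasing activation
    sym=True,              # symmetric quantization
    true_sequential=True,  # quantize modules sequentially within each block
)
```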
|
|
|
### Usage |
|
|
|
The stablelm-tuned-alpha-7b-4bit-128g model can be used for a variety of conversational tasks such as chatbots, question answering systems, and dialogue generation. It can generate human-like responses based on given system prompts, contexts, and input texts. |
|
|
|
To use the model, provide a system prompt, context, and input text in the following format: |
|
|
|
```
Input: {system_prompt}\n{context}: {text}
Label: {response}
```
|
|
|
**Example**: |
|
```py |
|
system_prompt = """# StableLM Tuned (Alpha version) |
|
- StableLM is a helpful and chatty open-source AI language model developed by StabilityAI. |
|
- StableLM is excited to be able to help the user. |
|
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes. |
|
""" |
|
|
|
context = "It's not right to think black people deserve to be hit" |
|
text = "You're right, it isn't funny. Finding enjoyment in other people's pains isn't funny." |
|
response = "I am glad that you agree. Joking about abusing black people can quickly get you marked as a racist." |
|
|
|
prompt = f"{system_prompt}\n{context}: <|USER|>{text}<|ASSISTANT|>" |
|
label = f"{response}" |
|
``` |
|
|
|
Make sure to tokenize the inputs with the model's original tokenizer before passing them to the model, and follow the official model's prompt template for the system and user turns.
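
A minimal loading-and-generation sketch with AutoGPTQ, continuing from the `prompt` string built in the example above, might look like the following. The repository id, safetensors layout, and sampling settings are illustrative assumptions and not part of this card.

```py
# Minimal sketch (not an official example): load the 4-bit GPTQ weights with
# AutoGPTQ and generate a reply for the `prompt` string built above.
# The repo id, safetensors assumption, and sampling settings are hypothetical.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "ldilov/stablelm-tuned-alpha-7b-4bit-128g"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",
    use_safetensors=True,  # assumes the quantized weights are stored as safetensors
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Decode only the newly generated tokens
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```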
|
|
|
### Performance |
|
|
|
- Model Size: 5GB |
|
- Inference Speed: N/A |
|
- Accuracy: N/A |
|
|
|
### Limitations and Considerations |
|
|
|
- As a language model, the stablelm-tuned-alpha-7b-4bit-128g model relies on the quality and relevance of the training data. It may generate responses that are contextually appropriate but might not always be factually accurate or suitable for all scenarios. |
|
- Quantization introduces a trade-off between model size, memory efficiency, and precision. Although the model has been optimized for performance, there might be a slight reduction in the quality of generated responses compared to the original model. |
|
- The model may not have been trained on domain-specific data and may therefore perform suboptimally on specialized tasks.
|
|
|
### Acknowledgments |
|
|
|
The base stablelm-tuned-alpha-7b model was developed by StabilityAI on the GPTNeoX architecture; this quantized version was produced with the AutoGPTQ framework. It builds upon research and contributions from the open-source community in language modeling and conversational AI.
|
|
|
### License |
|
|
|
The stablelm-tuned-alpha-7b-4bit-128g model is released under the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_GB) specified by StabilityAI for the base model.
|
Quantized by Lazar Dilov ([IntelliBridge on GitHub](https://github.com/ldilov/IntelliBridge)).

Quantization performed with the AutoGPTQ framework by [PanQiWei](https://github.com/PanQiWei/).