---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- SOLAR-10.7B-Instruct-v1.0
---
|
Quantizations of https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0

|
# From original readme

# **Usage Instructions**

This model has been fine-tuned primarily for single-turn conversation, making it less suitable for multi-turn conversations such as chat.
|
|
|
### **Version**

Make sure you have the correct version of the transformers library installed:

```sh
pip install transformers==4.35.2
```
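If you want to confirm the pinned version from Python before loading the model, a small standard-library check works (the helper name here is ours, not part of transformers):

```python
from importlib.metadata import PackageNotFoundError, version

REQUIRED = "4.35.2"

def installed_version(package):
    """Return the installed version of *package*, or None if it is not installed."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

found = installed_version("transformers")
if found != REQUIRED:
    print(f"expected transformers=={REQUIRED}, found {found}")
```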
|
|
|
### **Loading the Model**

Use the following Python code to load the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Upstage/SOLAR-10.7B-Instruct-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "Upstage/SOLAR-10.7B-Instruct-v1.0",
    device_map="auto",
    torch_dtype=torch.float16,
)
```
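As a rough sizing check before loading (a back-of-the-envelope estimate, not an official requirement), float16 stores each of the 10.7B parameters in 2 bytes:

```python
params = 10.7e9          # parameter count, from the model name
bytes_per_param = 2      # float16 uses 2 bytes per weight
gib = params * bytes_per_param / 2**30
print(f"~{gib:.1f} GiB of weights")  # about 19.9 GiB, before activations and KV cache
```

With `device_map="auto"`, transformers spreads these weights across the available GPUs (and CPU, if needed).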
|
|
|
### **Conducting Single-Turn Conversation**

```python
conversation = [{'role': 'user', 'content': 'Hello?'}]

prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, use_cache=True, max_length=4096)
output_text = tokenizer.decode(outputs[0])
print(output_text)
```
|
|
|
Below is an example of the output.

```
<s> ### User:
Hello?

### Assistant:
Hello, how can I assist you today? Please feel free to ask any questions or request help with a specific task.</s>
```
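The `<s>`/`</s>` markers and the `### User:` / `### Assistant:` headers above come from the model's chat template. As a rough illustration of the prompt side of that layout (an assumed reconstruction; `tokenizer.apply_chat_template` is authoritative), the formatting can be sketched as:

```python
def build_prompt(conversation):
    """Format a conversation into the '### User:' / '### Assistant:' layout
    shown in the example output above (assumed template, for illustration).
    The tokenizer adds the <s> BOS token on top of this string."""
    parts = []
    for turn in conversation:
        role = "User" if turn["role"] == "user" else "Assistant"
        parts.append(f"### {role}:\n{turn['content']}\n\n")
    parts.append("### Assistant:\n")  # generation prompt: model continues here
    return "".join(parts)

prompt = build_prompt([{"role": "user", "content": "Hello?"}])
print(prompt)
```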