---
license: apache-2.0
---

# ThetaWave-7B v0.2

More information about this model will be added in the future.

- Made by: [GR](https://twitter.com/gr_username).
- Donate: [donation](https://www.buymeacoffee.com/gr.0).

Give it a try:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("freecs/ThetaWave-7B-v0.2")
tokenizer = AutoTokenizer.from_pretrained("freecs/ThetaWave-7B-v0.2")

messages = [
    {"role": "system", "content": "You are an AI assistant"},
    {"role": "user", "content": "Who are you?"},
]

# Apply the model's chat template and tokenize the conversation
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

# Move the inputs and the model to the target device
model_inputs = encodeds.to(device)
model.to(device)

# Generate a response and decode it back to text
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
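
If GPU memory is tight, a 7B model can usually be loaded in half precision instead of full precision. The sketch below is a minimal example, assuming `torch` and `accelerate` are installed; the `float16` dtype and `device_map="auto"` settings are illustrative assumptions, not recommendations from the model authors:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: float16 roughly halves GPU memory use compared to float32;
# the model card does not state a recommended dtype.
model = AutoModelForCausalLM.from_pretrained(
    "freecs/ThetaWave-7B-v0.2",
    torch_dtype=torch.float16,
    device_map="auto",  # requires the `accelerate` package
)
tokenizer = AutoTokenizer.from_pretrained("freecs/ThetaWave-7B-v0.2")

# Inputs still need to live on the same device as the model's first layer
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Who are you?"}], return_tensors="pt"
).to(model.device)
print(tokenizer.batch_decode(model.generate(inputs, max_new_tokens=100))[0])
```

With `device_map="auto"`, weight placement is handled for you, so the explicit `model.to(device)` call from the example above is not needed.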