---
license: apache-2.0
model-index:
- name: freecs/ThetaWave-7B-v0.1
results:
- task:
type: text-generation
metrics:
- name: average
type: average
value: 69.17
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
---
# ThetaWave-7B v0.1
This is the first model in the ThetaWave series, based on Mistral-7B.
Use it as a starting point: it still requires further fine-tuning and reinforcement learning (a minimal fine-tuning sketch follows the inference example below).
Give it a try:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("freecs/ThetaWave-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("freecs/ThetaWave-7B-v0.1")

# Build the prompt with the model's chat template and tokenize it
messages = [
    {"role": "user", "content": "Who are you?"},
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

# Sample a response and decode it back to text
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
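
Since the model is meant as a starting point, a common next step is parameter-efficient fine-tuning. Below is a minimal LoRA sketch using the `peft`, `datasets`, and `transformers` libraries; the file `my_dataset.jsonl` (records of the form `{"text": ...}`) and all hyperparameters are illustrative assumptions, not the recipe used to train this model.

```python
# Minimal LoRA fine-tuning sketch (illustrative only, not part of this repository).
# Assumes `peft`, `datasets`, and `transformers` are installed and that
# `my_dataset.jsonl` is a hypothetical file of {"text": ...} records.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "freecs/ThetaWave-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Wrap the base model with LoRA adapters so only a small set of weights is trained
lora_config = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                         lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Tokenize the (hypothetical) dataset; replace with your own data pipeline
dataset = load_dataset("json", data_files="my_dataset.jsonl", split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
                      remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="thetawave-sft", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, bf16=True, logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After training, the LoRA adapters can be saved with `model.save_pretrained(...)` and later merged into or loaded alongside the base weights.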
*" My goal as the founder of FreeCS.org is to establish an Open-Source AI Research Lab driven by its Community. Currently, I am the sole contributor at FreeCS.org. If you share our vision, we welcome you to join our community and contribute to our mission at [freecs.org/#community](https://freecs.org/#community). "*
|- [GR](https://twitter.com/gr_username)
If you'd like to support this project, please consider making a [donation](https://freecs.org/donate).