
ThetaWave-7B v0.2

More information about this model will be added in the future.

Give it a try:

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("freecs/ThetaWave-7B-v0.2")
tokenizer = AutoTokenizer.from_pretrained("freecs/ThetaWave-7B-v0.2")

messages = [
    {"role": "system", "content": "You are an AI assistant"},
    {"role": "user", "content": "Who are you?"},
]

# Format the conversation with the model's chat template and tokenize it;
# add_generation_prompt=True appends the assistant turn header so the model
# answers rather than continuing the user message.
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Move the inputs and the model to the same device before generating
model_inputs = encodeds.to(device)
model.to(device)

# Sample up to 1000 new tokens, then decode the full sequence back to text
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
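
The metadata below lists the checkpoint as FP16, so passing an explicit torch_dtype at load time avoids the default FP32 upcast and roughly halves memory use. A minimal sketch, assuming the accelerate package is installed for device_map="auto" placement:

import torch
from transformers import AutoModelForCausalLM

# Load the weights in half precision and let accelerate spread them across
# the available devices (requires `pip install accelerate`)
model = AutoModelForCausalLM.from_pretrained(
    "freecs/ThetaWave-7B-v0.2",
    torch_dtype=torch.float16,
    device_map="auto",
)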
Format: Safetensors
Model size: 7.24B params
Tensor type: FP16