
OpenAGI-7B-v0.1

DPO-tuned on a small set of GPT-4-generated responses.
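For background, DPO (Direct Preference Optimization) trains the policy to prefer a chosen response over a rejected one relative to a frozen reference model. A minimal sketch of the per-pair loss, using made-up scalar log-probabilities (the function name, values, and `beta` are illustrative, not taken from this model's training run):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the total log-probability of a full response
    under the policy or the frozen reference model.
    """
    # Implicit reward margins: how much more the policy prefers each
    # response than the reference model does.
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    # Loss is low when the policy shifts probability toward the chosen response.
    logits = beta * (chosen_margin - rejected_margin)
    return -math.log(1 / (1 + math.exp(-logits)))  # -log(sigmoid(logits))

# Illustrative values: the policy already prefers the chosen response.
loss = dpo_loss(-12.0, -20.0, -14.0, -18.0)
```

In practice the log-probabilities come from summing token log-probs of each response, and the loss is averaged over a batch of preference pairs.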

Give it a try:

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("openagi-project/OpenAGI-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("openagi-project/OpenAGI-7B-v0.1")
model.to(device)

messages = [
    {"role": "user", "content": "Who are you?"},
]

# apply_chat_template with return_tensors="pt" returns the encoded prompt as a tensor;
# add_generation_prompt=True appends the assistant turn marker so the model replies.
encodeds = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
model_inputs = encodeds.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])

"My goal as the founder of FreeCS.org is to establish an Open-Source AI Research Lab driven by its Community. Currently, I am the sole contributor at FreeCS.org. If you share our vision, we welcome you to join our community and contribute to our mission at freecs.org/#community."
- GR

If you'd like to support this project, please consider making a donation.

Model size: 7.24B parameters (Safetensors, FP16)
