
Model Summary:

Llama-3.1-Centaur-70B is a foundation model of cognition that can predict and simulate human behavior in any behavioral experiment expressed in natural language.

Usage:

Note that Centaur is trained on a data set in which human choices are encapsulated by "<<" and ">>" tokens. For optimal performance, it is recommended to adjust prompts accordingly.
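For illustration, a prompt in this style might look as follows (the task wording is a hypothetical example, not taken from the training data); the model is then asked to complete the text after the opening "<<" token:

You will play a game with two slot machines, J and F.
You press <<J>> and win 4 points.
You press <<F>> and win 7 points.
You press <<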

This repository contains the low-rank adapter, which runs with unsloth on a single 80 GB GPU:

from unsloth import FastLanguageModel

model_name = "marcelbinz/Llama-3.1-Centaur-70B-adapter"

# Load the base model together with the low-rank adapter in 4-bit precision.
model, tokenizer = FastLanguageModel.from_pretrained(
  model_name = model_name,
  max_seq_length = 32768,
  dtype = None,          # auto-detect the compute dtype
  load_in_4bit = True,   # 4-bit quantization so the model fits on a single 80 GB GPU
)
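Once the adapter is loaded, generation works through the standard transformers API. The snippet below is a minimal sketch; the prompt wording and sampling settings are illustrative assumptions, not part of this model card.

FastLanguageModel.for_inference(model)  # switch unsloth into its faster inference mode

# Hypothetical prompt in the "<<" / ">>" format described above.
prompt = (
  "You will play a game with two slot machines, J and F.\n"
  "You press <<J>> and win 4 points.\n"
  "You press <<F>> and win 7 points.\n"
  "You press <<"
)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# Sample a single token: the model continues the "<<" prefix with the simulated choice.
outputs = model.generate(**inputs, max_new_tokens=1, do_sample=True, temperature=1.0)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))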

Alternatively, you can use the (untested) merged model directly.
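A sketch of loading the merged model with plain transformers is shown below; the repository id and dtype are assumptions, and the merged 70B model will not fit on a single 80 GB GPU without quantization.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

merged_name = "marcelbinz/Llama-3.1-Centaur-70B"  # assumed repo id for the merged model
tokenizer = AutoTokenizer.from_pretrained(merged_name)
model = AutoModelForCausalLM.from_pretrained(
  merged_name,
  torch_dtype = torch.bfloat16,  # bf16 weights, spread across available GPUs
  device_map = "auto",
)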

Licensing Information

Llama 3.1 Community License Agreement

Citation Information

Forthcoming.

