Model Summary:
Llama-3.1-Centaur-70B is a foundation model of human cognition that can predict and simulate human behavior in any behavioral experiment expressed in natural language.
- Paper: Centaur: a foundation model of human cognition
- Point of Contact: Marcel Binz
Usage:
Note that Centaur is trained on a dataset in which human choices are encapsulated by "<<" and ">>" tokens. For optimal performance, it is recommended to format your prompts in the same way (see the sketch below).
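As an illustration, a prompt could look like the following. This is a minimal sketch; the task description and option labels are hypothetical, only the "<<" and ">>" markers around the human choice follow the training format.

# Hypothetical two-armed bandit prompt; the choice the model should
# predict or simulate is wrapped in "<<" and ">>".
prompt = (
    "You will repeatedly choose between two slot machines, J and F.\n"
    "You press <<J>> and win 5 points.\n"
    "You press <<"
)
# Generating from here lets the model fill in the next human choice.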
This repository contains the low-rank adapter, which runs with unsloth on a single 80GB GPU:
from unsloth import FastLanguageModel

model_name = "marcelbinz/Llama-3.1-Centaur-70B-adapter"

# Load the base model together with the Centaur low-rank adapter
# in 4-bit precision so it fits on a single 80GB GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = model_name,
    max_seq_length = 32768,
    dtype = None,        # auto-detect dtype
    load_in_4bit = True,
)
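A minimal inference sketch with the adapter loaded as above, using a prompt like the one sketched earlier; the generation settings are illustrative, not prescribed:

# Switch unsloth into its faster inference mode.
FastLanguageModel.for_inference(model)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# Sample a simulated human choice (sampling settings are illustrative).
outputs = model.generate(**inputs, max_new_tokens=8, do_sample=True, temperature=1.0)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))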
Alternatively, you can use the (untested) merged model directly, for example with the standard transformers API (see the sketch below).
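A sketch of loading the merged model with plain transformers; the repository name below is an assumption, so check the model hub for the exact identifier. Note that the full merged 70B model needs substantially more memory than the 4-bit adapter setup.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository name for the merged weights.
merged_name = "marcelbinz/Llama-3.1-Centaur-70B"

tokenizer = AutoTokenizer.from_pretrained(merged_name)
model = AutoModelForCausalLM.from_pretrained(
    merged_name,
    device_map = "auto",   # spread across available GPUs
    torch_dtype = "auto",
)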
Licensing Information:
Llama 3.1 Community License Agreement
Citation Information:
Forthcoming.