How to set up the model for inference tasks
@marcelbinz
Do you have an example code for using the model in inference tasks?
For example how to run it with this simple prompt:
" What issue matters most to you in this election?
Options:
Economy
Healthcare
Climate
Immigration"
Check out this script: https://github.com/marcelbinz/Llama-3.1-Centaur-70B/blob/main/test_adapter.py
We have a long list of prompt examples in the SI of the paper: https://arxiv.org/abs/2410.20268
or you can view them here: https://huggingface.co/datasets/marcelbinz/Psych-101/viewer
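As a starting point, here is a minimal sketch of how a multiple-choice prompt could be formatted. It assumes the Psych-101 convention described in the paper, where the choice the model should produce is wrapped in `<<` and `>>` delimiters; the exact wording ("You choose", the option list layout) is an illustrative assumption, not taken from the repo:

```python
def format_choice_prompt(question, options):
    """Format a multiple-choice question in a Psych-101-like style.

    The model is expected to continue the text after the opening '<<'
    with one of the listed options (assumption based on the dataset's
    delimiter convention; phrasing here is hypothetical).
    """
    lines = [question, "Options:"]
    lines += [f"- {opt}" for opt in options]
    lines.append("You choose <<")
    return "\n".join(lines)

prompt = format_choice_prompt(
    "What issue matters most to you in this election?",
    ["Economy", "Healthcare", "Climate", "Immigration"],
)
print(prompt)
```

Passing a string like this to the model (e.g. via the generation call in test_adapter.py) cues it to complete the choice rather than free-form text.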
Thanks @marcelbinz. I've seen the code in test_adapter.py, but I can't find a way to make the model answer a simple multiple-choice question like the one in my message above. It just starts generating text as in a text-completion task instead of answering the question.
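Since the model is a fine-tuned completion model rather than a chat model, one hypothetical workaround is to let it complete the prompt and then parse the choice out of the continuation. Assuming the Psych-101 convention of `<<` / `>>` delimiters around the choice, a small post-processing helper could look like this (this parsing step is not part of test_adapter.py; it is a sketch):

```python
import re

def extract_choice(generated, options):
    """Extract the first <<...>> span from a model continuation and
    validate it against the allowed options.

    Returns the matched option, or None if no valid choice was found.
    (Hypothetical post-processing, not taken from the repo.)
    """
    m = re.search(r"<<(.*?)>>", generated)
    if not m:
        return None
    answer = m.group(1).strip()
    return answer if answer in options else None

options = ["Economy", "Healthcare", "Climate", "Immigration"]
print(extract_choice("You choose <<Healthcare>> and then ...", options))
```

Capping `max_new_tokens` to a small value (or stopping generation at `>>`) keeps the continuation short enough for this to work.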