This model is presented in the paper Preference Learning Unlocks LLMs' Psycho-Counseling Skills. It's a fine-tuned meta-llama/Llama-3.1-8B-Instruct model trained using preference learning on the PsychoCounsel-Preference dataset. This dataset contains 36k high-quality preference comparison pairs aligned with the preferences of professional psychotherapists.

The model aims to improve the quality of responses in psycho-counseling sessions and achieves a win rate of 87% against GPT-4o.

Usage is the same as for meta-llama/Llama-3.1-8B-Instruct.
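As a minimal sketch of that usage, the snippet below loads the model through the standard `transformers` text-generation pipeline, as one would for any Llama-3.1-style instruct model. The system prompt and the `generate_reply` helper are illustrative assumptions, not part of the released model card:

```python
# Minimal usage sketch for Psychotherapy-LLM/PsychoCounsel-Llama3-8B,
# assuming the standard transformers chat pipeline for Llama-3.1 instruct models.

def build_messages(client_utterance: str) -> list[dict]:
    """Wrap a client's message in the chat format the model expects.

    The system prompt here is an illustrative assumption, not an
    official prompt from the paper or model card.
    """
    return [
        {"role": "system", "content": "You are a supportive psycho-counselor."},
        {"role": "user", "content": client_utterance},
    ]


def generate_reply(client_utterance: str, max_new_tokens: int = 256) -> str:
    """Generate a counseling-style reply. Downloads ~8B weights on first call."""
    # Heavy imports are kept local so build_messages stays lightweight.
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="Psychotherapy-LLM/PsychoCounsel-Llama3-8B",
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    out = pipe(build_messages(client_utterance), max_new_tokens=max_new_tokens)
    # The pipeline returns the full chat transcript; the reply is the last turn.
    return out[0]["generated_text"][-1]["content"]
```

As with the base model, the chat-template message format (`role`/`content` dicts) is what the tokenizer expects; passing raw strings would skip the instruct formatting the model was trained with.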
