Model Card for roberta-base-motivational-interviewing

⚠ WARNING: This is a preliminary model that is still actively under development. ⚠

This is a RoBERTa-base model fine-tuned on a small dataset of conversations between health coaches and cancer survivors.

How to Get Started with the Model

You can use this model directly with a pipeline for text classification:

>>> import transformers
>>> model_name = "clulab/roberta-base-motivational-interviewing"
>>> classifier = transformers.TextClassificationPipeline(
...     tokenizer=transformers.AutoTokenizer.from_pretrained(model_name),
...     model=transformers.AutoModelForSequenceClassification.from_pretrained(model_name))
>>> classifier("I'm planning on having tuna, ground tuna, chopped celery, and chopped black pepper, and half a apple.")
[{'label': 'change_talk_goal_talk_and_opportunities', 'score': 0.9995419979095459}]

Uses

The model is intended for text classification: it takes conversational utterances as input and predicts categories of motivational interviewing behaviors as output.

It is intended to help health coaches review their past calls with participants. Its predictions should not be used without manual review.
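As a sketch of how a reviewer might use it, the snippet below reuses the classifier built in the quickstart above to label a few utterances and flag low-confidence predictions for closer inspection; the example utterances and the 0.8 score threshold are illustrative assumptions, not part of the model.

>>> utterances = [  # hypothetical transcript excerpts
...     "I want to start walking after dinner a few times a week.",
...     "I just haven't had the energy to cook lately."]
>>> for utterance, prediction in zip(utterances, classifier(utterances)):
...     flag = "" if prediction["score"] >= 0.8 else " [review manually]"
...     print(f"{prediction['label']} ({prediction['score']:.2f}){flag}: {utterance}")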

Training Details

The model was trained on data annotated under the grant "Using Natural Language Processing to Determine Predictors of Healthy Diet and Physical Activity Behavior Change in Ovarian Cancer Survivors" (NIH NCI R21CA256680). A roberta-base model was fine-tuned on that dataset, with texts tokenized using the standard roberta-base tokenizer.
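The annotated dataset is not distributed with this model card, so the snippet below is only a rough sketch of the kind of fine-tuning described above, using the Hugging Face Trainer; the CSV file names, the number of labels, and the hyperparameters are assumptions rather than the actual training configuration.

import transformers
import datasets

# Hypothetical CSV files with a "text" column and an integer "label" column.
data = datasets.load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

tokenizer = transformers.AutoTokenizer.from_pretrained("roberta-base")
model = transformers.AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=8)  # the number of behavior categories is an assumption

# Tokenize the utterances with the standard roberta-base tokenizer.
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

trainer = transformers.Trainer(
    model=model,
    args=transformers.TrainingArguments(output_dir="output", num_train_epochs=3),  # illustrative settings
    train_dataset=data["train"],
    eval_dataset=data["validation"],
    data_collator=transformers.DataCollatorWithPadding(tokenizer),
)
trainer.train()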

Evaluation

On the test partition of the R21CA256680 dataset, the model achieves 0.60 precision and 0.46 recall.
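The test partition is not public, but a computation along the following lines would produce such figures; the test file name and the macro averaging are assumptions, with scikit-learn used for the metrics.

import datasets
import transformers
from sklearn.metrics import precision_score, recall_score

model_name = "clulab/roberta-base-motivational-interviewing"
classifier = transformers.TextClassificationPipeline(
    tokenizer=transformers.AutoTokenizer.from_pretrained(model_name),
    model=transformers.AutoModelForSequenceClassification.from_pretrained(model_name))

# Hypothetical CSV file with a "text" column and a gold "label" column of label names.
test = datasets.load_dataset("csv", data_files={"test": "test.csv"})["test"]
predicted = [prediction["label"] for prediction in classifier(test["text"])]

print("precision:", precision_score(test["label"], predicted, average="macro"))
print("recall:", recall_score(test["label"], predicted, average="macro"))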
