
This repository is publicly accessible, but you must agree to share your contact information and accept the conditions to access its files and content.

This is an updated version of the classifier reported in Pang & Ring (2020) and found at implicitmotives.com. The classifier identifies the presence of implicit motive imagery (McClelland, 1965) in sentences, namely the three felt needs for Power, Achievement, and Affiliation.

The current classifier is fine-tuned from ELECTRA-base and achieves a mean Pearson correlation above 0.86 on the Winter (1994) training data (see the OSF repo for the benchmark dataset). Development of this classifier is ongoing: on larger unseen datasets it achieves Pearson correlations of only around 0.7, though all correlations are highly significant.

This model is made available to other researchers for inference via a Hugging Face API. The current license allows free use without modification for non-commercial purposes. If you would like to use this model commercially, please get in touch with us for access to our most recent model.

Predictions on Winter manual dataset
-----
| Label | Pearson correlation | Category agreement | Intra-class correlation (ICC) |
| --- | --- | --- | --- |
| Pow (Label_0) | 0.81695 | 0.84034 | 0.89889 |
| Ach (Label_1) | 0.92673 | 0.93274 | 0.96200 |
| Aff (Label_2) | 0.85618 | 0.87429 | 0.92231 |
| Mean | 0.86544 | 0.88163 | 0.92736 |
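As a rough illustration of how the Pearson correlations above are computed, here is a minimal sketch using hypothetical per-sentence scores (not the actual Winter benchmark data):

```python
# Minimal sketch: Pearson correlation between human-coded motive presence
# and classifier scores. The two lists below are hypothetical illustrations,
# not the actual Winter benchmark data.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

human = [1, 0, 0, 1, 1, 0]               # hypothetical human codings (motive present?)
model = [0.9, 0.1, 0.2, 0.8, 0.7, 0.3]   # hypothetical classifier scores

print(round(pearson(human, model), 3))   # prints 0.965
```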

Inference guide
-----

The Inference API requires a Hugging Face token. The sample code below illustrates how to classify individual sentences.

import requests

api_key = "<HF Token>"  # your Hugging Face access token
headers = {"Authorization": f"Bearer {api_key}"}
api_url = "https://api.url.here"

# This is a sentence from the Winter manual that is dual-scored for both Pow and Aff
prompt = """The recollection of skating on the Charles, and the time she had
            pushed me through the ice, brought a laugh to the conversation; but
            it quickly faded in the murky waters of the river that could no
            longer freeze over."""

# Since this is a multilabel classifier, we want to return scores for all three labels
data = {"inputs": prompt, "parameters": {"top_k": 3}}

response = requests.post(api_url, headers=headers, json=data)
response.raise_for_status()  # fail early on authorization or availability errors

# The labels are arranged according to likelihood of classification,
# so we map them to readable motive names in the output
repdict = {"LABEL_0": "Pow", "LABEL_1": "Ach", "LABEL_2": "Aff"}
scores = {repdict[x["label"]]: x["score"] for x in response.json()}
print(scores)

# output: {'Pow': 0.8279141187667847, 'Aff': 0.7250356674194336, 'Ach': 0.0020263446494936943}
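Because sentences can carry more than one motive (as in the dual-scored example above), the returned scores can be thresholded into presence/absence decisions. A minimal sketch, where the 0.5 cutoff is an illustrative assumption rather than a documented or calibrated default:

```python
# Turn per-motive scores into presence/absence decisions.
# The 0.5 threshold is an illustrative assumption, not a calibrated cutoff.
def motives_present(scores, threshold=0.5):
    return sorted(m for m, s in scores.items() if s >= threshold)

# Scores from the example response above
scores = {'Pow': 0.8279141187667847,
          'Aff': 0.7250356674194336,
          'Ach': 0.0020263446494936943}

print(motives_present(scores))  # prints ['Aff', 'Pow']
```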

References
-----

McClelland, D. C. (1965). Toward a theory of motive acquisition. American Psychologist, 20, 321-333.

Pang, J. S., & Ring, H. (2020). Automated Coding of Implicit Motives: A Machine-Learning Approach. Motivation and Emotion, 44(4), 549-566. DOI: 10.1007/s11031-020-09832-8.

Winter, D.G. (1994). Manual for scoring motive imagery in running text. Unpublished Instrument. Ann Arbor: University of Michigan.
