---
language:
- en
license: cc-by-nc-nd-4.0
library_name: transformers
pipeline_tag: text-classification
widget:
- text: >-
    Mr. Jones, an architect is going to surprise his family by building them a
    new house.
  example_title: Pow
- text: They want the research to go well and be productive.
  example_title: Ach
- text: >-
    The man is trying to see a friend on board, but the officer will not let
    him go as the whistle for all ashore who are not going has already blown.
  example_title: Aff
- text: >-
    The recollection of skating on the Charles, and the time she had pushed me
    through the ice, brought a laugh to the conversation; but it quickly faded
    in the murky waters of the river that could no longer freeze over.
  example_title: Pow + Aff
- text: >-
    They are also well-known research scientists and are quite talented in
    this field.
  example_title: Pow + Ach
- text: >-
    After a nice evening with his family, he will be back at work tomorrow,
    doing the best job he can on his drafting.
  example_title: Ach + Aff
- text: >-
    She is surprised that she is able to make these calls and pleasantly
    surprised that her friends respond to her request.
  example_title: Pow + Aff
---
This is a version of a classifier for implicit motives based on ModernBERT. The classifier identifies the presence of implicit motive imagery in sentences, namely the three felt needs for Power (Pow), Achievement (Ach), and Affiliation (Aff).

This model is made available to other researchers via download. The current license permits free, unmodified use for non-commercial purposes. If you would like to use this model commercially, please get in touch with us for access to our most recent model.
## Inference guide
This model can be directly downloaded and used with the following code.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

mbert = "encodingai/mBERT-im-multilabel"
tokenizer = AutoTokenizer.from_pretrained(mbert, use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained(
    mbert,
    problem_type="multi_label_classification",
)

# Load the model into a pipeline, returning scores for all 3 labels.
# device=0 assumes a GPU is available; omit it to run on CPU.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0, top_k=3)

sample = ["""The recollection of skating on the Charles, and the time she had
pushed me through the ice, brought a laugh to the conversation; but
it quickly faded in the murky waters of the river that could no
longer freeze over."""]

# Predict on a sentence
pred = classifier(sample)
print(pred)

# The labels are returned in descending order of score, so we map the
# generic label names to the motive names in the output.
repdict = {"LABEL_0": "Pow", "LABEL_1": "Ach", "LABEL_2": "Aff"}
for y in pred:
    scores = {repdict[x["label"]]: x["score"] for x in y}
    print(scores)
```
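Because this is a multi-label model, a single sentence can contain more than one motive image, so the per-label scores are typically thresholded rather than reduced to the single top label. A minimal sketch of that step (the `motives_present` helper and the 0.5 cutoff are our own illustration, not part of the model's API):

```python
def motives_present(scores, threshold=0.5):
    """Return the motive labels whose score meets the threshold.

    `scores` is a dict like the one printed above, e.g. {"Pow": 0.91, ...}.
    The 0.5 threshold is an assumption; tune it on your own validation data.
    """
    return [label for label, score in scores.items() if score >= threshold]

# Example with an assumed score dict in the shape printed above
scores = {"Pow": 0.91, "Aff": 0.62, "Ach": 0.08}
print(motives_present(scores))  # ['Pow', 'Aff']
```

A stricter cutoff trades recall for precision; a validation set coded with Winter's scoring manual is a natural way to choose it.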
## References
McClelland, D. C. (1965). Toward a theory of motive acquisition. American Psychologist, 20, 321-333.
Pang, J. S., & Ring, H. (2020). Automated Coding of Implicit Motives: A Machine-Learning Approach. Motivation and Emotion, 44(4), 549-566. DOI: 10.1007/s11031-020-09832-8.
Winter, D.G. (1994). Manual for scoring motive imagery in running text. Unpublished Instrument. Ann Arbor: University of Michigan.