Overview
A model fine-tuned from roberta-large on a dataset of human- and LLM-annotated self-beliefs for multi-label classification.
Training Details
Details on model training, hyper-parameters, and evaluation can be found in "Capturing Self-Beliefs in Natural Language" (Mangalik et al., 2024).
Inference
A sample way to use this model for classification:
import numpy as np
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizerFast

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
huggingface_model = 'sidmangalik/selfBERTa'
model = RobertaForSequenceClassification.from_pretrained(huggingface_model)
tokenizer = RobertaTokenizerFast.from_pretrained(huggingface_model)

# Tokenize the input texts, padding/truncating to the 512-token limit
texts = ["I am the coolest person I know."]
inputs = tokenizer(texts, max_length=512, padding="max_length", truncation=True, return_tensors='pt')

# Forward pass, then convert logits to per-class probabilities
outputs = model(**inputs)
logits = outputs.logits
soft_logits = torch.softmax(logits, dim=1).tolist()
predicted_classes = np.argmax(soft_logits, axis=1)  # highest-probability class per text
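
The snippet above loads the model and post-processes the logits manually. The same predictions can also be obtained through the high-level pipeline API, which the original import hints at. This is a minimal sketch under the assumption that the hosted model config carries standard text-classification metadata (id2label mappings); top_k=None is a standard transformers pipeline argument that returns a score for every label, which is convenient for inspecting all labels at once.

from transformers import pipeline

# Minimal sketch: pipeline() bundles tokenization, the forward pass,
# and probability post-processing into a single call.
classifier = pipeline("text-classification", model='sidmangalik/selfBERTa')

# top_k=None returns scores for all labels rather than only the top one
print(classifier(["I am the coolest person I know."], top_k=None))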