---
license: mit
tags:
- truthfulness
- bert
- text-classification
- dual-classifier
pipeline_tag: text-classification
---
# Truthfulness Detection Model
Fine-tuned BERT model for detecting truthfulness in text at both token and sentence levels.
## Model Description
This model uses a dual-classifier architecture on top of BERT to:
- Classify truthfulness at the sentence level (a single probability from 0 to 1)
- Classify truthfulness of each token (one probability from 0 to 1 per token)

Low scores indicate a likely false statement; high scores indicate a likely true one.
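A minimal sketch of how these scores can be consumed downstream. The `interpret` helper and the 0.5 decision threshold are illustrative assumptions, not part of the model card:

```python
# Hedged sketch: turning the model's raw probabilities into labels.
# The threshold of 0.5 is an assumption; tune it for your use case.
def interpret(sentence_score: float, token_scores: list[float], threshold: float = 0.5):
    """Map probabilities to a verdict and flag low-scoring token positions."""
    verdict = "likely true" if sentence_score >= threshold else "likely false"
    flagged = [i for i, s in enumerate(token_scores) if s < threshold]
    return verdict, flagged

# Using the example from this card: "The earth is flat." scored 0.0736
verdict, flagged = interpret(0.0736, [0.10, 0.10, 0.10, 0.10])
# verdict == "likely false"; all four token positions are flagged
```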
## Example Output
For "The earth is flat.":
- Sentence score: 0.0736 (7.36% - correctly identified as false)
- Token scores: ~0.10 for each token
## Training
- Base model: bert-base-uncased
- Training samples: 6,330
- Epochs: 3
- Batch size: 16
- Training time: 49 seconds on H100
## Custom Architecture Required
⚠️ This model uses a custom `BERTForDualTruthfulness` class and cannot be loaded with the standard `AutoModel` API.
See the [implementation code](https://huggingface.co/prompterminal/classifier/blob/main/model_architecture.py) for the model class definition.
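As a rough guide to what a dual-classifier head looks like, here is a self-contained sketch: two linear heads over the encoder's hidden states, one reading the `[CLS]` position for the sentence score and one scoring every token. This is an assumed shape for illustration only; the real `BERTForDualTruthfulness` definition lives in the linked `model_architecture.py`:

```python
# Illustrative sketch of a dual-classifier head (NOT the actual
# BERTForDualTruthfulness class). A random tensor stands in for
# BERT's encoder output.
import torch
import torch.nn as nn

class DualTruthfulnessHeads(nn.Module):
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.sentence_head = nn.Linear(hidden_size, 1)  # one score per sentence
        self.token_head = nn.Linear(hidden_size, 1)     # one score per token

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden) from the BERT encoder
        sentence_prob = torch.sigmoid(self.sentence_head(hidden_states[:, 0]))
        token_probs = torch.sigmoid(self.token_head(hidden_states)).squeeze(-1)
        return sentence_prob.squeeze(-1), token_probs

heads = DualTruthfulnessHeads()
h = torch.randn(2, 8, 768)          # stand-in encoder output: 2 sentences, 8 tokens
sent_scores, tok_scores = heads(h)  # shapes: (2,) and (2, 8), values in [0, 1]
```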