
Model description

This model is based on the Hierarchical Attention Transformer (HAT) architecture described in: Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. "An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification." arXiv:2210.05529 (preprint).

Initial weights were taken from google/bert_uncased_L-8_H-256_A-4. The model was then further pretrained for 20,000 steps on 5M lines of text from the English portion of the OpenSubtitles dataset.

The maximum input length is 512 tokens, which is enough to encode a dialog together with a few previous utterances (the average utterance length in SWDA, MAPTASK, MRDA, BT_OASIS, FRAMES, AMI, and DSTC3 is under 11 tokens).
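To illustrate the 512-token budget, here is a minimal sketch of selecting the longest recent dialog context that fits. The whitespace split and `fits_in_budget` helper are hypothetical stand-ins for the model's actual WordPiece tokenizer and preprocessing, and the example utterances are illustrative:

```python
# Sketch: fitting a dialog context into the 512-token budget.
# The whitespace split below is a stand-in for the model's real
# WordPiece tokenizer; `fits_in_budget` is a hypothetical helper.
MAX_TOKENS = 512

def fits_in_budget(utterances, max_tokens=MAX_TOKENS):
    """Return the longest suffix of `utterances` whose total token
    count (plus one separator token per utterance) fits the budget."""
    kept, used = [], 0
    for utt in reversed(utterances):   # walk from the newest utterance back
        cost = len(utt.split()) + 1    # +1 for a separator token
        if used + cost > max_tokens:
            break
        kept.append(utt)
        used += cost
    return list(reversed(kept)), used

dialog = ["hello there", "hi how are you", "fine thanks and you"]
context, n_tokens = fits_in_budget(dialog)
```

At the reported average of under 11 tokens per utterance (so about 12 with a separator), roughly 512 // 12 ≈ 42 utterances fit into one input.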
