Model description

We pre-trained a BERT-base model on 152 million sentences from StackOverflow's 10-year archive. More details of this model can be found in our ACL 2020 paper: Code and Named Entity Recognition in StackOverflow. We would like to thank Wuwei Lan for helping us train this model.

How to use

from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("jeniya/BERTOverflow")
model = AutoModelForTokenClassification.from_pretrained("jeniya/BERTOverflow")
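Once the model is loaded, each input token gets a vector of per-label scores, and the predicted tag is the highest-scoring label. The sketch below illustrates that decoding step without downloading the model; the label names and score values are hypothetical stand-ins (the real mapping lives in `model.config.id2label`).

```python
# Minimal sketch of decoding token-classification output.
# Labels and logits below are hypothetical; with the real model,
# logits come from model(**tokenizer(text, return_tensors="pt")).logits
# and the label mapping from model.config.id2label.
id2label = {0: "O", 1: "B-CODE", 2: "I-CODE"}  # hypothetical label set

tokens = ["Use", "numpy", ".", "array"]
logits = [            # one row of per-label scores per token (made-up values)
    [2.0, 0.1, 0.1],
    [0.2, 1.5, 0.3],
    [0.1, 0.2, 1.8],
    [0.1, 0.2, 1.9],
]

def argmax(row):
    # index of the highest score in one row
    return max(range(len(row)), key=row.__getitem__)

predictions = [id2label[argmax(row)] for row in logits]
print(list(zip(tokens, predictions)))
# → [('Use', 'O'), ('numpy', 'B-CODE'), ('.', 'I-CODE'), ('array', 'I-CODE')]
```

With the real model, the same argmax-over-logits step yields one tag per wordpiece, so subword tokens must be merged back into words before reading off entity spans.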

BibTeX entry and citation info

@inproceedings{tabassum2020code,
    title = {Code and Named Entity Recognition in StackOverflow},
    author = {Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan},
    booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)},
    year = {2020}
}