jeniya/BERTOverflow

Contributed by Jeniya Tabassum
How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("jeniya/BERTOverflow")
model = AutoModelWithLMHead.from_pretrained("jeniya/BERTOverflow")
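With the masked-LM head loaded, filling a [MASK] token comes down to picking the top-scoring vocabulary entries at the masked position. The sketch below shows that selection step on toy values, so it runs without downloading the checkpoint; the function name, toy vocabulary, and logits are illustrative assumptions, and in real use the inputs come from `tokenizer.convert_ids_to_tokens` and `model(**inputs).logits`.

```python
import torch

def top_k_mask_fills(logits_at_mask, id2token, k=2):
    # Pick the k highest-scoring vocabulary entries for the [MASK] position.
    scores, ids = torch.topk(logits_at_mask, k)
    return [id2token[int(i)] for i in ids]

# Toy vocabulary and logits standing in for the model's real outputs.
id2token = {0: "list", 1: "dict", 2: "set"}
logits = torch.tensor([0.5, 2.5, 1.0])
print(top_k_mask_fills(logits, id2token))
# → ['dict', 'set']
```

The `fill-mask` pipeline in transformers performs this same top-k selection (plus a softmax over the vocabulary) for you.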


Model description

We pre-trained a BERT-base model on 152 million sentences from StackOverflow's 10-year archive. More details are in our ACL 2020 paper:

How to use

from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("jeniya/BERTOverflow")
model = AutoModelForTokenClassification.from_pretrained("jeniya/BERTOverflow")
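Once the token-classification model is loaded, inference yields one logit vector per token, and the predicted entity label is the argmax mapped through the model's `id2label` table. The helper below sketches that decoding step on dummy values so it runs without the checkpoint; the function name, toy labels, and logits are assumptions for illustration, while in real use `tokens` come from the tokenizer, `logits` from `model(**inputs).logits`, and `id2label` from `model.config.id2label`.

```python
import torch

def decode_token_predictions(tokens, logits, id2label):
    # Map per-token logits to label strings via argmax over the label axis.
    pred_ids = logits.argmax(dim=-1)
    return [(tok, id2label[int(i)]) for tok, i in zip(tokens, pred_ids)]

# Toy labels and logits standing in for real model outputs.
id2label = {0: "O", 1: "B-CODE"}
tokens = ["print", "hello"]
logits = torch.tensor([[0.1, 2.0], [3.0, 0.2]])
print(decode_token_predictions(tokens, logits, id2label))
# → [('print', 'B-CODE'), ('hello', 'O')]
```

Note that BERTOverflow's WordPiece tokenizer may split a word into several sub-tokens; a common convention is to keep the prediction for the first sub-token of each word.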

BibTeX entry and citation info

@inproceedings{tabassum-etal-2020-code,
    title = {Code and Named Entity Recognition in StackOverflow},
    author = {Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan},
    booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)},
    year = {2020}
}