
BERTOverflow

Model description

We pre-trained a BERT-base model on 152 million sentences from StackOverflow's 10-year archive. More details are in our ACL 2020 paper: https://www.aclweb.org/anthology/2020.acl-main.443/

How to use

from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("jeniya/BERTOverflow")
model = AutoModelForTokenClassification.from_pretrained("jeniya/BERTOverflow")
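Once the tokenizer and model are loaded, inference is a standard forward pass. The sketch below is a minimal example, assuming the transformers and torch packages and network access to download the checkpoint; the example sentence is illustrative, and note that on a base (not fine-tuned) checkpoint the token-classification head is newly initialized, so the predicted label ids are not yet meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("jeniya/BERTOverflow")
model = AutoModelForTokenClassification.from_pretrained("jeniya/BERTOverflow")

# Tokenize a StackOverflow-style sentence and run a forward pass.
sentence = "How do I sort a list in Python?"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# One predicted label id per input token.
pred_ids = logits.argmax(dim=-1)
```

For named entity recognition as in the paper, the head would first need to be fine-tuned on labeled StackOverflow data.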

BibTeX entry and citation info

@inproceedings{tabassum2020code,
    title={Code and Named Entity Recognition in StackOverflow},
    author={Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan},
    booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)},
    url={https://www.aclweb.org/anthology/2020.acl-main.443/},
    year={2020},
}