%auto-ignore
\section{Introduction}


Language model pre-training has been shown to be effective for improving many natural language processing tasks~\cite{dai-le:2015:_semi, peters-etal:2018:_deep, radford-etal:2018, howard-ruder:2018}. These include sentence-level tasks such as natural language inference~\cite{bowman-etal:2015, williams-nangia-bowman:2018} and paraphrasing~\cite{dolan-brockett:2005:_autom}, which aim to predict the relationships between sentences by analyzing them holistically, as well as token-level tasks such as named entity recognition and question answering, where models are required to produce fine-grained output at the token level~\cite{tjong-de:2003, rajpurkar-etal:2016:_squad}.


There are two existing strategies for applying pre-trained language representations to downstream tasks: {\em feature-based} and {\em fine-tuning}. The feature-based approach, such as ELMo~\cite{peters-etal:2018:_deep}, uses task-specific architectures that include the pre-trained representations as additional features. The fine-tuning approach, such as the Generative Pre-trained Transformer (OpenAI GPT)~\cite{radford-etal:2018}, introduces minimal task-specific parameters, and is trained on the downstream tasks by simply fine-tuning {\em all} pre-trained parameters. The two approaches share the same objective function during pre-training, where they use unidirectional language models to learn general language representations.

We argue that current techniques restrict the power of the pre-trained representations, especially for the fine-tuning approaches. The major limitation is that standard language models are unidirectional, and this limits the choice of architectures that can be used during pre-training. For example, in OpenAI GPT, the authors use a left-to-right architecture, where every token can only attend to previous tokens in the self-attention layers of the Transformer~\cite{vaswani-etal:2017:_atten}. Such restrictions are sub-optimal for sentence-level tasks, and could be very harmful when applying fine-tuning based approaches to token-level tasks such as question answering, where it is crucial to incorporate context from both directions.
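The left-to-right restriction can be made concrete as a mask over the self-attention scores. The following is a minimal NumPy sketch (not the released implementation): a toy single-head scaled dot-product attention in which a lower-triangular causal mask lets token $i$ attend only to positions $\le i$, while an all-ones mask yields the bidirectional attention pattern; the function names and dimensions are illustrative assumptions.

```python
import numpy as np

def attention(q, k, v, mask):
    """Toy single-head scaled dot-product attention with a boolean mask."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)  # block disallowed positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over rows
    return weights @ v

seq_len, d = 4, 8
rng = np.random.default_rng(0)
q = k = v = rng.standard_normal((seq_len, d))

# Left-to-right (causal) mask: token i attends only to positions <= i.
causal_mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
# Bidirectional mask: every token attends to every position.
bidir_mask = np.ones((seq_len, seq_len), dtype=bool)

left_to_right = attention(q, k, v, causal_mask)
bidirectional = attention(q, k, v, bidir_mask)
```

Under the causal mask the first token can attend only to itself, so its output is exactly its own value vector; under the bidirectional mask it is a mixture of all positions, which is what a unidirectional LM objective forbids during pre-training.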

In this paper, we improve the fine-tuning based approaches by proposing \bert: \textbf{B}idirectional \textbf{E}ncoder \textbf{R}epresentations from \textbf{T}ransformers. \bert alleviates the previously mentioned unidirectionality constraint by using a ``masked language model''~(MLM) pre-training objective, inspired by the Cloze task~\cite{taylor:1953:_cloze}. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, we also use a ``next sentence prediction'' task that jointly pre-trains text-pair representations. The contributions of our paper are as follows:
\begin{itemize}[leftmargin=1em]
  \item We demonstrate the importance of bidirectional pre-training for language representations. Unlike \citet{radford-etal:2018}, which uses unidirectional language models for pre-training, \bert uses masked language models to enable pre-trained deep bidirectional representations. This is also in contrast to \citet{peters-etal:2018:_deep}, which uses a shallow concatenation of independently trained left-to-right and right-to-left LMs.
  \item We show that pre-trained representations reduce the need for many heavily-engineered task-specific architectures. \bert is the first fine-tuning based representation model that achieves state-of-the-art performance on a large suite of sentence-level {\em and} token-level tasks, outperforming many task-specific architectures.
  \item \bert advances the state of the art for eleven NLP tasks. 
%
    The code and pre-trained models are available at \url{https://github.com/google-research/bert}.
\end{itemize}
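The masked language model objective described above can be sketched in a few lines. This is an illustrative simplification, not the released implementation: the 15\% default masking rate, the \texttt{[MASK]} symbol, and the \texttt{IGNORE} sentinel for positions excluded from the loss are assumptions for the sketch, and the actual procedure also replaces some selected tokens with random or unchanged tokens rather than always using \texttt{[MASK]}.

```python
import random

MASK, IGNORE = "[MASK]", -100  # IGNORE marks positions excluded from the loss

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """Randomly mask tokens; return (inputs, targets).

    targets holds the original token at masked positions and IGNORE
    elsewhere, so the loss is computed only over the masked tokens.
    """
    rng = rng or random.Random()
    inputs, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(MASK)   # model sees [MASK] ...
            targets.append(tok)   # ... and must predict the original token
        else:
            inputs.append(tok)
            targets.append(IGNORE)
    return inputs, targets

tokens = ["the", "man", "went", "to", "the", "store"]
inputs, targets = mask_tokens(tokens, mask_prob=0.5, rng=random.Random(0))
```

Because the prediction at each masked position may condition on all unmasked tokens to its left and right, a deep bidirectional encoder can be trained without the target token trivially "seeing itself".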
