{{#eq model "ctrl"}}
<header class="cross-collab">
<img class="logo-collab logo-hf" src="/front/assets/huggingface_logo.svg">
<img class="cross" src="/front/assets/cross-collab.svg">
<img class="logo-collab logo-salesforce" src="/front/assets/Salesforce_logo.svg">
</header>
{{/eq}}
{{#eq model "pplm"}}
<header class="cross-collab">
<img class="logo-collab logo-hf" src="/front/assets/huggingface_logo.svg">
<img class="cross" src="/front/assets/cross-collab.svg">
<img class="logo-collab logo-uber" src="/front/assets/Uber_logo.svg">
</header>
{{/eq}}
<div class="container">
<a class="back" href="/">
<img class="info" src="/front/assets/icon-back.svg">
See all models and checkpoints
</a>
<div class="model-title">
{{#eq model "arxiv-nlp"}}
🤓 ArXiv NLP model checkpoint
{{/eq}}
{{#eq model "distil-gpt2"}}
🐎 DistilGPT-2 model checkpoint
{{/eq}}
{{#eq model "ctrl"}}
☁️ Salesforce Research CTRL
{{/eq}}
{{#eq model "pplm"}}
🚕 Uber AI Plug and Play Language Model (PPLM)
{{/eq}}
</div>
<div class="github-repo">
<a
class="github-button"
href="https://github.com/huggingface/transformers" data-size="large" data-show-count="true" aria-label="Star huggingface/transformers on GitHub">
Star
</a>
</div>
<div class="model-details">
{{#eq model "distil-gpt2"}}
<p>The student of the now ubiquitous GPT-2 does not fall short of its teacher’s expectations.
Obtained by distillation, DistilGPT-2 weighs 37% less, and is twice as fast as its OpenAI counterpart, while keeping the same generative power.
Runs smoothly on an iPhone 7. The dawn of lightweight generative transformers? 🤯</p>
<p>From the paper: <a target="_blank" href="https://arxiv.org/abs/1910.01108">DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter</a> by Victor Sanh, Lysandre Debut, Julien Chaumond and Thomas Wolf.
The same method was applied to distill GPT-2, and a Medium blogpost <a target="_blank" href="https://medium.com/huggingface/distilbert-8cf3380435b5">describes the process in detail</a>.</p>
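<p>As a minimal sketch (assuming the <code>distilgpt2</code> checkpoint name on the model hub and current <code>🤗/transformers</code> APIs), generating text with it looks roughly like this:</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the distilled GPT-2 checkpoint from the model hub.
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Sample a short continuation from a prompt.
inputs = tokenizer("The dawn of lightweight generative transformers", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
</code></pre>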
{{/eq}}
{{#eq model "arxiv-nlp"}}
<p>Starting from OpenAI’s GPT-2 model, the Hugging Face team fine-tuned the small version on a tiny dataset (60MB of text) of arXiv papers.
The targeted subject is Natural Language Processing, resulting in generation with a strong Linguistics/Deep Learning flavor.</p>
<p>All articles were downloaded from Cornell University’s arxiv.org website using arXiv Bulk Data Access.</p>
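<p>A hedged sketch of what such a fine-tuning run could look like with today’s <code>🤗/transformers</code> Trainer API (the file name and hyperparameters below are placeholders, not the original recipe):</p>
<pre><code>from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

# GPT-2 small is the starting point; "arxiv.txt" stands in for the 60MB text dump.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

train_dataset = TextDataset(tokenizer=tokenizer, file_path="arxiv.txt", block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="arxiv-nlp", num_train_epochs=3),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()
</code></pre>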
{{/eq}}
{{#eq model "ctrl"}}
<p><strong>CTRL</strong> transcends the pre-training/fine-tuning approach by taking advantage of a whopping 1.6 billion parameters 🤯.</p>
<p>Controllable generation: the model generates text conditioned on control codes covering several subreddits (fitness, personal finance, running, and many more), Wikipedia articles, or product reviews. Take advantage of its control codes and use it for question answering, translation, or styled text generation. Kindly implemented by the Salesforce team in <a href="https://github.com/huggingface/transformers"><code>🤗/transformers</code></a>.</p>
<p>From the paper <a target="_blank" href="https://arxiv.org/abs/1909.05858">CTRL: A Conditional Transformer Language Model for Controllable Generation</a> by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.</p>
{{/eq}}
{{#eq model "pplm"}}
<p><strong>PPLM</strong> builds on top of other large transformer-based generative models (like GPT-2), enabling finer-grained control over attributes of the generated language (e.g. gradually switching topic 🐱 or sentiment 😃).</p>
<p>This controlled language generation method consists of plugging in simple bag-of-words or one-layer classifiers as attribute controllers, and making updates in the activation space, without changing any model parameters.
Kindly implemented by the Uber AI team in <a href="https://github.com/huggingface/transformers/tree/master/examples/pplm"><code>🤗/transformers</code></a>.</p>
<p>From the paper <a target="_blank" href="https://arxiv.org/abs/1912.02164">Plug and Play Language Model: A simple baseline for controlled language generation</a> by
Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu.</p>
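<p>To give a flavor of the idea (not the actual <code>examples/pplm</code> implementation), the sketch below computes a bag-of-words attribute loss on GPT-2’s next-token distribution; PPLM backpropagates such a loss into the cached activations and nudges them over a few steps before each token is sampled. The word list is made up for illustration:</p>
<pre><code>import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# A toy bag of words standing in for a "science" attribute (illustrative only).
bow_ids = [tokenizer.encode(" " + w)[0] for w in ["physics", "energy", "experiment"]]

inputs = tokenizer("The issue focused on", return_tensors="pt")
logits = model(**inputs).logits[:, -1, :]

# Attribute loss: negative log of the probability mass placed on the bag of words.
probs = torch.softmax(logits, dim=-1)
bow_loss = -torch.log(probs[:, bow_ids].sum())

# In PPLM this gradient is taken w.r.t. the cached key/value activations (not the
# model weights) and used to perturb them before sampling the next token.
bow_loss.backward()
</code></pre>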
{{/eq}}
</div>
<div class="model-bottom">
<a class="btn btn-primary" href="/doc/{{model}}">Start writing</a>
</div>
</div>