coyotte508 (HF staff) committed
Commit 905f8ef
Parent: 007041f

Create 1910.01108.atom

Files changed (1)
  1. arxiv/1910.01108.atom +54 -0
arxiv/1910.01108.atom ADDED
@@ -0,0 +1,54 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<feed xmlns="http://www.w3.org/2005/Atom">
+  <link href="http://arxiv.org/api/query?search_query%3D%26id_list%3D1910.01108%26start%3D0%26max_results%3D1" rel="self" type="application/atom+xml"/>
+  <title type="html">ArXiv Query: search_query=&amp;id_list=1910.01108&amp;start=0&amp;max_results=1</title>
+  <id>http://arxiv.org/api/GrfVMsjRJ5QfXT4borMr9RuzWhk</id>
+  <updated>2023-06-09T00:00:00-04:00</updated>
+  <opensearch:totalResults xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">1</opensearch:totalResults>
+  <opensearch:startIndex xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">0</opensearch:startIndex>
+  <opensearch:itemsPerPage xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">1</opensearch:itemsPerPage>
+  <entry>
+    <id>http://arxiv.org/abs/1910.01108v4</id>
+    <updated>2020-03-01T02:57:50Z</updated>
+    <published>2019-10-02T17:56:28Z</published>
+    <title>DistilBERT, a distilled version of BERT: smaller, faster, cheaper and
+      lighter</title>
+    <summary> As Transfer Learning from large-scale pre-trained models becomes more
+      prevalent in Natural Language Processing (NLP), operating these large models in
+      on-the-edge and/or under constrained computational training or inference
+      budgets remains challenging. In this work, we propose a method to pre-train a
+      smaller general-purpose language representation model, called DistilBERT, which
+      can then be fine-tuned with good performances on a wide range of tasks like its
+      larger counterparts. While most prior work investigated the use of distillation
+      for building task-specific models, we leverage knowledge distillation during
+      the pre-training phase and show that it is possible to reduce the size of a
+      BERT model by 40%, while retaining 97% of its language understanding
+      capabilities and being 60% faster. To leverage the inductive biases learned by
+      larger models during pre-training, we introduce a triple loss combining
+      language modeling, distillation and cosine-distance losses. Our smaller, faster
+      and lighter model is cheaper to pre-train and we demonstrate its capabilities
+      for on-device computations in a proof-of-concept experiment and a comparative
+      on-device study.
+    </summary>
+    <author>
+      <name>Victor Sanh</name>
+    </author>
+    <author>
+      <name>Lysandre Debut</name>
+    </author>
+    <author>
+      <name>Julien Chaumond</name>
+    </author>
+    <author>
+      <name>Thomas Wolf</name>
+    </author>
+    <arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom">February 2020 - Revision: fix bug in evaluation metrics, updated
+      metrics, argumentation unchanged. 5 pages, 1 figure, 4 tables. Accepted at
+      the 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing
+      - NeurIPS 2019</arxiv:comment>
+    <link href="http://arxiv.org/abs/1910.01108v4" rel="alternate" type="text/html"/>
+    <link title="pdf" href="http://arxiv.org/pdf/1910.01108v4" rel="related" type="application/pdf"/>
+    <arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
+    <category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
+  </entry>
+</feed>
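
The file added above is a cached arXiv API response for id 1910.01108: a single Atom <entry> describing the DistilBERT paper, with the original query encoded in the feed's self link (search_query=&id_list=1910.01108&start=0&max_results=1). As a minimal sketch of how such a feed could be fetched and parsed, assuming only the Python standard library and the public arXiv endpoint http://export.arxiv.org/api/query (the helper name and the returned fields are illustrative, not part of this repository):

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Clark-notation prefixes for the namespaces declared in the feed above.
ATOM = "{http://www.w3.org/2005/Atom}"
ARXIV = "{http://arxiv.org/schemas/atom}"

def fetch_arxiv_entry(arxiv_id: str) -> dict:
    """Query the arXiv API for one id and pull a few fields out of the Atom entry."""
    params = urllib.parse.urlencode(
        {"search_query": "", "id_list": arxiv_id, "start": 0, "max_results": 1}
    )
    url = f"http://export.arxiv.org/api/query?{params}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())

    entry = feed.find(f"{ATOM}entry")  # a single entry is expected for an id_list query
    return {
        "id": entry.findtext(f"{ATOM}id"),
        # Collapse the hard-wrapped whitespace seen in the raw feed's title.
        "title": " ".join(entry.findtext(f"{ATOM}title").split()),
        "authors": [a.findtext(f"{ATOM}name") for a in entry.findall(f"{ATOM}author")],
        "primary_category": entry.find(f"{ARXIV}primary_category").get("term"),
    }

if __name__ == "__main__":
    print(fetch_arxiv_entry("1910.01108"))  # e.g. the DistilBERT entry committed here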