julien-c (HF staff) committed
Commit
1d09278
1 Parent(s): a0e379a

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/funnel-transformer/medium/README.md

Files changed (1)
  1. README.md +90 -0
README.md ADDED
---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---

# Funnel Transformer medium model (B6-3x2-3x2 with decoder)

Pretrained model on the English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.

Disclaimer: The team releasing Funnel Transformer did not write a model card for this model, so this model card has been
written by the Hugging Face team.

## Model description

Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts.

More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like GAN training.

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
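
As a minimal, hedged sketch of that feature-extraction workflow (not part of the original card), the snippet below mean-pools the hidden states returned by `FunnelModel` and trains a scikit-learn `LogisticRegression` on top. The pooling strategy, the toy sentences and the classifier choice are illustrative assumptions only:

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import FunnelTokenizer, FunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = FunnelModel.from_pretrained("funnel-transformer/medium")
model.eval()

def embed(sentences):
    # Tokenize a batch of sentences and mean-pool the last hidden state over
    # non-padding tokens to get one fixed-size vector per sentence
    # (mean pooling is an illustrative choice, not prescribed by the paper).
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state        # (batch, seq_len, hidden)
    mask = inputs["attention_mask"].unsqueeze(-1)          # (batch, seq_len, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()  # (batch, hidden)

# Hypothetical toy labels, only to show the shape of the workflow.
train_texts = ["A great movie.", "Terrible acting.", "I loved it.", "Not worth watching."]
train_labels = [1, 0, 1, 0]

clf = LogisticRegression(max_iter=1000).fit(embed(train_texts), train_labels)
print(clf.predict(embed(["What a fantastic film!"])))
```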
32
+
33
+ ## Intended uses & limitations
34
+
35
+ You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
36
+ be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
37
+ fine-tuned versions on a task that interests you.
38
+
39
+ Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
40
+ to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
41
+ generation you should look at model like GPT2.
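
For the fine-tuning route, here is a minimal sketch (not from the original card) that wraps this checkpoint in `FunnelForSequenceClassification` and trains it with the `Trainer` API. The toy dataset, `num_labels=2`, the output directory and the hyperparameters are placeholders, not recommendations:

```python
import torch
from transformers import (FunnelTokenizer, FunnelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
# num_labels=2 is a placeholder for a hypothetical binary classification task.
model = FunnelForSequenceClassification.from_pretrained(
    "funnel-transformer/medium", num_labels=2
)

# Toy labeled sentences, only to make the example self-contained.
texts = ["A great movie.", "Terrible acting.", "I loved it.", "Not worth watching."]
labels = [1, 0, 1, 0]

class ToyDataset(torch.utils.data.Dataset):
    """Wraps tokenized texts and labels so the Trainer can batch them."""
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, padding=True, truncation=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

training_args = TrainingArguments(
    output_dir="funnel-medium-finetuned",  # hypothetical output directory
    num_train_epochs=1,
    per_device_train_batch_size=2,
)
trainer = Trainer(model=model, args=training_args, train_dataset=ToyDataset(texts, labels))
trainer.train()
```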

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import FunnelTokenizer, FunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = FunnelModel.from_pretrained("funnel-transformer/medium")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import FunnelTokenizer, TFFunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = TFFunnelModel.from_pretrained("funnel-transformer/medium")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
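
In both snippets, `output` contains the model's last hidden state, with one hidden vector per input token; since this checkpoint includes the decoder, the output sequence length should match the input, so the features can be pooled for sentence-level tasks or used directly for token-level tasks.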

## Training data

The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.

### BibTeX entry and citation info

```bibtex
@misc{dai2020funneltransformer,
    title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
    author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
    year={2020},
    eprint={2006.03236},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```