Jiann committed on
Commit 7bdf0a2
1 Parent(s): 5087b96

Create README.md

Files changed (1): README.md added (+85 −0)
## LongLM

### 1. Parameters

| Versions     | $d_m$ | $d_{ff}$ | $d_{kv}$ | $n_h$ | $n_e/n_d$ | \#P  |
| ------------ | ----- | -------- | -------- | ----- | --------- | ---- |
| LongLM-small | 512   | 2,048    | 64       | 8     | 6/6       | 60M  |
| LongLM-base  | 768   | 3,072    | 64       | 12    | 12/12     | 223M |
| LongLM-large | 1,536 | 3,072    | 64       | 12    | 24/32     | 1B   |

- $d_m$: the dimension of hidden states
- $d_{ff}$: the dimension of the feed-forward layers
- $d_{kv}$: the dimension of the keys/values in the self-attention layers
- $n_h$: the number of attention heads
- $n_e$: the number of hidden layers of the encoder
- $n_d$: the number of hidden layers of the decoder
- \#P: the number of parameters

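For reference, a minimal sketch of how these hyperparameters would map onto a `transformers` `T5Config` (LongLM uses the T5 architecture, as the loading code below shows); the `vocab_size` value is a placeholder assumption, not a number stated in this README:

```python
from transformers import T5Config

# Hypothetical mapping of the LongLM-large row onto T5Config fields.
longlm_large_cfg = T5Config(
    vocab_size=32128,       # placeholder assumption; not specified in this README
    d_model=1536,           # d_m: dimension of hidden states
    d_ff=3072,              # d_ff: dimension of the feed-forward layers
    d_kv=64,                # d_kv: dimension of keys/values in self-attention
    num_heads=12,           # n_h: number of attention heads
    num_layers=24,          # n_e: number of encoder layers
    num_decoder_layers=32,  # n_d: number of decoder layers
)
print(longlm_large_cfg)
```
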
### 2. Pretraining Tasks

Encoder-decoder models are typically trained by maximizing the likelihood of the target output given an input. To improve the capacities of both the encoder and the decoder, we propose to pretrain LongLM with two tasks: text infilling (Raffel et al., 2020) and conditional continuation (Radford et al., 2019). In the first task, the input is a text in which a number of spans have been sampled and replaced by special tokens with unique IDs, and the output consists of the masked spans, each preceded by the special token that replaced it in the input. The lengths of the masked spans are drawn from a Poisson distribution with λ=3, and the masked tokens make up 15% of the original text. In the second task, a text is randomly split into two parts: the input is the front half and the output is the back half.

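A minimal sketch of how such training pairs could be constructed, assuming token-level masking and T5-style `<extra_id_k>` sentinels (the helper names are illustrative, and the sampling details need not match the authors' exact preprocessing):

```python
import numpy as np

def text_infilling(tokens, mask_ratio=0.15, mean_span_len=3, seed=0):
    """Corrupt `tokens` by replacing random spans with <extra_id_k> sentinels.

    Returns (source, target): the target lists each masked span preceded by
    the sentinel that replaced it in the source.
    """
    rng = np.random.default_rng(seed)
    budget = max(1, int(round(mask_ratio * len(tokens))))  # ~15% of the tokens get masked
    spans, covered = [], set()
    for _ in range(10 * len(tokens)):          # guard against endless resampling
        if budget <= 0:
            break
        length = min(budget, max(1, int(rng.poisson(mean_span_len))))  # span length ~ Poisson(3)
        start = int(rng.integers(0, len(tokens) - length + 1))
        span = range(start, start + length)
        if covered.intersection(span):
            continue                           # resample if the span overlaps a previous one
        spans.append((start, length))
        covered.update(span)
        budget -= length
    spans.sort()

    source, target = [], []
    cursor = 0
    for k, (start, length) in enumerate(spans):
        sentinel = f"<extra_id_{k}>"
        source += tokens[cursor:start] + [sentinel]
        target += [sentinel] + tokens[start:start + length]
        cursor = start + length
    source += tokens[cursor:]
    return source, target


def conditional_continuation(tokens, seed=0):
    """Split a text at a random position: the front part is the input, the back part the output."""
    rng = np.random.default_rng(seed)
    cut = int(rng.integers(1, len(tokens)))    # cut point in [1, len - 1]
    return tokens[:cut], tokens[cut:]


tokens = "the quick brown fox jumps over the lazy dog".split()
print(text_infilling(tokens))
print(conditional_continuation(tokens))
```
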
### 3. Pretraining Data

We collect 120GB of novels as the pretraining data for LongLM.

### 4. Checkpoints

1. **Model Loading:**

```python
# Load the tokenizer and model from a LongLM-large checkpoint directory
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('LongLM-large')
model = T5ForConditionalGeneration.from_pretrained('LongLM-large')
```

2. **Generation:**

```python
# A Chinese prompt; <extra_id_1> marks the span for the model to fill in.
# Keep the inputs on the same device as the model.
device = next(model.parameters()).device
input_ids = tokenizer("小咕噜对,<extra_id_1>", return_tensors="pt", padding=True, truncation=True, max_length=512).input_ids.to(device)

# Nucleus sampling (top_p=0.9); decoder_start_token_id=1 follows the original example.
gen = model.generate(input_ids, do_sample=True, decoder_start_token_id=1, top_p=0.9, max_length=512)
```
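
`model.generate` returns token ids; a small follow-up sketch (using the `tokenizer` and `gen` objects from above) for turning them back into text:

```python
# Decode the sampled ids back into strings, dropping pad/sentinel special tokens
texts = tokenizer.batch_decode(gen, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(texts[0])
```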

### 5. Dependencies

```
datasets             1.6.2
deepspeed            0.3.16
huggingface-hub      0.0.8
jieba                0.42.1
jsonlines            2.0.0
nltk                 3.5
numpy                1.19.5
pytorch-lightning    1.2.0
regex                2020.11.13
rouge                1.0.1
rouge-score          0.0.4
sacrebleu            1.5.0
scipy                1.5.4
sentencepiece        0.1.95
tokenizers           0.10.1
torch                1.8.1
torchaudio           0.8.0
torchmetrics         0.2.0
torchvision          0.9.0
transformers         4.6.1
```

## Citation

```txt
@misc{guan2021lot,
  title={LOT: A Benchmark for Evaluating Chinese Long Text Understanding and Generation},
  author={Jian Guan and Zhuoer Feng and Yamei Chen and Ruilin He and Xiaoxi Mao and Changjie Fan and Minlie Huang},
  year={2021},
  eprint={2108.12960},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```