bhavish07 committed on
Commit
3213714
1 Parent(s): 44ec8c2

Upload deduplicated_words.csv

Files changed (1)
  1. deduplicated_words.csv +541 -0
deduplicated_words.csv ADDED
@@ -0,0 +1,541 @@
+ Words
+ ML
+ Blog
+ -
+ Orca
+
+ Progressive
+ Learning
+ from
+ Complex
+ Explanation
+ Traces
+ of
+ GPT-4
+ Hands-On
+ GNNs
+ LLM
+ Course
+ Notes
+ Publications
+ About
+ @maximelabonne
+ 🗣️
+ Large
+ language
+ modelsOrca
+
+
+ Models
+ Author
+ Maxime
+ Labonne
+ Published
+ August
+ "7,"
+ 2024
+ 🗣️
+ Extending
+ the
+ Context
+ Window
+ LLMs
+ Report
+
+ Few-Shot
+ Text
+ Classification
+ GPTQ
+
+ Accurate
+ Post-Training
+ Quantization
+ for
+ Generative
+ Pre-trained
+ Transformers
+ InCoder
+
+ A
+ Model
+ Code
+ Infilling
+ and
+ Synthesis
+ Inference
+ Optimization
+
+ Lil’Log
+ LIMA
+
+ Less
+ Is
+ More
+ Alignment
+ Local
+
+ Int8
+ LongNet
+
+ Scaling
+ to
+ "1,000,000,000"
+ Tokens
+ LoRA
+
+ Low-Rank
+ Adaptation
+ LoraHub
+
+ Efficient
+ Cross-Task
+ Generalization
+ via
+ Dynamic
+ Composition
+ Multipack
+ Sampler
+
+ phi-1
+
+ Textbooks
+ Are
+ All
+ You
+ Need
+ Self-Rewarding
+ Tart
+
+ A
+ plug-and-play
+ Transformer
+ module
+ task-agnostic
+ reasoning
+ 💡
+ Machine
+ Training
+ Data
+ Influence
+ Analysis
+ Estimation
+ A
+ Survey
+ Sections
+ Tuning
+ Experiments
+ Tip
+ a
+ 13B
+ parameter
+ with
+ ChatGPT
+ level
+ performance
+ thanks
+ a
+ huge
+ dataset
+ 5M
+ samples
+ step-by-step
+ explanations.
+ 📝
+ Paper:
+ https://arxiv.org/abs/2306.02707
+ will
+ probably
+ never
+ be
+ released
+ by
+ "Microsoft,"
+ but
+ open-source
+ projects
+ try
+ replicate
+ it
+ "(OpenOrca,"
+ Dolphin).
+ authors
+ note
+ that
+ while
+ Vicuna-13B
+ display
+ excellent
+ when
+ evaluated
+ performs
+ quite
+ poorly
+ on
+ benchmarks
+ like
+ "SAT,"
+ "LSAT,"
+ "GRE,"
+ GMAT.
+ Self-Instruct
+ involves
+ using
+ an
+ initial
+ set
+ prompts
+ ask
+ create
+ new
+ instructions.
+ Low-quality
+ or
+ overly
+ similar
+ responses
+ "removed,"
+ remaining
+ recycled
+ back
+ into
+ task
+ pool
+ further
+ iterations.
+ "However,"
+ queries
+ generated
+ can
+ lack
+ diversity
+ complexity.
+ Alpaca
+ WizardLM
+ use
+ a
+ variant
+ introduces
+ concept
+ "Evol-Instruct,"
+ which
+ gradually
+ rewrites
+ versions
+ BFS
+ DFS.
+ Vicuna
+ Koala
+ demonstrate
+ impressive
+ due
+ their
+ human-like
+ conversations
+ natural
+ (ShareGPT).
+ Problem
+ capture
+ style
+ not
+ process.
+ This
+ motivates
+ creation
+ a
+ auto-evaluation
+ has
+ several
+ "drawbacks,"
+ such
+ as
+ limited
+ test
+ sizes
+ "example,"
+ 80
+ in
+ 218
+ inherent
+ biases
+ tends
+ favor
+ instruction-tuned
+ its
+ own
+ resulting
+ a
+ preference
+ longer
+ texts
+ over
+ shorter
+ ones.
+ also
+ exhibits
+ a
+ bias
+ order
+ candidate
+ overestimates
+ abilities
+ smaller
+ Contributions:
+ Augmenting
+ query-response
+ pairs
+ detailed
+ outline
+ system
+ tasks
+ FLANv2
+ used
+ offers
+ a
+ wide
+ variety
+ They
+ created
+ a
+ 5
+ million
+ 1
+ Evaluation:
+ comprehension
+ assessed
+ under
+ various
+ settings.
+ focus
+ a
+ lot
+ how
+ guide
+ adopting
+ right
+ "tone,"
+ format.
+ I
+ believe
+ same
+ effect
+ achieved
+ user
+ (maybe
+ slightly
+ sampled
+ a
+ diverse
+ instruction
+ including
+ chain-of-thought
+ "steps,"
+ explain
+ I’m
+ "five,"
+ being
+ helpful
+ "informative,"
+ etc.
+ Construction
+ Each
+ sample
+ a
+ triplet
+ "message,"
+ response.
+ FLAN-v2
+ raw
+ Collection
+ consists
+ sub-collections:
+ "CoT,"
+ "NiV2,"
+ T0
+ "only),"
+ Flan
+ "2021,"
+ Dialogue:
+ most
+ interesting
+ one
+ 150K
+ V2
+ "FLAN2021,"
+ were
+ randomly
+ (~10%
+ was
+ selected
+ Dialog
+ completely
+ skipped
+ because
+ lacks
+ then
+ inputs
+ generate
+ high-quality
+ (1M).
+ These
+ prompted
+ +
+ 16
+ handcrafted
+ messages
+ ensure
+ different
+ kinds
+ <empty>
+ AI
+ assistant.
+ Provide
+ a
+ answer
+ so
+ don’t
+ search
+ outside
+ understand
+ given
+ a
+ must
+ a
+ long
+ a
+ who
+ always
+ Think
+ answering
+ a
+ year
+ old.
+ follows
+ extremely
+ well.
+ Help
+ much
+ helps
+ people
+ find
+ information.
+ a
+ give
+ a
+ Your
+ goal
+ complete
+ faithfully
+ performing
+ justify
+ should
+ describe
+ a
+ multiple
+ choice
+ "question,"
+ first
+ output
+ correct
+ why
+ other
+ answers
+ wrong.
+ a
+ definition
+ come
+ up
+ a
+ might
+ additional
+ knowledge
+ a
+ a
+ some
+ job
+ follow
+ a
+ teacher.
+ a
+ simple
+ what
+ "asking,"
+ any
+ guidelines
+ provides
+ those
+ knows
+ every
+ translate
+ another.
+ a
+ solve
+ show
+ a
+ a
+ a
+ "input,"
+ break
+ small
+ parts.
+ have
+ meaning
+ showing
+ meets
+ criteria
+ following
+ Part
+ #:
+ a
+ key
+ Usage:
+ motivated
+ curriculum
+ a
+ a
+ big
+ technical
+ reasons
+ "(cost,"
+ time).
+ LLaMA
+ BPE
+ tokenizer
+ padding
+ (vocabulary
+ size
+ =
+ "32,001)."
+ examples
+ packed
+ a
+ single
+ sequence
+ maximize
+ length
+ "(2,048"
+ get
+ a
+ uniform
+ trained
+ 160h
+ 20xA100
+ GPUs
+ (4
+ epochs)
+ ChatGPT-generated
+ +
+ 40h
+ GPT-4-generated
+ Open-ended
+ generation:
+ significantly
+ better
+ than
+ AGIEval:
+ doesn’t
+ perform
+ BigBench-Hard:
+ par
+ Copyright
+ "2023,"
+ Labonne