victormiller committed
Commit 9b18c90
Parent: fb20585

Update main.py

Files changed (1): main.py +5 -4
main.py CHANGED
@@ -263,13 +263,14 @@ def intro():
         id="section1",
     ),
     Section(
-        H2("Background"),
-
+        H3("Global Deduplication"),
+        P("TxT360 curated a wide range of datasets, including a whopping 99 Common Crawl Dumps and a list of high-quality datasets: StackExchange, Wikipedia, Arxiv, USPTO, DM Math, HackerNews, Ubuntu IRC, Europarl, FreeLaw, PG19, S2ORC, PhilPapers, PubMed Abstracts, and PubMed Central. For the first time in a released dataset, we locally and globally deduplicated the data across each dataset, creating the highest quality data available."),
         id="section2",
     ),
     Section(
-        H2("Main Content"),
-
+        H3("Controllable Upweighting for Flexible Data Sample Weight Control"),
+        P("In large-scale corpora like CommonCrawl, text duplication is a frequent occurrence. Duplication can be considered a natural upsampling of some data points. Recent studies have highlighted the potential drawbacks of oversampling specific data points, which can negatively impact pretraining performance [2205.10487]. However, when samples are repeated appropriately, performance can actually improve [2306.01116, 2305.16264, 2406.11794, FineWeb]. Despite this, there is currently no widely accepted best practice for data sampling, and it’s unlikely that a one-size-fits-all approach will emerge given the scale of these datasets. Previous work either leaves the deduplication process to the user (as seen in RedPajama V2 and DCLM-Pool) or provides a corpus that has been downsampled in a specific manner (such as in FineWeb and RefinedWeb)."),
+        P("Given the high cost of deduplication, TxT360 offers a complete deduplication across all datasets (so you don’t have to). Additionally, TxT360 maintains detailed metadata for each sample, including the frequency and location of duplicates. This metadata gives pretrainers the flexibility to adjust the weight of samples as needed. In principle, one can recover the original dataset distribution (footnote: this approach also means a smaller size on disk). We will demonstrate a simple upsampling strategy that results in an effective pretraining dataset."),
         id="section3",
     ),
     Section(
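
The added "Controllable Upweighting" paragraphs describe per-sample duplicate metadata that lets pretrainers reweight the deduplicated corpus, in principle recovering the original distribution. As a rough sketch of that idea only (not TxT360's actual schema or API; the record layout and the `dup_count` field name are assumptions), duplicate-aware upsampling might look like:

    # Minimal sketch of duplicate-aware upweighting, assuming each
    # deduplicated record carries a hypothetical `dup_count` field giving
    # how often the document appeared across the source datasets before
    # deduplication. Illustration of the idea above, not TxT360 code.
    import math
    import random

    def upweight(records, cap=8, seed=0):
        """Repeat each deduplicated record according to its duplicate count.

        dup_count == 1 keeps a single copy; repeats grow log-scaled and
        are capped to avoid the oversampling drawbacks cited above.
        """
        rng = random.Random(seed)
        out = []
        for rec in records:
            dups = rec.get("dup_count", 1)
            # 1 copy for unique docs, gentle growth for heavy duplicates.
            repeats = min(cap, 1 + int(math.log2(max(dups, 1))))
            out.extend([rec] * repeats)
        rng.shuffle(out)
        return out

    records = [
        {"text": "unique doc", "dup_count": 1},
        {"text": "popular doc", "dup_count": 16},  # kept 1 + log2(16) = 5 times
    ]
    print([r["text"] for r in upweight(records)])

Setting repeats equal to dup_count exactly would reproduce the pre-deduplication distribution; the log-scaled cap is just one way to repeat samples "appropriately" without the oversampling pitfalls the paragraph cites.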