Suggested text changes

#1
by erinys - opened
Files changed (1)
  1. app.py +51 -33
app.py CHANGED
@@ -359,32 +359,28 @@ with gr.Blocks() as demo:
     fig = cumulative_growth_plot_analysis(cumulative_df, cumulative_df_compressed)
 
     # Add top level heading and introduction text
-    gr.Markdown("# Git LFS Usage Across the Hub")
+    gr.Markdown("# Git LFS Usage across the Hub")
     gr.Markdown(
-        "The Hugging Face Hub has just crossed 1,000,000 models - but where is all that data stored? Most of it is stored in Git LFS. This analysis dives into the LFS storage on the Hub, breaking down the data by repository type, file extension, and growth over time. The data is based on a snapshot of the Hub's LFS storage, starting in March 2022 and ending September 20th, 2024 (meaning the data is incomplete for September 2024). Right now, this is a one-time analysis, but as we do our work we hope to revisit and update the underlying data to provide more insights."
+        "Ever wonder what the Hugging Face Hub holds? This is the space for you!"
     )
-
     gr.Markdown(
-        "Now, you might ask yourself, 'Why are you doing this?' Well, the [Xet Team](https://huggingface.co/xet-team) is a [new addition to Hugging Face](https://huggingface.co/blog/xethub-joins-hf), bringing a new way to store massive datasets and models to enable ML teams to operate like software teams: Quickly and without friction. Because this story all starts with storage, that's where we've begun with our own deep dives into what the Hub holds. As part of this, we've included a look at what happens with just one simple deduplication strategy - deduplicating at the file level. Read on to see more!"
+        "The Hub stores all files using a combination of [Gitaly](https://gitlab.com/gitlab-org/gitaly) (small files) on EBS and [Git LFS](https://git-lfs.com/) (large files > 10MB) on S3. As part of the [Xet team](https://huggingface.co/xet-team), one of our goals is to improve Hub storage and transfer efficiency, and understanding how and what is currently stored helps us establish a baseline. This analysis uses a snapshot of the Hub's Git LFS usage from March 2022 through September 2024, which accounts for 82% of all Hub storage, and we plan to update it monthly to track trends. We're starting with metrics around raw storage by repository type and size/count by file extension - if you're interested in other metrics, drop your suggestions in our [discussions](https://huggingface.co/spaces/xet-team/lfs-analysis/discussions)!"
     )
+
     gr.HTML(div_px(25))
     # Cumulative growth analysis
-    gr.Markdown("## Repository Growth")
+    gr.Markdown("## Storage by Repository Type")
     gr.Markdown(
-        "The plot below shows the growth of Git LFS storage on the Hub over the past two years. The solid lines represent the cumulative growth of models, spaces, and datasets, while the dashed lines represent the growth with file-level deduplication."
-    )
+        "The chart below shows the growth of Git LFS storage usage by repository type since March 2022.")
     gr.Plot(fig)
 
     gr.HTML(div_px(5))
     # @TODO Talk to Allison about variant="panel"
     with gr.Row():
         with gr.Column(scale=1):
+            gr.Markdown("### Current Storage Usage")
             gr.Markdown(
-                "In this table, we can see what the final picture looks like as of September 20th, 2024, along with the potential file-level deduplication savings."
-            )
-            gr.Markdown(
-                "To put this in context, the last [Common Crawl](https://commoncrawl.org/) download was [451 TBs](https://github.com/commoncrawl/cc-crawl-statistics/blob/master/stats/crawler/CC-MAIN-2024-38.json#L31). The Spaces repositories alone outpaces that! Meanwhile, between Datasets and Model repos, the Hub stores **64 Common Crawls** 🤯. Current estimates put file deduplication savings at approximately 3.24 PBs (7.2 Common Crawls)!"
-            )
+                "As of September 20, 2024, total files stored in Git LFS summed to almost 29 PB. To put this into perspective, the last [Common Crawl](https://commoncrawl.org/) download was [451 TBs](https://github.com/commoncrawl/cc-crawl-statistics/blob/master/stats/crawler/CC-MAIN-2024-38.json#L31) - the Hub stores the equivalent of **64 Common Crawls** 🤯.")
         with gr.Column(scale=3):
             # Convert the total size to petabytes and format to two decimal places
             by_repo_type = format_dataframe_size_column(
@@ -393,29 +389,16 @@ with gr.Blocks() as demo:
             )
             gr.Dataframe(by_repo_type)
 
-    gr.HTML(div_px(5))
-    with gr.Row():
-        with gr.Column(scale=1):
-            gr.Markdown(
-                "The month-to-month growth of models, spaces, can be seen in the adjacent table. In 2024, the Hub has averaged nearly **2.3 PBs uploaded to LFS per month!** By the same token, the monthly file deduplication savings are nearly 225TBs. "
-            )
-
-            gr.Markdown(
-                "Borrowing from the Common Crawl analogy, that's about *5 crawls* uploaded every month, with an _easy savings of half a crawl every month_ by deduplicating at the file-level!"
-            )
-        with gr.Column(scale=3):
-            gr.Dataframe(last_10_months)
-
     gr.HTML(div_px(25))
     # File Extension analysis
-    gr.Markdown("## File Extensions on the Hub")
+    gr.Markdown("## Large Files Stored by File Extension")
     gr.Markdown(
-        "Breaking this down by file extension, some interesting trends emerge. The following sections filter the analysis to the top 20 file extensions stored (in bytes) using LFS (which accounts for 82% of storage consumption)."
+        "What types of files are stored on the Hub? The Xet team's backend architecture allows for storage optimizations by file type, so seeing the breakdown of the most popular stored file types helps us prioritize our roadmap. The following sections filter the analysis to the top 20 file extensions stored (by bytes) using Git LFS."
     )
     gr.Markdown(
-        "As is evident in the chart below, [Safetensors](https://huggingface.co/docs/safetensors/en/index) is quickly becoming the defacto standard on the Hub for storing tensor files, accounting for over 7PBs (25%) of LFS storage. If you want to know why you'd want to check out YAF (yet another format), this explanation from the [Safetensors docs](https://github.com/huggingface/safetensors?tab=readme-ov-file#yet-another-format-) is a good place to start. Speaking of YAF, [GGUF (GPT-Generated Unified Format)](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md) is also on the rise, accounting for 3.2 PBs (11%) of LFS storage. GGUF, like Safetensors, is a format for storing tensor files, with a different set of optimizations. The Hub has a few [built-in tools](https://huggingface.co/docs/hub/en/gguf) for working with GGUF."
+        "[Safetensors](https://huggingface.co/docs/safetensors/en/index) is quickly becoming the de facto standard on the Hub for storing tensor files, accounting for over 7 PBs (25%) of LFS storage. [GGUF (GPT-Generated Unified Format)](https://huggingface.co/docs/hub/gguf), a format for storing tensor files with a different set of optimizations, is also on the rise, accounting for 3.2 PBs (11%) of LFS storage."
     )
-    # Get the top 10 file extnesions by size
+    # Get the top 20 file extensions by size
     by_extension_size = by_extension.sort_values(by="size", ascending=False).head(22)
 
     # make a bar chart of the by_extension_size dataframe
@@ -445,20 +428,20 @@ with gr.Blocks() as demo:
 
     gr.HTML(div_px(5))
     gr.Markdown(
-        "Below, we have a more detailed tabular view of the same top 20 file extensions by total size, number of files, and average file size."
+        "This tabular view shows the same top 20 file extensions by total stored size, number of files, and average file size."
     )
     gr.Dataframe(by_extension_size)
 
     gr.HTML(div_px(5))
-    gr.Markdown("### File Extension Monthly Additions (in PBs)")
+    gr.Markdown("### Storage Growth by File Extension (Monthly PBs Added)")
     gr.Markdown(
-        "What if we want to see trends over time? The following area chart shows the number of bytes added to LFS storage each month, faceted by the most popular file extensions."
+        "The following area chart shows the number of bytes added to LFS storage each month, faceted by file extension."
    )
     gr.Plot(area_plot_by_extension_month(by_extension_month))
 
     gr.HTML(div_px(5))
     gr.Markdown(
-        "To dig a little deeper, the following dropdown allows you to filter the area chart by file extension. Because we're dealing with individual file extensions, the data is presented in terabytes (TBs)."
+        "To dig deeper, use the dropdown to filter the area chart by file extension."
     )
 
     # build a dropdown using the unique values in the extension column
@@ -469,6 +452,41 @@ with gr.Blocks() as demo:
     )
     _by_extension_month = gr.State(by_extension_month)
     gr.Plot(filter_by_extension_month, inputs=[_by_extension_month, extension])
+
+    gr.HTML(div_px(25))
+    # Optimizations
+
+    gr.Markdown("## Optimization 1: File-level Deduplication")
+    gr.Markdown(
+        "The first improvement we can make to Hub storage is to add file-level deduplication. Since forking any Hub repository makes copies of the files, a scan of existing files unsurprisingly shows that some files match exactly. The following chart repeats the storage growth chart from above, with additional dashed lines showing the potential savings from deduplicating at the file level."
+    )
+    gr.Plot(fig)
+
+    gr.HTML(div_px(5))
+    # @TODO Talk to Allison about variant="panel"
+    with gr.Row():
+        with gr.Column(scale=1):
+            gr.Markdown("### Current Storage Usage + File-level Deduplication")
+            gr.Markdown(
+                "This simple change to the storage backend will save 3.24 PBs (the equivalent of 7.2 Common Crawls)."
+            )
+        with gr.Column(scale=3):
+            # Convert the total size to petabytes and format to two decimal places
+            by_repo_type = format_dataframe_size_column(
+                by_repo_type,
+                ["Total Size (PBs)", "Deduplicated Size (PBs)", "Dedupe Savings (PBs)"],
+            )
+            gr.Dataframe(by_repo_type)
+
+    gr.HTML(div_px(5))
+    with gr.Row():
+        with gr.Column(scale=1):
+            gr.Markdown("### Month-to-Month Growth + File-level Deduplication")
+            gr.Markdown(
+                "This table shows month-to-month growth in model, dataset, and space storage. In 2024, the Hub has averaged nearly **2.3 PBs uploaded to Git LFS per month**. Deduplicating at the file level saves nearly 225 TB (half a Common Crawl) monthly."
+            )
+        with gr.Column(scale=3):
+            gr.Dataframe(last_10_months)
 
 # launch the dang thing
 demo.launch()
 
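For context on the deduplication numbers in the suggested text: the file-level savings can be estimated by keeping one copy per unique content hash. The sketch below is a minimal illustration, not the Xet team's actual pipeline; the `manifest` DataFrame, its column names, and the sample values are all hypothetical.

```python
import pandas as pd

# Hypothetical manifest: one row per LFS file stored on the Hub,
# with the file's content hash and its size in bytes.
manifest = pd.DataFrame(
    {
        "sha256": ["aaa", "aaa", "bbb", "ccc", "ccc", "ccc"],
        "size_bytes": [100, 100, 250, 40, 40, 40],
    }
)

# Total bytes as currently stored (every duplicate copy counted).
total = manifest["size_bytes"].sum()

# Bytes after file-level deduplication: keep one copy per unique hash.
deduplicated = manifest.drop_duplicates(subset="sha256")["size_bytes"].sum()

print(f"stored: {total} B, deduplicated: {deduplicated} B, savings: {total - deduplicated} B")
# stored: 570 B, deduplicated: 390 B, savings: 180 B
```

Run over the real file inventory, the same grouping yields the 3.24 PB total and ~225 TB/month savings figures quoted in the new copy.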