pszemraj committed
Commit bfa004d
1 Parent(s): 6a8855d

Update README.md

Files changed (1):
  README.md (+5 -3)
README.md CHANGED
@@ -347,15 +347,16 @@ model-index:
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWI2NjVlYjgwYWJiMjcyMDUzMzEwNDNjZTMxMDM0MjAzMzk1ZmIwY2Q1ZDQ2Y2M5NDBlMDEzYzFkNWEyNzJmNiIsInZlcnNpb24iOjF9.iZ1Iy7FuWL4GH7LS5EylVj5eZRC3L2ZsbYQapAkMNzR_VXPoMGvoM69Hp-kU7gW55tmz2V4Qxhvoz9cM8fciBA
  ---
- # LED-Based Summarization Model (Large): Condensing Extensive Information
+ # led-large-book-summary
  
  <a href="https://colab.research.google.com/gist/pszemraj/3eba944ddc9fc9a4a1bfb21e83b57620/summarization-token-batching.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
  </a>
  
- This model is a fine-tuned version of [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384) on the `BookSum` dataset. It aims to generalize well and be useful in summarizing lengthy text for both academic and everyday purposes. Capable of handling up to 16,384 tokens per batch, this model provides effective summarization of large volumes of text.
+ This model is a fine-tuned version of [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384) on the `BookSum` dataset (`kmfoda/booksum`). It aims to generalize well and be useful in summarizing lengthy text for both academic and everyday purposes.
  
+ - Handles up to 16,384 tokens of input
- - See the Colab demo linked above or try the [demo on Spaces](https://huggingface.co/spaces/pszemraj/summarize-long-text)
+ - See the Colab demo linked above or try the [demo on Spaces](https://huggingface.co/spaces/pszemraj/summarize-long-text)
  
  > **Note:** Due to inference API timeout constraints, outputs may be truncated before the full summary is returned (try Python or the demo)
  
@@ -466,6 +467,7 @@ For detailed explanations and documentation, check the [README](https://github.c
  
  Check out these other related models, also trained on the BookSum dataset:
  
+ - [LED-large continued](https://huggingface.co/pszemraj/led-large-book-summary-continued) - an experiment with further fine-tuning
  - [Long-T5-tglobal-base](https://huggingface.co/pszemraj/long-t5-tglobal-base-16384-book-summary)
  - [BigBird-Pegasus-Large-K](https://huggingface.co/pszemraj/bigbird-pegasus-large-K-booksum)
  - [Pegasus-X-Large](https://huggingface.co/pszemraj/pegasus-x-large-book-summary)
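As the note in the diff suggests, running the model in Python avoids the inference API's timeout truncation. Below is a minimal sketch using the `transformers` summarization pipeline; the checkpoint id `pszemraj/led-large-book-summary` is inferred from the README's new title, and the generation settings are illustrative assumptions rather than the author's tuned values.

```python
import torch
from transformers import pipeline

# Load the checkpoint via the summarization pipeline.
# Model id inferred from the README title; adjust if it differs.
summarizer = pipeline(
    "summarization",
    "pszemraj/led-large-book-summary",
    device=0 if torch.cuda.is_available() else -1,
)

long_text = "Replace this with the lengthy text you want summarized."

# Illustrative settings (assumptions, not the author's tuned values):
# beam search with n-gram blocking to curb repetition in long summaries.
result = summarizer(
    long_text,
    min_length=8,
    max_length=256,
    no_repeat_ngram_size=3,
    num_beams=4,
)
print(result[0]["summary_text"])
```

Since the underlying LED checkpoint accepts up to 16,384 tokens, inputs within that window can be summarized in a single pass; longer documents need to be chunked first, as in the token-batching Colab notebook linked in the README.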