Shobhank-iiitdwd committed
Commit 913134b
1 Parent(s): 059bc25

Update README.md

Files changed (1)
  1. README.md +2 -8
README.md CHANGED
@@ -455,11 +455,9 @@ model-index:



- Summarize long text and get a SparkNotes-esque summary of arbitrary topics!

 - generalizes reasonably well to academic & narrative text.
- - A simple example/use case on ASR is [here](https://longt5-booksum-example.netlify.app/).
- - Example notebook in Colab (_click on the icon above_).
+

 ## Cheeky Proof-of-Concept

@@ -492,7 +490,7 @@ A summary of the [infamous navy seals copypasta](https://knowyourmeme.com/memes/

 ## Model description

- A fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on the `kmfoda/booksum` dataset:
+ A fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on the `booksum` dataset:

 - 30+ epochs of fine-tuning from the base model on V100/A100 GPUs
 - Training used 16384 token input / 1024 max output
@@ -553,10 +551,6 @@ This model was originally tuned on Google Colab with a heavily modified variant

 ## Training procedure

- ### Updates:
-
- - July 22, 2022: updated to a fairly converged checkpoint
- - July 3, 2022: Added a new version with several epochs of additional general training that is more performant.

 ### Training hyperparameters
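For context on the model description lines in the diff above: the checkpoint is a LongT5 model fine-tuned on the booksum dataset with 16384-token inputs and 1024-token outputs. Below is a minimal sketch of how such a checkpoint is typically run with the Hugging Face `transformers` summarization pipeline. The fine-tuned checkpoint's Hub ID is not part of this diff, so the base model ID is used as a stand-in, and the generation settings are illustrative assumptions rather than values from the README.

```python
# Minimal sketch (not taken from the README) of long-document summarization
# with a LongT5 checkpoint. The base model is used as a placeholder; in
# practice you would pass the fine-tuned booksum checkpoint's Hub ID instead.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="google/long-t5-tglobal-base",  # placeholder for the fine-tuned checkpoint
)

long_text = (
    "Replace this string with a long passage, e.g. a book chapter. "
    "The fine-tuned model described above was trained on inputs of up to 16384 tokens."
)

result = summarizer(
    long_text,
    max_length=1024,          # mirrors the 1024-token max output used in training
    min_length=16,            # illustrative floor, not a value from the README
    no_repeat_ngram_size=3,   # illustrative decoding setting
    truncation=True,          # truncate inputs that exceed the model's maximum length
)
print(result[0]["summary_text"])
```

Swapping in the fine-tuned checkpoint's repo ID is the only change needed to try the summarization behavior the README describes.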