pszemraj committed on
Commit
80d13fb
1 Parent(s): 7246bab

make spaces demo more obvious

Files changed (1)
  1. README.md +6 -3
README.md CHANGED
@@ -283,9 +283,12 @@ model-index:
 
 [![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/pszemraj/3eba944ddc9fc9a4a1bfb21e83b57620/summarization-token-batching.ipynb)
 
-- A fine-tuned version of [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384) on the BookSum dataset.
-- Goal: a model that can generalize well and is useful in summarizing long text in academic and daily usage. See the demo linked above!
-- works well on lots of text and can handle 16384 tokens/batch (_if you have the GPU memory to handle that_)
+A fine-tuned version of [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384) on the BookSum dataset.
+
+Goal: a model that can generalize well and is useful in summarizing long text in academic and daily usage. The result works well on lots of text and can handle 16384 tokens/batch (_if you have the GPU memory to handle that_)
+
+- See the Colab demo linked above or try the [demo on Spaces](https://huggingface.co/spaces/pszemraj/summarize-long-text)
+
 
 > Note: the API is set to generate a max of 64 tokens for runtime reasons, so the summaries may be truncated (depending on the length of input text). For best results use python as below.
 
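The README's note says to "use python as below" for best results; this chunk does not include that snippet, so here is a minimal sketch using the `transformers` summarization pipeline. The model id (`pszemraj/led-large-book-summary`) and the generation parameters are assumptions for illustration, not taken from this commit.

```python
from transformers import pipeline

# Assumed checkpoint id -- replace with the actual fine-tuned LED model repo.
MODEL_ID = "pszemraj/led-large-book-summary"


def summarize(text: str, max_new_tokens: int = 256) -> str:
    """Summarize long text with a LED checkpoint (handles long inputs).

    Generation parameters below are illustrative defaults, not the
    README author's settings.
    """
    summarizer = pipeline("summarization", model=MODEL_ID)
    result = summarizer(
        text,
        max_length=max_new_tokens,
        min_length=8,
        no_repeat_ngram_size=3,
        num_beams=4,
        early_stopping=True,
    )
    return result[0]["summary_text"]


if __name__ == "__main__":
    long_text = open("chapter.txt").read()  # any long document
    print(summarize(long_text))
```

Unlike the hosted inference API (capped at 64 generated tokens per the note above), running the pipeline locally lets you raise `max_length` to avoid truncated summaries, at the cost of GPU memory.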