import streamlit as st

# Custom CSS for better styling
st.markdown("""
""", unsafe_allow_html=True)

# Introduction
st.markdown('# State-of-the-Art Text Summarization with Spark NLP')

st.markdown("""
Welcome to the Spark NLP Demos App! In the rapidly evolving field of Natural Language Processing (NLP), the combination of powerful models and scalable frameworks is crucial. One such resource-intensive task is Text Summarization, which benefits immensely from the efficient implementation of machine learning models on distributed systems like Apache Spark.

Spark NLP stands out as the leading choice for enterprises building NLP solutions. This open-source library, built in Scala with a Python wrapper, offers state-of-the-art machine learning models within an easy-to-use pipeline design compatible with Spark ML.
""", unsafe_allow_html=True)

# About the T5 Model
st.markdown('## About the T5 Model')

st.markdown("""
A standout model for text summarization is the Text-to-Text Transfer Transformer (T5), introduced by Google researchers in 2019. T5 achieves remarkable results with a unified text-to-text design: the same model can handle many NLP tasks, and the task is selected simply by prepending a short prefix to the input text. For text summarization, the input is prefixed with "summarize:".

In Spark NLP, the T5 model is available through the T5Transformer annotator. We'll show you how to use Spark NLP in Python to perform text summarization using the T5 model.
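
For intuition, the task prefix is literally prepended to the input text, so the same checkpoint can be pointed at different tasks by changing a few words. A minimal, hypothetical sketch of the idea in plain Python (not part of the demo pipeline; in Spark NLP the prefix is set with the annotator's `setTask` method, shown below):

```python
article = "Transfer learning has emerged as a powerful technique in NLP ..."

# What a text-to-text model such as T5 effectively receives as input:
summarization_input = "summarize: " + article
translation_input = "translate English to German: " + article  # same model, different task
```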
""", unsafe_allow_html=True)

st.image(
    'https://www.johnsnowlabs.com/wp-content/uploads/2023/09/img_blog_2.jpg',
    caption='Diagram of the T5 model, from the original paper',
    use_column_width='auto',
)

# How to Use the Model
st.markdown('## How to Use the T5 Model with Spark NLP')

st.markdown("""
To use the T5Transformer annotator in Spark NLP to perform text summarization, we need to create a pipeline with two stages: the first transforms the input text into an annotation object, and the second stage contains the T5 model.
""", unsafe_allow_html=True)

st.markdown('### Installation')
st.code('!pip install spark-nlp', language='python')

st.markdown('### Import Libraries and Start Spark Session')
st.code("""
import sparknlp
from sparknlp.base import DocumentAssembler, PipelineModel
from sparknlp.annotator import T5Transformer

# Start the Spark Session
spark = sparknlp.start()
""", language='python')

st.markdown("""
Now we can define the pipeline to use the T5 model. We'll use the PipelineModel object, since we are using a pretrained model and don't need to fit (train) any stage of the pipeline.
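
As a side note, a `PipelineModel` is the already-fitted counterpart of Spark ML's `Pipeline`. Neither stage here needs training (the DocumentAssembler is a simple transformer and the T5 model is pretrained), so a hypothetical, equivalent construction using `Pipeline` with the `document_assembler` and `t5` stages defined in the next code block would be:

```python
from pyspark.ml import Pipeline

# fit() is called on a trivial one-row DataFrame only to obtain a PipelineModel;
# nothing is actually trained, since both stages are already fully specified
empty_df = spark.createDataFrame([[""]]).toDF("text")
pipeline = Pipeline(stages=[document_assembler, t5]).fit(empty_df)
```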
""", unsafe_allow_html=True)

st.markdown('### Define the Pipeline')
st.code("""
# Transforms raw texts into `document` annotations
document_assembler = (
    DocumentAssembler().setInputCol("text").setOutputCol("documents")
)

# The T5 model, configured for the summarization task
t5 = (
    T5Transformer.pretrained("t5_small")
    .setTask("summarize:")
    .setInputCols(["documents"])
    .setMaxOutputLength(200)
    .setOutputCol("t5")
)

# Define the Spark pipeline
pipeline = PipelineModel(stages=[document_assembler, t5])
""", language='python')

st.markdown("""
To use the model, create a Spark DataFrame containing the input data. In this example we'll work with a single text, but the framework can process many texts simultaneously (a batch sketch follows the output below). The DocumentAssembler annotator expects its input column to be named "text", matching the `setInputCol("text")` call above.
""", unsafe_allow_html=True)

st.markdown('### Create Example DataFrame')
st.code("""
example = \"""
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a
downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness
of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper,
we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that
converts all text-based language problems into a text-to-text format. Our systematic study compares
pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on
dozens of language understanding tasks. By combining the insights from our exploration with scale and our
new Colossal Clean Crawled Corpus, we achieve state-of-the-art results on many benchmarks covering
summarization, question answering, text classification, and more. To facilitate future work on transfer
learning for NLP, we release our data set, pre-trained models, and code.
\"""

# The column must be named "text" to match the DocumentAssembler input column
spark_df = spark.createDataFrame([[example]]).toDF("text")
""", language='python')

st.markdown('### Apply the Pipeline')
st.code("""
result = pipeline.transform(spark_df)
result.select("t5.result").show(truncate=False)
""", language='python')

st.markdown('## Output')

st.markdown("""
The summarization output will look something like this:

> transfer learning has emerged as a powerful technique in natural language processing (NLP) the effectiveness of transfer learning has given rise to a diversity of approaches, methodologies, and practice.

Note: We set the maximum output length to 200 with `setMaxOutputLength(200)`. Depending on the length of the original text, this parameter should be adjusted.
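
As a rough sketch (reusing the Spark session and pipeline defined above, with placeholder strings standing in for real documents), several texts can be summarized in a single pass by putting them in one DataFrame:

```python
texts = [
    ["First document to summarize ..."],
    ["Second document to summarize ..."],
]

# One row per document; the column must again be named "text"
batch_df = spark.createDataFrame(texts).toDF("text")

# The same pipeline processes all rows, distributed by Spark
batch_result = pipeline.transform(batch_df)
batch_result.select("t5.result").show(truncate=False)
```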
""", unsafe_allow_html=True)

# Additional Resources and References
st.markdown('## Additional Resources and References')
st.markdown("""
""", unsafe_allow_html=True)

# Community & Support
st.markdown('## Community & Support')
st.markdown("""
""", unsafe_allow_html=True)