import streamlit as st

st.markdown("""
<style>
.main-title {
    font-size: 36px;
    color: #4A90E2;
    font-weight: bold;
    text-align: center;
}
.sub-title {
    font-size: 24px;
    color: #4A90E2;
    margin-top: 20px;
}
.section {
    background-color: #f9f9f9;
    padding: 15px;
    border-radius: 10px;
    margin-top: 20px;
}
.section h2 {
    font-size: 22px;
    color: #4A90E2;
}
.section p, .section ul {
    color: #666666;
}
.link {
    color: #4A90E2;
    text-decoration: none;
}
</style>
""", unsafe_allow_html=True)

st.markdown('<div class="main-title">Cyberbullying Detection in Tweets with Spark NLP</div>', unsafe_allow_html=True)

st.markdown("""
<div class="section">
    <p>Welcome to the Spark NLP Cyberbullying Detection Demo App! Detecting cyberbullying in social media posts is crucial to creating a safer online environment. This app demonstrates how to use Spark NLP's powerful tools to identify and classify cyberbullying in tweets.</p>
</div>
""", unsafe_allow_html=True)

st.write("")
st.image('images/Cyberbullying.jpeg', use_column_width='auto')

st.markdown('<div class="sub-title">About Cyberbullying Detection</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <p>Cyberbullying detection involves analyzing text to identify instances of harmful, threatening, or abusive language. Cyberbullying can have severe psychological effects on victims, making it essential to identify and address it promptly. Using Spark NLP, we can build a model to detect and classify cyberbullying in social media posts, helping to mitigate the negative impacts of online harassment.</p>
</div>
""", unsafe_allow_html=True)

st.markdown('<div class="sub-title">Using the Cyberbullying Detection Model in Spark NLP</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <p>The following pipeline uses the Universal Sentence Encoder and a pre-trained ClassifierDL model to detect cyberbullying in tweets, classifying each tweet as racism, sexism, or neutral.</p>
</div>
""", unsafe_allow_html=True)

st.markdown('<div class="sub-title">Example Usage in Python</div>', unsafe_allow_html=True)

st.markdown('<div class="sub-title">Setup</div>', unsafe_allow_html=True)
st.markdown('<p>To install Spark NLP in Python, use your favorite package manager (conda, pip, etc.). For example:</p>', unsafe_allow_html=True)
st.code("""
pip install spark-nlp
pip install pyspark
""", language="bash")

st.markdown("<p>Then, import Spark NLP and start a Spark session:</p>", unsafe_allow_html=True)
st.code("""
import sparknlp

# Start Spark Session
spark = sparknlp.start()
""", language='python')

st.markdown('<div class="sub-title">Example Usage: Cyberbullying Detection with Spark NLP</div>', unsafe_allow_html=True)
st.code('''
from sparknlp.base import DocumentAssembler, LightPipeline
from sparknlp.annotator import UniversalSentenceEncoder, ClassifierDLModel
from pyspark.ml import Pipeline

# Step 1: Transform raw text into document annotations
document_assembler = DocumentAssembler() \\
    .setInputCol("text") \\
    .setOutputCol("document")

# Step 2: Universal Sentence Encoder to produce sentence embeddings
use = UniversalSentenceEncoder.pretrained('tfhub_use', lang="en") \\
    .setInputCols(["document"]) \\
    .setOutputCol("sentence_embeddings")

# Step 3: Pre-trained ClassifierDL model for cyberbullying detection
document_classifier = ClassifierDLModel.pretrained('classifierdl_use_cyberbullying', 'en') \\
    .setInputCols(["sentence_embeddings"]) \\
    .setOutputCol("class")

# Define the pipeline
nlp_pipeline = Pipeline(stages=[document_assembler, use, document_classifier])

# Fit on an empty DataFrame, then wrap in a LightPipeline for fast in-memory prediction
light_pipeline = LightPipeline(nlp_pipeline.fit(spark.createDataFrame([['']]).toDF("text")))

# Predict cyberbullying in a tweet
annotations = light_pipeline.fullAnnotate('@geeky_zekey Thanks for showing again that blacks are the biggest racists. Blocked')
print(annotations[0]['class'][0])
''', language='python')

st.text("""
Output:
Annotation(category, 0, 81, racism, {'sentence': '0', 'sexism': '2.4904006E-7', 'neutral': '9.4820876E-5', 'racism': '0.9999049'}, [])
""")

st.markdown("""
<div class="section">
    <p>The model classifies the text as "racism" with a probability of 0.9999049, indicating very high confidence, while assigning near-zero probabilities to "sexism" and "neutral".</p>
</div>
""", unsafe_allow_html=True)

st.markdown('<div class="sub-title">Benchmarking</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <p>The following table summarizes the performance of the Cyberbullying Detection model in terms of precision, recall, and F1-score:</p>
<pre>
              precision    recall  f1-score   support

     neutral       0.72      0.76      0.74       700
      racism       0.89      0.94      0.92       773
      sexism       0.82      0.71      0.76       622

    accuracy                           0.81      2095
   macro avg       0.81      0.80      0.80      2095
weighted avg       0.81      0.81      0.81      2095
</pre>
</div>
""", unsafe_allow_html=True)

st.markdown("""
<div class="section">
    <h2>Conclusion</h2>
    <p>In this app, we demonstrated how to use Spark NLP's ClassifierDL model to perform cyberbullying detection on tweet data. These powerful tools enable users to efficiently process large datasets and identify harmful content, providing deeper insights for various applications. By integrating these annotators into your NLP pipelines, you can enhance text understanding, information extraction, and online safety measures.</p>
</div>
""", unsafe_allow_html=True)

st.markdown('<div class="sub-title">For additional information, please check the following references.</div>', unsafe_allow_html=True)

st.markdown("""
<div class="section">
    <ul>
        <li>Documentation: <a class="link" href="https://nlp.johnsnowlabs.com/docs/en/transformers#classifierdl" target="_blank" rel="noopener">ClassifierDLModel</a></li>
        <li>Python Docs: <a class="link" href="https://nlp.johnsnowlabs.com/api/python/reference/autosummary/sparknlp/annotator/classifierdl/index.html#sparknlp.annotator.classifierdl.ClassifierDLModel" target="_blank" rel="noopener">ClassifierDLModel</a></li>
        <li>Model Used: <a class="link" href="https://sparknlp.org/2021/01/09/classifierdl_use_cyberbullying_en.html" target="_blank" rel="noopener">classifierdl_use_cyberbullying</a></li>
    </ul>
</div>
""", unsafe_allow_html=True)

st.markdown('<div class="sub-title">Community & Support</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <ul>
        <li><a class="link" href="https://sparknlp.org/" target="_blank">Official Website</a>: Documentation and examples</li>
        <li><a class="link" href="https://join.slack.com/t/spark-nlp/shared_invite/zt-198dipu77-L3UWNe_AJ8xqDk0ivmih5Q" target="_blank">Slack</a>: Live discussion with the community and team</li>
        <li><a class="link" href="https://github.com/JohnSnowLabs/spark-nlp" target="_blank">GitHub</a>: Bug reports, feature requests, and contributions</li>
        <li><a class="link" href="https://medium.com/spark-nlp" target="_blank">Medium</a>: Spark NLP articles</li>
        <li><a class="link" href="https://www.youtube.com/channel/UCmFOjlpYEhxf_wJUDuz6xxQ/videos" target="_blank">YouTube</a>: Video tutorials</li>
    </ul>
</div>
""", unsafe_allow_html=True)