abdullahmubeen10 committed
Commit 5f584f1 · verified · 1 Parent(s): 6520bbf

Update pages/Workflow & Model Overview.py

Files changed (1):
  1. pages/Workflow & Model Overview.py +250 -250

pages/Workflow & Model Overview.py CHANGED
@@ -1,250 +1,250 @@
- st.image('images\DependencyParserVisualizer.png', caption='The visualization of dependencies')
+ import streamlit as st
+
+ # Custom CSS for better styling
+ st.markdown("""
+ <style>
+ .main-title {
+ font-size: 36px;
+ color: #4A90E2;
+ font-weight: bold;
+ text-align: center;
+ }
+ .sub-title {
+ font-size: 24px;
+ color: #4A90E2;
+ margin-top: 20px;
+ }
+ .section {
+ background-color: #f9f9f9;
+ padding: 15px;
+ border-radius: 10px;
+ margin-top: 20px;
+ }
+ .section h2 {
+ font-size: 22px;
+ color: #4A90E2;
+ }
+ .section p, .section ul {
+ color: #666666;
+ }
+ .link {
+ color: #4A90E2;
+ text-decoration: none;
+ }
+ </style>
+ """, unsafe_allow_html=True)
+
+ # Title
+ st.markdown('<div class="main-title">Grammar Analysis & Dependency Parsing</div>', unsafe_allow_html=True)
+
+ # Introduction Section
+ st.markdown("""
+ <div class="section">
+ <p>Understanding the grammatical structure of sentences is crucial in Natural Language Processing (NLP) for various applications such as translation, text summarization, and information extraction. This page focuses on Grammar Analysis and Dependency Parsing, which help in identifying the grammatical roles of words in a sentence and the relationships between them.</p>
+ <p>We utilize Spark NLP, a robust library for NLP tasks, to perform Part-of-Speech (POS) tagging and Dependency Parsing, enabling us to analyze sentences at scale with high accuracy.</p>
+ </div>
+ """, unsafe_allow_html=True)
+
+ # Understanding Dependency Parsing
+ st.markdown('<div class="sub-title">Understanding Dependency Parsing</div>', unsafe_allow_html=True)
+
+ st.markdown("""
+ <div class="section">
+ <p>Dependency Parsing is a technique used to understand the grammatical structure of a sentence by identifying the dependencies between words. It maps out relationships such as subject-verb, adjective-noun, etc., which are essential for understanding the sentence's meaning.</p>
+ <p>In Dependency Parsing, each word in a sentence is linked to another word, creating a tree-like structure called a dependency tree. This structure helps in various NLP tasks, including information retrieval, question answering, and machine translation.</p>
+ </div>
+ """, unsafe_allow_html=True)
+
+ # Implementation Section
+ st.markdown('<div class="sub-title">Implementing Grammar Analysis & Dependency Parsing</div>', unsafe_allow_html=True)
+
+ st.markdown("""
+ <div class="section">
+ <p>The following example demonstrates how to implement a grammar analysis pipeline using Spark NLP. The pipeline includes stages for tokenization, POS tagging, and dependency parsing, extracting the grammatical relationships between words in a sentence.</p>
+ </div>
+ """, unsafe_allow_html=True)
+
+ st.code('''
+ import sparknlp
+ from sparknlp.base import *
+ from sparknlp.annotator import *
+ from pyspark.ml import Pipeline
+ import pyspark.sql.functions as F
+
+ # Initialize Spark NLP
+ spark = sparknlp.start()
+
+ # Stage 1: Document Assembler
+ document_assembler = DocumentAssembler()\\
+     .setInputCol("text")\\
+     .setOutputCol("document")
+
+ # Stage 2: Tokenizer
+ tokenizer = Tokenizer().setInputCols(["document"]).setOutputCol("token")
+
+ # Stage 3: POS Tagger
+ postagger = PerceptronModel.pretrained("pos_anc", "en")\\
+     .setInputCols(["document", "token"])\\
+     .setOutputCol("pos")
+
+ # Stage 4: Dependency Parsing
+ dependency_parser = DependencyParserModel.pretrained("dependency_conllu")\\
+     .setInputCols(["document", "pos", "token"])\\
+     .setOutputCol("dependency")
+
+ # Stage 5: Typed Dependency Parsing
+ typed_dependency_parser = TypedDependencyParserModel.pretrained("dependency_typed_conllu")\\
+     .setInputCols(["token", "pos", "dependency"])\\
+     .setOutputCol("dependency_type")
+
+ # Define the pipeline
+ pipeline = Pipeline(stages=[
+     document_assembler,
+     tokenizer,
+     postagger,
+     dependency_parser,
+     typed_dependency_parser
+ ])
+
+ # Example sentence
+ example = spark.createDataFrame([
+     ["Unions representing workers at Turner Newall say they are 'disappointed' after talks with stricken parent firm Federal Mogul."]
+ ]).toDF("text")
+
+ # Apply the pipeline
+ result = pipeline.fit(spark.createDataFrame([[""]]).toDF("text")).transform(example)
+
+ # Display the results
+ result.select(
+     F.explode(
+         F.arrays_zip(
+             result.token.result,
+             result.pos.result,
+             result.dependency.result,
+             result.dependency_type.result
+         )
+     ).alias("cols")
+ ).select(
+     F.expr("cols['0']").alias("token"),
+     F.expr("cols['1']").alias("pos"),
+     F.expr("cols['2']").alias("dependency"),
+     F.expr("cols['3']").alias("dependency_type")
+ ).show(truncate=False)
+ ''', language='python')
+
+ # Example Output
+ st.text("""
+ +------------+---+------------+---------------+
+ |token       |pos|dependency  |dependency_type|
+ +------------+---+------------+---------------+
+ |Unions      |NNP|ROOT        |root           |
+ |representing|VBG|workers     |amod           |
+ |workers     |NNS|Unions      |flat           |
+ |at          |IN |Turner      |case           |
+ |Turner      |NNP|workers     |flat           |
+ |Newall      |NNP|say         |nsubj          |
+ |say         |VBP|Unions      |parataxis      |
+ |they        |PRP|disappointed|nsubj          |
+ |are         |VBP|disappointed|nsubj          |
+ |'           |POS|disappointed|case           |
+ |disappointed|JJ |say         |nsubj          |
+ |'           |POS|disappointed|case           |
+ |after       |IN |talks       |case           |
+ |talks       |NNS|disappointed|nsubj          |
+ |with        |IN |stricken    |det            |
+ |stricken    |NN |talks       |amod           |
+ |parent      |NN |Mogul       |flat           |
+ |firm        |NN |Mogul       |flat           |
+ |Federal     |NNP|Mogul       |flat           |
+ |Mogul       |NNP|stricken    |flat           |
+ +------------+---+------------+---------------+
+ """)
+
+ # Visualizing the Dependencies
+ st.markdown('<div class="sub-title">Visualizing the Dependencies</div>', unsafe_allow_html=True)
+
+ st.markdown("""
+ <div class="section">
+ <p>For a visual representation of the dependencies, you can use the <b>spark-nlp-display</b> module, an open-source tool that makes visualizing dependencies straightforward and easy to integrate into your workflow.</p>
+ <p>First, install the module with pip:</p>
+ <code>pip install spark-nlp-display</code>
+ <p>Then, you can use the <code>DependencyParserVisualizer</code> class to create a visualization of the dependency tree:</p>
+ </div>
+ """, unsafe_allow_html=True)
+
+ st.code('''
+ from sparknlp_display import DependencyParserVisualizer
+
+ # Initialize the visualizer
+ dependency_vis = DependencyParserVisualizer()
+
+ # Display the dependency tree
+ dependency_vis.display(
+     result.collect()[0],  # single example result
+     pos_col="pos",
+     dependency_col="dependency",
+     dependency_type_col="dependency_type",
+ )
+ ''', language='python')
+
+ st.image('images/DependencyParserVisualizer.png', caption='The visualization of dependencies')
+
+ st.markdown("""
+ <div class="section">
+ <p>This code snippet generates a visual dependency tree like the one shown above, clearly illustrating the grammatical relationships between the words of the sentence. The <code>spark-nlp-display</code> module provides an intuitive way to visualize complex dependency structures, aiding in the analysis and understanding of sentence grammar.</p>
+ </div>
+ """, unsafe_allow_html=True)
+
+ # Model Info Section
+ st.markdown('<div class="sub-title">Choosing the Right Model for Dependency Parsing</div>', unsafe_allow_html=True)
+
+ st.markdown("""
+ <div class="section">
+ <p>For dependency parsing, the models <b>"dependency_conllu"</b> and <b>"dependency_typed_conllu"</b> are used. These models are trained on a large corpus and are effective for extracting grammatical relations between words in English sentences.</p>
+ <p>To explore more models tailored for different NLP tasks, visit the <a class="link" href="https://sparknlp.org/models?annotator=DependencyParserModel" target="_blank">Spark NLP Models Hub</a>.</p>
+ </div>
+ """, unsafe_allow_html=True)
+
+ # References Section
+ st.markdown('<div class="sub-title">References</div>', unsafe_allow_html=True)
+
+ st.markdown("""
+ <div class="section">
+ <ul>
+ <li><a class="link" href="https://nlp.johnsnowlabs.com/docs/en/annotators" target="_blank" rel="noopener">Spark NLP documentation page</a> for all available annotators</li>
+ <li>Python API documentation for <a class="link" href="https://nlp.johnsnowlabs.com/api/python/reference/autosummary/sparknlp/annotator/pos/perceptron/index.html#sparknlp.annotator.pos.perceptron.PerceptronModel" target="_blank" rel="noopener">PerceptronModel</a> and <a href="https://nlp.johnsnowlabs.com/api/python/reference/autosummary/sparknlp/annotator/dependency/dependency_parser/index.html#sparknlp.annotator.dependency.dependency_parser.DependencyParserModel" target="_blank" rel="noopener">DependencyParserModel</a></li>
+ <li>Scala API documentation for <a class="link" href="https://nlp.johnsnowlabs.com/api/com/johnsnowlabs/nlp/annotators/pos/perceptron/PerceptronModel.html" target="_blank" rel="noopener">PerceptronModel</a> and <a href="https://nlp.johnsnowlabs.com/api/com/johnsnowlabs/nlp/annotators/parser/dep/DependencyParserModel.html" target="_blank" rel="noopener">DependencyParserModel</a></li>
+ <li>For extended examples of Spark NLP annotator usage, see the <a class="link" href="https://github.com/JohnSnowLabs/spark-nlp-workshop" target="_blank" rel="noopener">Spark NLP Workshop repository</a>.</li>
+ <li>Minsky, M.L. and Papert, S.A. (1969) Perceptrons. MIT Press, Cambridge.</li>
+ </ul>
+ </div>
+ """, unsafe_allow_html=True)
+
+ # Community & Support Section
+ st.markdown('<div class="sub-title">Community & Support</div>', unsafe_allow_html=True)
+
+ st.markdown("""
+ <div class="section">
+ <ul>
+ <li><a class="link" href="https://sparknlp.org/" target="_blank">Official Website</a>: Documentation and examples</li>
+ <li><a class="link" href="https://join.slack.com/t/spark-nlp/shared_invite/zt-198dipu77-L3UWNe_AJ8xqDk0ivmih5Q" target="_blank">Slack</a>: Live discussion with the community and team</li>
+ <li><a class="link" href="https://github.com/JohnSnowLabs/spark-nlp" target="_blank">GitHub</a>: Bug reports, feature requests, and contributions</li>
+ <li><a class="link" href="https://medium.com/spark-nlp" target="_blank">Medium</a>: Spark NLP articles</li>
+ <li><a class="link" href="https://www.youtube.com/channel/UCmFOjlpYEhxf_wJUDuz6xxQ/videos" target="_blank">YouTube</a>: Video tutorials</li>
+ </ul>
+ </div>
+ """, unsafe_allow_html=True)
+
+ # Quick Links Section
+ st.markdown('<div class="sub-title">Quick Links</div>', unsafe_allow_html=True)
+
+ st.markdown("""
+ <div class="section">
+ <ul>
+ <li><a class="link" href="https://sparknlp.org/docs/en/quickstart" target="_blank">Getting Started</a></li>
+ <li><a class="link" href="https://nlp.johnsnowlabs.com/models" target="_blank">Pretrained Models</a></li>
+ <li><a class="link" href="https://github.com/JohnSnowLabs/spark-nlp/tree/master/examples/python/annotation/text/english" target="_blank">Example Notebooks</a></li>
+ <li><a class="link" href="https://sparknlp.org/docs/en/install" target="_blank">Installation Guide</a></li>
+ </ul>
+ </div>
+ """, unsafe_allow_html=True)