stopwords-en
Overview
The stopwords-en dataset contains a list of words frequently used in the English language. These words carry little semantic meaning on their own and are often removed from text data during preprocessing, for example when training shallower models on a text classification task.
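As a minimal sketch of that preprocessing step (using a small hypothetical subset of the list rather than the full dataset), filtering tokens against the stopword list looks like this:

```python
# Hypothetical subset of the stopword list, for illustration only;
# in practice the full list is loaded from the dataset (see "How to use").
stopwords = {"a", "the", "to", "it", "is", "was", "and"}

def remove_stopwords(text):
    """Lowercase, tokenize on whitespace, and drop stopwords."""
    return [tok for tok in text.lower().split() if tok not in stopwords]

print(remove_stopwords("The RV was great and it is clean"))
# ['rv', 'great', 'clean']
```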
Dataset Details
- Dataset Name: stopwords-en
- Total Size: 220 entries
Contents
The dataset consists of a single string column containing all the letters of the Roman alphabet, the numbers 1 to 10, and words and short phrases frequently used in the English language, such as "day", "days", "know", "went", and "like".
How to use
```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer

# Download the English stopword list.
stopwords = load_dataset('AiresPucrs/stopwords-en', split='train')['stopwords']

# Create a vectorization object via `TfidfVectorizer`.
vectorizer = TfidfVectorizer(min_df=10,
                             max_features=100000,
                             analyzer='word',
                             ngram_range=(1, 2),
                             stop_words=stopwords,  # Our list of stopwords.
                             lowercase=True)

# Fit the TfidfVectorizer to our text corpus
# (`dataset['text']` stands for any collection of documents).
vectorizer.fit(dataset['text'])
```
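To see why filtering these words matters, a quick standard-library sketch (with two made-up example sentences) shows how function words dominate raw term counts before any stopword removal:

```python
from collections import Counter

docs = [
    "the rv was great and the staff was helpful",
    "we had a great trip and the rv was clean",
]

# Count raw whitespace tokens across both documents.
counts = Counter(tok for doc in docs for tok in doc.split())

# Stopwords like "the" and "was" outnumber content words
# such as "great" or "clean".
print(counts.most_common(3))
```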
License
This dataset is licensed under the Apache License, version 2.0.