text
stringlengths 0
27.6k
| python
int64 0
1
| DeepLearning or NLP
int64 0
1
| Other
int64 0
1
| Machine Learning
int64 0
1
| Mathematics
int64 0
1
| Trash
int64 0
1
|
|---|---|---|---|---|---|---|
I would like to read each word from a given text file and then want to compare these word with an existing English dictionary which may be a system dictionary or any other way. Here is the code I have tried, but in the following code, there is a problem. The following codes reading brackets or any other unnecessary characters.
f=open('words.txt')
M=[word for line in f for word in line.split()]
S=list(set(M))
for i in S:
print i
How can I do the job?
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to do task for system calls classification. The code bellow is inspired from a text classification project. My system calls are represented as sequences of integers between 1 and 340. The error I got is:
**valueError: input arrays should have the same number of samples as target arrays. Find 1 input samples and 0 target samples**.
I don't know what to do as it's my first time
Thank you in advance
`
df = pd.read_csv("data.txt")
df_test = pd.read_csv("validation.txt")
#split arrays into train and test data (cross validation)
train_text, test_text, train_y, test_y = train_test_split(df,df,test_size = 0.2)
MAX_NB_WORDS = 5700
# get the raw text data
texts_train = train_text.astype(str)
texts_test = test_text.astype(str)
# finally, vectorize the text samples into a 2D integer tensor
tokenizer = Tokenizer(nb_words=MAX_NB_WORDS, char_level=False)
tokenizer.fit_on_texts(texts_train)
sequences = tokenizer.texts_to_sequences(texts_train)
sequences_test = tokenizer.texts_to_sequences(texts_test)
word_index = tokenizer.word_index
type(tokenizer.word_index), len(tokenizer.word_index)
index_to_word = dict((i, w) for w, i in tokenizer.word_index.items())
" ".join([index_to_word[i] for i in sequences[0]])
seq_lens = [len(s) for s in sequences]
MAX_SEQUENCE_LENGTH = 100
# pad sequences with 0s
x_train = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)
x_test = pad_sequences(sequences_test, maxlen=MAX_SEQUENCE_LENGTH)
#print('Shape of data train:', x_train.shape) #cela a donnée (1,100)
#print('Shape of data test tensor:', x_test.shape)
y_train = train_y
y_test = test_y
print('Shape of label tensor:', y_train.shape)
EMBEDDING_DIM = 32
N_CLASSES = 2
y_train = keras.utils.to_categorical( y_train , N_CLASSES )
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='float32')
embedding_layer = Embedding(MAX_NB_WORDS, EMBEDDING_DIM,
input_length=MAX_SEQUENCE_LENGTH,
trainable=True)
embedded_sequences = embedding_layer(sequence_input)
average = GlobalAveragePooling1D()(embedded_sequences)
predictions = Dense(N_CLASSES, activation='softmax')(average)
model = Model(sequence_input, predictions)
model.compile(loss='categorical_crossentropy',
optimizer='adam', metrics=['acc'])
model.fit(x_train, y_train, validation_split=0.1,
nb_epoch=10, batch_size=1)
output_test = model.predict(x_test)
print("test auc:", roc_auc_score(y_test,output_test[:,1]))
`
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm working on an NLP project and I have a list with character spans in some text.
This list could look like the following:
[(1,4),(1,7),(4,9),(8,15)]
So my task is to return all non-overlapping pairs.
If two or more number pairs are overlapping, then the pair with the longest span should be returned. In my example, I want to return [(1,7),(8,15)] . How can I do this?
EDIT
I don't want to merge my intervals like in Merge overlap here. But i wan't to return all pairs/intervals/tuples except if the values in some tuples overlaps. e.g. (1,4) and (1,7) overlaps, (4,9) overlaps with (1,4) and (1,7) . If there is some overlap i want to return the tuple with the largest span e.g. (1,7) = span 7, (1,4) = span 4, (4,9) = span 5. That means that it should return (1,7) and (8,15) as well since (8,15) are not overlapping (1,7)
| 1
| 1
| 0
| 0
| 0
| 0
|
I want to print the variable based on the index number based on the following dataset:
Here I used the following code:
import pandas as pd
airline = pd.read_csv("AIR-LINE.csv")
pnr = input("Enter the PNR Number ")
index = airline.PNRNum[airline.PNRNum==pnr].index.tolist()
zzz = int(index[0])
print( "The flight number is " + airline.FlightNo[zzz] )
I get the following error:
TypeError: can only concatenate str (not "numpy.int64") to str
I know that the error is because the FlightNo variable contains int value. But I don't know how to solve it. Any idea?
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to integrate a custom PhraseMatcher() component into my nlp pipeline in a way that will allow me to load the custom Spacy model without having to re-add my custom components to a generic model on each load.
How can I load a Spacy model containing custom pipeline components?
I create the component, add it to my pipeline and save it with the following:
import requests
from spacy.lang.en import English
from spacy.matcher import PhraseMatcher
from spacy.tokens import Doc, Span, Token
class RESTCountriesComponent(object):
name = 'countries'
def __init__(self, nlp, label='GPE'):
self.countries = [u'MyCountry', u'MyOtherCountry']
self.label = nlp.vocab.strings[label]
patterns = [nlp(c) for c in self.countries]
self.matcher = PhraseMatcher(nlp.vocab)
self.matcher.add('COUNTRIES', None, *patterns)
def __call__(self, doc):
matches = self.matcher(doc)
spans = []
for _, start, end in matches:
entity = Span(doc, start, end, label=self.label)
spans.append(entity)
doc.ents = list(doc.ents) + spans
for span in spans:
span.merge()
return doc
nlp = English()
rest_countries = RESTCountriesComponent(nlp)
nlp.add_pipe(rest_countries)
nlp.to_disk('myNlp')
I then attempt to load my model with,
nlp = spacy.load('myNlp')
But get this error message:
KeyError: u"[E002] Can't find factory for 'countries'. This usually
happens when spaCy calls nlp.create_pipe with a component name
that's not built in - for example, when constructing the pipeline from
a model's meta.json. If you're using a custom component, you can write
to Language.factories['countries'] or remove it from the model meta
and add it via nlp.add_pipe instead."
I can't just add my custom components to a generic pipeline in my programming environment. How can I do what I'm trying to do?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to create a dataframe consisting reviews on 20 banks and in the following code I am trying to get the 20 customers rating score value but finding it difficult as I am new BeautifulSoup and Webscraping.
import pandas as pd
import requests
from bs4 import BeautifulSoup
url = 'https://www.bankbazaar.com/reviews.html'
page = requests.get(url)
print(page.text)
soup = BeautifulSoup(page.text,'html.parser')
Rating = []
rat_elem = soup.find_all('span')
for rate in rat_elem:
Rating.append(rate.find_all('div').get('value'))
print(Rating)
| 1
| 1
| 0
| 0
| 0
| 0
|
Is it possible to write a program that determines if an image is bad quality or not (saturation dimness etc.) More specifically, I want to compare good photos of food vs bad photos. I have a large database of good and bad photos, but very little experience with ML. Is what I'm trying to do even possible/feasible? If so, how do I start?
| 1
| 1
| 0
| 1
| 0
| 0
|
I have sentences stored in text file which looks like this.
radiologicalreport =1. MDCT OF THE CHEST History: A 58-year-old male, known case lung s/p LUL segmentectomy. Technique: Plain and enhanced-MPR CT chest is performed using 2 mm interval. Previous study: 03/03/2018 (other hospital) Findings: Lung parenchyma: The study reveals evidence of apicoposterior segmentectomy of LUL showing soft tissue thickening adjacent surgical bed at LUL, possibly post operation.
My ultimate goal is to apply LDA to classify each sentence to one topic. Before that, I want to do one hot encoding to the text. The problem I am facing is I want to one hot encode per sentence in a numpy array to be able to feed it into LDA. If I want to one hot encode the full text, I can easily do it using these two lines.
sent_text = nltk.sent_tokenize(text)
hot_encode=pd.Series(sent_text).str.get_dummies(' ')
However, my goal is to one hot encoding per sentence in a numpy array. So, I try the following code.
from numpy import array
from numpy import argmax
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
import nltk
import pandas as pd
from nltk.tokenize import TweetTokenizer, sent_tokenize
with open('radiologicalreport.txt', 'r') as myfile:
report=myfile.read().replace('
', '')
tokenizer_words = TweetTokenizer()
tokens_sentences = [tokenizer_words.tokenize(t) for t in
nltk.sent_tokenize(report)]
tokens_np = array(tokens_sentences)
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(tokens_np)
# binary encode
onehot_encoder = OneHotEncoder(sparse=False)
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
I get an error at this line as "TypeError: unhashable type: 'list'"
integer_encoded = label_encoder.fit_transform(tokens_np)
And hence cannot proceed further.
Also, my tokens_sentences look like this as shown in the image.
Please Help!!
| 1
| 1
| 0
| 0
| 0
| 0
|
I have written a code where I am trying to pick out some lines from text files and append them to another text file;
I have a folder :
E:\Adhiraj Chattopadhyay\NLG Dataset\FYP DB
I have several sub-folders in it, each of which contains a text file.
So I have entered this directory in my python intrpreter;
import os
path = "E:\\Adhiraj Chattopadhyay\\NLG Dataset\\FYP DB"
os.chdir(path)
I now created a file with read & write permissions;
file1 = open('file1.txt', 'r+' )
data = file1.read()
Now, I have written a python code which is supposed to walk through all the the folders in FYP DB to search for text files in them.
If text file(s) is found, the code searches the text to extract all lines with the word Table in them;
for (dirname, dirs, files) in os.walk('.'):
for filename in files:
if filename.endswith('.txt'):
for line in filename:
if 'Table' in line:
# print (line.split(':'))
file1.write(line.split(':'))
print(data)
The code is then supposed to write these lines to file1
This is where I am facing my problem!
When I print the contents of file1 ( data ), there is no output.
When I , then open file1 directly from the directory, a blank file opens.
Could somebody please help me with this?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am working on a data set of approximately 3000 questions and I want to perform intent classification. The data set is not labelled yet, but from the business perspective, there's a requirement of identifying approximately 80 various intent classes. Let's assume my training data has approximately equal number of each classes and is not majorly skewed towards some of the classes. I am intending to convert the text to word2vec or Glove and then feed into my classifier.
I am familiar with cases in which I have a smaller number of intent classes, such as 8 or 10 and the choice of machine learning classifiers such as SVM, naive bais or deeplearning (CNN or LSTM).
My question is that if you have had experience with such large number of intent classes before, and which of machine learning algorithm do you think will perform reasonably? do you think if i use deep learning frameworks, still large number of labels will cause poor performance given the above training data?
We need to start labelling the data and it is rather laborious to come up with 80 classes of labels and then realise that it is not performing well, so I want to ensure that I am making the right decision on how many classes of intent maximum I should consider and what machine learning algorithm do you suggest?
Thanks in advance...
| 1
| 1
| 0
| 0
| 0
| 0
|
when I chunk text, I get lots of codes in the output like
NN, VBD, IN, DT, NNS, RB.
Is there a list documented somewhere which tells me the meaning of these?
I have tried googling nltk chunk code nltk chunk grammar nltk chunk tokens.
But I am not able to find any documentation which explains what these codes mean.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am using the following function to determine if a text has words (or expressions) from a list:
def is_in_text(text, lista=[]):
return any(i in text for i in lista)
I can pass to this function a list of words and expressions that I would like to find in a text. For example, the following code:
text_a = 'There are white clouds in the sky'
print(is_in_text(text_a, ['clouds in the sky']))
Will return
True
This works if I'm interested in texts that mention "clouds" and "sky". However, if the text varies slightly, I may no longer detect it. For example:
text_b = 'There are white clouds in the beautiful sky'
print(is_in_text(text_b, ['clouds in the sky']))
Will return False.
How can I modify this function to be able to find texts that contain both words, but not necessarily in a predetermined order? In this example, I would like to look for "'clouds' + 'sky' ".
Just to be clear, I am interested in texts that contain both words. I would like to have a function that searchs for these kind of combinations, without me having to enter all these conditions manually.
| 1
| 1
| 0
| 0
| 0
| 0
|
Is it possible to change one single entity in Spacy?
I have some docs objects in a list, and some of the docs contains a "FRAUD" label. However, I need to change a few of the "FRAUD" entities labels to "FALSE_ALARM". I'm using Spacy's matcher to find the "FALSE_ALARM" entities, but I can't override the existing label. I have tried the following:
def add_event_ent(matcher, doc, i, matches):
match_id, start, end = matches[i]
match_doc = doc[start:end]
for entity in match_doc.ents:
# k.label = neg_hash <-- says " attribute 'label' of 'spacy.tokens.span.Span' objects is not writable"
span = Span(doc, entity.start, entity.end, label=false_alarm_hash)
doc.ents = list(doc.ents) + [span] # add span to doc.ents
ValueError: [E098] Trying to set conflicting doc.ents: '(14, 16,
'FRAUD')' and '(14, 16, 'FALSE_ALARM')'. A token can only be part of one entity, so make sure the entities you're setting don't overlap.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to build a CNN network and wuld like to probe the layer dimention using output_shape.
But it's giving me an error as follows:
ValueError: Input 0 is incompatible with layer conv2d_5: expected ndim=4, found ndim=2
Below is the code I am trying to execute
from keras.layers import Activation
model = Sequential()
model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))
print(model.output_shape)
| 1
| 1
| 0
| 0
| 0
| 0
|
I have written the following (crude) code to find the association strengths among the words in a given piece of text.
import re
## The first paragraph of Wikipedia's article on itself - you can try with other pieces of text with preferably more words (to produce more meaningful word pairs)
text = "Wikipedia was launched on January 15, 2001, by Jimmy Wales and Larry Sanger.[10] Sanger coined its name,[11][12] as a portmanteau of wiki[notes 3] and 'encyclopedia'. Initially an English-language encyclopedia, versions in other languages were quickly developed. With 5,748,461 articles,[notes 4] the English Wikipedia is the largest of the more than 290 Wikipedia encyclopedias. Overall, Wikipedia comprises more than 40 million articles in 301 different languages[14] and by February 2014 it had reached 18 billion page views and nearly 500 million unique visitors per month.[15] In 2005, Nature published a peer review comparing 42 science articles from Encyclopadia Britannica and Wikipedia and found that Wikipedia's level of accuracy approached that of Britannica.[16] Time magazine stated that the open-door policy of allowing anyone to edit had made Wikipedia the biggest and possibly the best encyclopedia in the world and it was testament to the vision of Jimmy Wales.[17] Wikipedia has been criticized for exhibiting systemic bias, for presenting a mixture of 'truths, half truths, and some falsehoods',[18] and for being subject to manipulation and spin in controversial topics.[19] In 2017, Facebook announced that it would help readers detect fake news by suitable links to Wikipedia articles. YouTube announced a similar plan in 2018."
text = re.sub("[\[].*?[\]]", "", text) ## Remove brackets and anything inside it.
text=re.sub(r"[^a-zA-Z0-9.]+", ' ', text) ## Remove special characters except spaces and dots
text=str(text).lower() ## Convert everything to lowercase
## Can add other preprocessing steps, depending on the input text, if needed.
from nltk.corpus import stopwords
import nltk
stop_words = stopwords.words('english')
desirable_tags = ['NN'] # We want only nouns - can also add 'NNP', 'NNS', 'NNPS' if needed, depending on the results
word_list = []
for sent in text.split('.'):
for word in sent.split():
'''
Extract the unique, non-stopword nouns only
'''
if word not in word_list and word not in stop_words and nltk.pos_tag([word])[0][1] in desirable_tags:
word_list.append(word)
'''
Construct the association matrix, where we count 2 words as being associated
if they appear in the same sentence.
Later, I'm going to define associations more properly by introducing a
window size (say, if 2 words seperated by at most 5 words in a sentence,
then we consider them to be associated)
'''
table = np.zeros((len(word_list),len(word_list)), dtype=int)
for sent in text.split('.'):
for i in range(len(word_list)):
for j in range(len(word_list)):
if word_list[i] in sent and word_list[j] in sent:
table[i,j]+=1
df = pd.DataFrame(table, columns=word_list, index=word_list)
# Count the number of occurrences of each word from word_list in the text
all_words = pd.DataFrame(np.zeros((len(df), 2)), columns=['Word', 'Count'])
all_words.Word = df.index
for sent in text.split('.'):
count=0
for word in sent.split():
if word in word_list:
all_words.loc[all_words.Word==word,'Count'] += 1
# Sort the word pairs in decreasing order of their association strengths
df.values[np.triu_indices_from(df, 0)] = 0 # Make the upper triangle values 0
assoc_df = pd.DataFrame(columns=['Word 1', 'Word 2', 'Association Strength (Word 1 -> Word 2)'])
for row_word in df:
for col_word in df:
'''
If Word1 occurs 10 times in the text, and Word1 & Word2 occur in the same sentence 3 times,
the association strength of Word1 and Word2 is 3/10 - Please correct me if this is wrong.
'''
assoc_df = assoc_df.append({'Word 1': row_word, 'Word 2': col_word,
'Association Strength (Word 1 -> Word 2)': df[row_word][col_word]/all_words[all_words.Word==row_word]['Count'].values[0]}, ignore_index=True)
assoc_df.sort_values(by='Association Strength (Word 1 -> Word 2)', ascending=False)
This produces the word associations like so:
Word 1 Word 2 Association Strength (Word 1 -> Word 2)
330 wiki encyclopedia 3.0
895 encyclopadia found 1.0
1317 anyone edit 1.0
754 peer science 1.0
755 peer encyclopadia 1.0
756 peer britannica 1.0
...
...
...
However, the code contains a lot of for loops which hampers its running time. Specially the last part (sort the word pairs in decreasing order of their association strengths) consumes a lot of time as it computes the association strengths of n^2 word pairs/combinations, where n is the number of words we are interested in (those in word_list in my code above).
So, the following are what I would like some help on:
How do I vectorize the code, or otherwise make it more efficient?
Instead of producing n^2 combinations/pairs of words in the last step, is there any way to prune some of them before producing them? I am going to prune some of the useless/meaningless pairs by inspection after they are produced anyway.
Also, and I know this does not fall into the purview of a coding question, but I would love to know if there's any mistake in my logic, specially when calculating the word association strengths.
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a dataset of 27 files, each containing opcodes. I want to use stemming to map all versions of similar opcodes into the same opcode. For example: push, pusha, pushb, etc would all be mapped to push.
My dictionary contains 27 keys and each key has a list of opcodes as a value. Since the values contain opcodes and not normal english words, I cannot use the regular stemmer module. I need to write my own stemmer code. Also I cannot hard-code a custom dictionary that maps different versions of the opcodes to the root opcode because I have a huge dataset.
I think regex expression would be a good idea but I do not know how to use it. Can anyone help me with this or any other idea to write my own stemmer code?
| 1
| 1
| 0
| 0
| 0
| 0
|
Below is the code I am trying to execute, and the following error message I am receiving. Thank your for the assistance in advance.
----> 6 nn = nl.net.newlvq(nl.tool.minmax(data), num_input_neurons, weights)
# Define a neural network with 2 layers:
# 10 neurons in input layer and 4 neurons in output layer
num_input_neurons = 10
num_output_neurons = 4
weights = [1/num_output_neurons] * num_output_neurons
nn = nl.net.newlvq(nl.tool.minmax(data), num_input_neurons, weights)
The error I receive:
TypeError: slice indices must be integers or None or have an __index__ method
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to combine my own simple custom tagger with the nltk default tagger, in this case the perceptron tagger.
My code is as follows (based on this answer):
import nltk.tag, nltk.data
default_tagger = nltk.data.load(nltk.tag._POS_TAGGER)
model = {'example_one': 'VB' 'example_two': 'NN'}
tagger = nltk.tag.UnigramTagger(model=model, backoff=default_tagger)
However this gives the following error:
File "nltk_test.py", line 24, in <module>
default_tagger = nltk.data.load(nltk.tag._POS_TAGGER)
AttributeError: 'module' object has no attribute '_POS_TAGGER'
I tried to fix this by changing the default tagger to:
from nltk.tag.perceptron import PerceptronTagger
default_tagger = PerceptronTagger()
But then I get the following error:
File "nltk_test.py", line 26, in <module>
tagger = nltk.tag.UnigramTagger(model=model, backoff=default_tagger)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/nltk/tag/sequential.py", line 340, in __init__
backoff, cutoff, verbose)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/nltk/tag/sequential.py", line 284, in __init__
ContextTagger.__init__(self, model, backoff)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/nltk/tag/sequential.py", line 125, in __init__
SequentialBackoffTagger.__init__(self, backoff)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/nltk/tag/sequential.py", line 50, in __init__
self._taggers = [self] + backoff._taggers
AttributeError: 'PerceptronTagger' object has no attribute '_taggers'
Looking through the nltk.tag documentation it seems that _POS_TAGGER no longer exists. However changing it to _pos_tag or pos_tag also didn't work.
| 1
| 1
| 0
| 0
| 0
| 0
|
im trying to create a simple classification with tree classifier for disease symptoms. i have tried it using sklearn tree classifier.
it gives the following error. both my code and error is there.
Any suggestion ?
import numpy as np
from sklearn import tree
symptoms = [['flat face','poor moro','hypotonia'],['small head','small jaw','overlapping fingers'], ['small eyes','cleft lip','cleft palate']]
lables = [['Trisomy 21'],['Trisomy 18'],['Trisomy 13']]
classify = tree.DecisionTreeClassifier()
classify = classify.fit(symptoms, lables)
it gives the following error
ValueError Traceback (most recent call last)
<ipython-input-25-0f2c956618c2> in <module>
4 lables = [['Trisomy 21'],['Trisomy 18'],['Trisomy 13']]
5 classify = tree.DecisionTreeClassifier()
----> 6 classify = classify.fit(symptoms, lables)
c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\sklearn\tree\tree.py in fit(self, X, y, sample_weight, check_input, X_idx_sorted)
799 sample_weight=sample_weight,
800 check_input=check_input,
--> 801 X_idx_sorted=X_idx_sorted)
802 return self
803
c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\sklearn\tree\tree.py in fit(self, X, y, sample_weight, check_input, X_idx_sorted)
114 random_state = check_random_state(self.random_state)
115 if check_input:
--> 116 X = check_array(X, dtype=DTYPE, accept_sparse="csc")
117 y = check_array(y, ensure_2d=False, dtype=None)
118 if issparse(X):
c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\sklearn\utils\validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator)
525 try:
526 warnings.simplefilter('error', ComplexWarning)
--> 527 array = np.asarray(array, dtype=dtype, order=order)
528 except ComplexWarning:
529 raise ValueError("Complex data not supported
"
c:\users\admin\appdata\local\programs\python\python36\lib\site-packages
umpy\core
umeric.py in asarray(a, dtype, order)
499
500 """
--> 501 return array(a, dtype, copy=False, order=order)
502
503
ValueError: could not convert string to float: 'flat face'
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm currently working on NLP project. Actually, when i researched how to deal with NLP, i found some articles about SpaCy. But, because i'm still newbie on python, i don't understand how SpaCy TextCategorizer Pipeline works.
Is there any detailed about how this pipeline works? Is TextCategorizer Pipeline also using text feature extraction such as Bag of Words, TF-IDF, Word2Vec or anything else? And what model architecture use in SpaCy TextCategorizer? Is there someone who could explain me about this?
| 1
| 1
| 0
| 0
| 0
| 0
|
In NLP task, I have some text files for some authors. Data are in folders like this:
|author1|
|text_file1|
|text_file2|
...
|author2|
|text_file1|
|text_file2|
...
...
I want to loop through these folders and create a train and validation datasets like the following. Validation data contains two random files from each author
id text author
0 This process, however, afforded me no means of... author1
1 It never once occurred to me that the fumbling... author1
. ...
. In his left hand was a gold snuff box, from wh... author2
. ...
What is the best approach for creating these datasets?
I tried something like this:
train = []
val = []
for folder_name in folders:
file_path = data_path +'/' + folder_name
files = os.listdir(file_path)
v1 = np.random.randint(0, len(files))
v2 = np.random.randint(0, len(files))
for i, fn in enumerate(files):
fn = file_path + '/' + fn
f = open(fn)
text = f.read()
# preprocessing text
if i == v1 or i == v2:
val.append(text)
else:
train.append(text)
f.close()
However, my problem is to associate the folder_name to each text and save the whole data in the format I described above.
| 1
| 1
| 0
| 0
| 0
| 0
|
Good day,
My objective is to create a function take in a text data which is a string, and convert it to lower case letters. I wish to then apply the function later on passing by passing in data.
However, I keep getting this error outputted when I call/apply the function and try to pass the data in it.
TypeError: 'generator' object is not callable
I did some further research and I am just curious if the mapping is causing this issue?
Is there any way of accomplishing this to make the function work in the most effective manner.
Here is my code below:
def preprocess_text(text):
""" The function takes a parameter which is a string.
The function should then return the processed text
"""
# Iterating over each case in the data and lower casing the text
edit_text = ''.join(map(((t.lower().strip()) for t in text), text))
return edit_text
Then to test function to see if it works:
# test function by passing in data.
""" This is when then the error occurs!"""
text_processed = preprocess_text(data)
I would really appreciate the help to know what the issue is and to know the correct way to do this.
Cheers in advance!
| 1
| 1
| 0
| 0
| 0
| 0
|
Hello I am trying an extremely simple project to just learn how things work in TensorFlow. I just gave 3 simple arrays and it doesn't find the relation between giving me an error. Why is that and how to overcome it? Here is my code
import tensorflow as tf
from tensorflow import keras
x = [[1,2,5,6],[12,5,1,7],[1,5,7,9]]
y = [[1],[4],[3]]
model = keras.Sequential()
model.add(keras.layers.Dense(4, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.softmax))
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(x,y,epochs=20,batch_size=4)
error :
ValueError: Please provide as model inputs either a single array or a list of arrays. You passed: x=[[1, 2, 5, 6], [12, 5, 1, 7], [1, 5, 7, 9]]
| 1
| 1
| 0
| 1
| 0
| 0
|
This may seem like an odd question but I'm new to this so thought I'd ask anyway.
I want to use this Google News model over various different files on my laptop. This means I will be running this line over and over again in different Jupyter notebooks:
model=word2vec.KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin",binary=True)
Does this eat 1) Storage (I've noticed my storage filling up exponentially for no reason)
2) Less memory than it would otherwise if I close the previous notebook before running the next.
My storage has gone down by 50GB in one day and the only thing I have done on this computer is run the Google News model (I didn't do most_similar()). Restarting and closing notebooks hasn't helped and there aren't any big files on the laptop. Any ideas?
Thanks.
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm building a chatbot and I'm new to NLP.
(api.ai & AlchemyAPI are too expensive for my use case. And wit.ai seems to be buggy and constantly changing at the moment.)
For the NLP experts, how easily can I replicate their services locally?
My vision so far (with node, but open to Python):
entity extraction via StanfordNER
intent via NodeNatural's LogisticRegressionClassifier
training UI with text and validate/invalidate buttons (any prebuilt tools for this?)
Are entities and intents all I'll need for a chatbot? How good will NodeNatural/StanfordNER be compared to NLP-as-a-service? What headaches am I not seeing?
| 1
| 1
| 0
| 0
| 0
| 0
|
When using GloVe embedding in NLP tasks, some words from the dataset might not exist in GloVe. Therefore, we instantiate random weights for these unknown words.
Would it be possible to freeze weights gotten from GloVe, and train only the newly instantiated weights?
I am only aware that we can set:
model.embedding.weight.requires_grad = False
But this makes the new words untrainable..
Or are there better ways to extract semantics of words..
| 1
| 1
| 0
| 0
| 0
| 0
|
Is auc better in handling imbalenced data. As in most of the cases if I am dealing with imbalenced data accuracy is not giving correct idea. Even though accuracy is high, model has poor perfomance. If it's not auc which is the best measure to handle imbalenced data.
| 1
| 1
| 0
| 1
| 0
| 0
|
i want to predict my data with 4 models that i have been trained. So i tried to merge my models into a list, but after i append my models, i can't call 'predict' and i got error like this:
AttributeError: 'list' object has no attribute 'predict'
my code is like this:
vect_tes = features.transform(frame['text'])
model = [[]]
for i in range(4):
mod = open('model_'+str(i+1)+'.pkl', 'rb')
model.append(pickle.load(mod))
mod.close()
predict = model.predict(vect_tes)
| 1
| 1
| 0
| 0
| 0
| 0
|
Here is the snippet of code from the book
Natural Language Processing with PyTorch:
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
import seaborn as sns
corpus = ['Time flies flies like an arrow.', 'Fruit flies like a banana.']
one_hot_vectorizer = CountVectorizer()
vocab = one_hot_vectorizer.get_feature_names()
The value of vocab :
vocab = ['an', 'arrow', 'banana', 'flies', 'fruit', 'like', 'time']
Why is not there an 'a' among the extracted feature names? If it is excluded as too common word automatically, why "an" is not excluded for the same reasons? How to make .get_feature_names() filter other words as well?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to do stemming on a dask dataframe
wnl = WordNetLemmatizer()
def lemmatizing(sentence):
stemSentence = ""
for word in sentence.split():
stem = wnl.lemmatize(word)
stemSentence += stem
stemSentence += " "
stemSentence = stemSentence.strip()
return stemSentence
df['news_content'] = df['news_content'].apply(stemming).compute()
But I am getting the following error:
AttributeError: 'WordNetCorpusReader' object has no attribute '_LazyCorpusLoader__args'
I already tried what was recommended here, but without any luck.
Thanks for the help.
| 1
| 1
| 0
| 0
| 0
| 0
|
my input.txt contains the following:
__label__SPAM buy our products
__label__HAM Please send me the last business forecast
__label__SPAM buy viagra
__label__HAM important meeting at 10:00AM
But after running the command:
./fasttext skipgram -input ~/PycharmProjects/Pcat/input.txt -output modelskipgram
I get output as :
Read 0M words
Number of words: 0
Number of labels: 2
Progress: 100.0% words/sec/thread: 339 lr: 0.000000 loss: 0.000000 ETA: 0h 0m
What am I doing wrong?
| 1
| 1
| 0
| 1
| 0
| 0
|
I read a lot of tutorials on the web and topics on stackoverflow but one question is still foggy for me. If consider just the stage of collecting data for multi-label training, what way (see below) are better and whether are both of them acceptable and effective?
Try to find 'pure' one-labeled examples at any cost.
Every example can be multi labeled.
For instance, I have articles about war, politics, economics, culture. Usually, politics tied to economics, war connected to politics, economics issues may appear in culture articles etc. I can assign strictly one main theme for each example and drop uncertain works or assign 2, 3 topics.
I'm going to train data using Spacy, volume of data will be about 5-10 thousand examples per topic.
I'd be grateful for any explanation and/or a link to some relevant discussion.
| 1
| 1
| 0
| 0
| 0
| 0
|
i'm trying to write a simple generation algorithm with python, which should give to me asnwer "Hello World". It's work fine, but it cannot give me corect answer with "max iteration" constant. It just works in infinite loop.
Here is my code belowe:
import random
class GAHello():
POPULATION_SIZE = 1000
ELITE_RATE = 0.1
SURVIVE_RATE = 0.5
MUTATION_RATE = 0.2
TARGET = "Hello World!"
MAX_ITER = 1000
def InitializePopulation(self):
tsize: int = len(self.TARGET)
population = list()
for i in range(0, self.POPULATION_SIZE):
str = ''
for j in range(0, tsize):
str += chr(int(random.random() * 255))
citizen: Genome = Genome(str)
population.append(citizen)
return population
def Mutation(self, strng):
tsize: int = len(self.TARGET)
ipos: int = int(random.random() * tsize)
delta: chr = chr(int(random.random() * 255))
return strng[0: ipos] + delta + strng[ipos + 1:]
def mate(self, population):
esize: int = int(self.POPULATION_SIZE * self.ELITE_RATE)
tsize: int = len(self.TARGET)
children = self.select_elite(population, esize)
for i in range(esize, self.POPULATION_SIZE):
i1: int = int(random.random() * self.POPULATION_SIZE * self.SURVIVE_RATE)
i2: int = int(random.random() * self.POPULATION_SIZE * self.SURVIVE_RATE)
spos: int = int(random.random() * tsize)
strng: str = population[i1][0: spos] + population[i2][spos:]
if(random.random() < self.MUTATION_RATE):
strng = self.Mutation(strng)
child = Genome(strng)
children.append(child)
return children
def go(self):
popul = self.InitializePopulation()
for i in range(0, self.MAX_ITER):
popul.sort()
print("{} > {}".format(i, str(popul[0])))
if(popul[0].fitness == 0):
break
popul = self.mate(popul)
def select_elite(self, population, esize):
children = list()
for i in range(0, esize):
children.append(population[i])
return children
class Genome():
strng = ""
fitness = 0
def __init__(self, strng):
self.strng = strng
fitness = 0
for j in range(0, len(strng)):
fitness += abs(ord(self.strng[j]) - ord(GAHello.TARGET[j]))
self.fitness = fitness
def __lt__(self, other):
return self.fitness - other.fitness
def __str__(self):
return "{} {}".format(self.fitness, self.strng)
def __getitem__(self, item):
return self.strng[item]
Thank you for an advice. I am realy noob and such things and i just training and experiment with such algorithms and optimization things to explore an ai methods.
UPDATE
The place, where it runs
if __name__ == '__main__':
algo = GAHello()
algo.go()
My output:
0 > 1122 Ü<pñsÅá׺Ræ¾
1 > 1015 ÷zËÔ5AÀ©«
2 > 989 "ÆþõZi±Pmê
3 > 1076 ØáíAÀ©«
4 > 1039 #ÆþÕRæ´Ìosß
5 > 946 ×ZÍG¤'ÒÙË
6 > 774 $\àPÉ
7 > 1194 A®Ä§ö
ÝÖ Ð
8 > 479 @r=q^Ü´{J
9 > 778 X'YþH_õÏÆ
10 > 642 z¶$oKÐ{
...
172 > 1330 ê¸EïôÀ«ä£ü
173 > 1085 ÔOÕÛ½e·À×äÒU
174 > 761 OÕÛ½¤¯£+}
175 > 903 P½?-´ëÎm|4Ô
176 > 736 àPSÈe<1
177 > 1130 ªê/*ñ¤îã¹¾^
178 > 772 OÐS8´°jÓ£
...
990 > 1017 6ó¨QøÇ?¨Úí
991 > 1006 |5ÇÐR·Ü¸í
992 > 968 ×5QÍË?1V í
993 > 747 B ªÄ*¶R·Ü$F
994 > 607 `ªLaøVLº
995 > 744 Ìx7eøi;ÄÝ[
996 > 957 ¹8/ñ^ ¤
997 > 916 Ú'dúý8}û« [
998 > 892 ÛWòeTùv6ç®
999 > 916 õg8g»}à³À
And sample output, that should be:
0 > 419 Un~?z^Kr??p┬
1 > 262 Un~?z^Kr?j?↨
2 > 262 Un~?z^Kr?j?↨
…
15 > 46 Afpdm'Ynosa"
16 > 46 Afpdm'Ynosa"
17 > 42 Afpdm'Ynoia"
18 > 27 Jfpmm↓Vopoa"
…
33 > 9 Ielmo▼Wnole"
34 > 8 Ielmo▲Vopld"
35 > 8 Ielmo▲Vopld"
…
50 > 1 Hello World"
51 > 1 Hello World"
52 > 0 Hello World!
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a script which lists top n words (words with higher chi-squared value). However, instead of extracting fixed n number of words I want to extract all the words for which p-value is smaller than 0.05 i.e. rejects the null hypothesis.
Here is my code:
from sklearn.feature_selection import chi2
#vectorize top 100000 words
tfidf = TfidfVectorizer(max_features=100000,ngram_range=(1, 3))
X_tfidf = tfidf.fit_transform(df.review_text)
y = df.label
chi2score = chi2(X_tfidf, y)[0]
scores = list(zip(tfidf.get_feature_names(), chi2score))
chi2 = sorted(scores, key=lambda x:x[1])
allchi2 = list(zip(*chi2))
#lists top 20 words
allchi2 = allchi2[0][-20:]
So, In this case instead of listing top 20 words I want all the words that reject null hypothesis i.e. all the words in reviews that are dependent on the sentiment class(positive or negative)
| 1
| 1
| 0
| 0
| 0
| 0
|
I have the below data, stored as a Series (called data_counts), showing words in the Index and count values in the '0' column. Series contains 30k words however I use the below as an example :
Index | 0
the | 3425
American | 431
a | 213
I | 124
hilarious | 53
Mexican | 23
is | 2
I'd like to convert the words in the Index to lowercase and remove the stopwords using NLTK. I have seen some examples on SO achieving this using 'lambdas' (see below example for a dataframe), however I'd like to do this by running a DEF function instead (I am a Python newbie and this seems to me the easiest to understand).
df['Index'] = df['Index'].apply(lambda stop_remove: [word.lower() for word in stop_remove.split() if word not in stopwords])
Many thanks in advance
| 1
| 1
| 0
| 0
| 0
| 0
|
Currently, I have:
[re.sub(r'\W', '', i) for i in training_data.loc[:, 'Text']]
However with this the Hindi characters remain and all the spaces are removed. Any ideas?
| 1
| 1
| 0
| 0
| 0
| 0
|
Using the stackoverflow data dump, I am analyzing SO posts that are tagged with pytorch or keras. Specifically, I count how many times each co tag occurs (ie the tags that aren't pytorch in a pytorch tagged post).
I'd like to filter out the tags that are so common they've lost real meaning for my analysis (like the python tag).
I am looking into Tf-idf
TF reprensents the frequency of word for each document. However, each co-tag can only occur once for a given post (ie you can't tag your post 'html' five times). So the tf for most words would be 1/5, and others less (because post only has 4 tags for instance). Is it still possible to do Tf-Idf given this context?
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to search for a list of words, and so I have generated this code:
narrative = "Lasix 40 mg b.i.d., for three days along with potassium chloride slow release 20 mEq b.i.d. for three days, Motrin 400 mg q.8h"
meds_name_final_list = ["lasix", "potassium chloride slow release", ...]
def all_occurences(file, str):
initial = 0
while True:
initial = file.find(str, initial)
if initial == -1:
return
yield initial
initial += len(str)
offset = []
for item in meds_name_final_list:
number = list(all_occurences(narrative.lower(), item))
offset.append(number)
Desired output: list of the starting index/indices in the corpora of the word being a search for, e.g:
offset = [[1], [3, 10], [5, 50].....]
This code works perfectly for not so long words such as antibiotics, emergency ward, insulin etc. However, long words that are broken by new line spacing are not detected by the function above.
Desired word: potassium chloride slow release
Any suggestion to solve this?
| 1
| 1
| 0
| 0
| 0
| 0
|
I want to develop a chatbot using python
and im looking for NLP libraries that supports Arabic language
any suggestions?
Thanks
| 1
| 1
| 0
| 0
| 0
| 0
|
I am new in using spacy. I want to extract text values from sentences
training_sentence="I want to add a text field having name as new data"
OR
training_sentence=" add a field and label it as advance data"
So from the above sentence, I want to extract "new data" and "advance data"
For now, I am able to extract entities like "add", "field" and "label" using Custom NER.
But I am unable to extract text values as these value can be anything and I am not sure how to extract it using custom NER in spacy.
I have seen code snippet here of entity relations in the spacy documentation
But don't know to implement it as per my use case.
I can't share the code. Please assist how to tackle this problem
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a text file ;
... Above in Table 5 , we understood the relationship between pressure
and volume. It said ... and now we know ... . Table 9: represents the
graph of x and y. Table 6 was all about force and it implications on
objects....
Now I have written a code to extract the lines that have the word table in it;
with open file( <pathname + filename.txt>, 'r+') as f:
k = f.readlines()
for line in k:
if ' Table ' in line:
print(line)
Now I desire to print the output in a particular format;
(txt file name),(Table id),(Table content)
I do this by using the .split method of python;
x = 'Paper ID:' + filename.split('.')[0] + '|' + 'Table ID:' + line.split(':')[0] + '|' + 'Table Content:' + line.split(':')[1] + '|'
Now,as you can see, I can separate the table id and table content where there is a delimiter ( :) after some .
How do I do the same where there is no delimiter, i.e. for these lines;
Above in Table 5 , we understood the relationship between pressure
and volume. It said ... and now we know ..
Or
In table 7 we saw....
?
Could anyone please help?
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm working on implementaion of word2vec architecture from scratch. But my model doesn't converge.
class SkipGramBatcher:
def __init__(self, text):
self.text = text.results
def get_batches(self, batch_size):
n_batches = len(self.text)//batch_size
pairs = []
for idx in range(0, len(self.text)):
window_size = 5
idx_neighbors = self._get_neighbors(self.text, idx, window_size)
#one_hot_idx = self._to_one_hot(idx)
#idx_pairs = [(one_hot_idx, self._to_one_hot(idx_neighbor)) for idx_neighbor in idx_neighbors]
idx_pairs = [(idx,idx_neighbor) for idx_neighbor in idx_neighbors]
pairs.extend(idx_pairs)
for idx in range(0, len(pairs), batch_size):
X = [pair[0] for pair in pairs[idx:idx+batch_size]]
Y = [pair[1] for pair in pairs[idx:idx+batch_size]]
yield X,Y
def _get_neighbors(self, text, idx, window_size):
text_length = len(text)
start = max(idx-window_size,0)
end = min(idx+window_size+1,text_length)
neighbors_words = set(text[start:end])
return list(neighbors_words)
def _to_one_hot(self, indexes):
n_values = np.max(indexes) + 1
return np.eye(n_values)[indexes]
I use text8 corpus and have applied preprocessing techniques such as stemming, lemmatization and subsampling. Also I've excluded English stop words and limited vocabulary
vocab_size = 20000
text_len = len(text)
test_text_len = int(text_len*0.15)
preprocessed_text = PreprocessedText(text,vocab_size)
I use tensorflow for graph computation
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
n_embedding = 300
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((vocab_size, n_embedding), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs)
And apply negative sampling
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((vocab_size, n_embedding))) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros(vocab_size), name="softmax_bias") # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(
weights=softmax_w,
biases=softmax_b,
labels=labels,
inputs=embed,
num_sampled=n_sampled,
num_classes=vocab_size)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Finally I train my model
epochs = 10
batch_size = 64
avg_loss = []
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = skip_gram_batcher.get_batches(batch_size)
start = time.time()
for batch_x,batch_y in batches:
feed = {inputs: batch_x,
labels: np.array(batch_y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Batch loss: {:.4f}".format(loss/iteration),
"{:.4f} sec/batch".format((end-start)/100))
#loss = 0
avg_loss.append(loss/iteration)
start = time.time()
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
But after running this model my average batch loss doesn't decrease dramatically
I guess I should have made a mistake somewhere. Any help is apprciated
| 1
| 1
| 0
| 0
| 0
| 0
|
I went through chapter 7 of the NLTK book looking for a solution to this but so far it unclear to me.
<NN>* means 0 or more of Nouns
<NN.*>* as explained by the book means 0 or more nouns of any type
In NLTK are NN, NNS, NNP, NNPS exclusive of each other ? (I might be wrong in thinking that NN is an umbrella for the rest)
In that case does <NN.*>* mean that 0 or more of any of NN, NNS, NNP, NNPS which itself can be repeated 0 or more times(that outer *)? Or does it simply mean NN repeated 0 or more times which again repeats 0 or more times?
Or am I completely mistaken ?
| 1
| 1
| 0
| 0
| 0
| 0
|
I was wondering if I have a file with the following format
and I want to put each column in a list of list since I have more than one sentences:
so the output can look like this
[['Learning centre of The University of Lahore is established for professional development.'],
['These events, destroyed the bond between them.']]
and the same for the verb column. This is what I tried but it put everything in a single list not a list of lists
train_fn="/content/data/wiki/wiki1.train.oie"
dfE = pandas.read_csv(train_fn, sep= "\t",
header=0,
keep_default_na=False)
train_textEI = dfE['word'].tolist()
train_textEI = [' '.join(t.split()) for t in train_textEI]
train_textEI = np.array(train_textEI, dtype=object)[:, np.newaxis]
it outputs each word in a list
[['Learning'],['Center'],['of'],['The'],['University'],['of'],
['Lahore'],['is'],['established'],['for'],['the'],
['professional'],['development'],['.'],['These'],['events'],[','],
['destroyed'],['the'],['bond'],['between'],['them'],['.']]
| 1
| 1
| 0
| 0
| 0
| 0
|
I am working on a food application. It is an Android based application. The scenario is that there is a text box in that application for users to enter comments. Now I want to apply NLP (Semantic analysis) to these comments.
Please guide me that how could I pass the comments from Java to Python so that I can apply NLP to them.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am teaching myself python and have completed a rudimentary text summarizer. I'm nearly happy with the summarized text but want to polish the final product a bit more.
The code performs some standard text processing correctly (tokenization, remove stopwords, etc). The code then scores each sentence based on a weighted word frequency. I am using the heapq.nlargest() method to return the top 7 sentences which I feel does a good job based on my sample text.
The issue I'm facing is that the top 7 sentences are returned sorted from highest score -> lowest score. I understand the why this is happening. I would prefer to maintain the same sentence order as present in the original text. I've included the relevant bits of code and hope someone can guide me on a solution.
#remove all stopwords from text, build clean list of lower case words
clean_data = []
for word in tokens:
if str(word).lower() not in stoplist:
clean_data.append(word.lower())
#build dictionary of all words with frequency counts: {key:value = word:count}
word_frequencies = {}
for word in clean_data:
if word not in word_frequencies.keys():
word_frequencies[word] = 1
else:
word_frequencies[word] += 1
#print(word_frequencies.items())
#update the dictionary with a weighted frequency
maximum_frequency = max(word_frequencies.values())
#print(maximum_frequency)
for word in word_frequencies.keys():
word_frequencies[word] = (word_frequencies[word]/maximum_frequency)
#print(word_frequencies.items())
#iterate through each sentence and combine the weighted score of the underlying word
sentence_scores = {}
for sent in sentence_list:
for word in nltk.word_tokenize(sent.lower()):
if word in word_frequencies.keys():
if len(sent.split(' ')) < 30:
if sent not in sentence_scores.keys():
sentence_scores[sent] = word_frequencies[word]
else:
sentence_scores[sent] += word_frequencies[word]
#print(sentence_scores.items())
summary_sentences = heapq.nlargest(7, sentence_scores, key = sentence_scores.get)
summary = ' '.join(summary_sentences)
print(summary)
I'm testing using the following article: https://www.bbc.com/news/world-australia-45674716
Current output: "Australia bank inquiry: 'They didn't care who they hurt'
The inquiry has also heard testimony about corporate fraud, bribery rings at banks, actions to deceive regulators and reckless practices. A royal commission this year, the country's highest form of public inquiry, has exposed widespread wrongdoing in the industry. The royal commission came after a decade of scandalous behaviour in Australia's financial sector, the country's largest industry. "[The report] shines a very bright light on the poor behaviour of our financial sector," Treasurer Josh Frydenberg said. "When misconduct was revealed, it either went unpunished or the consequences did not meet the seriousness of what had been done," he said. The bank customers who lost everything
He also criticised what he called the inadequate actions of regulators for the banks and financial firms. It has also received more than 9,300 submissions of alleged misconduct by banks, financial advisers, pension funds and insurance companies."
As an example of the desired output: The third sentence above, "A royal commission this year, the country's highest form of public inquiry, has exposed widespread wrongdoing in the industry." actually comes before "Australia bank inquiry: They didnt care who they hurt" in the original article and I would like the output to maintain that sentence order.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am working on a multi-class text classification problem that has to provide the top 5 matches as opposed to just the best match. Therefore, “success” is defined as at least one of the top 5 matches being a correct classification. The algorithm must achieve at least a 95% success rate rate given how we have defined success above. We will of course train our model on a subset of the data and test on the remaining subset in order to validate the success of our model.
I have been using python’s scikit-learn’s predict_proba() function in order to select the top 5 matches and calculating the success rates below using a custom script which seems to run fine on my sample data, however, I noticed that the top 5 rate of success was less than that from the top 1 success rate using .predict() on my own custom data, which is mathematically impossible. This is because the top result will automatically be included in the top 5 results, the success rate must therefore be, at the very least, equal to the top 1 success rate if not more. In order to trouble shoot, I am comparing the top 1 success rate using predict() vs predict_proba() to make sure they are equal, and making sure that the success rate on the top 5 is greater than the top 1.
I have set up the script below to walk you through my logic to see if am making an incorrect assumption somewhere, or if there might a problem with my data that needs to be fixed. I am testing many classifiers and features, but just for the sake of simplicity you will see that I am just using count vectors as features and Logistic Regression as the classifier since I do not believe (to my knowledge, that this is part of the issue).
I would very much appreciate any insight that anyone may have to explain why I am finding this discrepancy.
Code:
# Set up environment
from sklearn.datasets import fetch_20newsgroups
from sklearn.linear_model import LogisticRegression
from sklearn import metrics, model_selection
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd
import numpy as np
#Read in data and do just a bit of preprocessing
# User's Location of git repository
Git_Location = 'C:/Documents'
# Set Data Location:
data = Git_Location + 'Data.csv'
# load the data
df = pd.read_csv(data,low_memory=False,thousands=',', encoding='latin-1')
df = df[['CODE','Description']] #select only these columns
df = df.rename(index=float, columns={"CODE": "label", "Description": "text"})
#Convert label to float so you don't need to encode for processing later on
df['label']=df['label'].str.replace('-', '',regex=True, case = False).str.strip()
df['label'].astype('float64', raise_on_error = True)
# drop any labels with count LT 500 to build a strong model and make our testing run faster -- we will get more data later
df = df.groupby('label').filter(lambda x : len(x)>500)
#split data into testing and training
train_x, valid_x, train_y, valid_y = model_selection.train_test_split(df.text, df.label,test_size=0.33, random_state=6,stratify=df.label)
# Other examples online use the following data types... we will do the same to remain consistent
train_y_npar = pd.Series(train_y).values
train_x_list = pd.Series.tolist(train_x)
valid_x_list = pd.Series.tolist(valid_x)
# cast validation datasets to dataframes to allow to merging later on
valid_x_df = pd.DataFrame(valid_x)
valid_y_df = pd.DataFrame(valid_y)
# Extracting features from data
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(train_x_list)
X_test_counts = count_vect.transform(valid_x_list)
# Define the model training and validation function
def TV_model(classifier, feature_vector_train, label, feature_vector_valid, valid_y, valid_x, is_neural_net=False):
# fit the training dataset on the classifier
classifier.fit(feature_vector_train, label)
# predict the top n labels on validation dataset
n = 5
#classifier.probability = True
probas = classifier.predict_proba(feature_vector_valid)
predictions = classifier.predict(feature_vector_valid)
#Identify the indexes of the top predictions
top_n_predictions = np.argsort(probas, axis = 1)[:,-n:]
#then find the associated SOC code for each prediction
top_class = classifier.classes_[top_n_predictions]
#cast to a new dataframe
top_class_df = pd.DataFrame(data=top_class)
#merge it up with the validation labels and descriptions
results = pd.merge(valid_y, valid_x, left_index=True, right_index=True)
results = pd.merge(results, top_class_df, left_index=True, right_index=True)
top5_conditions = [
(results.iloc[:,0] == results[0]),
(results.iloc[:,0] == results[1]),
(results.iloc[:,0] == results[2]),
(results.iloc[:,0] == results[3]),
(results.iloc[:,0] == results[4])]
top5_choices = [1, 1, 1, 1, 1]
#Top 1 Result
#top1_conditions = [(results['0_x'] == results[4])]
top1_conditions = [(results.iloc[:,0] == results[4])]
top1_choices = [1]
# Create the success columns
results['Top 5 Successes'] = np.select(top5_conditions, top5_choices, default=0)
results['Top 1 Successes'] = np.select(top1_conditions, top1_choices, default=0)
print("Are Top 5 Results greater than Top 1 Result?: ", (sum(results['Top 5 Successes'])/results.shape[0])>(metrics.accuracy_score(valid_y, predictions)))
print("Are Top 1 Results equal from predict() and predict_proba()?: ", (sum(results['Top 1 Successes'])/results.shape[0])==(metrics.accuracy_score(valid_y, predictions)))
print(" ")
print("Details: ")
print("Top 5 Accuracy Rate (predict_proba)= ", sum(results['Top 5 Successes'])/results.shape[0])
print("Top 1 Accuracy Rate (predict_proba)= ", sum(results['Top 1 Successes'])/results.shape[0])
print("Top 1 Accuracy Rate = (predict)=", metrics.accuracy_score(valid_y, predictions))
Example of Output using scikit learn’s built in twentynewsgroups dataset (this is my goal):
Note: I ran this exact code on another dataset and was able to produce these results which tells me that the function and it's dependencies work therefore the issue must be in the data somehow.
Are Top 5 Results greater than Top 1 Result?: True
Are Top 1 Results equal from predict() and predict_proba()?: True
Details:
Top 5 Accuracy Rate (predict_proba)= 0.9583112055231015
Top 1 Accuracy Rate (predict_proba)= 0.8069569835369091
Top 1 Accuracy Rate = (predict)= 0.8069569835369091
Now run on my data:
TV_model(LogisticRegression(), X_train_counts, train_y_npar, X_test_counts, valid_y_df, valid_x_df)
Output:
Are Top 5 Results greater than Top 1 Result?: False
Are Top 1 Results equal from predict() and predict_proba()?: False
Details:
Top 5 Accuracy Rate (predict_proba)= 0.6581632653061225
Top 1 Accuracy Rate (predict_proba)= 0.2010204081632653
Top 1 Accuracy Rate = (predict)= 0.8091187478734263
| 1
| 1
| 0
| 1
| 0
| 0
|
I'm using Gensim for loading the german .bin files from Fasttext in order to get vector representations for out-of-vocabulary words and phrases. So far it works fine and I achieve good results overall.
I am familiar with the KeyError :'all ngrams for word <word> absent from model'. Clearly the model doesn't provide a vector representation for every possible ngram combination.
But now I ran into a confusing (at least for me) issue.
I'll just give a quick example:
the model provides a representation for the phrase AuM Wert.
But when I want to get a representation for AuM Wert 50 Mio. Eur, I'll get the KeyError mentioned above. So the model obviously has a representation for the shorter phrase but not for the extended one.
It even returns a representation for AuM Wert 50 Mio.Eur (I just removed the space between 'Mio' and 'Eur')
I mean, the statement in the Error is simply not true, because the first example shows that it knows some of the ngrams. Can someone explain that to me? What don't I understand here? Is my understanding of ngrams wrong?
Heres the code:
from gensim.models.wrappers import FastText
model = FastText.load_fasttext_format('cc.de.300.bin')
model.wv['AuM Wert'] #returns a vector
model.wv['AuM Wert 50 Mio.EUR'] #returns a vector
model.wv['AuM Wert 50 Mio. EUR'] #triggers the error
Thanks in advance,
Amos
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm writing a python code that involves analyzing a dataset using natural language processing and validating a twitter update . My Random forest model is working perfectly.
dataset = pd.read_csv('bully.txt', delimiter ='\t', quoting = 3)
corpus = []
for i in range(0,8576):
tweet = re.sub('[^a-zA-Z]', ' ', dataset['tweet'][i])
tweet = tweet.lower()
tweet = tweet.split()
ps = PorterStemmer()
tweet = [ps.stem(word) for word in tweet if not word in
set(stopwords.words('english'))]
tweet = ' '.join(tweet)
corpus.append(tweet)
Converting dataset to vector
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(max_features = 10000)
X = cv.fit_transform(corpus).toarray()
y = dataset.iloc[:, 1].values
Split into Train and Test data
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state = 0)
Classifier model
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
This is my code to access the tweets:
for status in tweepy.Cursor(api.home_timeline).items(1):
print "tweet: "+ status.text.encode('utf-8')
corpus1 = []
update = status.text
update = re.sub('[^a-zA-Z]', ' ', update)
update = update.lower()
update = update.split()
ps = PorterStemmer()
update = [ps.stem(word) for word in update if not word in set(stopwords.words('english'))]
update = ' '.join(update)
corpus1.append(update)
When I try to classify the extracted twitter update using the model:
if classifier.predict(update):
print "bullying"
else:
print "not bullying"
I get this error:
ValueError: could not convert string to float: dude
How to feed a single tweet to the model?
My data set is this: https://drive.google.com/open?id=1BG3cFszsZjAJ_pcST2jRxDH0ukf411M-
| 1
| 1
| 0
| 1
| 0
| 0
|
basically just don't understand what this line means, did everything else already https://github.com/adityasarvaiya/Automatic_Question_Generation#environment-variables
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to develop a text classifier that will classify a piece of text as Private or Public. Take medical or health information as an example domain. A typical classifier that I can think of considers keywords as the main distinguisher, right? What about a scenario like bellow? What if both of the pieces of text contains similar keywords but carry a different meaning.
Following piece of text is revealing someone's private (health) situation (the patient has cancer):
I've been to two clinics and my pcp. I've had an ultrasound only to be told it's a resolving cyst or a hematoma, but it's getting larger and starting to make my leg ache. The PCP said it can't be a cyst because it started out way too big and I swear I have NEVER injured my leg, not even a bump. I am now scared and afraid of cancer. I noticed a slightly uncomfortable sensation only when squatting down about 9 months ago. 3 months ago I went to squat down to put away laundry and it kinda hurt. The pain prompted me to examine my leg and that is when I noticed a lump at the bottom of my calf muscle and flexing only made it more noticeable. Eventually after four clinic visits, an ultrasound and one pcp the result seems to be positive and the mass is getting larger.
[Private] (Correct Classification)
Following piece of text is a comment from a doctor which is definitely not revealing is health situation. It introduces the weaknesses of a typical classifier model:
Don’t be scared and do not assume anything bad as cancer. I have gone through several cases in my clinic and it seems familiar to me. As you mentioned it might be a cyst or a hematoma and it's getting larger, it must need some additional diagnosis such as biopsy. Having an ache in that area or the size of the lump does not really tells anything bad. You should visit specialized clinics few more times and go under some specific tests such as biopsy, CT scan, pcp and ultrasound before that lump become more larger.
[Private] (Which is the Wrong Classification. It should be [Public])
The second paragraph was classified as private by all of my current classifiers, for obvious reason. Similar keywords, valid word sequences, the presence of subjects seemed to make the classifier very confused. Even, both of the content contains subjects like I, You (Noun, Pronouns) etc. I thought about from Word2Vec to Doc2Vec, from Inferring meaning to semantic embeddings but can't think about a solution approach that best suits this problem.
Any idea, which way I should handle the classification problem? Thanks in advance.
Progress so Far:
The data, I have collected from a public source where patients/victims usually post their own situation and doctors/well-wishers reply to those. I assumed while crawling is that - posts belongs to my private class and comments belongs to public class. All to gether I started with 5K+5K posts/comments and got around 60% with a naive bayes classifier without any major preprocessing. I will try Neural Network soon. But before feeding into any classifier, I just want to know how I can preprocess better to put reasonable weights to either class for better distinction.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to calculate the document similarity (nearest neighbor) for two arbitrary documents using word embeddings based on Google's BERT.
In order to obtain word embeddings from Bert, I use bert-as-a-service.
Document similarity should be based on Word-Mover-Distance with the python wmd-relax package.
My previous tries are orientated along this tutorial from the wmd-relax github repo: https://github.com/src-d/wmd-relax/blob/master/spacy_example.py
import numpy as np
import spacy
import requests
from wmd import WMD
from collections import Counter
from bert_serving.client import BertClient
# Wikipedia titles
titles = ["Germany", "Spain", "Google", "Apple"]
# Standard model from spacy
nlp = spacy.load("en_vectors_web_lg")
# Fetch wiki articles and prepare as specy document
documents_spacy = {}
print('Create spacy document')
for title in titles:
print("... fetching", title)
pages = requests.get(
"https://en.wikipedia.org/w/api.php?action=query&format=json&titles=%s"
"&prop=extracts&explaintext" % title).json()["query"]["pages"]
text = nlp(next(iter(pages.values()))["extract"])
tokens = [t for t in text if t.is_alpha and not t.is_stop]
words = Counter(t.text for t in tokens)
orths = {t.text: t.orth for t in tokens}
sorted_words = sorted(words)
documents_spacy[title] = (title, [orths[t] for t in sorted_words],
np.array([words[t] for t in sorted_words],
dtype=np.float32))
# This is the original embedding class with the model from spacy
class SpacyEmbeddings(object):
def __getitem__(self, item):
return nlp.vocab[item].vector
# Bert Embeddings using bert-as-as-service
class BertEmbeddings:
def __init__(self, ip='localhost', port=5555, port_out=5556):
self.server = BertClient(ip=ip, port=port, port_out=port_out)
def __getitem__(self, item):
text = nlp.vocab[item].text
emb = self.server.encode([text])
return emb
# Get the nearest neighbor of one of the atricles
calc_bert = WMD(BertEmbeddings(), documents_spacy)
calc_bert.nearest_neighbors(titles[0])
Unfortunately, the calculations fails with a dimensions mismatch in the distance calculation:
ValueError: shapes (812,1,768) and (768,1,812) not aligned: 768 (dim 2) != 1 (dim 1)
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to write a hate speech detection code but i am stuck with a problem. I am getting the error SklearnClassifier' object has no attribute 'fit' I am looking from source which used python 2 but I am using python 3 maybe the problem occurs because of this but I couldn't solve this. How can I fix this problem ?
training_set = nltk.classify.apply_features(extract_features, train_tweets)
classifier = nltk.NaiveBayesClassifier.train(training_set)
from sklearn.ensemble import AdaBoostClassifier
from nltk.classify.scikitlearn import SklearnClassifier
# SKlearn Wrapper
classifier = SklearnClassifier(LinearSVC())
classifier.fit(X_train, X_test)
predicted_labels = [classifier.classify(extract_features(tweet[0])) for tweet in test_tweets]
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a JSON file ...
"1": {"address": "1",
"ctag": "Ne",
"feats": "_",
"head": "6",
"lemma": "Ghani",
"rel": "SBJ",
"tag": "Ne",
"word": "Ghani"},
"2": {"address": "2",
"ctag": "AJ",
"feats": "_",
"head": "1",
"lemma": "born",
"rel": "NPOSTMOD",
"tag": "AJ",
"word": "born"},
"3": {"address": "3",
"ctag": "P",
"feats": "_",
"head": "6",
"lemma": "in",
"rel": "ADV",
"tag": "P",
"word": "in"},
"4": {"address": "4",
"ctag": "N",
"feats": "_",
"head": "3",
"lemma": "Kabul",
"rel": "POSDEP",
"tag": "N",
"word": "Kabul"},
"5": {"address": "5",
"ctag": "PUNC",
"feats": "_",
"head": "6",
"lemma": ".",
"rel": "PUNC",
"tag": "PUNC",
"word": "."},
I read the JSON file and stored in a dict.
import json
# read file
with open('../data/data.txt', 'r') as JSON_file:
obj = json.load(JSON_file)
d = dict(obj) # stored it in a dict
I extracted two list from this dict that each list contains relation from text and entities as follow:
entities(d) = ['Ghani', 'Kabul', 'Afghanistan'....]
relation(d) = ['president', 'capital', 'located'...]
Now I want to check in each sentence of dict d, if any element of entities(d) and relation(d) exist, it should be stored to another list.
What I did?
to_match = set(relation(d) + entities(d))
entities_and_relation = [[j for j in to_match if j in i]
for i in ''.join(d).split('.')[:-1]]
print(entities_and_relation)
But this return me an empty list. Can you tell me what is wrong here.
OUTPUT should be like:
[Ghani, president, Afghanistan] ...
| 1
| 1
| 0
| 0
| 0
| 0
|
These are the possible cases of text I have,
4 bedrooms 2 bathrooms 3 carparks
3 bedroom house
Bedrooms 2,
beds 5,
Bedrooms 1,
2 bedrooms, 1 bathroom,
Four bedrooms home, double garage
Four bedrooms home
Three double bedrooms home, garage
Three bedrooms home,
2 bedroom home unit with single carport.
Garage car spaces: 2, Bathrooms: 4, Bedrooms: 7,
I am trying to get the number of bedrooms out of this text. I managed to write the below ones,
def get_bedroom_num(s):
if ':' in s:
out = re.search(r'(?:Bedrooms:|Bedroom:)(.*)', s,re.I).group(1)
elif ',' in s:
out = re.search(r'(?:bedrooms|bedroom|beds)(.*)', s,re.I).group(1)
else:
out = re.search(r'(.*)(?:bedrooms|bedroom).*', s,re.I).group(1)
out = filter(lambda x: x.isdigit(), out)
return out
But it is not capturing all the possible cases. The key here is the word 'bedroom', text will always have the text bedroom either in the front or back of the number. Any better approach to handle this? If not through regex, may be Named Entity Recognition in NLP?
Thanks.
EDIT : -
For case 7 to 10, I managed to convert the word numbers to integer using the below function,
#Convert word to number
def text2int (textnum, numwords={}):
if not numwords:
units = [
"zero", "one", "two", "three", "four", "five", "six", "seven", "eight",
"nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
"sixteen", "seventeen", "eighteen", "nineteen",
]
tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]
scales = ["hundred", "thousand", "million", "billion", "trillion"]
numwords["and"] = (1, 0)
for idx, word in enumerate(units): numwords[word] = (1, idx)
for idx, word in enumerate(tens): numwords[word] = (1, idx * 10)
for idx, word in enumerate(scales): numwords[word] = (10 ** (idx * 3 or 2), 0)
ordinal_words = {'first':1, 'second':2, 'third':3, 'fifth':5, 'eighth':8, 'ninth':9, 'twelfth':12}
ordinal_endings = [('ieth', 'y'), ('th', '')]
textnum = textnum.replace('-', ' ')
current = result = 0
curstring = ""
onnumber = False
for word in textnum.split():
if word in ordinal_words:
scale, increment = (1, ordinal_words[word])
current = current * scale + increment
if scale > 100:
result += current
current = 0
onnumber = True
else:
for ending, replacement in ordinal_endings:
if word.endswith(ending):
word = "%s%s" % (word[:-len(ending)], replacement)
if word not in numwords:
if onnumber:
curstring += repr(result + current) + " "
curstring += word + " "
result = current = 0
onnumber = False
else:
scale, increment = numwords[word]
current = current * scale + increment
if scale > 100:
result += current
current = 0
onnumber = True
if onnumber:
curstring += repr(result + current)
return curstring
so, 'Four bedrooms home, double garage' can be converted to '4 bedrooms home, double garage' with this function before doing any regex to get the number.
| 1
| 1
| 0
| 0
| 0
| 0
|
I wrote a samll program to test how tf.control_dependencies work, result seems to be confused to me however. My test code is below:
import tensorflow as tf
x = tf.Variable(0.0)
y = None
for i in range(5):
assign_op = tf.assign(x, i)
with tf.control_dependencies([assign_op]):
y = tf.identity(x)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(x))
print(sess.run(y))
when i run the program, value of x and y are 0.0 and 4.0 respectively. Since value of y gets the right answer, assign_op in tf.control_dependencies works in this example. Then as the op works correctly, why does't the value of x equal to 4.0?
Please correct me if I have any misunderstanding of how tf.control_dependencies really work.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am working on a question-answering task. I am planning to use dependency parsing to find candidate answers from a passage to a query. However, I am not sure how I can find similarity between dependency trees of the query and the sentences from the passage, respectively. Below is the reproducible code.
import spacy
from spacy import displacy
nlp = spacy.load('en_core_web_sm')
doc1 = nlp('Wall Street Journal just published an interesting piece on crypto currencies')
doc2 = nlp('What did Wall Street Journal published')
displacy.render(doc1, style='dep', jupyter=True, options={'distance': 90})
displacy.render(doc2, style='dep', jupyter=True, options={'distance': 90})
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to get the text with its punctuation as it is important to consider the latter in my doc2vec model. However, the wikicorpus retrieve only the text. After searching the web I found these pages:
Page from gensim github issues section. It was a question by someone where the answer was to subclass WikiCorpus (answered by Piskvorky). Luckily, in the same page, there was a code representing the suggested 'subclass' solution. The code was provided by Rhazegh. (link)
Page from stackoverflow with a title: "Disabling Gensim's removal of punctuation etc. when parsing a wiki corpus". However, no clear answer was provided and was treated in the context of spaCy. (link)
I decided to use the code provided in page 1. My current code (mywikicorpus.py):
import sys
import os
sys.path.append('C:\\Users\\Ghaliamus\\Anaconda2\\envs\\wiki\\Lib\\site-packages\\gensim\\corpora\')
from wikicorpus import *
def tokenize(content):
# override original method in wikicorpus.py
return [token.encode('utf8') for token in utils.tokenize(content, lower=True, errors='ignore')
if len(token) <= 15 and not token.startswith('_')]
def process_article(args):
# override original method in wikicorpus.py
text, lemmatize, title, pageid = args
text = filter_wiki(text)
if lemmatize:
result = utils.lemmatize(text)
else:
result = tokenize(text)
return result, title, pageid
class MyWikiCorpus(WikiCorpus):
def __init__(self, fname, processes=None, lemmatize=utils.has_pattern(), dictionary=None, filter_namespaces=('0',)):
WikiCorpus.__init__(self, fname, processes, lemmatize, dictionary, filter_namespaces)
def get_texts(self):
articles, articles_all = 0, 0
positions, positions_all = 0, 0
texts = ((text, self.lemmatize, title, pageid) for title, text, pageid in extract_pages(bz2.BZ2File(self.fname), self.filter_namespaces))
pool = multiprocessing.Pool(self.processes)
for group in utils.chunkize(texts, chunksize=10 * self.processes, maxsize=1):
for tokens, title, pageid in pool.imap(process_article, group): # chunksize=10):
articles_all += 1
positions_all += len(tokens)
if len(tokens) < ARTICLE_MIN_WORDS or any(title.startswith(ignore + ':') for ignore in IGNORED_NAMESPACES):
continue
articles += 1
positions += len(tokens)
if self.metadata:
yield (tokens, (pageid, title))
else:
yield tokens
pool.terminate()
logger.info(
"finished iterating over Wikipedia corpus of %i documents with %i positions"
" (total %i articles, %i positions before pruning articles shorter than %i words)",
articles, positions, articles_all, positions_all, ARTICLE_MIN_WORDS)
self.length = articles # cache corpus length
And then, I used another code by Pan Yang (link). This code initiates WikiCorpus object and retrieve the text. The only change in my current code is initiating MyWikiCorpus instead of WikiCorpus. The code (process_wiki.py):
from __future__ import print_function
import logging
import os.path
import six
import sys
import mywikicorpus as myModule
if __name__ == '__main__':
program = os.path.basename(sys.argv[0])
logger = logging.getLogger(program)
logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s')
logging.root.setLevel(level=logging.INFO)
logger.info("running %s" % ' '.join(sys.argv))
# check and process input arguments
if len(sys.argv) != 3:
print("Using: python process_wiki.py enwiki-20180601-pages- articles.xml.bz2 wiki.en.text")
sys.exit(1)
inp, outp = sys.argv[1:3]
space = " "
i = 0
output = open(outp, 'w')
wiki = myModule.MyWikiCorpus(inp, lemmatize=False, dictionary={})
for text in wiki.get_texts():
if six.PY3:
output.write(bytes(' '.join(text), 'utf-8').decode('utf-8') + '
')
else:
output.write(space.join(text) + "
")
i = i + 1
if (i % 10000 == 0):
logger.info("Saved " + str(i) + " articles")
output.close()
logger.info("Finished Saved " + str(i) + " articles")
Through command line I ran the process_wiki.py code. I got text of the corpus with the last line in the command prompt:
(2018-06-05 09:18:16,480: INFO: Finished Saved 4526191 articles)
When I read the file in python, I checked the first article and it was without punctuation. Example:
(anarchism is a political philosophy that advocates self governed societies based on voluntary institutions these are often described as stateless societies although several authors have defined them more specifically as institutions based on non hierarchical or free associations anarchism holds the state to be undesirable unnecessary and harmful while opposition to the state is central anarchism specifically entails opposing authority or hierarchical)
My two relevant questions, and I wish you can help me with them, please:
is there any thing wrong in my reported pipeline above?
regardless such pipeline, if I opened the gensim wikicorpus python code (wikicorpus.py) and wanted to edit it, what is the line that I should add it or remove it or update it (with what if possible) to get the same results but with punctuation?
Many thanks for your time reading this long post.
Best wishes,
Ghaliamus
| 1
| 1
| 0
| 0
| 0
| 0
|
I have data looks:
[[('Natural', 'JJ', 'B'), ('language', 'NN', 'I'), ('processing', 'NN', 'I'), ('is', 'VBZ', 'O'), ('one', 'CD', 'O'), ('of', 'IN', 'O'), ('the', 'DT', 'O'), ('important', 'JJ', 'O'), ('branch', 'NN', 'O'), ('of', 'IN', 'O'), ('CS', 'NNP', 'B'), ('.', '.', 'I')] ... ...]]
I want to group the consecutive words which have tags B or I and ignore which have 'O' tags.
The output keywords should look like:
Natural language processing,
CS,
Machine learning,
deep learning
I did code as follows:
data=[[('Natural', 'JJ', 'B'), ('language', 'NN', 'I'), ('processing', 'NN', 'I'), ('is', 'VBZ', 'O'), ('one', 'CD', 'O'), ('of', 'IN', 'O'), ('the', 'DT', 'O'), ('important', 'JJ', 'O'), ('branch', 'NN', 'O'), ('of', 'IN', 'O'), ('CS', 'NNP', 'B'), ('.', '.', 'I')],
[('Machine', 'NN', 'B'), ('learning', 'NN', 'I'), (',', ',', 'I'), ('deep', 'JJ', 'I'), ('learning', 'NN', 'I'), ('are', 'VBP', 'O'), ('heavily', 'RB', 'O'), ('used', 'VBN', 'O'), ('in', 'IN', 'O'), ('natural', 'JJ', 'B'), ('language', 'NN', 'I'), ('processing', 'NN', 'I'), ('.', '.', 'I')],
[('It', 'PRP', 'O'), ('is', 'VBZ', 'O'), ('too', 'RB', 'O'), ('cool', 'JJ', 'O'), ('.', '.', 'O')]]
Key_words = []
index = 0
for sen in data:
for i in range(len(sen)):
while index < len(sen):
I do not know what to do next. Could anyone please help me?.
Thanks
| 1
| 1
| 0
| 0
| 0
| 0
|
The token <EOS> is ubiquitously used in NLP. As I haven't used it, the implementation of conditioning on it is a bit unclear to me. Could anyone provide a snippet of Python code. (If statements may be used.)
Example 1: There is a sequence of words with some <EOS> tokens interpolated. This sequence goes through a RNN to get encoded. Whenever encounters <EOS>, the timestep outputs its state.
Example 2: a machine translation task. When meets <EOS>, it stops generating tokens.
| 1
| 1
| 0
| 0
| 0
| 0
|
I’ve tried reimplementing a simple GRU language model using just a GRU and a linear layer (the full code is also at https://www.kaggle.com/alvations/gru-language-model-not-training-properly):
class Generator(nn.Module):
def __init__(self, vocab_size, embedding_size, hidden_size, num_layers):
super(Generator, self).__init__()
# Initialize the embedding layer with the
# - size of input (i.e. no. of words in input vocab)
# - no. of hidden nodes in the embedding layer
self.embedding = nn.Embedding(vocab_size, embedding_size, padding_idx=0)
# Initialize the GRU with the
# - size of the input (i.e. embedding layer)
# - size of the hidden layer
self.gru = nn.GRU(embedding_size, hidden_size, num_layers)
# Initialize the "classifier" layer to map the RNN outputs
# to the vocabulary. Remember we need to -1 because the
# vectorized sentence we left out one token for both x and y:
# - size of hidden_size of the GRU output.
# - size of vocabulary
self.classifier = nn.Linear(hidden_size, vocab_size)
def forward(self, inputs, use_softmax=False, hidden=None):
# Look up for the embeddings for the input word indices.
embedded = self.embedding(inputs)
# Put the embedded inputs into the GRU.
output, hidden = self.gru(embedded, hidden)
# Matrix manipulation magic.
batch_size, sequence_len, hidden_size = output.shape
# Technically, linear layer takes a 2-D matrix as input, so more manipulation...
output = output.contiguous().view(batch_size * sequence_len, hidden_size)
# Put it through the classifier
# And reshape it to [batch_size x sequence_len x vocab_size]
output = self.classifier(output).view(batch_size, sequence_len, -1)
return (F.softmax(output,dim=2), hidden) if use_softmax else (output, hidden)
def generate(self, max_len, temperature=1.0):
pass
And the training routine:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Set the hidden_size of the GRU
embed_size = 100
hidden_size = 100
num_layers = 1
# Setup the data.
batch_size=50
kilgariff_data = KilgariffDataset(tokenized_text)
dataloader = DataLoader(dataset=kilgariff_data, batch_size=batch_size, shuffle=True)
criterion = nn.CrossEntropyLoss(ignore_index=kilgariff_data.vocab.token2id['<pad>'], size_average=True)
model = Generator(len(kilgariff_data.vocab), embed_size, hidden_size, num_layers).to(device)
learning_rate = 0.003
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
#model = nn.DataParallel(model)
losses = []
def train(num_epochs, dataloader, model, criterion, optimizer):
plt.ion()
for _e in range(num_epochs):
for batch in tqdm(dataloader):
x = batch['x'].to(device)
x_len = batch['x_len'].to(device)
y = batch['y'].to(device)
# Zero gradient.
optimizer.zero_grad()
# Feed forward.
output, hidden = model(x, use_softmax=True)
# Compute loss:
# Shape of the `output` is [batch_size x sequence_len x vocab_size]
# Shape of `y` is [batch_size x sequence_len]
# CrossEntropyLoss expects `output` to be [batch_size x vocab_size x sequence_len]
_, prediction = torch.max(output, dim=2)
loss = criterion(output.permute(0, 2, 1), y)
loss.backward()
optimizer.step()
losses.append(loss.float().data)
clear_output(wait=True)
plt.plot(losses)
plt.pause(0.05)
train(50, dataloader, model, criterion, optimizer)
#learning_rate = 0.05
#optimizer = optim.SGD(model.parameters(), lr=learning_rate)
#train(4, dataloader, model, criterion, optimizer)
But when the model is predicting, we see that it’s only predicting “the” and comma “,”.
Anyone spot something wrong with my code? Or hyperparameters?
The full code:
# coding: utf-8
# In[1]:
# IPython candies...
from IPython.display import Image
from IPython.core.display import HTML
from IPython.display import clear_output
# In[2]:
import numpy as np
from tqdm import tqdm
import pandas as pd
from gensim.corpora import Dictionary
import torch
from torch import nn, optim, tensor, autograd
from torch.nn import functional as F
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# In[3]:
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("darkgrid")
sns.set(rc={'figure.figsize':(12, 8)})
torch.manual_seed(42)
# In[4]:
try: # Use the default NLTK tokenizer.
from nltk import word_tokenize, sent_tokenize
# Testing whether it works.
# Sometimes it doesn't work on some machines because of setup issues.
word_tokenize(sent_tokenize("This is a foobar sentence. Yes it is.")[0])
except: # Use a naive sentence tokenizer and toktok.
import re
from nltk.tokenize import ToktokTokenizer
# See https://stackoverflow.com/a/25736515/610569
sent_tokenize = lambda x: re.split(r'(?<=[^A-Z].[.?]) +(?=[A-Z])', x)
# Use the toktok tokenizer that requires no dependencies.
toktok = ToktokTokenizer()
word_tokenize = word_tokenize = toktok.tokenize
# In[5]:
import os
import requests
import io #codecs
# Text version of https://kilgarriff.co.uk/Publications/2005-K-lineer.pdf
if os.path.isfile('language-never-random.txt'):
with io.open('language-never-random.txt', encoding='utf8') as fin:
text = fin.read()
else:
url = "https://gist.githubusercontent.com/alvations/53b01e4076573fea47c6057120bb017a/raw/b01ff96a5f76848450e648f35da6497ca9454e4a/language-never-random.txt"
text = requests.get(url).content.decode('utf8')
with io.open('language-never-random.txt', 'w', encoding='utf8') as fout:
fout.write(text)
# In[6]:
# Tokenize the text.
tokenized_text = [list(map(str.lower, word_tokenize(sent)))
for sent in sent_tokenize(text)]
# In[7]:
class KilgariffDataset(nn.Module):
def __init__(self, texts):
self.texts = texts
# Initialize the vocab
special_tokens = {'<pad>': 0, '<unk>':1, '<s>':2, '</s>':3}
self.vocab = Dictionary(texts)
self.vocab.patch_with_special_tokens(special_tokens)
# Keep track of the vocab size.
self.vocab_size = len(self.vocab)
# Keep track of how many data points.
self._len = len(texts)
# Find the longest text in the data.
self.max_len = max(len(txt) for txt in texts)
def __getitem__(self, index):
vectorized_sent = self.vectorize(self.texts[index])
x_len = len(vectorized_sent)
# To pad the sentence:
# Pad left = 0; Pad right = max_len - len of sent.
pad_dim = (0, self.max_len - len(vectorized_sent))
vectorized_sent = F.pad(vectorized_sent, pad_dim, 'constant')
return {'x':vectorized_sent[:-1],
'y':vectorized_sent[1:],
'x_len':x_len}
def __len__(self):
return self._len
def vectorize(self, tokens, start_idx=2, end_idx=3):
"""
:param tokens: Tokens that should be vectorized.
:type tokens: list(str)
"""
# See https://radimrehurek.com/gensim/corpora/dictionary.html#gensim.corpora.dictionary.Dictionary.doc2idx
# Lets just cast list of indices into torch tensors directly =)
vectorized_sent = [start_idx] + self.vocab.doc2idx(tokens) + [end_idx]
return torch.tensor(vectorized_sent)
def unvectorize(self, indices):
"""
:param indices: Converts the indices back to tokens.
:type tokens: list(int)
"""
return [self.vocab[i] for i in indices]
# In[8]:
kilgariff_data = KilgariffDataset(tokenized_text)
len(kilgariff_data.vocab)
# In[9]:
batch_size = 10
dataloader = DataLoader(dataset=kilgariff_data, batch_size=batch_size, shuffle=True)
for data_dict in dataloader:
# Sort indices of data in batch by lengths.
sorted_indices = np.array(data_dict['x_len']).argsort()[::-1].tolist()
data_batch = {name:_tensor[sorted_indices]
for name, _tensor in data_dict.items()}
print(data_batch)
break
# In[97]:
class Generator(nn.Module):
def __init__(self, vocab_size, embedding_size, hidden_size, num_layers):
super(Generator, self).__init__()
# Initialize the embedding layer with the
# - size of input (i.e. no. of words in input vocab)
# - no. of hidden nodes in the embedding layer
self.embedding = nn.Embedding(vocab_size, embedding_size, padding_idx=0)
# Initialize the GRU with the
# - size of the input (i.e. embedding layer)
# - size of the hidden layer
self.gru = nn.GRU(embedding_size, hidden_size, num_layers)
# Initialize the "classifier" layer to map the RNN outputs
# to the vocabulary. Remember we need to -1 because the
# vectorized sentence we left out one token for both x and y:
# - size of hidden_size of the GRU output.
# - size of vocabulary
self.classifier = nn.Linear(hidden_size, vocab_size)
def forward(self, inputs, use_softmax=False, hidden=None):
# Look up for the embeddings for the input word indices.
embedded = self.embedding(inputs)
# Put the embedded inputs into the GRU.
output, hidden = self.gru(embedded, hidden)
# Matrix manipulation magic.
batch_size, sequence_len, hidden_size = output.shape
# Technically, linear layer takes a 2-D matrix as input, so more manipulation...
output = output.contiguous().view(batch_size * sequence_len, hidden_size)
# Put it through the classifier
# And reshape it to [batch_size x sequence_len x vocab_size]
output = self.classifier(output).view(batch_size, sequence_len, -1)
return (F.softmax(output,dim=2), hidden) if use_softmax else (output, hidden)
def generate(self, max_len, temperature=1.0):
pass
# In[98]:
# Set the hidden_size of the GRU
embed_size = 12
hidden_size = 10
num_layers = 4
_encoder = Generator(len(kilgariff_data.vocab), embed_size, hidden_size, num_layers)
# In[99]:
# Take a batch.
_batch = next(iter(dataloader))
_inputs, _lengths = _batch['x'], _batch['x_len']
_targets = _batch['y']
max(_lengths)
# In[100]:
_output, _hidden = _encoder(_inputs)
print('Output sizes:\t', _output.shape)
print('Input sizes:\t', batch_size, kilgariff_data.max_len -1, len(kilgariff_data.vocab))
print('Target sizes:\t', _targets.shape)
# In[101]:
_, predicted_indices = torch.max(_output, dim=2)
print(predicted_indices.shape)
predicted_indices
# In[103]:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Set the hidden_size of the GRU
embed_size = 100
hidden_size = 100
num_layers = 1
# Setup the data.
batch_size=50
kilgariff_data = KilgariffDataset(tokenized_text)
dataloader = DataLoader(dataset=kilgariff_data, batch_size=batch_size, shuffle=True)
criterion = nn.CrossEntropyLoss(ignore_index=kilgariff_data.vocab.token2id['<pad>'], size_average=True)
model = Generator(len(kilgariff_data.vocab), embed_size, hidden_size, num_layers).to(device)
learning_rate = 0.003
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
#model = nn.DataParallel(model)
losses = []
def train(num_epochs, dataloader, model, criterion, optimizer):
plt.ion()
for _e in range(num_epochs):
for batch in tqdm(dataloader):
x = batch['x'].to(device)
x_len = batch['x_len'].to(device)
y = batch['y'].to(device)
# Zero gradient.
optimizer.zero_grad()
# Feed forward.
output, hidden = model(x, use_softmax=True)
# Compute loss:
# Shape of the `output` is [batch_size x sequence_len x vocab_size]
# Shape of `y` is [batch_size x sequence_len]
# CrossEntropyLoss expects `output` to be [batch_size x vocab_size x sequence_len]
_, prediction = torch.max(output, dim=2)
loss = criterion(output.permute(0, 2, 1), y)
loss.backward()
optimizer.step()
losses.append(loss.float().data)
clear_output(wait=True)
plt.plot(losses)
plt.pause(0.05)
train(50, dataloader, model, criterion, optimizer)
#learning_rate = 0.05
#optimizer = optim.SGD(model.parameters(), lr=learning_rate)
#train(4, dataloader, model, criterion, optimizer)
# In[ ]:
list(kilgariff_data.vocab.items())
# In[105]:
start_token = '<s>'
hidden_state = None
max_len = 20
temperature=0.8
i = 0
while start_token not in ['</s>', '<pad>'] and i < max_len:
i += 1
start_state = torch.tensor(kilgariff_data.vocab.token2id[start_token]).unsqueeze(0).unsqueeze(0).to(device)
model.embedding(start_state)
output, hidden_state = model.gru(model.embedding(start_state), hidden_state)
batch_size, sequence_len, hidden_size = output.shape
output = output.contiguous().view(batch_size * sequence_len, hidden_size)
output = model.classifier(output).view(batch_size, sequence_len, -1)
_, prediction = torch.max(F.softmax(output, dim=2), dim=2)
start_token = kilgariff_data.vocab[int(prediction.squeeze(0).squeeze(0))]
print(start_token, end=' ')
| 1
| 1
| 0
| 0
| 0
| 0
|
I am using SpaCy to get named entities. However, it always mis-tags new line symbols as named entities.
Below is the input text.
mytxt = """<?xml version="1.0"?>
<nitf>
<head>
<title>KNOW YOUR ROLE ON SUPER BOWL LIII.</title>
</head>
<body>
<body.head>
<hedline>
<hl1>KNOW YOUR ROLE ON SUPER BOWL LIII.</hl1>
</hedline>
<distributor>Gale Group</distributor>
</body.head>
<body.content>
<p>Montpelier: <org>Department of Motor Vehicles</org>, has issued the following
news release:</p>
<p>Be a designated sober driver, help save lives. Remember these tips
on game night:</p>
<p>Know your State's laws: refusing to take a breath test in many
jurisdictions could result in arrest, loss of your driver's
license, and impoundment of your vehicle. Not to mention the
embarrassment in explaining your situation to family, friends, and
employers.</p>
<p>In case of any query regarding this article or other content needs
please contact: <a href="mailto:editorial@plusmediasolutions.com">editorial@plusmediasolutions.com</a></p>
</body.content>
</body>
</nitf>
"""
Below is my code:
CONTENT_XML_TAG = ('p', 'ul', 'h3', 'h1', 'h2', 'ol')
soup = BeautifulSoup(mytxt, 'xml')
spacy_model = spacy.load('en_core_web_sm')
content = "
".join([p.get_text() for p in soup.find('body.content').findAll(CONTENT_XML_TAG)])
print(content)
section_spacy = spacy_model(content)
tokenized_sentences = []
for sent in section_spacy.sents:
tokenized_sentences.append(sent)
for s in tokenized_sentences:
labels = [(ent.text, ent.label_) for ent in s.ents]
print(Counter(labels))
The print out:
Counter({('
', 'GPE'): 2, ('Department of Motor Vehicles', 'ORG'): 1})
Counter({('
', 'GPE'): 1})
Counter({('
', 'GPE'): 2, ('State', 'ORG'): 1})
Counter({('
', 'GPE'): 3})
Counter({('
', 'GPE'): 1})
I cannot believe SpaCy has such kind of misclassification. Did I miss anything?
| 1
| 1
| 0
| 0
| 0
| 0
|
I have downloaded the popular 20 newsgroups data set which has 20 classes, but I want to re-classify the whole documents into six classes since some classes are very related.
So for example, all computer related docs should have a new class say 1. As it is now, the docs are assigned from 1-20 reflecting the classes. The computer related classes are 2,3,4,5,and 6.
I want say, 1 to be the class of all the computer related(2,3,4,5,6). I tested it by using 20_newsgroups.target[0], and it gave me 7. Meaning the class of the doc at 0 is 7.
I re-assigned it to a new class using 20_newsgroups.target[0]='1' and when I try 20_newsgroups.target[0], it shows 1 which is OK.
But how can I do this for all the documents that currently have (2,3,4,5,6) as their class? I can easily extend it to other classes if I understand that one.I also try for d in 20_newsgroups:
if 20_newsgroups.target in [2,3,4,5,6], 20_newsgroups.target='1'.
But this is showing an error that "the truth value of an array with more than one element is unambiguous, use a.any() or a.all".
| 1
| 1
| 0
| 0
| 0
| 0
|
I want to remove all the proper nouns from a large corpus. Due to the large volume, I take a shortcut and remove all words starting with capital letters. For the first word of each sentence, I also want to check if it is a proper noun. How can I do this without using a tagger. One option is to do a screening using a list of common proper nouns. Is there a better way and where can I get such a list? Thanks.
I tried NLTK pos_tag and Standford NER. Without context, they do not work well.
ner_tagger = StanfordNERTagger(model,jar)
names = ner_tagger.tag(first_words)
types = ["DATE", "LOCATION", "ORGANIZATION", "PERSON", "TIME"]
for name, type in names:
if type in types:
print(name, type)
Below are some results.
Abnormal ORGANIZATION
Abnormally ORGANIZATION
Abraham ORGANIZATION
Absorption ORGANIZATION
Abundant ORGANIZATION
Abusive ORGANIZATION
Academic ORGANIZATION
Acadia ORGANIZATION
There are too many false positives since the first letter of a sentence is always capitalized. After I changed the words to all lower cases, NER even missed common entities such as America and American.
| 1
| 1
| 0
| 0
| 0
| 0
|
I've extracted keywords based on 1-gram, 2-gram, 3-gram within a tokenized sentence
list_of_keywords = []
for i in range(0, len(stemmed_words)):
temp = []
for j in range(0, len(stemmed_words[i])):
temp.append([' '.join(x) for x in list(everygrams(stemmed_words[i][j], 1, 3)) if ' '.join(x) in set(New_vocabulary_list)])
list_of_keywords.append(temp)
I've obtained keywords list as
['blood', 'pressure', 'high blood', 'blood pressure', 'high blood pressure']
['sleep', 'anxiety', 'lack of sleep']
How can I simply the results by removing all substring within the list and remain:
['high blood pressure']
['anxiety', 'lack of sleep']
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to properly split words to fit my corpus. I'm already using this approach which fixes hyphenated words, what I can't seem to figure out is how to keep words with apostrophes for contractions like: can't, won't, don't, he's, etc. together as one token in spacy.
More specifically I am searching how to do this for Dutch words: zo'n, auto's, massa's, etc. but this problem should be language-independent.
I have the following tokenizer:
def custom_tokenizer(nlp):
prefix_re = compile_prefix_regex(nlp.Defaults.prefixes)
suffix_re = compile_suffix_regex(nlp.Defaults.suffixes)
infix_re = re.compile(r'''[.\,\?\:\;\...\‘\’'\`\“\”\"'~]''')
return Tokenizer(nlp.vocab, prefix_search=prefix_re.search,
suffix_search=suffix_re.search,
infix_finditer=infix_re.finditer,
token_match=None)
nlp = spacy.load('nl_core_news_sm')
nlp.tokenizer = custom_tokenizer(nlp)
with this the tokens I get are:
'Mijn','eigen','huis','staat','zo',"'",'n','zes','meter','onder','het','wateroppervlak','van','de','Noordzee','.'
...but the tokens I expected should be:
'Mijn','eigen','huis','staat',"zo'n",'zes','meter','onder','het','wateroppervlak','van','de','Noordzee','.'
I know it is possible to add custom rules like:
case = [{ORTH: "zo"}, {ORTH: "'n", LEMMA: "een"}]
tokenizer.add_special_case("zo'n",case)
But I am looking for a more general solution.
I've tried editing the infix_re regex from the other thread, but I doesn't seem to have any impact on the issue. Is there any setting or change I can do to fix this?
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm doing the following:
from spacy.lang.nb import Norwegian
nlp = Norwegian()
doc = nlp(u'Jeg heter Marianne Borgen og jeg er ordføreren i Oslo.')
for token in doc:
print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,token.shape_, token.is_alpha, token.is_stop)
Lemmatization seems to not work at all, as this is the output:
(u'Jeg', u'Jeg', u'', u'', u'', u'Xxx', True, False)
(u'heter', u'heter', u'', u'', u'', u'xxxx', True, False)
(u'Marianne', u'Marianne', u'', u'', u'', u'Xxxxx', True, False)
(u'Borgen', u'Borgen', u'', u'', u'', u'Xxxxx', True, False)
(u'og', u'og', u'', u'', u'', u'xx', True, True)
(u'jeg', u'jeg', u'', u'', u'', u'xxx', True, True)
(u'er', u'er', u'', u'', u'', u'xx', True, True)
(u'ordf\xf8reren', u'ordf\xf8reren', u'', u'', u'', u'xxxx', True, False)
(u'i', u'i', u'', u'', u'', u'x', True, True)
(u'Oslo', u'Oslo', u'', u'', u'', u'Xxxx', True, False)
(u'.', u'.', u'', u'', u'', u'.', False, False)
However, looking at https://github.com/explosion/spaCy/blob/master/spacy/lang/nb/lemmatizer/_verbs_wordforms.py, the verb heter should at least be transformed into hete.
So it looks like spaCy has support, but it's not working? What could be the problem?
| 1
| 1
| 0
| 0
| 0
| 0
|
I used Scikit learn selectKbest to select the best features, around 500 from 900 of them. as follows where d is the dataframe of all the features.
from sklearn.feature_selection import SelectKBest, chi2, f_classif
X_new = SelectKBest(chi2, k=491).fit_transform(d, label_vs)
when I print X_new it now, it gives me numbers only but I need name of the selected features to use them later on.
I tried things like X_new.dtype.names but I did't got back anything and I tried to convert X_new into data frame but the only columns names I got were
1, 2, 3, 4...
so is there a way to know what are the names of the selected features?
| 1
| 1
| 0
| 0
| 0
| 0
|
Good day,
I have a function that should have the ability to lower and tokenize text and returns tokens.
Here is the function below:
def preprocess_text(text):
""" A function to lower and tokenize text data """
# Lower the text
lower_text = text.lower()
# tokenize the text into a list of words
tokens = nltk.tokenize.word_tokenize(lower_text)
return tokens
I then wish to apply the function to my actual text data called data which is a list with strings within it. I want to iterate over each string within data and apply the function to lower and tokenize the text data.
Finally, I wish to append the tokenized words to a final list called tokenized_final which should be the final list containing the tokenized words.
Here is the next bit of code below:
# Final list with tokenized words
tokenized_final = []
# Iterating over each string in data
for x in data:
# Calliing preprocess text function
token = preprocess_text(x)
tokenized_final.append(token)
However, when I do all this and print the list tokenized_final. It outputs a big list containing lists within it.
print (tokeninized_final)
Output:
[['pfe', 'bulls', 'have', 'reasons', 'on'],
['to', 'pay', 'more', 'attention'],
['there', 'is', 'still']]
When my desired output for tokenized_final is to be like this in one list:
['pfe', 'bulls', 'have', 'reasons', 'on','to', 'pay','more', 'attention','there','is', 'still']
Is there any way to rectify the preprocess function and apply it to the data to get the desired output. Or is there any way to do this?...
Help would truly be appreciated here.
Thanks in advance
| 1
| 1
| 0
| 0
| 0
| 0
|
I want to create an AI for a Battleship game. As a beginner, I am having some trouble transforming my flowchart into Python code. How can I get started?
Here is my flowchart:
| 1
| 1
| 0
| 0
| 0
| 0
|
I have been given one problem to solve:
The problem is explained below:
The company maintains a dataset for specifications of all the products (nearly 4,500 at present) which it sells. Now each customer shares the details (name, quantity, brand etc.) of the products which he/she wants to buy from the company. Now, the customer while entering details in his/her dataset may spell the name of the product incorrectly. Also a product can be referred by many different ways in the company dataset. Example : red chilly can be referred as guntur chilly, whole red chilly, red chilly with stem, red chilly without stem etc.
I am absolutely confused about how to approach this problem. Should I use any machine learning based technique? If yes, then plz explain me what to do. Or, if it is possible to solve this problem without machine learning then also explain your approach. I am using Python.
The challenge : customer can refer to a product in many ways and the company also stores a single product in many ways with different specifications like variations in name, quantity, unit of measurements etc. With a labeled dataset I can find out that red bull energy drink(data entered by customer) is red bull (label) and red bull(entered by customer) is also red bull. But what's the use of finding this label? Because in my company dataset also red bull is present in many ways. Again I have to find all the different names of red bull in which they present in company dataset.
My approach:
I will prepare a Python dictionary like this:
{
"red chilly" : ['red chilly', 'guntur chilly', 'red chilly with stem'],
"red bull" : ['red bull energy drink', 'red bull']
}
Each entry in the dictionary is a product. whose keys are the sort of stem names of the products and the values are the all possible names for a product. Now customer enters a product name, say red bull energy drink. I will check in the dictionary for each key. If any value of that key matches, then I'll understand that the product is actually red bull and it can be referred as red bull and red bull energy drink, both ways in the company dataset. How's this approach ?
| 1
| 1
| 0
| 1
| 0
| 0
|
My code produced the following error:
AttributeError: 'function' object has no attribute 'translate'
More detail:
What is wrong with my code?
import pandas as pd
import numpy as np
from textblob import TextBlob
df_file2= df_file['Repair Details']. apply.translate(from_lang='zh-CN',to ='en')
| 1
| 1
| 0
| 0
| 0
| 0
|
I have two separate files, one is a text file, with each line being a single text. The other file contains the class label of that corresponding line. How do I load this into PyTorch and carry out further tokenization, embedding, etc?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to implement Markov property on a set of lines. I need all the unique words along the corresponding frequency of the following words.
Example
Input
Filename : Example.txt
I Love you
I Miss you
Miss you Baby
You are the best
I Miss you
Code Snippet
from collections import Counter
import pprint
class TextAnalyzer:
text_file = 'example.txt'
def __init__(self):
self.raw_data = ''
self.word_map = dict()
self.prepare_data()
self.analyze()
pprint.pprint(self.word_map)
def prepare_data(self):
with open(self.text_file, 'r') as example:
self.raw_data=example.read().replace('
', ' ')
example.close()
def analyze(self):
words = self.raw_data.split()
word_pairs = [[words[i],words[i+1]] for i in range(len(words)-1)]
self.word_map = dict()
for word in list(set(words)):
for pair in word_pairs:
if word == pair[0]:
self.word_map.setdefault(word, []).append(pair[1])
self.word_map[word] = Counter(self.word_map[word]).most_common(11)
TextAnalyzer()
Actual Output
{'Baby': ['You'],
'I': ['Love', 'Miss', 'Miss'],
'Love': ['you'],
'Miss': ['you', 'you', 'you'],
'You': ['are'],
'are': ['the'],
'best': ['I'],
'the': ['best'],
'you': [('I', 1), ('Miss', 1), ('Baby', 1)]}
Expected Output:
{'Miss': [('you',3)],
'I': [('Love',1), ('Miss',2)],
'Love': ['you',1],
'Baby': ['You',1],
'You': ['are',1],
'are': ['the',1],
'best': ['I',1],
'the': ['best'],
'you': [('I', 1), ('Miss', 1), ('Baby', 1)]}
I want the output to be sorted based on maximum frequency. How can I improve my code to achieve that output.
| 1
| 1
| 0
| 0
| 0
| 0
|
I have two tasks to do.
1)I have to extract the headers of any CVS file containing invoices data.
In specific: invoice number, address, location, physical good.
I have been asked to create a text classifier for this task, therefore the classifier will go over any CVS file and identify those 4 headers.
2)After the classifier identifies the 4 words I have to find the attach the data of that column and create a class.
I researched the matter and the three methodologies that I thought were must be appropriated are:
1)bad of words
2)word embedded
3)K-means clustering
Bag of words can identify the word but it does not give me the location of the word itself to go and grab the column and create the class.
Word embedded is over complicated for this task, I believe, and even if give me the position of the word in the file is too time-consuming for this
K-means seems simple and effective it tells me where the word is.
My question before I start coding
did I miss something. Is my reasoning correct?
And most important the second question
Once the position of the word is identified in the CSV file how I translate that into coding so I can attach the data in that column
| 1
| 1
| 0
| 0
| 0
| 0
|
I want to use pre-trained word embeddings in my machine learning model. The word embedings file I have is about 4GB. I currently read the entire file into memory in a dictionary and whenever I want to map a word to its vector representation I perform a lookup in that dictionary.
The memory usage is very high and I would like to know if there is another way of using word embeddings without loading the entire data into memory.
I have recently come across generators in Python. Could they help me reduce the memory usage?
Thank you!
| 1
| 1
| 0
| 1
| 0
| 0
|
I am new to text analytics and json file. I have to find the most accurate names in nested json nodes with names in the keyword.
[
{
"name": "Sachin Ramesh Tendulkar",
"DATE OF BIRTH": "",
"others": [
{
"name": "Sachin Tendulkar",
"fixedName": "Sachin Tendulkar",
"count": 17
},
{
"name": "Sri ajay Tendulkar",
"fixedName": "Sri ajay Tendulkar",
"count": 10
},
{
"name": "S R tendulkar",
"fixedName": "S R tendulkar",
"count": 4
},
{
"name":"/Rahul Dravid",
"fixedName": "/Rahul Dravid",
"count": 4
},
{
"name": "arjun tendulkar",
"fixedName": "arjun tendulkar",
"count": 1
}
]
}
]
},
{
"name": "Mahendra singh dhoni",
"DATE OF BIRTH": "",
"others": [
{
"name": "Yuvaraj singh",
"fixedName": "Yuvaraj singh",
"count": 62
},
{
"name": "M S Dhoni",
"fixedName": "M S Dhoni",
"count": 50
},
{
"name": "Dhoni M S",
"fixedName": "Dhoni M S",
"count": 30
},
{
"name": "M S Dutta",
"fixedName": "M S Dutta",
"count": 26
},]
I have to search the name Sachin Ramesh Tendulkar and Mahendra singh dhoni with the names in others node. And print the accurate matched names. How it can be done.
the output I am expecting
Sachin Ramesh Tendulkar : S R Tendulkar, Sachin Tendulkar
Mahendra singh Dhoni: M S Dhoni, Dhoni M S.
| 1
| 1
| 0
| 0
| 0
| 0
|
Context, I'm trying to port a Perl code into Python from https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/normalize-punctuation.perl#L87 and there is this regex here in Perl:
s/(\d) (\d)/$1.$2/g;
If I try it with the Perl script given the input text 123 45, it returns the same string with a dot. As a sanity check, I've tried on the command line too:
echo "123 45" | perl -pe 's/(\d) (\d)/$1.$2/g;'
[out]:
123.45
And it does so too when I convert the regex to Python,
>>> import re
>>> r, s = r'(\d) (\d)', '\g<1>.\g<2>'
>>> print(re.sub(r, s, '123 45'))
123.45
But when I use the Moses script:
$ wget https://raw.githubusercontent.com/moses-smt/mosesdecoder/master/scripts/tokenizer/normalize-punctuation.perl
--2019-03-19 12:33:09-- https://raw.githubusercontent.com/moses-smt/mosesdecoder/master/scripts/tokenizer/normalize-punctuation.perl
Resolving raw.githubusercontent.com... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 905 [text/plain]
Saving to: 'normalize-punctuation.perl'
normalize-punctuation.perl 100%[================================================>] 905 --.-KB/s in 0s
2019-03-19 12:33:09 (8.72 MB/s) - 'normalize-punctuation.perl' saved [1912]
$ echo "123 45" > foobar
$ perl normalize-punctuation.perl < foobar
123 45
Even when we try to print the string before and after the regex in the Moses code, i.e.
if ($language eq "de" || $language eq "es" || $language eq "cz" || $language eq "cs" || $language eq "fr") {
s/(\d) (\d)/$1,$2/g;
}
else {
print $_;
s/(\d) (\d)/$1.$2/g;
print $_;
}
[out]:
123 45
123 45
123 45
We see that before and after the regex, there's no change in the string.
My question in parts are:
Is the Python \g<1>.\g<2> regex equivalent to the Perl's $1.$2?
Why is it that the Perl regex didn't add the full stop . between the two digit groups in Moses?
How to replicate Perl's behavior in Moses in Python regex?
How to replicate Python's behavior in Perl regex in Moses?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to reimplement this paper 1 in Keras as the authors used PyTorch 2. Here is the network architecture:
What I have done so far is:
number_of_output_classes = 1
hidden_size = 100
direc = 2
lstm_layer=Bidirectional(LSTM(hidden_size, dropout=0.2, return_sequences=True))(combined) #shape after this step (None, 200)
#weighted sum and attention should be here
attention = Dense(hidden_size*direc, activation='linear')(lstm_layer) #failed trial
drop_out_layer = Dropout(0.2)(attention)
output_layer=Dense(1,activation='sigmoid')(drop_out_layer) #shape after this step (None, 1)
I want to include the attention layer and the final FF layer after the LSTM but I am running into errors due to the dimensions and the return_sequence= True option.
| 1
| 1
| 0
| 0
| 0
| 0
|
I have installed the NLTK package and other dependencies and set the environment variables as follows:
STANFORD_MODELS=/mnt/d/stanford-ner/stanford-ner-2018-10-16/classifiers/english.all.3class.distsim.crf.ser.gz:/mnt/d/stanford-ner/stanford-ner-2018-10-16/classifiers/english.muc.7class.distsim.crf.ser.gz:/mnt/d/stanford-ner/stanford-ner-2018-10-16/classifiers/english.conll.4class.distsim.crf.ser.gz
CLASSPATH=/mnt/d/stanford-ner/stanford-ner-2018-10-16/stanford-ner.jar
When I try to access the classifier like below:
stanford_classifier = os.environ.get('STANFORD_MODELS').split(':')[0]
stanford_ner_path = os.environ.get('CLASSPATH').split(':')[0]
st = StanfordNERTagger(stanford_classifier, stanford_ner_path, encoding='utf-8')
I get the following error. But I don't understand what is causing this error.
Error: Could not find or load main class edu.stanford.nlp.ie.crf.CRFClassifier
OSError: Java command failed : ['/mnt/c/Program Files (x86)/Common
Files/Oracle/Java/javapath_target_1133041234/java.exe', '-mx1000m', '-cp', '/mnt/d/stanford-ner/stanford-ner-2018-10-16/stanford-ner.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-loadClassifier', '/mnt/d/stanford-ner/stanford-ner-2018-10-16/classifiers/english.all.3class.distsim.crf.ser.gz', '-textFile', '/tmp/tmpaiqclf_d', '-outputFormat', 'slashTags', '-tokenizerFactory', 'edu.stanford.nlp.process.WhitespaceTokenizer', '-tokenizerOptions', '"tokenizeNLs=false"', '-encoding', 'utf8']
| 1
| 1
| 0
| 0
| 0
| 0
|
my goal is very simple: I have a set of strings or a sentence and I want to find the most similar one within a text corpus.
For example I have the following text corpus: "The front of the library is adorned with the Word of Life mural designed by artist Millard Sheets."
And I'd like to find the substring of the original corpus which is most similar to: "the library facade is painted"
So what I should get as output is: "fhe front of the library is adorned"
The only thing I came up with is to split the original sentence in substrings of variable lengths (eg. in substrings of 3,4,5 strings) and then use something like string.similarity(substring) from the spacy python module to assess the similarities of my target text with all the substrings and then keep the one with the highest value.
It seems a pretty inefficient method. Is there anything better I can do?
| 1
| 1
| 0
| 0
| 0
| 0
|
Thanks for stopping by! I had a quick question about appending stop words. I have a select few words that show up in my data set and I was hopping I could add them to gensims stop word list. I've seen a lot of examples using nltk and I was hoping there would be a way to do the same in gensim. I'll post my code below:
def preprocess(text):
result = []
for token in gensim.utils.simple_preprocess(text):
if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > 3:
nltk.bigrams(token)
result.append(lemmatize_stemming(token))
return result
| 1
| 1
| 0
| 0
| 0
| 0
|
I have decided to develop a Auto Text Summarization Tool using Python/Django.
Can someone please recommend books or articles on how to get started?
Is there any open source algorithm or made project in the Auto Text Summarization so that I can gain the idea?
Also, would you like to suggest me the new challenging FYP for me in Django/Python?
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a dataframe with 3 columns namely 'word', 'pos-tag', 'label'. The words are originally from a text file.Now I would like to have another column 'sentences#' stating the index of sentences the words originally came from.
Current state:-
WORD POS-Tag Label
my PRP$ IR
name NN IR
is VBZ IR
ron VBN PERSON
. .
my PRP$ IR
name NN IR
is VBZ IR
harry VBN Person
. . IR
Desired state:-
Sentence# WORD Pos-Tag Label
1 My PRP IR
1 name NN IR
1 is VBZ IR
1 ron VBN Person
1 . . IR
2 My PRP IR
2 name NN IR
2 is VBZ IR
2 harry VBN Person
2 . . IR
code I used till now:-
#necessary libraries
import pandas as pd
import numpy as np
import nltk
import string
document=open(r'C:\Users\xyz
ewfile.txt',encoding='utf8')
content=document.read()
sentences = nltk.sent_tokenize(content)
sentences = [nltk.word_tokenize(sent) for sent in sentences]
sentences = [nltk.pos_tag(sent) for sent in sentences]
flat_list=[]
# flattening a nested list
for x in sentences:
for y in x:
flat_list.append(y)
df = pd.DataFrame(flat_list, columns=['word','pos_tag'])
#importing data to create the 'Label' column
data=pd.read_excel(r'C:\Users\xyz\pname.xlsx')
pname=list(set(data['Product']))
df['Label']=['drug' if x in fl else 'IR' for x in df['word']]
| 1
| 1
| 0
| 0
| 0
| 0
|
I have generated BoW for a pandas dataframae column called tech_raw_data['Product lower'].
count_vect = CountVectorizer()
smer_counts = count_vect.fit_transform(tech_raw_data['Product lower'].values.astype('U'))
smer_vocab = count_vect.get_feature_names()
Next to test string similarities with this BoW vectors I created BoW for only one entry in a column in a dataframe, toys['ITEM NAME'].
toys = pd.read_csv('toy_data.csv', engine='python')
print('-'*80)
print(toys['ITEM NAME'].iloc[0])
print('-'*80)
inp = [toys['ITEM NAME'].iloc[0]]
cust_counts = count_vect.transform(inp)
cust_vocab = count_vect.get_feature_names()
Checking similarities:
def similar(a, b):
return SequenceMatcher(None, a, b).ratio()
for x in cust_counts[0].toarray():
for y in smer_counts.toarray():
ratio = similar(x, y)
#print(ratio)
if ratio>=0.85:
should print the string corresponding to BoW y
Now whenever the match ratio exceeds 0.85, I need to print the string corresponding to the smer_counts in tech_raw_data['Product lower'] dataframe.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am currently developing a video game with some friends of mine, for a course we have in AI.
We all have different constraints ; our is to use Neural Networks to define the behavior of the AI. This part is in Python.
Basically, our game is like Towerfall, but much simpler. The map is static, the player has 5 lives, either the AI. You can move left, right, jump and click to shoot a bullet at cursor's position. So it is a battle to death.
Initially, we thought about using genetic algorithm to train our network. We defined a topology and whatever it is, we planned to optimize the weights using GA's.
The plan would be to generate populations, testing NNs directly within our game, gathering the results (fitness?) and generating a new population using the ranking of the previous ones.
But we do not really know how to implement this, or either if it is possible or if it would give good results.
Should we use a weighted average of weights during reproduction ? How to apply "mutations" ? What structure should we use to represent our NNs ?
If you have any clue or advice..!
Thanks a lot !
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm fairly new to NLP and trying to learn the techniques that can help me get my job done.
Here is my task: I have to classify stages of a drilling process based on text memos.
I have to classify labels for "Activity", "Activity Detail", "Operation" based on what's written in "Com" column.
I've been reading a lot of articles online and all the different kinds of techniques that I've read really confuses me.
The buzz words that I'm trying to understand are
Skip-gram (prediction based method, Word2Vec)
TF-IDF (frequency based method)
Co-Occurrence Matrix (frequency based method)
I am given about ~40,000 rows of data (pretty small, I know), and I came across an article that says neural-net based models like Skip-gram might not be a good choice if I have small number of training data. So I was also looking into frequency based methods too. Overall, I am unsure which technique is the best for me.
Here's what I understand:
Skip-gram: technique used to represent words in a vector space. But I don't understand what to do next once I vectorized my corpus
TF-IDF: tells how important each word is in each sentence. But I still don't know how it can be applied on my problem
Co-Occurence Matrix: I don'y really understand what it is.
All the three techniques are to numerically represent texts. But I am unsure what step I should take next to actually classify labels.
What approach & sequence of techniques should I use to tackle my problem? If there's any open source Jupyter notebook project, or link to an article (hopefully with codes) that did the similar job done, please share it here.
| 1
| 1
| 0
| 1
| 0
| 0
|
I’ m stuck for a couple of days trying to make and RNN network to learn a basic HTML template.
I tried different approaches and I even overfit on the following data:
<!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>This is a Heading</h1>
<p>This is a paragraph.</p>
</body>
</html>
Obtaining 100% accuracy on training and validation using Adam Optimizer and CrossEntropyLoss.
The problem is that when I try to sample from the network, the results are completely random and I don’t know whats the problem:
..<a<a<a<a<aa<ttp11111b11111b11111111b11b1bbbb<btttn111
My sampling function is the following:
def sample_sentence():
words = list()
count = 0
modelOne.eval()
with torch.no_grad():
# Setup initial input state, and input word (we use "the").
previousWord = torch.LongTensor(1, 1).fill_(trainData.vocabulary['letter2id']['[START]'])
hidden = Variable(torch.zeros(6, 1, 100).to(device))
while True:
# Predict the next word based on the previous hidden state and previous word.
inputWord = torch.autograd.Variable(previousWord.to(device))
predictions, newHidden = modelOne(inputWord, hidden)
hidden = newHidden
pred = torch.nn.functional.softmax(predictions.squeeze()).data.cpu().numpy().astype('float64')
pred = pred/np.sum(pred)
nextWordId = np.random.multinomial(1, pred, 1).argmax()
if nextWordId == 0:
continue
words.append(trainData.vocabulary['id2letter'][nextWordId])
# Setup the inputs for the next round.
previousWord.fill_(nextWordId)
# Keep adding words until the [END] token is generated.
if nextWordId == trainData.vocabulary['letter2id']['[END]']:
break
if count>20000:
break
count += 1
words.insert(0, '[START]')
return words
And my network architecture is here:
class ModelOne(Model) :
def __init__(self,
vocabulary_size,
hidden_size,
num_layers,
rnn_dropout,
embedding_size,
dropout,
num_directions):
super(Model, self).__init__()
self.vocabulary_size = vocabulary_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.rnn_dropout = rnn_dropout
self.dropout = dropout
self.num_directions = num_directions
self.embedding_size = embedding_size
self.embeddings = nn.Embedding(self.vocabulary_size, self.embedding_size)
self.rnn = nn.GRU(self.embedding_size,
self.hidden_size,
num_layers=self.num_layers,
bidirectional=True if self.num_directions==2 else False,
dropout=self.rnn_dropout,
batch_first=True)
self.linear = nn.Linear(self.hidden_size*self.num_directions, self.vocabulary_size)
def forward(self, paddedSeqs, hidden):
batchSequenceLength = paddedSeqs.size(1)
batchSize = paddedSeqs.size(0)
lengths = paddedSeqs.ne(0).sum(dim=1)
embeddingVectors = self.embeddings(paddedSeqs)
x = torch.nn.utils.rnn.pack_padded_sequence(embeddingVectors, lengths, batch_first=True)
self.rnn.flatten_parameters()
x,hid = self.rnn(x, hidden)
output, _ = torch.nn.utils.rnn.pad_packed_sequence(x, batch_first=True, padding_value=0, total_length=batchSequenceLength)
predictions = self.linear(output)
return predictions.view(batchSize, self.vocabulary_size, batchSequenceLength), hid
def init_hidden(self, paddedSeqs):
hidden = Variable(torch.zeros(self.num_layers*self.num_directions,
1,
self.hidden_size).to(device))
return hidden
modelOne =ModelOne(vocabulary_size=vocabularySize,
hidden_size=100,
embedding_size=50,
num_layers=3,
rnn_dropout=0.0,
dropout=0,
num_directions=2).to(device)
If you have any idea of what needs to be changed, please let me know.
I added all the code to github repository here: https://github.com/OverclockRo/HTMLGeneration/blob/SamplingTestTemplate/Untitled.ipynb
| 1
| 1
| 0
| 0
| 0
| 0
|
I have an array of (insurance) contracts (in .docx format) processing of which I'm trying to automate.
Current task at hand is to split every contract into so called clauses - parts of contract which describe some specific risk or exclusion from cover.
For example, it can be just one sentence – “This contract covers loss or damage due to fire” or several paragraphs of text that give more details and explain what type of fire this contract covers and what damage is reimbursed.
Good thing is that usually contracts are formatted in some way or another. In best possible scenario, whole contract is a numbered list with items and sub items and we simply can split it by certain level of list hierarchy.
Bad thing is that this is not always the case and the list can be not numbered, but alphabetical or not list at all in word terms: each line starts with a number or a letter user typed in manually. Or it can be not letters or numbers at all, but some amount of spaces or tabs. Or clauses can be separated by their titles that are typed in ALL CAPS.
So the visual representation of structure varies from contract to contract.
So my question is what is the best approach to this task? Regexp? Some ML algo? Maybe there are open source scripts out there that were written to deal with this or similar tasks? Any help will be most welcome!
EDIT (24.12.2019):
Found this repo on github: https://github.com/bmmidei/SliceCast
Form its description: "This repository explores a neural network approach to segment podcasts based on topic of discussion. We model the problem as a binary classification task where each sentence is either labeled as the first sentence of a new segment or a continuation of the current segment. We embed sentences using the Universal Sentence Encoder and use an LSTM-based classification network to obtain the cutoff probabilities. Our results indicate that neural network models are indeed suitable for topical segmentation on long, conversational texts, but larger datasets are needed for a truly viable product.
Read the full report for this work here: Neural Text Segmentation on Podcast Transcripts"
| 1
| 1
| 0
| 0
| 0
| 0
|
I am creating a python model that will classify a given document based on the text. Because each document still needs to be manually reviewed by a human, I am creating a suggestion platform that will give the user the top n-classes that a given document belongs too. Additionally each document can belong to more than one class. I have a training set of documents filled with rich text and their tags.
What I would like to do is perform a regression on each document to get a probabilistic score of each classification and return the top 5 highest scored classes.
I have looked into Bayes classification models, and recommendation systems and I think a logistic regression will help be better as it returns a score. I am new to machine learning and would appreciate any advice or examples that is modeled after this kind of problem. Thank you.
EDIT: Specifically, my problem is how should I parse my text data for ML modeling with logistic regression? Do I need to represent my text in a vector format using Word2Vec/Doc2Vec or a Bag-of-words model?
| 1
| 1
| 0
| 0
| 0
| 0
|
Let's suppose that I have a text document such as the following:
document = '<p> I am a sentence. I am another sentence <p> I am a third sentence.'
( or a more complex text example:
document = '<p>Forde Education are looking to recruit a Teacher of Geography for an immediate start in a Doncaster Secondary school.</p> <p>The school has a thriving and welcoming environment with very high expectations of students both in progress and behaviour. This position will be working until Easter with a <em><strong>likely extension until July 2011.</strong></em></p> <p>The successful candidates will need to demonstrate good practical subject knowledge but also possess the knowledge and experience to teach to GCSE level with the possibility of teaching to A’Level to smaller groups of students.</p> <p>All our candidate will be required to hold a relevant teaching qualifications with QTS successful applicants will be required to provide recent relevant references and undergo a Enhanced CRB check.</p> <p>To apply for this post or to gain information regarding similar roles please either submit your CV in application or Call Debbie Slater for more information. </p>'
)
I am applying a series of pre-processing NLP techniques to get a "cleaner" version of this document by also taking the stem word for each of its words.
I am using the following code for this:
stemmer_1 = PorterStemmer()
stemmer_2 = LancasterStemmer()
stemmer_3 = SnowballStemmer(language='english')
# Remove all the special characters
document = re.sub(r'\W', ' ', document)
# remove all single characters
document = re.sub(r'\b[a-zA-Z]\b', ' ', document)
# Substituting multiple spaces with single space
document = re.sub(r' +', ' ', document, flags=re.I)
# Converting to lowercase
document = document.lower()
# Tokenisation
document = document.split()
# Stemming
document = [stemmer_3.stem(word) for word in document]
# Join the words back to a single document
document = ' '.join(document)
This gives the following output for the text document above:
'am sent am anoth sent am third sent'
(and this output for the more complex example:
'ford educ are look to recruit teacher of geographi for an immedi start in doncast secondari school the school has thrive and welcom environ with veri high expect of student both in progress and behaviour nbsp this posit will be work nbsp until easter with nbsp em strong like extens until juli 2011 strong em the success candid will need to demonstr good practic subject knowledg but also possess the knowledg and experi to teach to gcse level with the possibl of teach to level to smaller group of student all our candid will be requir to hold relev teach qualif with qts success applic will be requir to provid recent relev refer and undergo enhanc crb check to appli for this post or to gain inform regard similar role pleas either submit your cv in applic or call debbi slater for more inform nbsp'
)
What I want to do now is to get an output like the one exactly above but after I have applied lemmatisation and not stemming.
However, unless I am missing something, this requires to split the original document into (sensible) sentences, apply POS tagging and then implement the lemmatisation.
But here things are a little bit complicated because the text data are coming from web scraping and hence you will encounter many HTML tags such as <br>, <p> etc.
My idea is that every time a sequence of words is ending with some common punctuation mark (fullstop, exclamation point etc) or with a HTML tag such as <br>, <p> etc then this should be considered as a separate sentence.
Thus for example the original document above:
document = '<p> I am a sentence. I am another sentence <p> I am a third sentence.'
Should be split in something like this:
['I am a sentence', 'I am another sentence', 'I am a third sentence']
and then I guess we will apply POS tagging at each sentence, split each sentence in words, apply lemmatization and .join() the words back to a single document as I am doing it with my code above.
How can I do this?
| 1
| 1
| 0
| 0
| 0
| 0
|
I would like to tokenize a list of strings according to my self-defined dictionary.
The list of string looks like this:
lst = ['vitamin c juice', 'organic supplement']
The self-defined dictionary:
dct = {0: 'organic', 1: 'juice', 2: 'supplement', 3: 'vitamin c'}
My expected result:
vitamin c juice --> [(3,1), (1,1)]
organic supplement --> [(0,1), (2,1)]
My current code:
import gensim
import gensim.corpora as corpora
from gensim.utils import tokenize
dct = corpora.Dictionary([list(x) for x in tup_list])
corpus = [dct.doc2bow(text) for text in [s for s in lst]]
The error message I got is TypeError: doc2bow expects an array of unicode tokens on input, not a single string However, I do not want to simply tokenize "vitamin c" as vitamin and c. Instead, I want to tokenize based on my existing dct words. That is to say, it should be vitamin c.
| 1
| 1
| 0
| 0
| 0
| 0
|
I have searched that all questions only provide the way to get synonym for one word, but it doesn't work when I try to use a for loop to get synonym for multiple words.
This is my code, but it doesn't work as expected.
str = "Action, Adventure, Drama"
def process_genre(str):
for genre in str.split(","):
result = []
for syn in wordnet.synsets(genre):
for l in syn.lemmas():
result.append(l.name())
print(result)
process_genre(str)
This is the output
['action', 'action', 'activity', 'activeness', 'military_action', 'action', 'natural_process', 'natural_action', 'action', 'activity', 'action', 'action', 'action', 'action_mechanism', 'legal_action', 'action', 'action_at_law', 'action', 'action', 'action', 'sue', 'litigate', 'process', 'carry_through', 'accomplish', 'execute', 'carry_out', 'action', 'fulfill', 'fulfil']
[]
[]
The list for Adventure and Drama prints empty, which is supposed to have its synonym.
Can anyone explain to me why? Is there a way to maybe reset it? Or...?
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm very new to using spaCy. I have been reading the documentation for hours and I'm still confused if it's possible to do what I have in my question. Anyway...
As the title says, is there a way to actually get a given noun chunk using a token containing it. For example, given the sentence:
"Autonomous cars shift insurance liability toward manufacturers"
Would it be possible to get the "autonomous cars" noun chunk when what I only have the "cars" token? Here is an example snippet of the scenario that I'm trying to go for.
startingSentence = "Autonomous cars and magic wands shift insurance liability toward manufacturers"
doc = nlp(startingSentence)
noun_chunks = doc.noun_chunks
for token in doc:
if token.dep_ == "dobj":
print(child) # this will print "liability"
# Is it possible to do anything from here to actually get the "insurance liability" token?
Any help will be greatly appreciated. Thanks!
| 1
| 1
| 0
| 0
| 0
| 0
|
Hi I am struck here please help me with this issue
I am getting this error
TypeError: language_model_learner() missing 1 required positional argument: 'arch'
I am following this tutorial :- https://www.analyticsvidhya.com/blog/2018/11/tutorial-text-classification-ulmfit-fastai-library/
| 1
| 1
| 0
| 0
| 0
| 0
|
I am using Keras embedding layers to create entity embeddings made popular on the Kaggle Rossmann Store Sales 3rd place entry. However, I am unsure about how to map back the embeddings back to the actual categorical values. Let's take a look at a very basic example:
In the code below, I create a dataset with two numeric and one categorical feature.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from keras.models import Model
from keras.layers import Input, Dense, Concatenate, Reshape, Dropout
from keras.layers.embeddings import Embedding
# create some fake data
data, labels = make_classification(n_classes=2, class_sep=2, n_informative=2,
n_redundant=0, flip_y=0, n_features=2,
n_clusters_per_class=1, n_samples=100,
random_state=10)
cat_col = np.random.choice(a=[0,1,2,3,4], size=100)
data = pd.DataFrame(data)
data[2] = cat_col
embed_cols = [2]
# converting data to list of lists, as the network expects to
# see the data in this format
def preproc(df):
data_list = []
# convert cols to list of lists
for c in embed_cols:
vals = np.unique(df[c])
val_map = {}
for i in range(len(vals)):
val_map[vals[i]] = vals[i]
data_list.append(df[c].map(val_map).values)
# the rest of the columns
other_cols = [c for c in df.columns if (not c in embed_cols)]
data_list.append(df[other_cols].values)
return data_list
data = preproc(data)
There are 5 unique values for the categorical column:
print("Unique Values: ", np.unique(data[0]))
Out[01]: array([0, 1, 2, 3, 4])
which then get fed into a Keras model with an embedding layer:
inputs = []
embeddings = []
input_cat_col = Input(shape=(1,))
embedding = Embedding(5, 3, input_length=1, name='cat_col')(input_cat_col)
embedding = Reshape(target_shape=(3,))(embedding)
inputs.append(input_cat_col)
embeddings.append(embedding)
# add the remaining two numeric columns from the 'data array' to the network
input_numeric = Input(shape=(2,))
embedding_numeric = Dense(8)(input_numeric)
inputs.append(input_numeric)
embeddings.append(embedding_numeric)
x = Concatenate()(embeddings)
output = Dense(1, activation='sigmoid')(x)
model = Model(inputs, output)
model.compile(loss='binary_crossentropy', optimizer='adam')
history = model.fit(data, labels,
epochs=10,
batch_size=32,
verbose=1,
validation_split=0.2)
I can get the actual embeddings by getting the weight for the embedding layer:
embeddings = model.get_layer('cat_col').get_weights()[0]
print("Unique Values: ", np.unique(data[0]))
print("3 Dimensional Embedding:
", embeddings)
Unique Values: [0 1 2 3 4]
3 Dimensional Embedding:
[[ 0.02749949 0.04238378 0.0080842 ]
[-0.00083209 0.01848664 0.0130044 ]
[-0.02784528 -0.00713446 -0.01167112]
[ 0.00265562 0.03886909 0.0138318 ]
[-0.01526615 0.01284053 -0.0403452 ]]
However, I am unsure how to map these back. Is it safe to assume that the weights are ordered? For example, 0=[ 0.02749949 0.04238378 0.0080842 ]?
| 1
| 1
| 0
| 1
| 0
| 0
|
So far, I have this code below
from textblob import TextBlob
class BrinBot:
def __init__(self, message): #Accepts the message from the user as the argument
parse(message)
class parse:
def __init__(self, message):
self.message = message
blob = TextBlob(self.message)
print(blob.tags)
BrinBot("Handsome Bob's dog is a beautiful Chihuahua")
This is the output:
[('Handsome', 'NNP'), ('Bob', 'NNP'), ("'s", 'POS'), ('dog', 'NN'), ('is', 'VBZ'), ('a', 'DT'), ('beautiful', 'JJ'), ('Chihuahua', 'NNP')]
My question is that apparently TextBlob thinks "Handsome" is a singular proper noun, which is not correct as "Handsome" is supposed to be an adjective. Is there a way to fix that, I tried this on NLTK also but got the same results.
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a text file with million of rows which I wanted to convert into word vectors and later on I can compare these vectors with a search keyword and see which all texts are closer to the search keyword.
My Dilemma is all the training files that I have seen for the Word2vec are in the form of paragraphs so that each word has some contextual meaning within that file. Now my file here is independent and contains different keywords in each row.
My question is whether is it possible to create word embedding using this text file or not, if not then what's the best approach for searching a matching search keyword in this million of texts
**My File Structure: **
Walmart
Home Depot
Home Depot
Sears
Walmart
Sams Club
GreenMile
Walgreen
Expected
search Text : 'WAL'
Result from My File:
WALGREEN
WALMART
WALMART
| 1
| 1
| 0
| 1
| 0
| 0
|
From a user given input of job description, i need to extract the keywords or phrases, using python and its libraries. I am open for suggestions and guidance from the community of what libraries work best and if in case, its simple, please guide through.
Example of user input:
user_input = "i want a full stack developer. Specialization in python is a must".
Expected output:
keywords = ['full stack developer', 'python']
| 1
| 1
| 0
| 0
| 0
| 0
|
How can I check the strings tokenized inside TfidfVertorizer()? If I don't pass anything in the arguments, TfidfVertorizer() will tokenize the string with some pre-defined methods. I want to observe how it tokenizes strings so that I can more easily tune my model.
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = ['This is the first document.',
'This document is the second document.',
'And this is the third one.',
'Is this the first document?']
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)
I want something like this:
>>>vectorizer.get_processed_tokens()
[['this', 'is', 'first', 'document'],
['this', 'document', 'is', 'second', 'document'],
['this', 'is', 'the', 'third', 'one'],
['is', 'this', 'the', 'first', 'document']]
How can I do this?
| 1
| 1
| 0
| 0
| 0
| 0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.