| text | python | DeepLearning or NLP | Other | Machine Learning | Mathematics | Trash |
|---|---|---|---|---|---|---|
If I am training a NER model completely from scratch, does the language matter? In the API I set the language, but I also give the API the spans of the named entities. The command-line format goes one step further: I give the NER labels for each token of each sentence. For example, could I tokenize Japanese using ICU, label the tokens, then feed that to spaCy?
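For what it's worth, spaCy's NER training data is expressed as character offsets into the raw text, so the tokenizer used to find those spans can come from anywhere, ICU included. A minimal sketch with a hypothetical Japanese example (the text and labels below are stand-ins, not from the post):

```python
# spaCy's training format: (text, {"entities": [(start_char, end_char, label), ...]})
# The offsets can be produced by any external tokenizer, e.g. ICU.
TRAIN_DATA = [
    ("東京 は 日本 の 首都 です", {"entities": [(0, 2, "GPE"), (5, 7, "GPE")]}),
]

text, ann = TRAIN_DATA[0]
for start, end, label in ann["entities"]:
    print(text[start:end], label)  # 東京 GPE / 日本 GPE
```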
| 1 | 1 | 0 | 0 | 0 | 0 |
Here I have a pandas.Series named 'traindata'.
0 Published: 4:53AM Friday August 29, 2014 Sourc...
1 8 Have your say
Playing low-level club c...
2 Rohit Shetty has now turned producer. But the ...
3 A TV reporter in Serbia almost lost her job be...
4 THE HAGUE -- Tony de Brum was 9 years old in 1...
5 Australian TV cameraman Harry Burton was kille...
6 President Barack Obama sharply rebuked protest...
7 The car displaying the DIE FOR SYRIA! sticker....
8
If you've ever been, you know that seeing th...
9
The former executive director of JBWere has ...
10 Waterloo Road actor Joe Slater has revealed hi...
...
**Name: traindata, Length: 2284, dtype: object**
What I want to do is replace the series values with the stemmed sentences.
My thought is to build a new series and put the stemmed sentences in.
My code is as below:
from nltk.stem.porter import PorterStemmer
stem_word_data = np.zeros([2284,1])
ps = PorterStemmer()
for i in range(0, len(traindata)):
    tst = word_tokenize(traindata[i])
    for word in tst:
        word = ps.stem(word)
        stem_word_data[i] = word
and then an error occurs:
ValueError: could not convert string to float: 'publish'
Does anyone know how to fix this error, or have a better idea on how to replace the series values with the stemmed sentences? Thanks.
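The error arises because `stem_word_data` was created with `np.zeros`, a float array that cannot hold strings. A minimal sketch of an alternative that builds a string Series instead (using `str.split` in place of `word_tokenize` so the snippet is self-contained; swap the real tokenizer back in practice):

```python
import pandas as pd
from nltk.stem.porter import PorterStemmer

ps = PorterStemmer()
# tiny stand-in for the real 2284-row traindata Series
traindata = pd.Series(["Published reports say it is raining", "Cats are running"])

def stem_sentence(text):
    # join the stemmed tokens back into one sentence string
    return " ".join(ps.stem(word) for word in text.split())

stemmed = traindata.apply(stem_sentence)
print(stemmed[0])
```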
| 1 | 1 | 0 | 0 | 0 | 0 |
I already know how to train a neural net with NeuroLab and get the error every X epochs, but I want to get the final error after training the net.
nn = nl.net.newff([[min_val, max_val]], [40, 26, 1])
# Gradient descent
nn.trainf = nl.train.train_gd
# Train the neural network
error_progress = nn.train(data, labels, epochs=6000, show=100, goal=0.0005)
# CODE TO GET THE ERROR AFTER TRAINING HERE
# final_error = ?
EDIT: By final_error I mean the final value of the Error variable that the net.train command plots (ONLY the error, not the complete string, as it plots in the following format).
Epoch: 1700; Error: 0.0005184049;
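If I read NeuroLab's docs right (an assumption worth verifying against your version), `nn.train()` returns the list of error values recorded during training, so the final error is simply its last element:

```python
# error_progress as returned by nn.train(); placeholder values for illustration
error_progress = [0.012, 0.0009, 0.0005184049]
final_error = error_progress[-1]
print(final_error)  # 0.0005184049
```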
| 1 | 1 | 0 | 0 | 0 | 0 |
Having loaded a pre-trained word2vec model with the gensim toolkit, I would like to find a synonym of a word given its context — e.g. 'intelligent' for 'bright' in 'she is a bright person'.
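Plain word2vec has no notion of context, so one common workaround (an assumption about your goal, not gensim's API) is to rank candidate neighbours of the target word by their combined similarity to the target and to the other context words. A self-contained sketch with toy vectors standing in for the pre-trained model:

```python
import math

# toy embeddings standing in for a pre-trained word2vec model
vecs = {
    "bright":      [0.9, 0.1, 0.3],
    "intelligent": [0.8, 0.2, 0.35],
    "shiny":       [0.85, 0.1, -0.6],
    "person":      [0.1, 0.9, 0.2],
}

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# re-rank candidate synonyms of "bright" by similarity to the context word "person"
context = vecs["person"]
cands = ["intelligent", "shiny"]
best = max(cands, key=lambda w: cos(vecs["bright"], vecs[w]) + cos(context, vecs[w]))
print(best)  # intelligent
```

With a real model the candidates would come from `most_similar` and the vectors from the loaded KeyedVectors.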
| 1 | 1 | 0 | 0 | 0 | 0 |
My code for the minimax algorithm tic-tac-toe AI seems not to be working, and I cannot figure out why. It seems to be something wrong with the recursion aspect and returning a negative value if a move results in a loss; it doesn't differentiate between a defensive move and an offensive move.
Instead of choosing to place X on position 6 to stop the opponent from reaching 3 in a row, it places it on another tile.
board = [
"X", "X", "O",
"O", "O", "X",
"-", "-", "-",
]
opp = "O"
plyr = "X"
def getOpenPos(board):
    openPos = []
    for index, state in enumerate(board):
        if state == "-":
            openPos.append(index)
    return openPos

def winning(board, plyr):
    if ((board[0] == plyr and board[1] == plyr and board[2] == plyr) or
        (board[3] == plyr and board[4] == plyr and board[5] == plyr) or
        (board[6] == plyr and board[7] == plyr and board[8] == plyr) or
        (board[0] == plyr and board[4] == plyr and board[8] == plyr) or
        (board[1] == plyr and board[4] == plyr and board[7] == plyr) or
        (board[2] == plyr and board[4] == plyr and board[6] == plyr) or
        (board[0] == plyr and board[3] == plyr and board[6] == plyr) or
        (board[2] == plyr and board[5] == plyr and board[8] == plyr)):
        return True
    else:
        return False

def minimax(board, turn, FIRST):
    possibleMoves = getOpenPos(board)

    # check if won
    if (winning(board, opp)):
        return -10
    elif (winning(board, plyr)):
        return 10

    scores = []
    # new board created for recursion, and whoevers turn it is
    for move in possibleMoves:
        newBoard = board
        newBoard[move] = turn
        if (turn == plyr):
            scores.append([move, minimax(newBoard, opp, False)])
        elif (turn == opp):
            scores.append([move, minimax(newBoard, plyr, False)])

    # collapse recursion by merging all scores to find optimal position
    # see if there is a negative value (loss) and if there is its a -10
    if not FIRST:
        bestScore = 0
        for possibleScore in scores:
            move = possibleScore[0]
            score = possibleScore[1]
            if score == -10:
                return -10
            else:
                if score > bestScore:
                    bestScore = score
        return bestScore
    else:
        bestMove, bestScore = 0, 0
        for possibleScore in scores:
            move = possibleScore[0]
            score = possibleScore[1]
            if score > bestScore:
                bestMove = move
                bestScore = score
        # returns best position
        return bestMove

print(minimax(board, plyr, True))
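One likely culprit (a hypothesis that matches the symptom, not a confirmed diagnosis of every issue above): `newBoard = board` binds a second name to the *same* list, so every simulated move mutates the real board instead of a copy. A small demonstration of the aliasing pitfall:

```python
board = ["X", "-", "-"]
alias = board          # no copy made -- both names point at one list
alias[1] = "O"
print(board)           # ['X', 'O', '-']  -- the "original" changed too

copied = board[:]      # board.copy() or list(board) also work
copied[2] = "X"
print(board[2])        # '-'  -- original untouched this time
```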
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm loading a dataset of reviews into pandas, as part of the processing I want to get all the unique words to create a Bag of Words.
Since the text is contained in several rows I first have to merge them.
I tried this:
all_text = df['review_body'].to_string()
words = set(all_text.split(' '))
words = list(words)
But I'm getting incorrect words from there like:
u'fel...
1093'
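The stray tokens are almost certainly a `to_string()` artifact: it prepends the row index to each line and truncates long cells (hence numbers like `1093` and fragments ending in `...`). Joining the raw strings avoids both; a sketch with a toy frame:

```python
import pandas as pd

df = pd.DataFrame({"review_body": ["great product", "fast shipping", "great price"]})

# .str.cat joins the cell values themselves, with no index column or truncation
all_text = df["review_body"].str.cat(sep=" ")
words = set(all_text.split())
print(sorted(words))
```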
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm trying to do some POS tagging using nltk (code below) and I'm running into the above issue when I try to write to a new file. If I run `fout.write("\n".join(tagged))` it raises the above error, and to try to solve that I ran `fout.write(str.join(tagged))`, which says 'join' requires a 'str' object but received a 'list'.
The text file is locally stored and is relatively large
from pathlib import Path
from nltk.tokenize import word_tokenize as wt
import nltk
import pprint

output_dir = Path("\\Path\\")
output_file = (output_dir / "Token2290newsML.txt")
news_dir = Path("\\Path\\")
news_file = (news_dir / "2290newsML.txt")
tagged_dir = Path("\\Path\\")
tagged_file = (tagged_dir / "tagged2290newsML.txt")

file = open(news_file, "r")
data = file.readlines()
f = open(tagged_file, "w")

def process_content():
    try:
        for i in data:
            words = wt(i)
            pprint.pprint(words)
            tagged = nltk.pos_tag(words)
            pprint.pprint(tagged)
            #f.write("\n".join(tagged))
            f.write(str.join(tagged))
    except Exception as e:
        print(str(e))

process_content()
file.close()
Any help will be appreciated
thanks :)
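The root cause is that `nltk.pos_tag` returns a list of `(word, tag)` tuples, and `"\n".join(...)` only accepts strings. Formatting each tuple into a string first fixes it; a sketch with stand-in tuples in place of real `pos_tag` output:

```python
# stand-in for nltk.pos_tag(words) output
tagged = [("The", "DT"), ("cat", "NN"), ("sat", "VBD")]

# format each (word, tag) tuple into a string before joining
lines = "\n".join("{}/{}".format(word, tag) for word, tag in tagged)
print(lines)
```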
| 1 | 1 | 0 | 0 | 0 | 0 |
What is the best way, in this case, to store the spoken text for every speaker — in the form of a dict, or is there a better option? I want to map every spoken text to its speaker, as in this attempt, but the output is not what I expected.
def speaker_texts(cleanedList):
    dictspeaker = {"Speaker": "", "Group": "", "Text": ""}
    pattern_speaker = r"([A-Z]+[a-z]*)([\s]*)(\([A-Z]*\))"
    for sent in cleanedList:
        speaker = re.findall(pattern_speaker, sent)
        for info in speaker:
            dictspeaker.update({"Speaker": info[0], "Group": info[2], "Text": sent})
Output:
{'Speaker': 'Rische', 'Group': '(KPD)', 'Text': ', Antragsteller: Meine Damen und
Herren! Anläßlich der Regierungserklärung und
\x0c
30
(Rische)
auch in der heutigen Debatte zum Flüchtlings-
problem wurden viele Worte über eine sinnvolle,
den sozialen Belangen entsprechende Verwendung
öffentlicher Mittel gesprochen. Di e Regierung gab
in ihrem Programm zu verstehen, daß sie eine ver-
antwortungsbewußte Sozialpolitik durchzuführen
gedenke. Sie hat die Flüchtlingshilfe, den Woh-
nungsbau, die Verbe.'}
In the file a speaker appears several times. I would like to assign the spoken texts to the respective speaker — that is, whenever a speaker occurs, update the dictionary so that the new text is added without overwriting the old one.
Or should I create a separate dict for every speaker?
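One dict keyed by speaker, accumulating texts in a list, avoids the overwriting (the single `dictspeaker.update(...)` replaces the previous entry on every iteration). A sketch with a simplified pattern and made-up sentences, since the real corpus isn't available here:

```python
import re
from collections import defaultdict

# simplified from the post's pattern: a capitalised name followed by a (GROUP)
pattern_speaker = r"([A-Z][a-z]+)\s*(\([A-Z]+\))"
cleaned = ["Rische (KPD): first speech", "Meier (SPD): a reply", "Rische (KPD): second speech"]

texts = defaultdict(list)  # (speaker, group) -> list of all their texts
for sent in cleaned:
    for name, group in re.findall(pattern_speaker, sent):
        texts[(name, group)].append(sent)

print(len(texts[("Rische", "(KPD)")]))  # 2
```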
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to update an existing spaCy model "en_core_web_sm" with some different country currencies such as "euro", "rupees", "eu", "Rs.", "INR" etc. How can I achieve that? The spaCy tutorial didn't quite help me, as training a fixed string such as "horses" as "ANIMAL" seems different from my requirements. The reason is that I can have currency values in different formats: "1 million euros", "Rs. 10,000", "INR 1 thousand" etc. My sample dataset contains around 1000 samples in the following format:
TRAIN_DATA = [
(" You have activated International transaction limit for Debit Card ending XXXX1137 on 2017-07-05 12:48:20.0 via NetBanking. The new limit is Rs. 250,000.00", {'entities' : [(140, 154, 'MONEY')] }),...
]
Can anyone help me out with the data format, training size, or any other relevant information?
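One practical detail worth checking regardless of training setup: off-by-one entity offsets silently poison NER updates, so computing the spans from the text itself (rather than counting by hand) guarantees they line up. A sketch reusing the row from the post:

```python
text = (" You have activated International transaction limit for Debit Card "
        "ending XXXX1137 on 2017-07-05 12:48:20.0 via NetBanking. "
        "The new limit is Rs. 250,000.00")

value = "Rs. 250,000.00"
start = text.index(value)                       # derive offsets, don't hand-count
entities = [(start, start + len(value), "MONEY")]
print(text[entities[0][0]:entities[0][1]])      # Rs. 250,000.00
```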
| 1 | 1 | 0 | 0 | 0 | 0 |
Word2Vec
Currently I am trying to perform text classification on a text corpus. To do so, I have decided to use word2vec via gensim, with the code below:
sentences = MySentences("./corpus_samples") # a memory-friendly iterator
model = gensim.models.Word2Vec(sentences, size=100, window=5, min_count=5, workers=4)
MySentences is basically a class that handles the file I/O:
class MySentences(object):
    def __init__(self, dirname):
        self.dirname = dirname

    def __iter__(self):
        for fname in os.listdir(self.dirname):
            for line in open(os.path.join(self.dirname, fname)):
                yield line.split()
Now we can get the vocabulary of the model that has been created through these lines:
print(model.wv.vocab)
The output of which is below (sample):
t at 0x106f19438>, 'raining.': <gensim.models.keyedvectors.Vocab object at 0x106f19470>, 'fly': <gensim.models.keyedvectors.Vocab object at 0x106f194a8>, 'rain.': <gensim.models.keyedvectors.Vocab object at 0x106f194e0>, 'So…': <gensim.models.keyedvectors.Vocab object at 0x106f19518>, 'Ohhh,': <gensim.models.keyedvectors.Vocab object at 0x106f19550>, 'weird.': <gensim.models.keyedvectors.Vocab object at 0x106f19588>}
As of now, the vocabulary dictionary maps each word string to a <gensim.models.keyedvectors.Vocab object at 0x106f19588> object or such. I want to be able to query the index of a particular word, so that I can make my training data look like:
w91874 w2300 w6 w25363 w6332 w11 w767 w297441 w12480 w256 w23270 w13482 w22236 w259 w11 w26959 w25 w1613 w25363 w111 __label__4531492575592394249
w17314 w5521 w7729 w767 w10147 w111 __label__1315009618498473661
w305 w6651 w3974 w1005 w54 w109 w110 w3974 w29 w25 w1513 w3645 w6 w111 __label__-400525901828896492
w30877 w72 w11 w2828 w141417 w77033 w10147 w111 __label__4970306416006110305
w3332 w1107 w4809 w1009 w327 w84792 w6 w922 w11 w2182 w79887 w1099 w111 __label__-3645735357732416904
w471 w14752 w1637 w12348 w72 w31330 w930 w11569 w863 w25 w1439 w72 w111 __label__-5932391056759866388
w8081 w5324 w91048 w875 w13449 w1733 w111 __label__3812457715228923422
Where the wxxxx represents the index of the word within the vocabulary and the label represents the class.
Corpora
Some of the solutions that I have been experimenting with, is the corpora utility of gensim:
corpora = gensim.corpora.dictionary.Dictionary(sentences, prune_at=2000000)
print(corpora)
print(getKey(corpora,'am'))
This gives me a nice dictionary of the words, but this corpora vocabulary is not the same as the one created by the word2vec function mentioned above.
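In gensim 3.x, each of those `Vocab` objects carries an `.index` attribute, so the word's position in the word2vec vocabulary is `model.wv.vocab[word].index` — no separate `corpora.Dictionary` needed. A sketch with a minimal stand-in class, since the real objects only exist inside a trained model:

```python
class Vocab:                      # minimal stand-in for gensim's Vocab object
    def __init__(self, index):
        self.index = index

wv_vocab = {"rain": Vocab(0), "fly": Vocab(1)}    # stand-in for model.wv.vocab
print("w{}".format(wv_vocab["fly"].index))        # w1
```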
| 1 | 1 | 0 | 0 | 0 | 0 |
I took some code from the SpaCy docs that allows you to assign custom dependency labels to text, I want to use this to interpret intent from the user. It's mostly working but for example when I run the code it labels "delete" as 'ROOT' where it should label it as 'INTENT' like it shows in the deps dictionary.
from __future__ import unicode_literals, print_function
import plac
import random
import spacy
from pathlib import Path
# training data: texts, heads and dependency labels
# for no relation, we simply chose an arbitrary dependency label, e.g. '-'
TRAIN_DATA = [
    ("How do I delete my account?", {
        'heads': [3, 3, 3, 3, 5, 3, 3],  # index of token head
        'deps': ['ROOT', '-', '-', 'INTENT', '-', 'OBJECT', '-']
    }),
    ("How do I add a balance?", {
        'heads': [3, 3, 3, 3, 5, 3, 3],
        'deps': ['ROOT', '-', '-', 'INTENT', '-', 'OBJECT', '-']
    }),
    ("How do I deposit my funds into my bank account?", {
        'heads': [3, 3, 3, 3, 5, 3, 3, 9, 9, 6, 3],
        'deps': ['ROOT', '-', '-', 'INTENT', '-', '-', '-', '-', '-', 'OBJECT', '-']
    }),
    ("How do I fill out feedback forms?", {
        'heads': [3, 3, 3, 3, 3, 6, 3, 3],
        'deps': ['ROOT', '-', '-', 'INTENT', '-', '-', 'OBJECT', '-']
    }),
    # ("How does my profile impact my score?", {
    #     'heads': [4, 4, 4, 4, 4, 6, 4, 4],
    #     'deps': ['ROOT', '-', '-', '-', 'INTENT', '-', 'OBJECT' '-']
    # }),
    ("What are the fees?", {
        'heads': [1, 1, 3, 1, 1],
        'deps': ['ROOT', '-', '-', 'INTENT', '-']
    }),
    ("How do I update my profile picture?", {
        'heads': [3, 3, 3, 3, 6, 6, 3, 3],
        'deps': ['ROOT', '-', '-', 'INTENT', '-', 'OBJECT', 'OBJECT', '-']
    }),
    ("How do I add a referral to the marketplace?", {
        'heads': [3, 3, 3, 3, 5, 3, 3, 8, 6, 3],
        'deps': ['ROOT', '-', '-', 'INTENT', '-', 'OBJECT', '-', '-', 'OBJECT', '-']
    }),
]
@plac.annotations(
    model=("Model name. Defaults to blank 'en' model.", "option", "m", str),
    output_dir=("Optional output directory", "option", "o", Path),
    n_iter=("Number of training iterations", "option", "n", int))
def main(model=None, output_dir=None, n_iter=5):
    """Load the model, set up the pipeline and train the parser."""
    if model is not None:
        nlp = spacy.load(model)  # load existing spaCy model
        print("Loaded model '%s'" % model)
    else:
        nlp = spacy.blank('en')  # create blank Language class
        print("Created blank 'en' model")

    # We'll use the built-in dependency parser class, but we want to create a
    # fresh instance – just in case.
    if 'parser' in nlp.pipe_names:
        nlp.remove_pipe('parser')
    parser = nlp.create_pipe('parser')
    nlp.add_pipe(parser, first=True)

    # add new labels to the parser
    for text, annotations in TRAIN_DATA:
        for dep in annotations.get('deps', []):
            parser.add_label(dep)

    other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'parser']
    with nlp.disable_pipes(*other_pipes):  # only train parser
        optimizer = nlp.begin_training()
        for itn in range(n_iter):
            random.shuffle(TRAIN_DATA)
            losses = {}
            for text, annotations in TRAIN_DATA:
                nlp.update([text], [annotations], sgd=optimizer, losses=losses)
            print(losses)

    # test the trained model
    test_model(nlp)

    # save model to output directory
    if output_dir is not None:
        output_dir = Path(output_dir)
        if not output_dir.exists():
            output_dir.mkdir()
        nlp.to_disk(output_dir)
        print("Saved model to", output_dir)

        # test the saved model
        print("Loading from", output_dir)
        nlp2 = spacy.load(output_dir)
        test_model(nlp2)

def test_model(nlp):
    texts = ["How do I delete my account?"]
    docs = nlp.pipe(texts)
    for doc in docs:
        print(doc.text)
        print([(t.text, t.dep_, t.head.text) for t in doc if t.dep_ != '-'])

if __name__ == '__main__':
    plac.call(main)
This is the output:
How do I delete my account?
[(u'How', u'ROOT', u'delete'), (u'delete', u'ROOT', u'delete'), (u'account', u'OBJECT', u'delete')]
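One thing worth noting: in every training example, `deps[0]` — the token "How"/"What" — is labelled 'ROOT', which may be why 'How' comes out as ROOT at test time. Separately, `heads`/`deps` must line up one-to-one with spaCy's tokenization, and a quick length check catches silent mismatches before training (a sketch using the first example, with a crude string-level stand-in for the real tokenizer):

```python
TRAIN_DATA = [
    ("How do I delete my account?", {
        "heads": [3, 3, 3, 3, 5, 3, 3],
        "deps": ["ROOT", "-", "-", "INTENT", "-", "OBJECT", "-"],
    }),
]

for text, ann in TRAIN_DATA:
    tokens = text.replace("?", " ?").split()  # crude stand-in for nlp's tokenizer
    assert len(tokens) == len(ann["heads"]) == len(ann["deps"]), text
print("annotation lengths OK")
```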
| 1 | 1 | 0 | 1 | 0 | 0 |
I am trying to create a new language model (Luxembourgish) in spaCy, but I am confused on how to do this.
I followed the instructions on their website and did something similar to this post. But what I do not understand is how to add data like a vocab or word vectors (e.g. how to "fill" the language template).
I get that there are some dev tools for some of these operations, but their execution is poorly documented, so I do not understand how to install and use them properly — especially as they seem to be in Python 2.7, which clashes with my spaCy installation, which uses Python 3.
As of now I have a corpus.txt (from a Wikipedia dump) that I want to train on, and a language template with the defaults like stop_words.py, tokenizer_exceptions.py etc. that I created and filled by hand.
Has anyone ever done this properly and could help me here?
| 1 | 1 | 0 | 0 | 0 | 0 |
I am doing sentiment analysis on tweets. Most of the tweets contain short words, and I want to replace them with the original/full word.
Suppose that tweet is:
I was wid Ali.
I want to convert:
wid -> with
Similarly
wud -> would
u -> you
r -> are
I have 6,000 tweets containing lots of short words.
How can I replace them? Is there any library available in Python for this task, or any dictionary of short words available online?
I read the answer to the Replace appostrophe/short words in python question, but it provides a dictionary of apostrophes only.
Currently I am using NLTK, but this task is not possible with NLTK alone.
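No standard library ships a slang dictionary, so the mapping itself has to be supplied (community lists exist online); given one, a regex with word boundaries does the replacement safely. A sketch using the entries from the post:

```python
import re

# the slang map is an assumption: extend it with your own entries
slang = {"wid": "with", "wud": "would", "u": "you", "r": "are"}

def expand(tweet):
    # \b keeps single letters like 'u' from matching inside words like 'you'
    pattern = r"\b(" + "|".join(slang) + r")\b"
    return re.sub(pattern, lambda m: slang[m.group(1)], tweet)

print(expand("I was wid Ali."))  # I was with Ali.
```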
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a list of string representations of sentences that looks something like this:
original_format = ["This is a question", "This is another question", "And one more too"]
I want to convert this list into a set of unique words in my corpus. Given the above list, the output would look something like this:
{'And', 'This', 'a', 'another', 'is', 'more', 'one', 'question', 'too'}
I've figured out a way to do this, but it takes a very long time to run. I am interested in a more efficient way of converting from one format to another (especially since my actual dataset contains >200k sentences).
FYI, what I'm doing right now is creating an empty set for the vocab and then looping through each sentence (split by spaces) and unioning with the vocab set. Using the original_format variable as defined above, it looks like this:
vocab = set()
for q in original_format:
vocab = vocab.union(set(q.split(' ')))
Can you help me run this conversion more efficiently?
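For what it's worth, the slow part is likely `vocab.union(...)`, which builds a brand-new set on every iteration; `set.update` mutates in place, and a set comprehension does the same in one pass:

```python
original_format = ["This is a question", "This is another question", "And one more too"]

# in-place update: no per-sentence set copies
vocab = set()
for q in original_format:
    vocab.update(q.split(' '))

# equivalent one-pass comprehension
vocab2 = {w for q in original_format for w in q.split(' ')}
print(sorted(vocab))
```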
| 1 | 1 | 0 | 0 | 0 | 0 |
I want to have a model that only predicts a certain syntactic category, for example verbs. Can I update the weights of the LSTM so that they are set to 1 if the word is a verb and 0 if it is any other category?
This is my current code:
model = Sequential()
model.add(Embedding(vocab_size, embedding_size, input_length=5, weights=[pretrained_weights]))
model.add(Bidirectional(LSTM(units=embedding_size)))
model.add(Dense(2000, activation='softmax'))

for e in zip(model.layers[-1].trainable_weights, model.layers[-1].get_weights()):
    print('Param %s:\n%s' % (e[0], e[1]))

weights = [layer.get_weights() for layer in model.layers]
print(weights)
print(model.summary())

# compile network
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(lr=0.001),
              metrics=['accuracy'])

# fit network
history = model.fit(X_train_fit, y_train_fit, epochs=100, verbose=2, validation_data=(X_val, y_val))
score = model.evaluate(x=X_test, y=y_test, batch_size=32)
These are the weights that I am returning:
Param <tf.Variable 'dense_1/kernel:0' shape=(600, 2000) dtype=float32_ref>:
[[-0.00803087 0.0332068 -0.02052244 ... 0.03497869 0.04023124
-0.02789269]
[-0.02439511 0.02649114 0.00163587 ... -0.01433908 0.00598045
0.00556619]
[-0.01622458 -0.02026448 0.02620039 ... 0.03154427 0.00676246
0.00236203]
...
[-0.00233192 0.02012364 -0.01562861 ... -0.01857186 -0.02323328
0.01365903]
[-0.02556716 0.02962652 0.02400535 ... -0.01870854 -0.04620285
-0.02111554]
[ 0.01415684 -0.00216265 0.03434955 ... 0.01771339 0.02930249
0.002172 ]]
Param <tf.Variable 'dense_1/bias:0' shape=(2000,) dtype=float32_ref>:
[0. 0. 0. ... 0. 0. 0.]
[[array([[-0.023167 , -0.0042483, -0.10572 , ..., 0.089398 , -0.0159 ,
0.14866 ],
[-0.11112 , -0.0013859, -0.1778 , ..., 0.063374 , -0.12161 ,
0.039339 ],
[-0.065334 , -0.093031 , -0.017571 , ..., 0.16642 , -0.13079 ,
0.035397 ],
and so on.
Can I do it by updating the weights? Or is there a more efficient way to be able to only output verbs?
Thank you for the help!
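Rather than hand-setting LSTM weights, one common alternative (an assumption about the goal, not the only option) is to mask the softmax output: zero the probability of every non-verb vocabulary index and renormalise, so only verbs can be predicted:

```python
import numpy as np

probs = np.array([0.1, 0.4, 0.2, 0.3])   # stand-in for one row of model.predict()
verb_mask = np.array([0, 1, 0, 1])       # 1 where the vocab entry is a verb (hypothetical)

masked = probs * verb_mask               # zero out non-verb indices
masked /= masked.sum()                   # renormalise to a distribution
print(masked)                            # only verb indices keep probability mass
```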
| 1 | 1 | 0 | 0 | 0 | 0 |
I loaded frames from video files with OpenCV into an array, and I used sklearn to split the data into X_train and X_test.
My X_train.shape is (363, 1, 40, 40, 15). Currently I'm working with 4 classes, and the model I'm using to learn from this data is coded below:
model = Sequential()
model.add(Conv3D(32, (3,3,3), activation='relu', input_shape=(1, 40, 40, 15), data_format='channels_first'))
model.add(MaxPooling3D(pool_size=(1, 2, 2), strides=(1, 2, 2)))
model.add(Conv3D(64, (3,3,3), activation='relu'))
model.add(MaxPooling3D(pool_size=(1, 2, 2), strides=(1, 2, 2)))
model.add(Conv3D(128, (3,3,3), activation='relu'))
model.add(Conv3D(128, (3,3,3), activation='relu'))
model.add(MaxPooling3D(pool_size=(1, 2, 2), strides=(1, 2, 2)))
model.add(Conv3D(256, (2,2,2), activation='relu'))
model.add(Conv3D(256, (2,2,2), activation='relu'))
model.add(MaxPooling3D(pool_size=(1, 2, 2), strides=(1, 2, 2)))
model.add(Flatten())
model.add(Dense(1024))
model.add(Dropout(0.5))
model.add(Dense(1024))
model.add(Dropout(0.5))
model.add(Dense(4, activation='softmax'))
I'm getting this error when I try to load the model:
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'conv3d_44/convolution' (op: 'Conv3D') with input shapes: [?,25,1,1,256], [2,2,2,256,256].
Can someone help me?
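A rough shape trace under 'valid' padding shows where the dimensions collapse: each `Conv3D(k)` removes k-1 from every spatial dim and each pool floor-divides, and with spatial dims starting at (40, 40, 15) the last one hits zero by the third convolution. (This is an approximation — note that `data_format='channels_first'` is only set on the first layer, and the later layers default to channels_last, which is likely how the exact shapes in the error message arise.)

```python
def conv(s, k):
    # 'valid' Conv3D output size per spatial dim
    return tuple(d - k + 1 for d in s)

def pool(s, p):
    # MaxPooling3D floor-divides each spatial dim
    return tuple(d // q for d, q in zip(s, p))

s = (40, 40, 15)
s = conv(s, 3)           # after Conv3D(32):  (38, 38, 13)
s = pool(s, (1, 2, 2))   # after pooling:     (38, 19, 6)
s = conv(s, 3)           # after Conv3D(64):  (36, 17, 4)
s = pool(s, (1, 2, 2))   # after pooling:     (36, 8, 2)
s = conv(s, 3)           # after Conv3D(128): (34, 6, 0) -- already invalid
print(s)
```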
| 1 | 1 | 0 | 1 | 0 | 0 |
I have a long list of 1.5m sentences and a similarly long list of words that I am looking for within the list of sentences. For example:
list_of_words = ['Turin', 'Milan']
list_of_sents = ['This is a sent about turin.', 'This is a sent about manufacturing.']
I have the following function that is able to quickly identify those sentences with the keywords and computational time is rather important so I would like to avoid for loops if able:
def find_keyword_comments(test_comments, test_keywords):
    keywords = '|'.join(test_keywords)
    word = re.compile(r"^.*\b({})\b.*$".format(keywords), re.I)
    newlist = filter(word.match, test_comments)
    final = list(newlist)
    return final
Instead of returning a list of strings that contain the keyword, I would like it to return a list of tuples with the word matched on and the string that contains the location. So it currently returns:
final = ['This is a sent about turin.']
and I would like it to return
final = [('Turin', 'This is a sent about turin.')]
Is there a syntax functionality that I am misusing or forgetting?
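The match object already carries the captured keyword in `group(1)`, so keeping the match (rather than only filtering on it) gives both pieces. A sketch — note the group returns the text as it appears in the sentence ('turin', lowercase), so map back to canonical casing if needed:

```python
import re

list_of_words = ['Turin', 'Milan']
list_of_sents = ['This is a sent about turin.', 'This is a sent about manufacturing.']

word = re.compile(r"\b({})\b".format("|".join(list_of_words)), re.I)

final = []
for s in list_of_sents:
    m = word.search(s)
    if m:
        final.append((m.group(1), s))  # (matched keyword, sentence)
print(final)
```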
| 1 | 1 | 0 | 0 | 0 | 0 |
Is it possible to use non-standard part of speech tags when making a grammar for chunking in the NLTK? For example, I have the following sentence to parse:
complication/patf associated/qlco with/prep breast/noun surgery/diap
independent/adj of/prep the/det use/inpr of/prep surgical/diap device/medd ./pd
Locating the phrases I need from the text is greatly assisted by specialized tags such as "medd" or "diap". I thought that because you can use RegEx for parsing, it would be independent of anything else, but when I try to run the following code, I get an error:
grammar = r'TEST: {<diap>}'
cp = nltk.RegexpParser(grammar)
cp.parse(sentence)
ValueError: Transformation generated invalid chunkstring:
<patf><qlco><prep><noun>{<diap>}<adj><prep><det><inpr><prep>{<diap>}<medd><pd>
I think this has to do with the tags themselves, because the NLTK can't generate a tree from them, but is it possible to skip that part and just get the chunked items returned? Maybe the NLTK isn't the best tool, and if so, can anyone recommend another module for chunking text?
I'm developing in python 2.7.6 with the Anaconda distribution.
Thanks in advance!
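One thing to double-check regardless of the tag set (whether the chunker then accepts arbitrary tags can depend on the NLTK version): `RegexpParser.parse` expects a list of `(token, tag)` tuples, not a raw string. Converting the slash-tagged line is straightforward:

```python
raw = "complication/patf associated/qlco with/prep breast/noun surgery/diap"

# rsplit on the last '/' keeps any tokens that themselves contain slashes
sentence = [tuple(tok.rsplit("/", 1)) for tok in raw.split()]
print(sentence[0])  # ('complication', 'patf')
```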
| 1 | 1 | 0 | 0 | 0 | 0 |
My goal is to create a basic program which semantically compares strings and decides which is more similar (in terms of semantics) to which. For now I did not want to build a new (doc2vec?) model from scratch in NLTK, scikit-learn, or gensim; instead I wanted to test the already existing APIs which can do semantic analysis.
Specifically, I chose to test ParallelDots AI API and for this reason I wrote the following program in python:
import paralleldots

api_key = "*******************************************"
paralleldots.set_api_key(api_key)

phrase1 = "I have a swelling on my eyelid"
phrase2 = "I have a lump on my hand"
phrase3 = "I have a lump on my lid"

print(phrase1, " VS ", phrase3, "\n")
print(paralleldots.similarity(phrase1, phrase3), "\n")
print(phrase2, " VS ", phrase3, "\n")
print(paralleldots.similarity(phrase2, phrase3))
This is the response I get from the API:
I have a swelling on my eyelid VS I have a lump on my lid
{'normalized_score': 1.38954, 'usage': 'By accessing ParallelDots API or using information generated by ParallelDots API, you are agreeing to be bound by the ParallelDots API Terms of Use: http://www.paralleldots.com/terms-and-conditions', 'actual_score': 0.114657, 'code': 200}
I have a lump on my hand VS I have a lump on my lid
{'normalized_score': 3.183968, 'usage': 'By accessing ParallelDots API or using information generated by ParallelDots API, you are agreeing to be bound by the ParallelDots API Terms of Use: http://www.paralleldots.com/terms-and-conditions', 'actual_score': 0.323857, 'code': 200}
This response is rather disappointing for me. It is obvious that the phrase
I have a lump on my lid
is almost semantically identical to the phrase
I have a swelling on my eyelid
and it is also related to the phrase
I have a lump on my hand
as they are referring to lumps but obviously it is not at all as close as to the former one. However, ParallelDots AI API outputs almost the exact opposite results.
If I am right, ParallelDots AI API is one of the most popular APIs for semantic analysis, along with others such as Dandelion API, but it returns such disappointing results. I expected that these APIs would be using rich databases of synonyms. I have also tested Dandelion API with these three phrases, and the results are poor too (actually even worse).
What can I fix at my program above to retrieve more reasonable results?
Is there any other faster way to semantically compare strings?
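As a point of comparison, a plain token-overlap baseline makes the very same mistake, which suggests the APIs are scoring surface overlap ("lump on my") rather than the swelling/lump and eyelid/lid synonymy. A quick local Jaccard check:

```python
def jaccard(a, b):
    # token-set Jaccard similarity: purely surface-level, no synonymy
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

p1 = "I have a swelling on my eyelid"
p2 = "I have a lump on my hand"
p3 = "I have a lump on my lid"

print(jaccard(p1, p3))  # lower, despite being semantically closer
print(jaccard(p2, p3))  # higher, on shared surface tokens alone
```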
| 1 | 1 | 0 | 0 | 0 | 0 |
We have n documents. Upon submission of a new document by a user, our goal is to inform them about possible duplication of an existing document (just like Stack Overflow suggests that a question may already have an answer).
In our system, a new document is uploaded every minute, mostly about the same topic (where there is more chance of duplication).
Our current implementation uses a gensim doc2vec model trained on documents (tagged with unique document IDs). We infer a vector for the new document and find the most_similar docs (IDs). The reason for choosing doc2vec is that we wanted to take advantage of semantics to improve results. As far as we know, it does not support online training, so we might have to schedule a cron job or something that periodically updates the model. But scheduling a cron job is disadvantageous because documents come in bursts: a user may upload duplicates while the model is not yet trained on the new data. Also, given the huge amount of data, training time will be high.
So I would like to know how such cases are handled in big companies. Is there any better alternative, or a better algorithm for such a problem?
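One lightweight complement to the doc2vec index (a suggestion, not how any particular company does it): character-shingle Jaccard similarity catches near-verbatim duplicates immediately, with no model retraining, and can gate submissions while the semantic model lags behind:

```python
def shingles(text, k=5):
    # normalise whitespace, then take the set of k-character substrings
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def similarity(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

doc_a = "Quarterly report on regional sales performance"
doc_b = "Quarterly report on regional sales performance for 2018"
print(similarity(doc_a, doc_b) > 0.7)  # near-duplicates score high
```

At scale this is usually paired with MinHash/LSH so candidates are found without pairwise comparison.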
| 1 | 1 | 0 | 1 | 0 | 0 |
Is there any method to obtain the page number of a particular section in a PDF using pdfminer, or any other package suitable for Python? I need to obtain the page number of the index section of a PDF.
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm researching and trying to implement a Q-Learning example. So far, I've been able to follow the code slowly by breaking it apart and figuring out how it works; however, I've stumbled upon a tiny snippet that I can't figure out the reason for...
action = np.argmax(q_learning_table[state,:] + np.random.randn(1, 4))
From what I gather, an action is being chosen from the Q-Learning table but only from a specific row in the matrix, whatever value state is. What I don't understand is why the need for the np.random.randn(1, 4).
Locally, I've done the following to try and understand it:
A = np.matrix([[0, 0, 5, 0], [4, 0, 0, 0], [0, 0, 0, 9]])
a = np.argmax(A[2,:] + 100)
print(a)
My understanding is that I should see the result 103 rather than 3 (the location of 9). So why do I still see 3? What's the purpose of adding 100?
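The key point (which answers the local experiment, though the original exploration question is the poster's to interpret): `argmax` returns the *index* of the largest entry, not its value, so adding a constant shifts every value equally and leaves the returned index unchanged:

```python
import numpy as np

A = np.array([[0, 0, 5, 0], [4, 0, 0, 0], [0, 0, 0, 9]])
print(np.argmax(A[2, :] + 100))  # 3 -- the index of 109, not 109 itself

# per-element random noise, by contrast, CAN change which index is largest;
# that is the role of np.random.randn(1, 4) in the Q-learning line: it breaks
# ties and injects random exploration among near-equal Q values
noisy = A[2, :] + np.random.randn(4)
print(int(np.argmax(noisy)))
```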
| 1 | 1 | 0 | 0 | 0 | 0 |
Hey guys, so I have this txt file which is like the following:
parent text
-reply to first text
--reply to second text
now what i want is that something like the following:
parent text
- reply to parent text
-reply to parent text
-- reply to second text
I know I can do this in Python:
group = re.findall(r"--",data)
which will get all the `--` without the text following them, but since each text has a varying number of `-`, I'm confused about how to process the data. Any kind of insight?
Edit 1:
This is my data:
Why can't I find a girlfriend?
-/u/remainenthroned
-try tinder
--r/incels
--Maybe you should find a TreeHugger.
---Got friendzoned there once, so nah
----Trees are not so good with motion, you know.
-----try grindr
which after .split() looks like:
"Why can't I find a girlfriend?",
'-/u/remainenthroned ',
'-try tinder',
'--r/incels',
'--Maybe you should find a TreeHugger.',
'---Got friendzoned there once, so nah',
'----Trees are not so good with motion, you know.',
'-----try grindr',
what I would want is:
"Why can't I find a girlfriend?" -/u/remainenthroned,
"Why can't I find a girlfriend?" -try tinder'
-/u/remainenthroned --r/incels',',
so on and so forth
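If I read the desired output right (an assumption: each reply should be paired with its parent), a stack keyed on the number of leading dashes (the depth) does it in one pass:

```python
import re

lines = [
    "Why can't I find a girlfriend?",
    "-/u/remainenthroned",
    "-try tinder",
    "--r/incels",
    "--Maybe you should find a TreeHugger.",
    "---Got friendzoned there once, so nah",
]

pairs = []
stack = []  # stack[d] = most recent comment seen at depth d
for line in lines:
    depth = len(re.match(r"-*", line).group(0))  # count leading dashes
    text = line[depth:]
    stack = stack[:depth] + [text]
    if depth > 0:
        pairs.append((stack[depth - 1], text))   # (parent, reply)

print(pairs[0])
```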
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a webpage of unordered lists, and I want to turn them into a pandas dataframe as the first step of an NLP workflow.
import pandas as pd
from bs4 import BeautifulSoup
html = '''<html>
<body>
<ul>
<li>
Name
<ul>
<li>Many</li>
<li>Stories</li>
</ul>
</li>
</ul>
<ul>
<li>
More
</li>
</ul>
<ul>
<li>Stuff
<ul>
<li>About</li>
</ul>
</li>
</ul>
</body>
</html>'''
soup = BeautifulSoup(html, 'lxml')
The goal is for each top level list to turn into a dataframe, that would look something like this output:
0 1 2
0 Name Many Stories
1 More null null
2 Stuff About null
I tried to use the following code to get all the list items (complete with sublists)
target = soup.find_all('ul')
But it returns duplicated output — nested lists appear both inside their parent `<li>` and again on their own:
[<li>
Name
<ul>
<li>Many</li>
<li>Stories</li>
</ul>
</li>, <li>Many</li>, <li>Stories</li>, <li>
More
</li>, <li>Stuff
<ul>
<li>About</li>
</ul>
</li>, <li>About</li>]
Really lost here. Thanks.
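The duplication comes from `find_all('ul')` matching nested `<ul>` elements too. One option (a sketch, using the stdlib `html.parser` backend in place of lxml) is to skip any `<ul>` that has a `<ul>` ancestor, then flatten each surviving top-level list into one row:

```python
import pandas as pd
from bs4 import BeautifulSoup

html = ("<ul><li>Name<ul><li>Many</li><li>Stories</li></ul></li></ul>"
        "<ul><li>More</li></ul>")
soup = BeautifulSoup(html, "html.parser")

rows = []
for ul in soup.find_all("ul"):
    if ul.find_parent("ul"):
        continue  # nested list: already flattened into its parent's row
    rows.append(list(ul.stripped_strings))

df = pd.DataFrame(rows)  # short rows are padded with None
print(df)
```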
| 1 | 1 | 0 | 0 | 0 | 0 |
So I'm currently trying to identify comments wherein a person is talking about themselves. I'm using spaCy and have chosen the dependency labels 'nsubj', 'poss' and 'nsubjpass' as indicator tags for first person. Of course, this fails with more complex sentences such as
"Yeah, mostly delusion. Occasionally laughter"
or "Brain down dragging in the dirt, parasites and grubs all around the folds. Whisper whisper, will it all go away?"
Yeah intj
mostly advmod
delusions ROOT
. punct
Occasional amod
voices ROOT
. punct
Would appreciate some help to identify such sentences as being personal.
Thanks
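A crude lexical heuristic can complement the dependency labels, though note it will still miss elliptical, pronoun-less sentences like the quoted examples (a sketch, no spaCy needed):

```python
FIRST_PERSON = {"i", "me", "my", "mine", "myself", "we", "us", "our", "ours"}

def is_personal(sentence):
    # flag a sentence when it contains a first-person pronoun token
    tokens = sentence.lower().replace(",", " ").replace(".", " ").split()
    return any(t in FIRST_PERSON for t in tokens)

print(is_personal("I lost my keys"))       # True
print(is_personal("The weather is nice"))  # False
```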
| 1 | 1 | 0 | 0 | 0 | 0 |
I would like to extract the origin and destination from the given text.
For example,
I am travelling from London to New York.
I am flying to Sydney from Singapore.
Origin --> London, Singapore.
Destination --> Sydney, New York.
NER would give only the Location names, but couldn't fetch the Origin and destination.
Is it possible to train a neural model to detect these?
I have tried training the neural networks to classify the text like,
{"tag": "Origin",
"patterns": ["Flying from ", "Travelling from ", "My source is", ]
This way we could classify the text as origin, but I need to get the values as well (London , Singapore in this case).
Is there any way we can achieve this?
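Before training anything, a rule-based baseline is worth having (a very rough sketch: "from X" / "to X" patterns over capitalised place names, which a trained NER/relation model would eventually replace):

```python
import re

sents = ["I am travelling from London to New York.",
         "I am flying to Sydney from Singapore."]

place = r"((?:[A-Z][a-z]+ ?)+)"  # one or more capitalised words

origins, dests = [], []
for s in sents:
    m = re.search(r"from " + place, s)
    if m:
        origins.append(m.group(1).strip())
    m = re.search(r"to " + place, s)
    if m:
        dests.append(m.group(1).strip())

print(origins, dests)  # ['London', 'Singapore'] ['New York', 'Sydney']
```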
| 1 | 1 | 0 | 0 | 0 | 0 |
I used Anaconda to install spaCy, as per the instructions given on the download page.
When I run the following code to download the English models
python -m spacy download en
I get the following error.
/anaconda3/bin/python: No module named spacy.__main__; 'spacy' is a package and cannot be directly executed
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm a beginner in Reinforcement Learning and was trying to implement a DQN to solve the CartPole-v0 task in the OpenAI Gym. Unfortunately, my implementation's performance does not seem to be improving.
Currently, as the training occurs, the episode reward actually decreases whereas the goal is to find better policies that increase this value.
I am using experience replay and a separate target network to back up my q values. I tried adding/deleting layers and neurons in the agent; this did not work. I altered the schedule for decaying the exploration rate; this did not work either. I've grown increasingly convinced that something's wrong with my loss function, but I'm not sure how I could change it to improve performance.
Here's my code for the loss function:
with tf.variable_scope('loss'):
    one_hot_mask = self.one_hot_actions
    eval = tf.reduce_max(self.q * one_hot_mask, axis=1)
    print(eval)
    trg = tf.reduce_max(self.q_targ, axis=1) * self.gamma
    print(trg)
    label = trg + self.rewards
    self.loss = tf.reduce_mean(tf.square(label - eval))
Where one_hot_actions is a placeholder defined as:
self.one_hot_actions = tf.placeholder(tf.float32, [None, self.env.action_space.n], 'one_hot_actions')
Any help is greatly appreciated. Here's my full code:
import tensorflow as tf
import numpy as np
import gym
import sys
import random
import math
import matplotlib.pyplot as plt
class Experience(object):
    """Experience buffer for experience replay"""
    def __init__(self, size):
        super(Experience, self).__init__()
        self.size = size
        self.memory = []

    def add(self, sample):
        self.memory.append(sample)
        if len(self.memory) > self.size:
            self.memory.pop(0)

class Agent(object):
    def __init__(self, env, ep_max, ep_len, gamma, lr, batch, epochs, s_dim, minibatch_size):
        super(Agent, self).__init__()
        self.ep_max = ep_max
        self.ep_len = ep_len
        self.gamma = gamma
        self.experience = Experience(100)
        self.lr = lr
        self.batch = batch
        self.minibatch_size = minibatch_size
        self.epochs = epochs
        self.s_dim = s_dim
        self.sess = tf.Session()
        self.env = gym.make(env).unwrapped
        self.state_0s = tf.placeholder(tf.float32, [None, self.s_dim], 'state_0s')
        self.actions = tf.placeholder(tf.int32, [None, 1], 'actions')
        self.rewards = tf.placeholder(tf.float32, [None, 1], 'rewards')
        self.states = tf.placeholder(tf.float32, [None, self.s_dim], 'states')
        self.one_hot_actions = tf.placeholder(tf.float32, [None, self.env.action_space.n], 'one_hot_actions')
        # q nets
        self.q, q_params = self.build_dqn('primary', trainable=True)
        self.q_targ, q_targ_params = self.build_dqn('target', trainable=False)
        with tf.variable_scope('update_target'):
            self.update_target_op = [targ_p.assign(p) for p, targ_p in zip(q_params, q_targ_params)]
        with tf.variable_scope('loss'):
            one_hot_mask = self.one_hot_actions
            eval = tf.reduce_max(self.q * one_hot_mask, axis=1)
            print(eval)
            trg = tf.reduce_max(self.q_targ, axis=1) * self.gamma
            print(trg)
            label = trg + self.rewards
            self.loss = tf.reduce_mean(tf.square(label - eval))
        with tf.variable_scope('train'):
            self.train_op = tf.train.AdamOptimizer(self.lr).minimize(self.loss)
        tf.summary.FileWriter("log/", self.sess.graph)
        self.sess.run(tf.global_variables_initializer())

    def build_dqn(self, name, trainable):
        with tf.variable_scope(name):
            if name == "primary":
                l1 = tf.layers.dense(self.state_0s, 100, tf.nn.relu, trainable=trainable)
            else:
                l1 = tf.layers.dense(self.states, 100, tf.nn.relu, trainable=trainable)
            l2 = tf.layers.dense(l1, 50, tf.nn.relu, trainable=trainable)
            q = tf.layers.dense(l2, self.env.action_space.n, trainable=trainable)
            params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=name)
        return q, params

    def choose_action(self, s, t):
        s = s[np.newaxis, :]
        if random.uniform(0, 1) < self.get_explore_rate(t):
            a = self.env.action_space.sample()
        else:
            a = np.argmax(self.sess.run(self.q, {self.state_0s: s})[0])
        return a

    def get_explore_rate(self, t):
        return max(0.01, min(1, 1.0 - math.log10((t + 1) / 25)))

    def update(self):
        # experience is [ [s_0, a, r, s], [s_0, a, r, s], ... ]
        self.sess.run(self.update_target_op)
        indices = np.random.choice(range(len(self.experience.memory)), self.batch)
        # indices = range(len(experience))
        state_0 = [self.experience.memory[index][0] for index in indices]
        a = [self.experience.memory[index][1] for index in indices]
        rs = [self.experience.memory[index][2] for index in indices]
        state = [self.experience.memory[index][3] for index in indices]
        [self.sess.run(self.train_op, feed_dict={self.state_0s: state_0,
            self.one_hot_actions: a, self.rewards: np.asarray(rs).reshape([-1, 1]),
            self.states: state}) for _ in range(self.epochs)]

    def run(self):
        all_ep_r = []
        for ep in range(self.ep_max):
            s_0 = self.env.reset()
            ep_r = 0
            for t in range(self.ep_len):
                fake_ac = [0.0, 0.0]  # used to make one hot actions
                # self.env.render()
                a = self.choose_action(s_0, ep)
                s, r, done, _ = self.env.step(a)
                if done:
                    s = np.zeros(np.shape(s_0))
                fake_ac[a] = 1.0
                print(fake_ac)
                self.experience.add([s_0, fake_ac, r, s])
                s_0 = s
                ep_r += r
                if done:
                    break
            all_ep_r.append(ep_r)
            print(
                'Ep: %i' % ep,
                "|Ep_r: %i" % ep_r,
            )
            if len(self.experience.memory) > self.batch - 1:
                self.update()
        return all_ep_r
agent = Agent("CartPole-v0", 200, 200, 0.99, 0.00025, 32, 10, 4, 16)
all_ep_r = agent.run()
plt.plot(range(len(all_ep_r)), all_ep_r)
plt.show()
| 1 | 1 | 0 | 1 | 0 | 0 |
I have a list of several thousands locations and a list of millions of sentences. My objective is to return a list of tuples that report the comment that was matched and the location mentioned within the comment. For example:
locations = ['Turin', 'Milan']
state_init = ['NY', 'OK', 'CA']
sent = ['This is a sent about turin. ok?', 'This is a sent about Melan.', 'Alan Turing was not from the state of OK.']
result = [('Turin', 'This is a sent about turin. ok?'), ('Milan', 'This is a sent about Melan.'), ('OK', 'Alan Turing was not from the state of OK.')]
In words, I do not want to match on locations embedded within other words, I do not want to match state initials if they are not capitalized. If possible, I would like to catch misspellings or fuzzy matches of locations that either omit a correct letter, replace one correct letter with an incorrect letter or have one error in the ordering of all of the correct letters. For example:
Milan
should match
Melan, Mlian, or Mlan but not Milano
The below function works very well at doing everything except the fuzzy matching and returning a tuple but I do not know how to do either of these things without a for loop. Not that I am against using a for loop but I still would not know how to implement this in a way that is computationally efficient.
Is there a way to add these functionalities that I am interested in having or am I trying to do too much in a single function?
def find_keyword_comments(sents, locations, state_init):
    keywords = '|'.join(locations)
    keywords1 = '|'.join(state_init)
    word = re.compile(r"^.*\b({})\b.*$".format(keywords), re.I)
    word1 = re.compile(r"^.*\b({})\b.*$".format(keywords1))
    newlist = filter(word.match, sents)
    newlist1 = filter(word1.match, sents)
    final = list(newlist) + list(newlist1)
    return final
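A hedged sketch of the fuzzy rule described above (the function names are mine, not from the question): "one omitted letter, one substituted letter, or one swap of adjacent letters" can be checked directly, without fuzzy regex. Note that an extra letter (Milano for Milan) deliberately does not match, per the stated requirement:

```python
def near_match(candidate, target):
    """True if candidate equals target (case-insensitive), or differs by exactly
    one deleted letter, one substituted letter, or one adjacent-letter swap.
    Insertions (e.g. 'milano' for 'milan') do NOT match."""
    c, t = candidate.lower(), target.lower()
    if c == t:
        return True
    # one omitted letter: candidate is target with one character removed
    if len(c) == len(t) - 1:
        return any(t[:i] + t[i + 1:] == c for i in range(len(t)))
    if len(c) == len(t):
        diffs = [i for i in range(len(t)) if c[i] != t[i]]
        if len(diffs) == 1:        # one substituted letter
            return True
        if len(diffs) == 2:        # one adjacent transposition
            i, j = diffs
            return j == i + 1 and c[i] == t[j] and c[j] == t[i]
    return False

def find_fuzzy_locations(sents, locations):
    results = []
    for sentence in sents:
        for token in sentence.replace('.', ' ').replace(',', ' ').split():
            for loc in locations:
                if near_match(token, loc):
                    results.append((loc, sentence))
    return results
```

This is O(tokens × locations), so for millions of sentences one would still run the exact-regex pass first and apply this check only to the sentences (or the vocabulary of tokens) that the exact pass missed.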
| 1 | 1 | 0 | 0 | 0 | 0 |
I've checked other questions and read their solutions; they do not work. I've tested the regular expression and it works on non-locale characters. The code simply finds any capital letter in a string and does some procedure on it. For example, minikŞeker bir kedi should return kŞe, but my code does not recognize Ş as a letter within [A-Z]. When I try re.LOCALE, as some people suggest, I get the error ValueError: cannot use LOCALE flag with a str pattern; using re.UNICODE doesn't change the behaviour either.
import re
corp = "minikŞeker bir kedi"
pattern = re.compile(r"([\w]{1})()([A-Z]{1})", re.U)
corp = re.sub(pattern, r"\1 \3", corp)
print(corp)
It works for minikSeker bir kedi but doesn't work for minikŞeker bir kedi, and throws an error for re.L. The error I'm getting is ValueError: cannot use LOCALE flag with a str pattern. Searching for it yielded some git discussions but nothing useful.
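One locale-independent sketch (my own illustration, not from the question): Python 3 strings are Unicode-aware, so str.isupper() already recognizes Ş as uppercase, which sidesteps the [A-Z] range entirely. (The third-party regex module's \p{Lu} property class is another common route; the version below is stdlib-only.)

```python
def space_before_caps(text):
    # Insert a space between a word character and a following uppercase
    # letter, using Unicode-aware str methods instead of the ASCII-only [A-Z].
    out = []
    for i, ch in enumerate(text):
        if i > 0 and ch.isupper() and (text[i - 1].isalnum() or text[i - 1] == '_'):
            out.append(' ')
        out.append(ch)
    return ''.join(out)

print(space_before_caps("minikŞeker bir kedi"))  # → minik Şeker bir kedi
```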
| 1 | 1 | 0 | 0 | 0 | 0 |
I am not sure why we have an output vector of size only 32 while the LSTM has 100 units.
What confuses me is: if we only have 32-dimensional word vectors to feed into the LSTM, shouldn't an LSTM of size 32 be big enough to hold them?
Model.add(Embedding(5000, 32))
Model.add(LSTM(100))
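For intuition, a rough sketch of the shapes involved (a toy numpy illustration with made-up numbers, not Keras itself): Embedding(5000, 32) stores a 5000×32 lookup table and turns a sequence of token ids into a sequence of 32-dim vectors; LSTM(100) then reads those 32-dim vectors one step at a time while keeping a 100-dim hidden state. The 32 and the 100 are independent choices, so neither needs to "hold" the other:

```python
import numpy as np

vocab_size, embed_dim, hidden_units = 5000, 32, 100
embedding_matrix = np.random.rand(vocab_size, embed_dim)  # Embedding(5000, 32)

token_ids = np.array([7, 42, 4999])            # a 3-word sentence as indices
embedded = embedding_matrix[token_ids]          # shape (3, 32): one vector per token

# The LSTM's input weights map a 32-dim input (plus the 100-dim state) to 100
# dims, so the layer's output size is 100 regardless of the embedding size.
W_in = np.random.rand(embed_dim, hidden_units)  # part of LSTM(100)'s kernel
hidden = np.tanh(embedded[0] @ W_in)            # toy single gate, shape (100,)
```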
| 1 | 1 | 0 | 0 | 0 | 0 |
I am pretty much a beginner in tensorflow and simply following a tutorial. There is no problem with my code but I have a question regarding the output
accuracy: 0.95614034
accuracy_baseline: 0.6666666
auc: 0.97714674
auc_precision_recall: 0.97176754
average_loss: 0.23083039
global_step: 760
label/mean: 0.33333334
loss: 6.578666
prediction/mean: 0.3428335
I would like to know what prediction/mean and label/mean represent.
Thank you in advance
| 1 | 1 | 0 | 1 | 0 | 0 |
I am trying to implement the type of character level embeddings described in this paper in Keras. The character embeddings are calculated using a bidirectional LSTM.
To recreate this, I've first created a matrix containing, for each word, the indexes of the characters making up the word:
char2ind = {char: index for index, char in enumerate(chars)}
max_word_len = max([len(word) for sentence in sentences for word in sentence])
X_char = []
for sentence in X:
    for word in sentence:
        word_chars = []
        for character in word:
            word_chars.append(char2ind[character])
        X_char.append(word_chars)
X_char = sequence.pad_sequences(X_char, maxlen=max_word_len)
I then define a BiLSTM model with an embedding layer for the word-character matrix. I assume the input_dimension will have to be equal to the number of characters. I want a size of 64 for my character embeddings, so I set the hidden size of the BiLSTM to 32:
char_lstm = Sequential()
char_lstm.add(Embedding(len(char2ind) + 1, 64))
char_lstm.add(Bidirectional(LSTM(hidden_size, return_sequences=True)))
And this is where I get confused. How can I retrieve the embeddings from the model? I'm guessing I would have to compile the model and fit it, then retrieve the weights to get the embeddings, but what parameters should I use to fit it?
Additional details:
This is for an NER task, so the dataset technically could be anything in the word - label format, although I am specifically working with the WikiGold ConLL corpus available here: https://github.com/pritishuplavikar/Resume-NER/blob/master/wikigold.conll.txt
The expected output from the network are the labels (I-MISC, O, I-PER...)
I expect the dataset to be large enough to be training character embeddings directly from it. All words are coded with the index of their constituting characters, alphabet size is roughly 200 characters. The words are padded / cut to 20 characters. There are around 30 000 different words in the dataset.
I hope to be able to learn embeddings for each character based on the info from the different words. Then, as in the paper, I would concatenate the character embeddings with the word's GloVe embedding before feeding into a Bi-LSTM network with a final CRF layer.
I would also like to be able to save the embeddings so I can reuse them for other similar NLP tasks.
| 1 | 1 | 0 | 0 | 0 | 0 |
While I was experimenting with NLP, I was working on sarcasm detection, and in the meantime I put together this code.
sarcasmextractor.py
# coding: utf-8
# Importing the library
# In[2]:
import io
import sys
import os
import numpy as np
import pandas as pd
import nltk
import gensim
import csv, collections
from textblob import TextBlob
from sklearn.utils import shuffle
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report
from sklearn.feature_extraction import DictVectorizer
import pickle
import replace_emoji
# Define a class to load the SentimentWordnet and write methods to calculate the scores
# In[4]:
class load_senti_word_net(object):
    """
    Constructor to load the file and read the file as CSV.
    6 columns - pos, ID, PosScore, NegScore, synsetTerms, gloss.
    synsetTerms can have multiple similar words like abducting#1 abducent#1; we read each one and calculate the scores.
    """
    def __init__(self):
        sent_scores = collections.defaultdict(list)
        with io.open("SentiWordNet_3.0.0_20130122.txt") as fname:
            file_content = csv.reader(fname, delimiter='\t', quotechar='"')
            for line in file_content:
                if line[0].startswith('#'):
                    continue
                pos, ID, PosScore, NegScore, synsetTerms, gloss = line
                for terms in synsetTerms.split(" "):
                    term = terms.split("#")[0]
                    term = term.replace("-", "").replace("_", "")
                    key = "%s/%s" % (pos, term.split("#")[0])
                    try:
                        sent_scores[key].append((float(PosScore), float(NegScore)))
                    except:
                        sent_scores[key].append((0, 0))
        for key, value in sent_scores.items():
            sent_scores[key] = np.mean(value, axis=0)
        self.sent_scores = sent_scores

    """
    For a word,
    nltk.pos_tag(["Suraj"])
    [('Suraj', 'NN')]
    """
    def score_word(self, word):
        pos = nltk.pos_tag([word])[0][1]
        return self.score(word, pos)

    def score(self, word, pos):
        """
        Identify the type of POS, get the score from the senti_scores and return the score.
        """
        if pos[0:2] == 'NN':
            pos_type = 'n'
        elif pos[0:2] == 'JJ':
            pos_type = 'a'
        elif pos[0:2] == 'VB':
            pos_type = 'v'
        elif pos[0:2] == 'RB':
            pos_type = 'r'
        else:
            pos_type = 0
        if pos_type != 0:
            loc = pos_type + '/' + word
            score = self.sent_scores[loc]
            if len(score) > 1:
                return score
            else:
                return np.array([0.0, 0.0])
        else:
            return np.array([0.0, 0.0])

    """
    Repeat the same for a sentence
    nltk.pos_tag(word_tokenize("My name is Suraj"))
    [('My', 'PRP$'), ('name', 'NN'), ('is', 'VBZ'), ('Suraj', 'NNP')]
    """
    def score_sentencce(self, sentence):
        pos = nltk.pos_tag(sentence)
        print(pos)
        mean_score = np.array([0.0, 0.0])
        for i in range(len(pos)):
            mean_score += self.score(pos[i][0], pos[i][1])
        return mean_score

    def pos_vector(self, sentence):
        pos_tag = nltk.pos_tag(sentence)
        vector = np.zeros(4)
        for i in range(0, len(pos_tag)):
            pos = pos_tag[i][1]
            if pos[0:2] == 'NN':
                vector[0] += 1
            elif pos[0:2] == 'JJ':
                vector[1] += 1
            elif pos[0:2] == 'VB':
                vector[2] += 1
            elif pos[0:2] == 'RB':
                vector[3] += 1
        return vector
# Now let's extract the features
#
# ###Stemming and Lemmatization
# In[5]:
porter = nltk.PorterStemmer()
sentiments = load_senti_word_net()
# In[7]:
def gram_features(features, sentence):
    sentence_rep = replace_emoji.replace_reg(str(sentence))
    token = nltk.word_tokenize(sentence_rep)
    token = [porter.stem(i.lower()) for i in token]
    bigrams = nltk.bigrams(token)
    bigrams = [tup[0] + ' ' + tup[1] for tup in bigrams]
    grams = token + bigrams
    #print (grams)
    for t in grams:
        features['contains(%s)' % t] = 1.0
# In[8]:
import string
def sentiment_extract(features, sentence):
    sentence_rep = replace_emoji.replace_reg(sentence)
    token = nltk.word_tokenize(sentence_rep)
    token = [porter.stem(i.lower()) for i in token]
    mean_sentiment = sentiments.score_sentencce(token)
    features["Positive Sentiment"] = mean_sentiment[0]
    features["Negative Sentiment"] = mean_sentiment[1]
    features["sentiment"] = mean_sentiment[0] - mean_sentiment[1]
    #print(mean_sentiment[0], mean_sentiment[1])
    try:
        text = TextBlob(" ".join(["" + i if i not in string.punctuation and not i.startswith("'") else i for i in token]).strip())
        features["Blob Polarity"] = text.sentiment.polarity
        features["Blob Subjectivity"] = text.sentiment.subjectivity
        #print (text.sentiment.polarity, text.sentiment.subjectivity)
    except:
        features["Blob Polarity"] = 0
        features["Blob Subjectivity"] = 0
        print("do nothing")
    first_half = token[0:int(len(token) / 2)]
    mean_sentiment_half = sentiments.score_sentencce(first_half)
    features["positive Sentiment first half"] = mean_sentiment_half[0]
    features["negative Sentiment first half"] = mean_sentiment_half[1]
    features["first half sentiment"] = mean_sentiment_half[0] - mean_sentiment_half[1]
    try:
        text = TextBlob(" ".join(["" + i if i not in string.punctuation and not i.startswith("'") else i for i in first_half]).strip())
        features["first half Blob Polarity"] = text.sentiment.polarity
        features["first half Blob Subjectivity"] = text.sentiment.subjectivity
        #print (text.sentiment.polarity, text.sentiment.subjectivity)
    except:
        features["first Blob Polarity"] = 0
        features["first Blob Subjectivity"] = 0
        print("do nothing")
    second_half = token[int(len(token) / 2):]
    mean_sentiment_sechalf = sentiments.score_sentencce(second_half)
    features["positive Sentiment second half"] = mean_sentiment_sechalf[0]
    features["negative Sentiment second half"] = mean_sentiment_sechalf[1]
    features["second half sentiment"] = mean_sentiment_sechalf[0] - mean_sentiment_sechalf[1]
    try:
        text = TextBlob(" ".join(["" + i if i not in string.punctuation and not i.startswith("'") else i for i in second_half]).strip())
        features["second half Blob Polarity"] = text.sentiment.polarity
        features["second half Blob Subjectivity"] = text.sentiment.subjectivity
        #print (text.sentiment.polarity, text.sentiment.subjectivity)
    except:
        features["second Blob Polarity"] = 0
        features["second Blob Subjectivity"] = 0
        print("do nothing")
# In[9]:
features = {}
sentiment_extract(features,"a long narrow opening")
# In[11]:
def pos_features(features, sentence):
    sentence_rep = replace_emoji.replace_reg(sentence)
    token = nltk.word_tokenize(sentence_rep)
    token = [porter.stem(each.lower()) for each in token]
    pos_vector = sentiments.pos_vector(token)
    for j in range(len(pos_vector)):
        features['POS_' + str(j + 1)] = pos_vector[j]
    print("done")
# In[12]:
features = {}
pos_features(features,"a long narrow opening")
# In[13]:
def capitalization(features, sentence):
    count = 0
    for i in range(len(sentence)):
        count += int(sentence[i].isupper())
    features['Capitalization'] = int(count > 3)
    print(count)
# In[14]:
features = {}
capitalization(features,"A LoNg NArrow opening")
# In[15]:
import topic
topic_mod = topic.topic(nbtopic=200,alpha='symmetric')
# In[16]:
topic_mod = topic.topic(model=os.path.join('topics.tp'),dicttp=os.path.join('topics_dict.tp'))
# In[17]:
def topic_feature(features, sentence, topic_modeler):
    topics = topic_modeler.transform(sentence)
    for j in range(len(topics)):
        features['Topic :'] = topics[j][1]
# In[18]:
topic_feature(features,"A LoNg NArrow opening",topic_mod)
# In[19]:
def get_features(sentence, topic_modeler):
    features = {}
    gram_features(features, sentence)
    pos_features(features, sentence)
    sentiment_extract(features, sentence)
    capitalization(features, sentence)
    topic_feature(features, sentence, topic_modeler)
    return features
# In[20]:
df = pd.DataFrame()
df = pd.read_csv("dataset_csv.csv", header=0, sep='\t')
df.head()
# In[17]:
import re
featureset = []
for i in range(0, df.size):
    temp = str(df["tweets"][i])
    temp = re.sub(r'[^\x00-\x7F]+', '', temp)
    featureset.append((get_features(temp, topic_mod), df["label"][i]))
# In[20]:
c = []
for i in range(0, len(featureset)):
    c.append(pd.DataFrame(featureset[i][0], index=[i]))
result = pd.concat(c)
# In[22]:
result.insert(loc=0,column="label",value='0')
# In[23]:
for i in range(0, len(featureset)):
    result["label"].loc[i] = featureset[i][1]
# In[25]:
result.to_csv('feature_dataset.csv')
# In[3]:
df = pd.DataFrame()
df = pd.read_csv("feature_dataset.csv", header=0)
df.head()
# In[4]:
get_ipython().magic('matplotlib inline')
import matplotlib as matplot
import seaborn
result = df
# In[5]:
X = result.drop(['label','Unnamed: 0','Topic :'],axis=1).values
# In[6]:
Y = result['label']
# In[7]:
import pickle
import pefile
import sklearn.ensemble as ek
from sklearn import cross_validation, tree, linear_model
from sklearn.feature_selection import SelectFromModel
from sklearn.externals import joblib
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix
from sklearn.pipeline import make_pipeline
from sklearn import preprocessing
from sklearn import svm
from sklearn.linear_model import LinearRegression
import sklearn.linear_model as lm
# In[29]:
model = { "DecisionTree":tree.DecisionTreeClassifier(max_depth=10),
"RandomForest":ek.RandomForestClassifier(n_estimators=50),
"Adaboost":ek.AdaBoostClassifier(n_estimators=50),
"GradientBoosting":ek.GradientBoostingClassifier(n_estimators=50),
"GNB":GaussianNB(),
"Logistic Regression":LinearRegression()
}
# In[8]:
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, Y ,test_size=0.2)
# In[9]:
X_train = pd.DataFrame(X_train)
X_train = X_train.fillna(X_train.mean())
X_test = pd.DataFrame(X_test)
X_test = X_test.fillna(X_test.mean())
# In[38]:
results_algo = {}
for algo in model:
    clf = model[algo]
    clf.fit(X_train, y_train.astype(int))
    score = clf.score(X_test, y_test.astype(int))
    print("%s : %s " % (algo, score))
    results_algo[algo] = score
# In[39]:
winner = max(results_algo, key=results_algo.get)
# In[40]:
clf = model[winner]
res = clf.predict(X_test)
mt = confusion_matrix(y_test, res)
print("False positive rate : %f %%" % ((mt[0][1] / float(sum(mt[0])))*100))
print('False negative rate : %f %%' % ( (mt[1][0] / float(sum(mt[1]))*100)))
# In[41]:
from sklearn import metrics
print (metrics.classification_report(y_test, res))
# In[34]:
test_data = "public meetings are awkard for me as I can insult people but I choose not to and that is something that I find difficult to live with"
# In[101]:
test_data="I purchased this product 4.47 billion years ago and when I opened it today, it was half empty."
# In[82]:
test_data="when people see me eating and ask me are you eating? No no I'm trying to choke myself to death #sarcastic"
# In[102]:
test_feature = []
test_feature.append((get_features(test_data,topic_mod)))
# In[104]:
test_feature
# In[105]:
c = []
c.append(pd.DataFrame(test_feature[0],index=[i]))
test_result = pd.concat(c)
test_result = test_result.drop(['Topic :'],axis=1).values
# In[106]:
res= clf.predict(test_result)
But it is giving me the following error:
C:\ProgramData\Anaconda3\lib\site-packages\gensim\utils.py:1197: UserWarning: detected Windows; aliasing chunkize to chunkize_serial
warnings.warn("detected Windows; aliasing chunkize to chunkize_serial")
[('a', 'DT'), ('long', 'JJ'), ('narrow', 'JJ'), ('open', 'JJ')]
[('a', 'DT'), ('long', 'JJ')]
[('narrow', 'JJ'), ('open', 'JJ')]
done
5
Traceback (most recent call last):
File "C:\shubhamprojectwork\sarcasm detection\SarcasmDetection-master\SarcasmDetection-master\Code\sarcasm-extraction.py", line 276, in <module>
topic_feature(features,"A LoNg NArrow opening",topic_mod)
File "C:\shubhamprojectwork\sarcasm detection\SarcasmDetection-master\SarcasmDetection-master\Code\sarcasm-extraction.py", line 268, in topic_feature
topics = topic_modeler.transform(sentence)
File "C:\shubhamprojectwork\sarcasm detection\SarcasmDetection-master\SarcasmDetection-master\Code\topic.py", line 42, in transform
return self.lda[corpus_sentence]
File "C:\ProgramData\Anaconda3\lib\site-packages\gensim\models\ldamodel.py", line 1160, in __getitem__
return self.get_document_topics(bow, eps, self.minimum_phi_value, self.per_word_topics)
AttributeError: 'LdaModel' object has no attribute 'minimum_phi_value'
Code for topic.py:
from gensim import corpora, models, similarities
import nltk
from nltk.corpus import stopwords
import numpy as np
import pandas as pd
import replace_emoji
class topic(object):
    def __init__(self, nbtopic=100, alpha=1, model=None, dicttp=None):
        self.nbtopic = nbtopic
        self.alpha = alpha
        self.porter = nltk.PorterStemmer()
        self.stop = stopwords.words('english') + ['.', '!', '?', '"', '...', '\\', "''", '[', ']', '~', "'m", "'s", ';', ':', '..', '$']
        if model is not None and dicttp is not None:
            self.lda = models.ldamodel.LdaModel.load(model)
            self.dictionary = corpora.Dictionary.load(dicttp)

    def fit(self, documents):
        documents_mod = documents
        tokens = [nltk.word_tokenize(sentence) for sentence in documents_mod]
        tokens = [[self.porter.stem(t.lower()) for t in sentence if t.lower() not in self.stop] for sentence in tokens]
        self.dictionary = corpora.Dictionary(tokens)
        corpus = [self.dictionary.doc2bow(text) for text in tokens]
        self.lda = models.ldamodel.LdaModel(corpus, id2word=self.dictionary, num_topics=self.nbtopic, alpha=self.alpha)
        self.lda.save('topics.tp')
        self.dictionary.save('topics_dict.tp')

    def get_topic(self, topic_number):
        return self.lda.print_topic(topic_number)

    def transform(self, sentence):
        sentence_mod = sentence
        tokens = nltk.word_tokenize(sentence_mod)
        tokens = [self.porter.stem(t.lower()) for t in tokens if t.lower() not in self.stop]
        corpus_sentence = self.dictionary.doc2bow(tokens)
        return self.lda[corpus_sentence]
The overall code is found here overall code.
| 1 | 1 | 0 | 0 | 0 | 0 |
I am having some problems understanding how to retrieve the predictions from a Keras model.
I want to build a simple system that predicts the next word, but I don't know how to output the complete list of probabilities for each word.
This is my code right now:
model = Sequential()
model.add(Embedding(vocab_size, embedding_size, input_length=55, weights=[pretrained_weights]))
model.add(Bidirectional(LSTM(units=embedding_size)))
model.add(Dense(23690, activation='softmax')) # 23690 is the total number of classes
model.compile(loss='categorical_crossentropy',
optimizer = RMSprop(lr=0.0005),
metrics=['accuracy'])
# fit network
model.fit(np.array(X_train), np.array(y_train), epochs=10)
score = model.evaluate(x=np.array(X_test), y=np.array(y_test), batch_size=32)
prediction = model.predict(np.array(X_test), batch_size=32)
First question:
Training set: list of sentences (vectorized and transformed to indices).
I saw some examples online where people divide X_train and y_train like this:
X, y = sequences[:,:-1], sequences[:,-1]
y = to_categorical(y, num_classes=vocab_size)
Should I instead transform the X_train and the y_train in order to have sliding sequences, where for example I have
X = [[10, 9, 4, 5]]
X_train = [[10, 9], [9, 4], [4, 5]]
y_train = [[9], [4], [5]]
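A stdlib sketch of the sliding-window split being discussed, using the usual next-word scheme (each window of k tokens is an input and the token right after it is the target — this differs slightly from the exact pairs in the example above, and the function name and k are my own, for illustration):

```python
def sliding_windows(seq, k):
    """Split a token-id sequence into (input window, next token) pairs."""
    X, y = [], []
    for i in range(len(seq) - k):
        X.append(seq[i:i + k])   # k consecutive tokens as input
        y.append(seq[i + k])     # the token that follows them as target
    return X, y

X, y = sliding_windows([10, 9, 4, 5], k=1)
# X = [[10], [9], [4]], y = [9, 4, 5]
```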
Second question:
Right now the model returns only one element for each input. How can I return the predictions for each word? I want to be able to have an array of output words for each word, not a single output.
I read that I could use a TimeDistributed layer, but I have problems with the input, because the Embedding layer takes a 2D input, while the TimeDistributed takes a 3D input.
Thank you for the help!
| 1 | 1 | 0 | 0 | 0 | 0 |
BACKGROUND
I have vectors with some sample data and each vector has a category name (Places,Colors,Names).
['john','jay','dan','nathan','bob'] -> 'Names'
['yellow', 'red','green'] -> 'Colors'
['tokyo','bejing','washington','mumbai'] -> 'Places'
My objective is to train a model that takes a new input string and predicts which category it belongs to. For example, if a new input is "purple" then I should be able to predict 'Colors' as the correct category. If the new input is "Calgary" it should predict 'Places' as the correct category.
APPROACH
I did some research and came across Word2vec. This library has a "similarity" and "mostsimilarity" function which i can use. So one brute force approach I thought of is the following:
Take new input.
Calculate its similarity with each word in each vector and take an average.
So for instance, for the input "pink" I can calculate its similarity with the words in the 'Names' vector, take an average, and then do the same for the other two vectors as well. The vector that gives me the highest average similarity would be the correct vector for the input to belong to.
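The averaging idea can be sketched without word2vec itself; here toy 3-dim vectors stand in for real embeddings (all numbers invented for illustration), and plain cosine similarity plays the role of gensim's similarity():

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def predict_category(vec, categories):
    """categories: {name: [embedding, ...]}; return the name with the
    highest mean cosine similarity to vec."""
    scores = {
        name: sum(cosine(vec, w) for w in words) / len(words)
        for name, words in categories.items()
    }
    return max(scores, key=scores.get)

# toy embeddings: color words cluster near (1, 0, 0), name words near (0, 1, 0)
categories = {
    'Colors': [[0.9, 0.1, 0.0], [1.0, 0.2, 0.1]],
    'Names':  [[0.1, 0.9, 0.0], [0.0, 1.0, 0.2]],
}
print(predict_category([0.95, 0.05, 0.0], categories))  # → Colors
```

With real embeddings, a common refinement of the same idea is to average each category's word vectors into a single centroid up front and compare the input to the centroids, which is both faster and usually more robust than averaging pairwise similarities.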
ISSUE
Given my limited knowledge in NLP and machine learning I am not sure if that is the best approach and hence I am looking for help and suggestions on better approaches to solve my problem. I am open to all suggestions and also please point out any mistakes I may have made as I am new to machine learning and NLP world.
| 1 | 1 | 0 | 1 | 0 | 0 |
This is the code I am working on.
Here is the code for short reviews of movies.
documents = []
all_words = []
allowed_words_types = ['J']
for p in short_pos.split('\n'):
    documents.append((p, "pos"))
    words = word_tokenize(p)
    pos = nltk.pos_tag(words)
    for w in pos:
        if w[1][0] in allowed_words_types:
            all_words.append(w[0].lower())
for p in short_neg.split('\n'):
    documents.append((p, "neg"))
    words = word_tokenize(p)
    pos = nltk.pos_tag(words)
    for w in pos:
        if w[1][0] in allowed_words_types:
            all_words.append(w[0].lower())
all_words = nltk.FreqDist(all_words)
words_features = list(all_words.keys())[:5000]

def find_features(document):
    words = word_tokenize(document)
    features = {}
    for w in words_features:
        features[w] = (w in words)
    return features
featuresets = [(find_features(rev),category) for (rev,category) in documents]
random.shuffle(featuresets)
print(len(featuresets))
training_set = featuresets[:100]
testing_set = featuresets[100:]
classifier = nltk.NaiveBayesClassifier.train(training_set)
Here I want to calculate the confusion matrix and the ROC curve.
I can find the accuracy, but I'm unable to compute the ROC curve and the confusion matrix; it would be very helpful if anyone could help me out. Thanks.
print(" Original Naive Bayes Algo accuracy percent : ",(nltk.classify.accuracy(classifier,testing_set))*100)
MNB_classifier = SklearnClassifier(MultinomialNB())
MNB_classifier.train(training_set)
print("MNB_classifier accuracy percent : ",(nltk.classify.accuracy(MNB_classifier,testing_set))*100)
voted_classifier = VoteClassifier(classifier,
MNB_classifier)
def sentiment(text):
    feats = find_features(text)
    return voted_classifier.classify(feats), voted_classifier.confidence(feats)
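A hedged sketch of building the confusion matrix by hand over the classifier's predictions (pure Python; with scikit-learn available, sklearn.metrics.confusion_matrix, and roc_curve over per-class probabilities, would be the usual route). The gold/predicted lists here are invented for illustration — in the code above they would come from `[c for (f, c) in testing_set]` and `[classifier.classify(f) for (f, c) in testing_set]`:

```python
from collections import Counter

def confusion_matrix(gold, predicted, labels=('pos', 'neg')):
    """Return {(gold_label, predicted_label): count} over all label pairs."""
    counts = Counter(zip(gold, predicted))
    return {(g, p): counts.get((g, p), 0) for g in labels for p in labels}

gold      = ['pos', 'pos', 'neg', 'neg', 'pos']
predicted = ['pos', 'neg', 'neg', 'pos', 'pos']
cm = confusion_matrix(gold, predicted)
# e.g. cm[('pos', 'pos')] == 2 true positives, cm[('pos', 'neg')] == 1 miss
```

An ROC curve additionally needs a score per example (not just a hard label), e.g. the classifier's probability for the 'pos' class, swept over thresholds.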
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm trying to code a minimal text classifier with spaCy. I wrote the following snippet of code to train just the text categorizer (without training the whole NLP pipeline):
import spacy
from spacy.pipeline import TextCategorizer
nlp = spacy.load('en')
doc1 = u'This is my first document in the dataset.'
doc2 = u'This is my second document in the dataset.'
gold1 = u'Category1'
gold2 = u'Category2'
textcat = TextCategorizer(nlp.vocab)
textcat.add_label('Category1')
textcat.add_label('Category2')
losses = {}
optimizer = textcat.begin_training()
textcat.update([doc1, doc2], [gold1, gold2], losses=losses, sgd=optimizer)
But when I run it, it returns an error. Here is the traceback it gives me when I start it:
Traceback (most recent call last):
File "C:\Users\Reuben\Desktop\Classification\Classification\Training.py", line
16, in <module>
textcat.update([doc1, doc2], [gold1, gold2], losses=losses, sgd=optimizer)
File "pipeline.pyx", line 838, in spacy.pipeline.TextCategorizer.update
File "D:\Program Files\Anaconda2\lib\site-packages\thinc\api.py", line 61, in
begin_update
X, inc_layer_grad = layer.begin_update(X, drop=drop)
File "D:\Program Files\Anaconda2\lib\site-packages\thinc\api.py", line 176, in
begin_update
values = [fwd(X, *a, **k) for fwd in forward]
File "D:\Program Files\Anaconda2\lib\site-packages\thinc\api.py", line 258, in
wrap
output = func(*args, **kwargs)
File "D:\Program Files\Anaconda2\lib\site-packages\thinc\api.py", line 61, in
begin_update
X, inc_layer_grad = layer.begin_update(X, drop=drop)
File "D:\Program Files\Anaconda2\lib\site-packages\spacy\_ml.py", line 95, in
_preprocess_doc
keys = [doc.to_array(LOWER) for doc in docs]
AttributeError: 'unicode' object has no attribute 'to_array'
How can I fix this?
| 1 | 1 | 0 | 0 | 0 | 0 |
After downloading and linking a spacy model (en large) by:
python -m spacy download en_core_web_lg
which is around 850 Mb of data.
How can I find and delete the data (the downloaded model) on my Mac to free some space?
Spacy: 2.0.18
Python: 3.6.9
en_core_web_lg: 2.0.0
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to understand the skipgrams() function in keras by using the following code
from keras.preprocessing.text import *
from keras.preprocessing.sequence import skipgrams

text = "I love money"  # My test sentence
tokenizer = Tokenizer()
tokenizer.fit_on_texts([text])
word2id = tokenizer.word_index
id2word = {v: k for k, v in word2id.items()}  # inverse mapping, needed below
wids = [word2id[w] for w in text_to_word_sequence(text)]
pairs, labels = skipgrams(wids, len(word2id), window_size=1)
for i in range(len(pairs)):  # Visualizing the result
    print("({:s} , {:s} ) -> {:d}".format(
        id2word[pairs[i][0]],
        id2word[pairs[i][1]],
        labels[i]))
For the sentence "I love money", I would expect the following (context, word) pairs with the window size=1 as defined in keras:
([i, money], love)
([love], i)
([love], money)
From what I understand in Keras' documentation, it will output the label of 1 if (word, word in the same window) , and the label of 0 if (word, random word from the vocabulary).
Since I am using the windows size of 1, I would expect the label of 1 for the following pairs:
(love, i)
(love, money)
(i, love)
(money, love)
And the label of 0 for the following pairs
(i, money)
(money, i)
Yet, the code give me the result like this
(love , i ) -> 1
(love , money ) -> 1
(i , love ) -> 1
(money , love ) -> 1
(i , i ) -> 0
(love , love ) -> 0
(love , i ) -> 0
(money , love ) -> 0
How can the pairs (love, i) and (money, love) be labelled as both 0 and 1?
Also, where are the (i, money) and (money, i) results?
Am I misunderstanding how the label-0 pairs are generated? The label-1 pairs seem to match my expectations.
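The behaviour can be reproduced with a small stdlib sketch of negative sampling (this mimics the idea, not Keras's exact code): positives are true (word, context) pairs from the window; negatives pair each word with a word drawn at random from the vocabulary. Because the negative's partner is random, it can by chance collide with a true context word — which is why (love, i) can show up with both labels, and why (i, money) need not appear at all:

```python
import random

def toy_skipgrams(sequence, vocab, window_size=1, seed=0):
    rng = random.Random(seed)
    pairs, labels = [], []
    for i, w in enumerate(sequence):
        lo, hi = max(0, i - window_size), min(len(sequence), i + window_size + 1)
        for j in range(lo, hi):
            if j == i:
                continue
            pairs.append((w, sequence[j])); labels.append(1)        # true context
            pairs.append((w, rng.choice(vocab))); labels.append(0)  # random negative
    return pairs, labels

pairs, labels = toy_skipgrams(['i', 'love', 'money'], vocab=['i', 'love', 'money'])
```

So label 0 does not mean "these two words are never in the same window"; it only means the second word was sampled randomly rather than taken from the window.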
| 1 | 1 | 0 | 1 | 0 | 0 |
I want to train a LSTM model with Tensorflow. I have a text data as input and I get doc2vec of each paragraph of the text and pass it to the lstm layers but I get ValueError because of inconsistency of shape rank.
I've searched through Stack Overflow for similar questions and some tutorials, but I couldn't solve this error. Do you have any idea what I should do?
Here is the error:
Traceback (most recent call last):
File "writeRNN.py", line 97, in
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
File "myven/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py", line 627, in dynamic_rnn
dtype=dtype)
File "myven/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py", line 690, in _dynamic_rnn_loop
for input_ in flat_input)
File "myven/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py", line 690, in
for input_ in flat_input)
File "myven/lib/python3.5/site-packages/tensorflow/python/framework/tensor_shape.py", line 761, in with_rank_at_least
raise ValueError("Shape %s must have rank at least %d" % (self, rank))
ValueError: Shape (?, ?) must have rank at least 3
And below is the code:
lstm_size = 128
lstm_layers = 1
batch_size = 50
learning_rate = 0.001
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
with graph.as_default():
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs_, initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x, labels_: y[:, None], keep_prob: 0.5, initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
I get the error on the line outputs, final_state = tf.nn.dynamic_rnn(cell, inputs_, initial_state=initial_state), as shown in the traceback above.
The doc2vec model is trained with gensim and converts each sentence into a vector of 100 values.
I tried changing the shapes of inputs_ and labels_, but I get the same error.
I really don't know what I should do, and I would appreciate any answer.
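For reference, tf.nn.dynamic_rnn expects a rank-3 input of shape [batch, time, features], while the inputs_ placeholder above is rank 2. A numpy sketch of the two shapes (the 100-dimensional doc2vec size is from the question; the sequence length of 7 is made up):

```python
import numpy as np

batch_size = 50   # from the question
seq_len = 7       # hypothetical number of paragraphs per document
embed_dim = 100   # doc2vec vector size from the question

# Rank-2 input, as fed via inputs_ = tf.placeholder(tf.int32, [None, None]):
flat = np.zeros((batch_size, seq_len))
# dynamic_rnn rejects this shape: (?, ?) has rank 2, not at least 3.

# Rank-3 input that dynamic_rnn accepts: one embed_dim vector per time step.
seq = np.zeros((batch_size, seq_len, embed_dim))
```

So the doc2vec vectors should be stacked into a [batch, time, 100] tensor (and the placeholder declared with three dimensions) before being passed to the RNN.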
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to classify a set of text documents using multiple sets of features. I am using sklearn's Feature Union to combine different features for fitting into a single model. One of the features includes word embeddings using gensim's word2vec.
import numpy as np
from gensim.models.word2vec import Word2Vec
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.linear_model import SGDClassifier
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_selection import chi2
from sklearn.feature_selection import SelectKBest
categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']
data = fetch_20newsgroups(subset='train', categories=categories)#dummy dataset
w2v_model = Word2Vec(data.data, size=100, window=5, min_count=5, workers=2)
word2vec = {w: vec for w, vec in zip(w2v_model.wv.index2word, w2v_model.wv.syn0)} #dictionary of word embeddings
feat_select = SelectKBest(score_func=chi2, k=10) #other features
TSVD = TruncatedSVD(n_components=50, algorithm = "randomized", n_iter = 5)
#other features
In order to include transformers/estimators not already available in sklearn, I am attempting to wrap my word2vec results into a custom transformer class that returns the vector averages.
class w2vTransformer(TransformerMixin):
"""
Wrapper class for running word2vec into pipelines and FeatureUnions
"""
def __init__(self,word2vec,**kwargs):
self.word2vec=word2vec
self.kwargs=kwargs
self.dim = len(word2vec.values())
def fit(self,x, y=None):
return self
def transform(self, X):
return np.array([
np.mean([self.word2vec[w] for w in words if w in self.word2vec]
or [np.zeros(self.dim)], axis=0)
for words in X
])
However when it comes time to fit the model I receive an error.
combined_features = FeatureUnion([("w2v_class",w2vTransformer(word2vec)),
("feat",feat_select),("TSVD",TSVD)])#join features into combined_features
#combined_features = FeatureUnion([("feat",feat_select),("TSVD",TSVD)])#runs when word embeddings are not included
text_clf_svm = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('feature_selection', combined_features),
('clf-svm', SGDClassifier( loss="modified_huber")),
])
text_clf_svm_1 = text_clf_svm.fit(data.data,data.target) # fits data
text_clf_svm_1 = text_clf_svm.fit(data.data,data.target) # fits data
Traceback (most recent call last):
File "<ipython-input-8-a085b7d40f8f>", line 1, in <module>
text_clf_svm_1 = text_clf_svm.fit(data.data,data.target) # fits data
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\pipeline.py", line 248, in fit
Xt, fit_params = self._fit(X, y, **fit_params)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\pipeline.py", line 213, in _fit
**fit_params_steps[name])
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\memory.py", line 362, in __call__
return self.func(*args, **kwargs)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\pipeline.py", line 581, in _fit_transform_one
res = transformer.fit_transform(X, y, **fit_params)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\pipeline.py", line 739, in fit_transform
for name, trans, weight in self._iter())
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 779, in __call__
while self.dispatch_one_batch(iterator):
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 625, in dispatch_one_batch
self._dispatch(tasks)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 588, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py", line 111, in apply_async
result = ImmediateResult(func)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py", line 332, in __init__
self.results = batch()
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 131, in __call__
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 131, in <listcomp>
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\pipeline.py", line 581, in _fit_transform_one
res = transformer.fit_transform(X, y, **fit_params)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\base.py", line 520, in fit_transform
return self.fit(X, y, **fit_params).transform(X)
File "<ipython-input-6-cbc52cd420cd>", line 16, in transform
for words in X
File "<ipython-input-6-cbc52cd420cd>", line 16, in <listcomp>
for words in X
File "<ipython-input-6-cbc52cd420cd>", line 14, in <listcomp>
np.mean([self.word2vec[w] for w in words if w in self.word2vec]
TypeError: unhashable type: 'csr_matrix'
Traceback (most recent call last):
File "<ipython-input-8-a085b7d40f8f>", line 1, in <module>
text_clf_svm_1 = text_clf_svm.fit(data.data,data.target) # fits data
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\pipeline.py", line 248, in fit
Xt, fit_params = self._fit(X, y, **fit_params)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\pipeline.py", line 213, in _fit
**fit_params_steps[name])
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\memory.py", line 362, in __call__
return self.func(*args, **kwargs)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\pipeline.py", line 581, in _fit_transform_one
res = transformer.fit_transform(X, y, **fit_params)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\pipeline.py", line 739, in fit_transform
for name, trans, weight in self._iter())
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 779, in __call__
while self.dispatch_one_batch(iterator):
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 625, in dispatch_one_batch
self._dispatch(tasks)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 588, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py", line 111, in apply_async
result = ImmediateResult(func)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py", line 332, in __init__
self.results = batch()
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 131, in __call__
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 131, in <listcomp>
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\pipeline.py", line 581, in _fit_transform_one
res = transformer.fit_transform(X, y, **fit_params)
File "C:\Users\rlusk\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\base.py", line 520, in fit_transform
return self.fit(X, y, **fit_params).transform(X)
File "<ipython-input-6-cbc52cd420cd>", line 16, in transform
for words in X
File "<ipython-input-6-cbc52cd420cd>", line 16, in <listcomp>
for words in X
File "<ipython-input-6-cbc52cd420cd>", line 14, in <listcomp>
np.mean([self.word2vec[w] for w in words if w in self.word2vec]
TypeError: unhashable type: 'csr_matrix'
I understand that the error is because the variable "words" is a csr_matrix, but it needs to be an iterable such as a list. My question is how do I modify the transformer class or data so I can use the word embeddings as features to feed into FeatureUnion? This is my first SO post, please be gentle.
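Each branch of a FeatureUnion receives the same X, so after CountVectorizer and TfidfTransformer the w2vTransformer is handed a csr_matrix, not token lists, which is what the TypeError complains about. The averaging logic itself works when given lists of tokens; a standalone sketch with made-up 3-dimensional vectors:

```python
import numpy as np

# Hypothetical 3-dimensional embeddings for a toy vocabulary.
word2vec = {"god": np.array([1.0, 0.0, 0.0]),
            "space": np.array([0.0, 1.0, 0.0])}
dim = 3  # length of one embedding vector (not len(word2vec.values()))

def average_vectors(docs):
    # Each doc must be an iterable of token strings -- a csr_matrix row
    # is not hashable, hence the TypeError inside the pipeline.
    return np.array([
        np.mean([word2vec[w] for w in words if w in word2vec]
                or [np.zeros(dim)], axis=0)
        for words in docs
    ])

out = average_vectors([["god", "space"], ["unknown"]])
```

This suggests feeding the w2v branch raw (tokenized) text in a parallel pipeline branch rather than placing it downstream of the vectorizers.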
| 1 | 1 | 0 | 0 | 0 | 0 |
I've been using Rasa NLU for a project which involves making sense of structured text. My use case requires me to keep updating my training set by adding new examples of text corpus entities. However, this means that I have to keep retraining my model every few days, thereby taking more time for the same owing to increased training set size.
Is there a way in Rasa NLU to update an already trained model by only training it with the new training set data instead of retraining the entire model again using the entire previous training data set and the new training data set?
I'm trying to look for an approach where I can simply update my existing trained model by training it with incremental additional training data set every few days.
| 1 | 1 | 0 | 1 | 0 | 0 |
I am using Doc2vec to get vectors from words.
Please see my below code:
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
f = open('test.txt','r')
trainings = [TaggedDocument(words=data.strip().split(","), tags=[i]) for i, data in enumerate(f)]
model = Doc2Vec(vector_size=5, epochs=55, seed = 1, dm_concat=1)
model.build_vocab(trainings)
model.train(trainings, total_examples=model.corpus_count, epochs=model.epochs)
model.save("doc2vec.model")
model = Doc2Vec.load('doc2vec.model')
for i in range(len(model.docvecs)):
print(i,model.docvecs[i])
I have a test.txt file whose content is 2 lines, and the contents of these 2 lines are the same (both are "a").
I trained with doc2vec and got the model, but the problem is that although the contents of the 2 lines are the same, doc2vec gave me 2 different vectors.
0 [ 0.02730868 0.00393569 -0.08150548 -0.04009786 -0.01400406]
1 [ 0.03916578 -0.06423566 -0.05350181 -0.00726833 -0.08292392]
I don't know why this happened. I thought that these vectors would be the same.
Can you explain that? And if I want to get the same vectors for the same texts, what should I do in this case?
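A sketch of why this happens (it mimics the idea, not gensim's exact hashing): each document vector is randomly initialized from its tag, so two documents with identical text but different tags start from different random points and are trained independently; a tiny corpus and few effective updates leave them far apart. Re-using one tag for identical texts, or inferring vectors deterministically, can bring them together.

```python
import numpy as np

def init_docvec(tag, dim=5):
    # Illustrative only: seed the per-document vector from its tag,
    # the way gensim seeds doc-vectors from a hash of the tag.
    rng = np.random.default_rng(tag)
    return (rng.random(dim) - 0.5) / dim

v0, v1 = init_docvec(0), init_docvec(1)  # same text, different tags -> different starts
```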
| 1 | 1 | 0 | 0 | 0 | 0 |
I am in the process of writing an AI for the game 2048. At the moment, I can pull the game state from the browser and send moves to the game, but I don't know how to integrate that with TensorFlow. The nature of the project isn't conducive to training data, so I was wondering if it's possible to pass in the state of the game, have the network chuck out a move, run the move, repeat until the game is over, and then have it do the training?
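What this describes is the standard reinforcement-learning interaction loop: collect (state, move) pairs by querying the network, and only train once the episode's outcome is known. A minimal sketch with a toy stand-in environment (get_state, send_move, and is_over are hypothetical hooks into the browser scraping; the random move stands in for the network's output):

```python
import random

def play_episode(get_state, send_move, is_over):
    """Collect (state, move) pairs for one game; train on them afterwards."""
    history = []
    while not is_over():
        state = get_state()
        move = random.choice(["up", "down", "left", "right"])  # network stand-in
        send_move(move)
        history.append((state, move))
    return history  # feed this, weighted by the final score, to the trainer

# Toy environment standing in for the browser-scraped game: the "state" is
# just a step counter and the game ends after three moves.
_steps = {"n": 0}
hist = play_episode(lambda: _steps["n"],
                    lambda m: _steps.__setitem__("n", _steps["n"] + 1),
                    lambda: _steps["n"] >= 3)
```

With TensorFlow, the trainer would score each recorded move by the game's outcome (e.g. a policy-gradient update), which is exactly the "play first, train after" pattern asked about.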
| 1 | 1 | 0 | 0 | 0 | 0 |
How can I find the base (root) verb form of a derived noun form? Here are some examples of what I'm looking for. Is there any dictionary I can use?
Collection --> Collect
Maintenance --> Maintain
Replacement --> Replace
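For the three examples above, a toy suffix-rewrite table is enough, though it is only a sketch: rules like these over-generate badly, and a real solution needs a derivational resource such as WordNet's derivationally related forms or the CatVar database.

```python
# Hand-written suffix rules covering only the examples above; order matters
# ("enance" must be tried before any shorter suffix).
SUFFIX_RULES = [("enance", "ain"), ("ment", ""), ("tion", "t")]

def noun_to_verb(noun):
    w = noun.lower()
    for suffix, repl in SUFFIX_RULES:
        if w.endswith(suffix):
            return w[: -len(suffix)] + repl
    return w  # no rule matched: give the word back unchanged
```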
| 1 | 1 | 0 | 0 | 0 | 0 |
From a unstructured text, I have extracted all necessary entities and stored it in a dictionary using stanford POS tagger. Now I want to extract the relation between them to build my own Ontology in the form of triplets (Entity1,Entity2,relation). I tried the stanford dependencies parser, but I don't know how to extract these triplets.
For example:
The front diffusers comprise pivotable flaps that are arranged between boundary walls of air ducts.
I want to have the relation (front diffusers, pivotable flaps, comprise); (pivotable flaps, boundary walls of air ducts, arrange);
Another example: The cargo body comprises a container having a floor, a top wall, a front wall, side walls and a rear door.
My expected relations are (cargo body, container, comprise); (container, floor, have); (container,top wall, have); (container, front wall, have); (container, side walls, have); (container, rear door, have).
What can I do with the Stanford dependency parser to achieve my goal? That is, how do I navigate the dependency parse tree and get these results?
| 1 | 1 | 0 | 0 | 0 | 0 |
tl;dr what is the most efficient way to dynamically choose some entries of a tensor.
I am trying to implement syntactic GCN in Tensorflow. Basically, I need to have a different weight matrix for every label (let's ignore biases for this question) and choose at each run the relevant entries to use. Those would be chosen by a sparse matrix (for each entry there is at most one label in one direction, and mostly no edge, so not even that).
More concretely, when I have a sparse matrix of labeled edges (zero-one), is it better to use it as a mask, in a sparse-dense tensor multiplication, or maybe just in a normal multiplication (I guess not the latter, but for simplicity it is used in the example)?
example:
units = 6 # output size
x = ops.convert_to_tensor(inputs[0], dtype=self.dtype)
labeled_edges = ops.convert_to_tensor(inputs[1], dtype=self.dtype)
edges_shape = labeled_edges.get_shape().as_list()
labeled_edges = expand_dims(labeled_edges, -2)
labeled_edges = tile(
labeled_edges, [1] * (len(edges_shape) - 1) + [units, 1])
graph_kernel = math_ops.multiply(self.kernel, labeled_edges) # here is the question basically
outputs = standard_ops.tensordot(x, graph_kernel, [[1], [0]])
outputs = math_ops.reduce_sum(outputs, [-1])
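On toy numpy data, the mask-multiply and a gather of only the labeled entries give the same result; this is a sketch of the semantics rather than a benchmark, but with mostly-zero edge tensors, sparse or gather-style ops avoid materializing the tiled dense mask at all.

```python
import numpy as np

# Toy sizes, random data: 4 rows of kernel entries, 6 output units.
n, units = 4, 6
kernel = np.random.rand(n, units)
edges = np.zeros(n)
edges[[0, 2]] = 1.0          # sparse 0/1 "labeled edge" vector

dense = kernel * edges[:, None]     # mask-style: touches every entry
idx = np.nonzero(edges)[0]
gathered = np.zeros_like(kernel)
gathered[idx] = kernel[idx]         # gather-style: touches only nonzeros
```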
| 1 | 1 | 0 | 0 | 0 | 0 |
I need to calculate cosine similarity between documents with already calculated TFIDF scores.
Usually I would use (e.g.) TFIDFVectorizer which would create a matrix of documents / terms, calculating TFIDF scores as it goes. I can't apply this because it will re-calculate TFIDF scores. This would be incorrect because the documents have already had a large amount of pre-processing including Bag of Words and IDF filtering (I will not explain why - too long).
Illustrative input CSV file:
Doc, Term, TFIDF score
1, apples, 0.3
1, bananas, 0.7
2, apples, 0.1
2, pears, 0.9
3, apples, 0.6
3, bananas, 0.2
3, pears, 0.2
I need to generate the matrix that would normally be generated by TFIDFVectorizer, e.g.:
| apples | bananas | pears
1 | 0.3 | 0.7 | 0
2 | 0.1 | 0 | 0.9
3 | 0.6 | 0.2 | 0.2
... so that I can calculate cosine similarity between documents.
I'm using Python 2.7 but suggestions for other solutions or tools are welcome. I can't easily switch to Python 3.
Edit:
This isn't really about transposing numpy arrays. It involves mapping TFIDF scores to a document / term matrix, with tokenized terms, and missing values filled in as 0.
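A pandas pivot reproduces the matrix directly from the long-format CSV, with missing terms filled as 0, after which cosine similarity is just a normalized dot product. A sketch (column names shortened to make a clean CSV header):

```python
import io
import numpy as np
import pandas as pd

csv = """Doc,Term,Score
1,apples,0.3
1,bananas,0.7
2,apples,0.1
2,pears,0.9
3,apples,0.6
3,bananas,0.2
3,pears,0.2"""

# Long format -> document/term matrix, absent terms filled with 0.
m = (pd.read_csv(io.StringIO(csv))
       .pivot(index="Doc", columns="Term", values="Score")
       .fillna(0.0))

# Cosine similarity between documents: normalize rows, then dot product.
a = m.to_numpy()
unit = a / np.linalg.norm(a, axis=1, keepdims=True)
sim = unit @ unit.T
```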
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a list of keywords and I want to parse through a list of long strings for the keyword, any mention of a price in currency format and any other number in the string less than 10. For example:
keywords = ['Turin', 'Milan' , 'Nevada']
strings = ['This is a sentence about Turin with 5 and $10.00 in it.', ' 2.5 Milan is a city with £1,000 in it.', 'Nevada and $1,100,000. and 10.09']]
would hopefully return the following:
final_list = [('Turin', '$10.00', '5'), ('Milan', '£1,000', '2.5'), ('Nevada', '$1,100,000', '')]
I've got the following function with working regexes, but I don't know how to combine the outputs into a list of tuples. Is there an easier way to achieve this? Should I split by word and then look for matches?
def find_keyword_comments(list_of_strings,keywords_a):
list_of_tuples = []
for string in list_of_strings:
keywords = '|'.join(keywords_a)
keyword_rx = re.findall(r"\b({})\b".format(keywords), string, re.I)
price_rx = re.findall(r'[\$\£\€]\s?\d{1,3}(?:[.,]\d{3})*(?:[.,]\d{1,2})?', string)
number_rx1 = re.findall(r'\b\d[.]\d{1,2}\b', string)
number_rx2 = re.findall(r'\s\d\s', string)
| 1 | 1 | 0 | 0 | 0 | 0 |
I've created a simple neural network which can recognize separate digits and characters. I want the neural network to recognize the licence plate on a car. In order to do that, I have to separate the symbols in the image. For example, I have to find the symbols in the image and save each symbol to a file (png or jpg):
Source image:
Found symbols:
Separated symbol in a file:
How can I find the symbols and save the green rectangles to individual png (or jpg) files using Python?
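A real pipeline would typically use cv2.findContours plus cv2.boundingRect on a thresholded image and save each crop with cv2.imwrite; the core idea can be sketched without OpenCV as a projection profile that splits a binary image on empty columns:

```python
import numpy as np

def split_columns(binary):
    """Split a binary image (1 = ink) into per-symbol column ranges by
    finding runs of non-empty columns (a simple projection profile)."""
    ink = binary.sum(axis=0) > 0
    spans, start = [], None
    for x, on in enumerate(ink):
        if on and start is None:
            start = x
        elif not on and start is not None:
            spans.append((start, x))
            start = None
    if start is not None:
        spans.append((start, len(ink)))
    return spans

# Toy 5x10 binary image with two "symbols".
img = np.zeros((5, 10), dtype=int)
img[1:4, 1:3] = 1
img[0:5, 5:8] = 1
spans = split_columns(img)
# Each span (x0, x1) gives a crop img[:, x0:x1] that could be saved to a file.
```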
| 1 | 1 | 0 | 0 | 0 | 0 |
I've programmed (Java) my own feed-forward network learning by back propagation. My network is trained to learn the XOR problem. I have an input matrix 4x2 and target 4x1.
Inputs:
{{0,0},
{0,1},
{1,0},
{1,1}}
Outputs:
{0.95048}
{-0.06721}
{-0.06826}
{0.95122}
I have this trained network and now I want to test it on new inputs like:
{.1,.9} //should result in 1
However, I'm not sure how to implement a float predict(double[] input) method. From what I can see, my problem is that my training data has a different shape than my prediction input.
Please advise.
EDIT:
The way I have this worded, it sounds like I want a regression value. However, I'd like the output to be a probability vector (classification) which I can then analyze.
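predict is just the trained forward pass applied to a single input treated as a batch of one; sketched here in Python with random stand-in weights (the original is Java), using a two-unit softmax output so the result is a probability vector over the classes 0 and 1:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)   # stand-in trained weights
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)   # two output units: P(0), P(1)

def predict(x):
    x = np.atleast_2d(x)                 # a single input becomes a 1x2 batch
    h = np.tanh(x @ W1 + b1)             # same forward pass as in training
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # probability vector per input

p = predict([0.1, 0.9])
```

The key point is that the input dimension (2) is all that must match; the batch size of the training matrix (4) is irrelevant at prediction time.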
| 1 | 1 | 0 | 1 | 0 | 0 |
I trained a KENLM language model on around 5000 English sentences/paragraphs. I want to query this ARPA model with two or more segments and see if they can be concatenated to form a longer sentence, hopefully more "grammatical." Here as follows is the Python code that I have used to get the logarithmic scores - and the ten-based power value - of the segments and the "sentence." I have given two examples. Obviously, the sentence in the first example is more grammatical than the one in the second example. However, my question is not about this, but about how to relate the language model score of a whole sentence to those of the sentence's constituents. That is, if the sentence is grammatically better than its constituents.
import math
import kenlm as kl
model = kl.LanguageModel(r'D:\seg.arpa.bin')
print ('************')
sentence = 'Mr . Yamada was elected Chairperson of'
print(sentence)
p1=model.score(sentence)
p2=math.pow(10,p1)
print(p1)
print(p2)
sentence = 'the Drafting Committee by acclamation .'
print(sentence)
p3=model.score(sentence)
p4=math.pow(10,p3)
print(p3)
print(p4)
sentence = 'Mr . Yamada was elected Chairperson of the Drafting Committee by acclamation .'
print(sentence)
p5=model.score(sentence)
p6=math.pow(10,p5)
print(p5)
print(p6)
print ('-------------')
sentence = 'Cases cited in the present volume ix'
print(sentence)
p1=model.score(sentence)
p2=math.pow(10,p1)
print(p1)
print(p2)
sentence = 'Multilateral instruments cited in the present volume xiii'
print(sentence)
p3=model.score(sentence)
p4=math.pow(10,p3)
print(p3)
print(p4)
sentence = 'Cases cited in the present volume ix Multilateral instruments cited in the present volume xiii'
print(sentence)
p5=model.score(sentence)
p6=math.pow(10,p5)
print(p5)
print(p6)
************
Mr . Yamada was elected Chairperson of
-34.0706558228
8.49853715087e-35
the Drafting Committee by acclamation .
-28.3745193481
4.22163470933e-29
Mr . Yamada was elected Chairperson of the Drafting Committee by acclamation .
-55.5128440857
3.07012398337e-56
-------------
Cases cited in the present volume ix
-27.7353248596
1.83939558773e-28
Multilateral instruments cited in the present volume xiii
-34.4523620605
3.52888852435e-35
Cases cited in the present volume ix Multilateral instruments cited in the present volume xiii
-60.7075233459
1.9609957573e-61
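One common way to relate the score of a whole sentence to its constituents is to normalize by length: raw log scores always drop as strings get longer, so comparing the per-token average of the concatenation against that of the separately scored segments (a pointwise-mutual-information-style test) is more informative. A sketch using the numbers printed above (token counts are rough, ignoring KenLM's sentence-boundary symbols):

```python
# log10 probabilities copied from the first example's output above.
seg1, seg2, joined = -34.0706558228, -28.3745193481, -55.5128440857
n1, n2 = 7, 6                 # token counts of the two segments

sum_parts = seg1 + seg2       # score of the parts treated as independent
avg_parts = sum_parts / (n1 + n2)
avg_joined = joined / (n1 + n2)

# If the concatenation beats the independent parts per token, the model
# considers the joined string more coherent than its pieces in isolation.
better_joined = avg_joined > avg_parts
```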
| 1 | 1 | 0 | 0 | 0 | 0 |
So I started to learn NLP via the nltk book, and it seems I immediately ran into a problem nobody has mentioned before.
Let's import data from nltk.book just as the book says:
from nltk.book import *
Now i want to continue with examples from the book:
text1.concordance("monstrous")
Gives me:
Displaying 11 of 11 matches:
ong the former , one was of a most monstrous size . ... This came towards us ,
ON OF THE PSALMS . " Touching that monstrous bulk of the whale or ork we have r
ll over with a heathenish array of monstrous clubs and spears . Some were thick
d as you gazed , and wondered what monstrous cannibal and savage could ever hav
that has survived the flood ; most monstrous and most mountainous ! That Himmal
they might scout at Moby Dick as a monstrous fable , or still worse and more de
th of Radney .'" CHAPTER 55 Of the monstrous Pictures of Whales . I shall ere l
ing Scenes . In connexion with the monstrous pictures of whales , I am strongly
ere to enter upon those still more monstrous stories of them which are to be fo
ght have been rummaged out of this monstrous cabinet there is no telling . But
of Whale - Bones ; for Whales of a monstrous size are oftentimes cast up dead u
So far, so good. Now I want to see the concordance for the word "whale" in Moby Dick.
text1.concordance("whale")
Displaying 25 of 25 matches:
s , and to teach them by what name a whale - fish is to be called in our tongue
t which is not true ." -- HACKLUYT " WHALE . ... Sw . and Dan . HVAL . This ani
ulted ." -- WEBSTER ' S DICTIONARY " WHALE . ... It is more immediately from th
ISH . WAL , DUTCH . HWAL , SWEDISH . WHALE , ICELANDIC . WHALE , ENGLISH . BALE
HWAL , SWEDISH . WHALE , ICELANDIC . WHALE , ENGLISH . BALEINE , FRENCH . BALLE
least , take the higgledy - piggledy whale statements , however authentic , in
dreadful gulf of this monster ' s ( whale ' s ) mouth , are immediately lost a
patient Job ." -- RABELAIS . " This whale ' s liver was two cartloads ." -- ST
Touching that monstrous bulk of the whale or ork we have received nothing cert
of oil will be extracted out of one whale ." -- IBID . " HISTORY OF LIFE AND D
ise ." -- KING HENRY . " Very like a whale ." -- HAMLET . " Which to secure , n
restless paine , Like as the wounded whale to shore flies thro ' the maine ." -
. OF SPERMA CETI AND THE SPERMA CETI WHALE . VIDE HIS V . E . " Like Spencer '
t had been a sprat in the mouth of a whale ." -- PILGRIM ' S PROGRESS . " That
EN ' S ANNUS MIRABILIS . " While the whale is floating at the stern of the ship
e ship called The Jonas - in - the - Whale . ... Some say the whale can ' t ope
in - the - Whale . ... Some say the whale can ' t open his mouth , but that is
masts to see whether they can see a whale , for the first discoverer has a duc
for his pains . ... I was told of a whale taken near Shetland , that had above
oneers told me that he caught once a whale in Spitzbergen that was white all ov
2 , one eighty feet in length of the whale - bone kind came in , which ( as I w
n master and kill this Sperma - ceti whale , for I could never hear of any of t
. 1729 . "... and the breath of the whale is frequendy attended with such an i
ed with hoops and armed with ribs of whale ." -- RAPE OF THE LOCK . " If we com
contemptible in the comparison . The whale is doubtless the largest animal in c
Wait, that can't be right. There is no way the word "whale" occurs only 25 times in Moby Dick. How about the word "it"?
text1.concordance("it")
Displaying 25 of 25 matches:
Ok, let's increase the number of lines shown:
text1.concordance("it", lines=100)
Displaying 25 of 25 matches:
How about decreasing it?
text1.concordance("it", lines=10)
Displaying 10 of 25 matches:
It wants me to believe there are only 25 occurrences of the word "it"?
While this is definitely a malfunction, it gets even worse with the width argument (it is not taken into account at all).
The system I use nltk with:
Win 10 64 bit;
Python 3.6.5 32 bit
What's going on and how can I fix it?
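To count occurrences independently of the concordance display, a plain count over the token list works; text1.tokens is the underlying list in nltk, sketched here on a stand-in list:

```python
def count_occurrences(tokens, word):
    # Case-insensitive count, matching concordance's case handling.
    w = word.lower()
    return sum(1 for t in tokens if t.lower() == w)

tokens = ["The", "whale", ",", "the", "whale", "!"]  # stand-in for text1.tokens
n = count_occurrences(tokens, "whale")
```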
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to get the base English word for an English word which has been modified from its base form. This question has been asked here, but I didn't see a proper answer, so I am trying to put it this way. I tried 2 stemmers and one lemmatizer from the NLTK package: the Porter stemmer, the Snowball stemmer, and the WordNet lemmatizer.
I tried this code:
from nltk.stem.porter import PorterStemmer
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.wordnet import WordNetLemmatizer
words = ['arrival','conclusion','ate']
for word in words:
print "\nOriginal Word =>", word
print "porter stemmer=>", PorterStemmer().stem(word)
snowball_stemmer = SnowballStemmer("english")
print "snowball stemmer=>", snowball_stemmer.stem(word)
print "WordNet Lemmatizer=>", WordNetLemmatizer().lemmatize(word)
This is the output I get:
Original Word => arrival
porter stemmer=> arriv
snowball stemmer=> arriv
WordNet Lemmatizer=> arrival
Original Word => conclusion
porter stemmer=> conclus
snowball stemmer=> conclus
WordNet Lemmatizer=> conclusion
Original Word => ate
porter stemmer=> ate
snowball stemmer=> ate
WordNet Lemmatizer=> ate
but I want this output
Input : arrival
Output: arrive
Input : conclusion
Output: conclude
Input : ate
Output: eat
How can I achieve this? Are there any tools already available for this? This is called morphological analysis. I am aware of that, but there must be some tools which already achieve this. Help is appreciated :)
First Edit
I tried this code
import nltk
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from nltk.corpus import wordnet as wn
query = "The Indian economy is the worlds tenth largest by nominal GDP and third largest by purchasing power parity"
def is_noun(tag):
return tag in ['NN', 'NNS', 'NNP', 'NNPS']
def is_verb(tag):
return tag in ['VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ']
def is_adverb(tag):
return tag in ['RB', 'RBR', 'RBS']
def is_adjective(tag):
return tag in ['JJ', 'JJR', 'JJS']
def penn_to_wn(tag):
if is_adjective(tag):
return wn.ADJ
elif is_noun(tag):
return wn.NOUN
elif is_adverb(tag):
return wn.ADV
elif is_verb(tag):
return wn.VERB
return wn.NOUN
tags = nltk.pos_tag(word_tokenize(query))
for tag in tags:
wn_tag = penn_to_wn(tag[1])
print tag[0]+"---> "+WordNetLemmatizer().lemmatize(tag[0],wn_tag)
Here, I tried to use wordnet lemmatizer by providing proper tags. Here is the output:
The---> The
Indian---> Indian
economy---> economy
is---> be
the---> the
worlds---> world
tenth---> tenth
largest---> large
by---> by
nominal---> nominal
GDP---> GDP
and---> and
third---> third
largest---> large
by---> by
purchasing---> purchase
power---> power
parity---> parity
Still, words like "arrival" and "conclusion" won't get processed with this approach. Is there any solution for this?
| 1 | 1 | 0 | 0 | 0 | 0 |
Hello, I am writing a neural network for recognizing counting gestures (figures).
def get_image_size():
    img = cv2.imread('gestures/0/100.jpg', 0)
    return img.shape  # 50x50
def get_num_of_classes():
    return len(os.listdir('gestures/'))  # 13 classes
image_x, image_y = get_image_size()
The CNN Model
def cnn_model():
num_of_classes = get_num_of_classes()
model = Sequential()
model.add(Conv2D(32, (5,5), input_shape=(image_x, image_y, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))
model.add(Conv2D(64, (5,5), activation='relu'))
model.add(MaxPooling2D(pool_size=(5, 5), strides=(5, 5), padding='same'))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(num_of_classes, activation='softmax'))
sgd = optimizers.SGD(lr=1e-4)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
filepath="cnn_model_keras2.h5"
checkpoint1 = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
#checkpoint2 = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint1]
return model, callbacks_list
Trainning
def train():
with open("train_images", "rb") as f:
train_images = np.array(pickle.load(f))
with open("train_labels", "rb") as f:
train_labels = np.array(pickle.load(f), dtype=np.int32)
with open("test_images", "rb") as f:
test_images = np.array(pickle.load(f))
with open("test_labels", "rb") as f:
test_labels = np.array(pickle.load(f), dtype=np.int32)
train_images = np.reshape(train_images, (train_images.shape[0], image_x, image_y, 1))
test_images = np.reshape(test_images, (test_images.shape[0], image_x, image_y, 1))
train_labels = np_utils.to_categorical(train_labels)
test_labels = np_utils.to_categorical(test_labels)
model, callbacks_list = cnn_model()
model.fit(train_images, train_labels, validation_data=(test_images, test_labels), epochs=50, batch_size=100, callbacks=callbacks_list)
scores = model.evaluate(test_images, test_labels, verbose=0)
print("CNN Error: %.2f%%" % (100-scores[1]*100))
but I'm getting this error: ValueError: Error when checking target: expected dense_1 to have shape (13,) but got array with shape (40,). I searched for solutions but nothing worked. If anyone has an idea how to solve it, please help.
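A likely cause: np_utils.to_categorical infers the one-hot width from the largest label value it sees, so label values running up to 39 produce (40,)-wide targets even if only 13 class folders exist; remapping the labels to 0..12 (or passing num_classes) makes the target width match the model. A numpy sketch of the same behaviour (the label values are made up):

```python
import numpy as np

def to_categorical(labels, num_classes=None):
    # Mirrors the Keras behaviour: width defaults to max(labels) + 1.
    labels = np.asarray(labels, dtype=int)
    if num_classes is None:
        num_classes = labels.max() + 1
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

inferred = to_categorical([0, 5, 39])               # width 40 -- the mismatch above
fixed = to_categorical([0, 5, 12], num_classes=13)  # width 13, as the model expects
```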
| 1 | 1 | 0 | 0 | 0 | 0 |
Does anyone know of a Python natural language processing library or module that I could use to find synonyms (or antonyms, etc.) of English words?
| 1 | 1 | 0 | 0 | 0 | 0 |
Trying to use PixelCNN I get the following error with these input arguments:
C:\Users\cknau\Downloads\pixel-cnn-master\pixel-cnn-master>python train2.py
input args:
{
"data_dir":"D:\\PixelCNN\\dataset",
"save_dir":"D:\\PixelCNN\\samples",
"data_set":"cifar",
"save_interval":20,
"load_params":false,
"nr_resnet":5,
"nr_filters":160,
"nr_logistic_mix":10,
"resnet_nonlinearity":"concat_elu",
"class_conditional":false,
"energy_distance":false,
"learning_rate":0.001,
"lr_decay":0.999995,
"batch_size":16,
"init_batch_size":16,
"dropout_p":0.5,
"max_epochs":5000,
"nr_gpu":8,
"polyak_decay":0.9995,
"num_samples":1,
"seed":1
}
Error:
Traceback (most recent call last):
File "train2.py", line 120, in <module>
loss_gen.append(loss_fun(tf.stop_gradient(xs[i]), out))
File "C:\Users\cknau\Downloads\pixel-cnn-master\pixel-cnn-master\pixel_cnn_pp\nn.py", line 83, in discretized_mix_logistic_loss
log_probs = tf.reduce_sum(log_probs,3) + log_prob_from_logits(logit_probs)
File "C:\Users\cknau\Downloads\pixel-cnn-master\pixel-cnn-master\pixel_cnn_pp\nn.py", line 27, in log_prob_from_logits
m = tf.reduce_max(x, axis, keepdims=True)
TypeError: reduce_max() got an unexpected keyword argument 'keepdims'
Can anyone help me out? I have NumPy 1.13, so that's not the issue.
| 1 | 1 | 0 | 0 | 0 | 0 |
I have been using Python's TextBlob library to get sentiment polarity for English.
Now I want to get the sentiment polarity of Urdu written in Latin script.
for example
English sentence : "What is your name"
its equivalent in Urdu written in Latin script:
Urdu sentence (written in Latin script): "Tumhara kia name hai"
I would like suggestions on which procedure to follow to achieve this for the target language using machine learning, in either case:
Supervised learning,
by using recurrent neural networks with a human-tagged data set,
or any unsupervised learning algorithm?
| 1 | 1 | 0 | 1 | 0 | 0 |
I am currently learning about NLP in Python and I am having trouble with some Python syntax.
cfd = nltk.ConditionalFreqDist( #create conditional freq dist
(target, fileid[:4]) #create target (Y) and years (X)
for fileid in inaugural.fileids() #loop through all fileids
for w in inaugural.words(fileid) #loop through each word of each fileids
for target in ['america','citizen'] #loop through target
if w.lower().startswith(target)) #if w.lower() starts with target words
cfd.plot() # plot it
I do not understand the purpose of line 2.
Moreover, I do not understand why the loops do not end with ":" like normal loops in Python.
Can someone explain this code to me? The code works, but I do not fully understand its syntax.
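For what it's worth, here is my current guess at what the expression does, rewritten as ordinary nested loops with colons. Toy data stands in for the inaugural corpus here so I can check both forms produce the same pairs (the file names and words below are made up; please correct me if the equivalence is wrong):

```python
# Toy stand-ins for inaugural.fileids() and inaugural.words(fileid).
fileids = ["1789-Washington.txt", "1861-Lincoln.txt"]
words = {
    "1789-Washington.txt": ["Citizens", "of", "America"],
    "1861-Lincoln.txt": ["American", "citizens", "everywhere"],
}

# The generator expression, as in the original code:
pairs_genexp = [
    (target, fileid[:4])
    for fileid in fileids
    for w in words[fileid]
    for target in ["america", "citizen"]
    if w.lower().startswith(target)
]

# The same logic as ordinary nested loops with colons:
pairs_loops = []
for fileid in fileids:
    for w in words[fileid]:
        for target in ["america", "citizen"]:
            if w.lower().startswith(target):
                pairs_loops.append((target, fileid[:4]))

assert pairs_genexp == pairs_loops
print(pairs_genexp)
```

So, as I understand it, the clauses are read top to bottom like nested loops, and the `(target, fileid[:4])` tuple at the top is what gets produced for each combination that passes the `if`.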
Thank you
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to extract keywords line by line from a csv file and create a keyword field. Right now I am able to get the full extraction. How do I get keywords for each row/field?
Data:
id,some_text
1,"What is the meaning of the word Himalaya?"
2,"Palindrome is a word, phrase, or sequence that reads the same backward as forward"
Code: this currently searches the entire text at once, not row by row. Do I need something other than replace(r'\|', ' ')?
import pandas as pd
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
df = pd.read_csv('test-data.csv')
# print(df.head(5))
text_context = df['some_text'].str.lower().str.replace(r'\|', ' ').str.cat(sep=' ') # not put lower case?
print(text_context)
print('')
tokens=nltk.tokenize.word_tokenize(text_context)
word_dist = nltk.FreqDist(tokens)
stop_words = stopwords.words('english')
punctuations = ['(',')',';',':','[',']',',','!','?']
keywords = [word for word in tokens if not word in stop_words and not word in punctuations]
print(keywords)
final output:
id,some_text,new_keyword_field
1,What is the meaning of the word Himalaya?,"meaning,word,himalaya"
2,"Palindrome is a word, phrase, or sequence that reads the same backward as forward","palindrome,word,phrase,sequence,reads,backward,forward"
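Update: a sketch of the per-row behaviour I'm after, using a plain-Python tokenizer and a tiny hard-coded stop-word list standing in for NLTK (the function names and the stop-word set are my own placeholders):

```python
import re

# Tiny stand-ins for NLTK's tokenizer and stop-word list.
STOP_WORDS = {"what", "is", "the", "of", "a", "or", "that", "as", "same"}

def extract_keywords(text):
    """Lowercase, tokenize, and drop stop words for ONE row."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return ",".join(t for t in tokens if t not in STOP_WORDS)

rows = [
    (1, "What is the meaning of the word Himalaya?"),
    (2, "Palindrome is a word, phrase, or sequence that reads "
        "the same backward as forward"),
]

# One keyword field per row instead of one list for the whole file.
keyword_rows = [(rid, text, extract_keywords(text)) for rid, text in rows]
for rid, text, kw in keyword_rows:
    print(rid, kw)
```

With pandas, I assume this would become a `df['some_text'].apply(extract_keywords)` call written into the new_keyword_field column.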
| 1 | 1 | 0 | 0 | 0 | 0 |
I have seen many papers explaining the use of pretrained word embeddings (such as Word2Vec or Fasttext) on sentence sentiment classification using CNNs (like Yoon Kim's paper). However, these classifiers also account for order that the words appear in.
My application of word embeddings is to predict the class of "pools" of words. For example, in the following list of lists
example = [["red", "blue", "green", "orange"], ["bear", "horse", "cow"], ["brown", "pink"]]
The order of the words doesn't matter, but I want to classify the sublists into either class of color or animal.
Are there any prebuilt Keras implementations of this, or any papers you could point me to which address this type of classification problem based on pretrained word embeddings?
I am sorry if this is off-topic in this forum. If so, please let me know where would be a better place to post it.
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm running the chunk of code under 3. Word to Vectors Integration from this NLP tutorial. It uses Spacy's lexemes to calculate the most similar words to whatever word you give it - I'm using it to try to find the nearest synonyms to the given word. However, if you replace apple with a word like "look" you get a lot of related words, but not synonyms (examples: pretty, there, over, etc.). I was thinking of modifying the code to also filter by part of speech, so that I could just get verbs in the output and would be able to go from that. To do that, I'd need to use tokens so I can use token.pos_, since that function isn't available for lexemes. Does anyone know a way to take the output (list called "others" in the code) and change it from a lexeme to a token? I was reading over spacy's information document for lexemes here, but I haven't been able to find anything about transforming.
I've also tried adding a section of code at the end to the other person's code:
from numpy import dot
from numpy.linalg import norm
import spacy
from spacy.lang.en import English
nlp = English()
parser = spacy.load('en_core_web_md')
my_word = u'calm'
#Generate word vector of the word - apple
apple = parser.vocab[my_word]
#Cosine similarity function
cosine = lambda v1, v2: dot(v1, v2) / (norm(v1) * norm(v2))
others = list({w for w in parser.vocab if w.has_vector and w.orth_.islower()
and w.lower_ != my_word})
print("done listing")
# sort by similarity score
others.sort(key=lambda w: cosine(w.vector, apple.vector))
others.reverse()
for word in others[:10]:
    print(word.orth_)
The part I added:
b = ""
for word in others[:10]:
    a = str(word) + ' '
    b += a
doc = nlp(b)
print(doc)
token = doc[0]
counter = 1
while counter < 50:
    token += doc[counter]
    counter += 1
print(token)
This is the output error:
'token += doc[counter]
TypeError: unsupported operand type(s) for +=: 'spacy.tokens.token.Token' and 'spacy.tokens.token.Token'
<spacy.lexeme.Lexeme object at 0x000002920ABFAA68> <spacy.lexeme.Lexeme object at 0x000002920BD56EE8> '
Does anyone have any suggestions to fix what I did or another way to change the lexeme to a token? Thank you!
| 1 | 1 | 0 | 0 | 0 | 0 |
I am using Word2vec through gensim with Google's pretrained vectors trained on Google News. I have noticed that the word vectors I can access by doing direct index lookups on the Word2Vec object are not unit vectors:
>>> import numpy
>>> from gensim.models import Word2Vec
>>> w2v = Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
>>> king_vector = w2v['king']
>>> numpy.linalg.norm(king_vector)
2.9022589
However, in the most_similar method, these non-unit vectors are not used; instead, normalised versions are used from the undocumented .syn0norm property, which contains only unit vectors:
>>> w2v.init_sims()
>>> unit_king_vector = w2v.syn0norm[w2v.vocab['king'].index]
>>> numpy.linalg.norm(unit_king_vector)
0.99999994
The larger vector is just a scaled up version of the unit vector:
>>> king_vector - numpy.linalg.norm(king_vector) * unit_king_vector
array([ 0.00000000e+00, -1.86264515e-09, 0.00000000e+00,
0.00000000e+00, -1.86264515e-09, 0.00000000e+00,
-7.45058060e-09, 0.00000000e+00, 3.72529030e-09,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
... (some lines omitted) ...
-1.86264515e-09, -3.72529030e-09, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00], dtype=float32)
Given that word similarity comparisons in Word2Vec are done by cosine similarity, it's not obvious to me what the lengths of the non-normalised vectors mean - although I assume they mean something, since gensim exposes them to me rather than only exposing the unit vectors in .syn0norm.
How are the lengths of these non-normalised Word2vec vectors generated, and what is their meaning? For what calculations does it make sense to use the normalised vectors, and when should I use the non-normalised ones?
| 1 | 1 | 0 | 0 | 0 | 0 |
I wrote and ran an AI search program that is supposed to search from the start state until the goal is found. However, when I run it, I do not get the search result; instead I get 'fail' and None. Any idea what could be causing the issue would be much appreciated.
grid = [[0, 0, 1, 0, 0, 0],
[0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 1, 1, 1, 0],
[0, 0, 0, 0, 1, 0]]
init = [0, 0]
goal = [len(grid)-1, len(grid[0])-1]
cost = 1
delta = [[-1, 0], # go up
[ 0,-1], # go left
[ 1, 0], # go down
[ 0, 1]] # go right
delta_name = ['^', '<', 'v', '>']
def search():
    closed = [[0 for row in range(len(grid[0]))] for col in range(len(grid))]
    closed[init[0]][init[1]] = 1
    x = init[0]
    y = init[1]
    g = 0
    open = [[g, x, y]]
    found = False
    resign = False
    while found is False and resign is False:
        if len(open) == 0:
            resign = True
            print 'fail'
        else:
            open.sort()
            open.reverse()
            next = open.pop()
            x = next[3]
            y = next[4]
            g = next[1]
            if x == goal[0] and y == goal[1]:
                found = next
                print next
            else:
                for i in range(len(delta)):
                    x2 = x + delta[i][0]
                    y2 = y + delta[i][1]
                    if x2 >= 0 and x2 < len(grid) and y2 >= 0 and y2 < len(grid):
                        if closed[x2][y2] == 0 and grid[x2][y2] == 0:
                            g2 = g + cost
                            open.append([g2, x2, y2])
                            closed[x2][y2] = 1

print search()
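Update: while debugging I noticed that a popped entry only has three elements, [g, x, y], so next[3] and next[4] look out of range, and the y-bound check compares against len(grid) instead of len(grid[0]). Here is a cleaned-up sketch (my own Python 3 rewrite, not the original assignment code) that does reach the goal on this grid:

```python
def search_fixed(grid, init, goal, cost=1):
    """Uniform-cost search on a 0/1 grid; a popped entry is [g, x, y]."""
    delta = [[-1, 0], [0, -1], [1, 0], [0, 1]]  # up, left, down, right
    closed = [[0] * len(grid[0]) for _ in range(len(grid))]
    closed[init[0]][init[1]] = 1
    open_list = [[0, init[0], init[1]]]
    while open_list:
        open_list.sort()
        g, x, y = open_list.pop(0)  # cheapest entry first; x, y at indices 1 and 2
        if [x, y] == goal:
            return [g, x, y]
        for dx, dy in delta:
            x2, y2 = x + dx, y + dy
            # note: the column bound must use len(grid[0]), not len(grid)
            if 0 <= x2 < len(grid) and 0 <= y2 < len(grid[0]):
                if closed[x2][y2] == 0 and grid[x2][y2] == 0:
                    open_list.append([g + cost, x2, y2])
                    closed[x2][y2] = 1
    return "fail"

grid = [[0, 0, 1, 0, 0, 0],
        [0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 1, 0],
        [0, 0, 1, 1, 1, 0],
        [0, 0, 0, 0, 1, 0]]
print(search_fixed(grid, [0, 0], [len(grid) - 1, len(grid[0]) - 1]))
```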
| 1 | 1 | 0 | 0 | 0 | 0 |
Below is my code:
import sklearn
#features = [[140,"smooth"],[130,"smooth"],[150,"bumpy"],[170,"bumpy"]]
#labels = ["apple","apple","orange","orange"]
# Now replace 1 for smooth & 0 for bumpy and 0 for apple & 1 for orange
features = [[140,1],[130,1],[150,0],[170,0]]
labels = [0,0,1,1]
# Now I train a classifier
from sklearn import tree
my_classifier = tree.DecisionTreeClassifier()
my_classifier.fit(features,labels)
predict = my_classifier.predict([[150,0]])
print(predict)
How can I train a classifier without converting it to numbers?
e.g. I want below lines of code to classify my classifier. Please suggest, thanks in advance:)
features = [[140,"smooth"],[130,"smooth"],[150,"bumpy"],[170,"bumpy"]]
labels = ["apple","apple","orange","orange"]
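From what I've read, scikit-learn estimators only accept numeric input, so the usual approach seems to be letting an encoder build the string-to-number mapping automatically (e.g. sklearn's DictVectorizer or OneHotEncoder) rather than replacing values by hand. A minimal pure-Python sketch of what such an encoder does under the hood (my own toy version, not the sklearn API):

```python
def fit_mapping(values):
    """Assign each distinct string a column index, like a one-hot encoder."""
    return {v: i for i, v in enumerate(sorted(set(values)))}

def one_hot(value, mapping):
    row = [0] * len(mapping)
    row[mapping[value]] = 1
    return row

features = [[140, "smooth"], [130, "smooth"], [150, "bumpy"], [170, "bumpy"]]
labels = ["apple", "apple", "orange", "orange"]

texture_map = fit_mapping(row[1] for row in features)  # {'bumpy': 0, 'smooth': 1}
label_map = fit_mapping(labels)                        # {'apple': 0, 'orange': 1}

# Numeric rows, ready for tree.DecisionTreeClassifier().fit(X, y)
X = [[row[0]] + one_hot(row[1], texture_map) for row in features]
y = [label_map[l] for l in labels]
print(X)
print(y)
```

The inverse mapping can then turn a predicted 0/1 back into "apple"/"orange" for display.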
| 1 | 1 | 0 | 1 | 0 | 0 |
I want to analyze sentences with NLTK and display their chunks as a tree. NLTK offers the method tree.draw() to draw a tree. This following code draws a tree for the sentence "the little yellow dog barked at the cat":
import nltk
sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"), ("barked","VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]
pattern = "NP: {<DT>?<JJ>*<NN>}"
NPChunker = nltk.RegexpParser(pattern)
result = NPChunker.parse(sentence)
result.draw()
The result is this tree:
How do I get a tree with one more level, like this?
| 1 | 1 | 0 | 0 | 0 | 0 |
The issue:
I am confused as to why we transform our test data using the CountVectorizer fitted on our train data for bag of words classification.
Why would we not create a new CountVectorizer and fit the test data to this and have the classifier predict on the test CountVectorizer?
Looking here: How to standardize the bag of words for train and test?
Ripped from the answer:
LabeledWords=pd.DataFrame(columns=['word','label'])
LabeledWords.append({'word':'Church','label':'Religion'} )
vectorizer = CountVectorizer()
Xtrain,yTrain=vectorizer.fit_transform(LabeledWords['word']).toarray(),vectorizer.fit_transform(LabeledWords['label']).toarray()
forest = RandomForestClassifier(n_estimators = 100)
clf=forest.fit(Xtrain,yTrain)
for each_word, label in Preprocessed_list:
    test_featuresX.append(vectorizer.transform(each_word).toarray())
    test_featuresY.append(label.toarray())
clf.score(test_featuresX, test_featuresY)
We can see the user created a CountVectorizer and fit it to the training data. Then fit the classifier to this CountVectorizer. Afterwards the user just transformed the test data using the CountVectorizer which was fit to the train data, and fed this into the classifier. Why is that?
What I am trying to accomplish:
I am trying to implement bag of visual words. It uses the same concept, but I am unsure how should create my train and test sets for classification.
| 1 | 1 | 0 | 1 | 0 | 0 |
I'd like to extract author names from pdf papers. Does anybody know a robust way to do so?
For example, I'd like to extract the name Archana Shukla from this pdf https://arxiv.org/pdf/1111.1648
| 1 | 1 | 0 | 0 | 0 | 0 |
I want to generate char-n-grams of sizes 2 to 4. This is what I have by now:
from nltk import ngrams
sentence = ['i have an apple', 'i like apples so much']
for i in range(len(sentence)):
    for n in range(2, 4):
        n_grams = ngrams(sentence[i].split(), n)
        for grams in n_grams:
            print(grams)
This will give me:
('i', 'have')
('have', 'an')
('an', 'apple')
('i', 'have', 'an')
('have', 'an', 'apple')
('i', 'like')
('like', 'apples')
('apples', 'so')
('so', 'much')
('i', 'like', 'apples')
('like', 'apples', 'so')
('apples', 'so', 'much')
How can I do this in an optimal way? I have very large input data and my solution contains nested for loops, so the complexity is high and the algorithm takes a long time to finish.
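A sketch of what I'm after without nltk, using zip so each n-gram list is built in one pass, and a Counter if frequencies are also needed. Two things I noticed about my own code along the way: range(2, 4) only covers sizes 2 and 3, so sizes 2 to 4 inclusive would presumably need range(2, 5), and sentence.split() gives word n-grams, so true character n-grams would slide over the string itself:

```python
from collections import Counter

sentences = ['i have an apple', 'i like apples so much']

def word_ngrams(tokens, n):
    # zip over n shifted views of the token list: one pass, no nested Python loops
    return list(zip(*(tokens[i:] for i in range(n))))

def char_ngrams(text, n):
    # slide an n-wide window over the raw string
    return [text[i:i + n] for i in range(len(text) - n + 1)]

counts = Counter()
for sentence in sentences:
    tokens = sentence.split()
    for n in range(2, 5):  # sizes 2, 3 and 4 inclusive
        counts.update(word_ngrams(tokens, n))

print(counts[('i', 'have')])
print(char_ngrams('apple', 2))
```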
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to generate a summary of a large text file using the Gensim summarizer.
I am getting a memory error. I have been facing this issue for some time; any help
would be really appreciated. Feel free to ask for more details.
from gensim.summarization.summarizer import summarize
file_read =open("xxxxx.txt",'r')
Content= file_read.read()
def Summary_gen(content):
    print(len(Content))
    summary_r = summarize(Content, ratio=0.02)
    print(summary_r)

Summary_gen(Content)
The length of the document is:
365042
Error messsage:
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
<ipython-input-6-a91bd71076d1> in <module>()
10
11
---> 12 Summary_gen(Content)
<ipython-input-6-a91bd71076d1> in Summary_gen(content)
6 def Summary_gen(content):
7 print(len(Content))
----> 8 summary_r=summarize(Content,ratio=0.02)
9 print(summary_r)
10
c:\python3.6\lib\site-packages\gensim\summarization\summarizer.py in summarize(text, ratio, word_count, split)
428 corpus = _build_corpus(sentences)
429
--> 430 most_important_docs = summarize_corpus(corpus, ratio=ratio if word_count is None else 1)
431
432 # If couldn't get important docs, the algorithm ends.
c:\python3.6\lib\site-packages\gensim\summarization\summarizer.py in summarize_corpus(corpus, ratio)
367 return []
368
--> 369 pagerank_scores = _pagerank(graph)
370
371 hashable_corpus.sort(key=lambda doc: pagerank_scores.get(doc, 0), reverse=True)
c:\python3.6\lib\site-packages\gensim\summarization\pagerank_weighted.py in pagerank_weighted(graph, damping)
57
58 """
---> 59 adjacency_matrix = build_adjacency_matrix(graph)
60 probability_matrix = build_probability_matrix(graph)
61
c:\python3.6\lib\site-packages\gensim\summarization\pagerank_weighted.py in build_adjacency_matrix(graph)
92 neighbors_sum = sum(graph.edge_weight((current_node, neighbor)) for neighbor in graph.neighbors(current_node))
93 for j in xrange(length):
---> 94 edge_weight = float(graph.edge_weight((current_node, nodes[j])))
95 if i != j and edge_weight != 0.0:
96 row.append(i)
c:\python3.6\lib\site-packages\gensim\summarization\graph.py in edge_weight(self, edge)
255
256 """
--> 257 return self.get_edge_properties(edge).setdefault(self.WEIGHT_ATTRIBUTE_NAME, self.DEFAULT_WEIGHT)
258
259 def neighbors(self, node):
c:\python3.6\lib\site-packages\gensim\summarization\graph.py in get_edge_properties(self, edge)
404
405 """
--> 406 return self.edge_properties.setdefault(edge, {})
407
408 def add_edge_attributes(self, edge, attrs):
MemoryError:
I have tried looking up for this error on the internet, but, couldn't find a workable solution to this.
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a list of properties and I need to build a logical representation of a sentence using lambda calculus. For example, for the property 'located in' it needs to return (x, y) | <x, located in, y>.
I tried this, but it's not correct:
for index, row in properties.iterrows():
    def parse_r(properties, x, y):
        return lambda x, y: <x, row['Property'], y>
i get this error
return lambda x, y:
^
SyntaxError: invalid syntax
The system should understand that the relation between x and y is what's in the middle and produce the needed logical representation.
How can I do this with lambda calculus in Python code?
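As far as I can tell, `<x, row['Property'], y>` is not valid Python on its own, which would explain the SyntaxError. My current workaround idea is to have the lambda return a plain tuple (or a formatted string) standing for the triple; the property value below is just an example:

```python
def make_relation(prop):
    """Return a lambda that maps (x, y) to the triple <x, prop, y>."""
    return lambda x, y: (x, prop, y)

located_in = make_relation("located in")
print(located_in("Paris", "France"))

# String form, if the "<x, p, y>" notation itself is wanted:
as_text = lambda x, p, y: "<{}, {}, {}>".format(x, p, y)
print(as_text("Paris", "located in", "France"))
```

In the loop, `make_relation(row['Property'])` would then build one such lambda per property, avoiding the late-binding problem of closing over the loop variable directly.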
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to use spaCy to create a new entity categorization 'Species' with a list of species names, example can he found here.
I found a tutorial for training new entity type from this spaCy tutorial (Github code here). However, the problem is, I don't want to manually create a sentence for each species name as it would be very time consuming.
I created below training data, which looks like this:
TRAIN_DATA = [('Bombina',{'entities':[(0,6,'SPECIES')]}),
('Dermaptera',{'entities':[(0,9,'SPECIES')]}),
....
]
The way I created the training set is: instead of providing a full sentence and the location of the matched entity, I only provide the name of each species, and the start and end index are programmatically generated:
[( 0, 6, 'SPECIES' )]
[( 0, 9, 'SPECIES' )]
Below training code is what I used to train the model. (Code copied from above hyperlink)
nlp = spacy.blank('en')  # create blank Language class
# Add entity recognizer to model if it's not in the pipeline
# nlp.create_pipe works for built-ins that are registered with spaCy
if 'ner' not in nlp.pipe_names:
    ner = nlp.create_pipe('ner')
    nlp.add_pipe(ner)
# otherwise, get it, so we can add labels to it
else:
    ner = nlp.get_pipe('ner')
ner.add_label(LABEL)  # add new entity label to entity recognizer
if model is None:
    optimizer = nlp.begin_training()
else:
    # Note that 'begin_training' initializes the models, so it'll zero out
    # existing entity types.
    optimizer = nlp.entity.create_optimizer()

# get names of other pipes to disable them during training
other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner']
with nlp.disable_pipes(*other_pipes):  # only train NER
    for itn in range(n_iter):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            nlp.update([text], [annotations], sgd=optimizer, drop=0.35, losses=losses)
        print(losses)
I'm new to NLP and spaCy, so please let me know whether I did this correctly. Also, why did my attempt fail in training? (When I ran it, it threw an error.)
[UPDATE]
The reason I want to feed only keywords to the training model is that, ideally, I hope the model will learn those keywords first, and once it identifies a context containing a keyword, it will learn the associated context and thereby enhance the current model.
At first glance, this is more like a regex expression. But with more and more data feeding in, the model will keep learning and finally be able to identify new species names that were not present in the original training set.
Thanks,
Katie
| 1 | 1 | 0 | 1 | 0 | 0 |
The following question is about the Spacy NLP library for Python, but I would be surprised if the answer for other libraries differed substantially.
What is the maximum document size that Spacy can handle under reasonable memory conditions (e.g. a 4 GB VM in my case)? I had hoped to use Spacy to search for matches in book-size documents (100K+ tokens), but I'm repeatedly getting crashes that point to memory exhaustion as the cause.
I'm an NLP noob - I know the concepts academically, but I don't really know what to expect out of the state of the art libraries in practice. So I don't know if what I'm asking the library to do is ridiculously hard, or so easy that must be something I've screwed up in my environment.
As far as why I'm using an NLP library instead of something specifically oriented toward document search (e.g. solr), I'm using it because I would like to do lemma-based matching, rather than string-based.
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm trying to build an app that finds relevant sentences in a document based on keywords or statements that a user enters. Performing this manually using a needle-in-the-haystack approach seems highly inefficient.
Is there an ideal approach or library that can tackle this problem?
| 1 | 1 | 0 | 1 | 0 | 0 |
I have a set of text documents and want to count the number of bigrams over all text documents.
First, I create a list where each element is again a list representing the words in one specific document:
print(doc_clean)
# [['This', 'is', 'the', 'first', 'doc'], ['And', 'this', 'is', 'the', 'second'], ..]
Then, I extract the bigrams document-wise and store them in a list:
bigrams = []
for doc in doc_clean:
    bigrams.extend([(doc[i-1], doc[i]) for i in range(1, len(doc))])
print(bigrams)
# [('This', 'is'), ('is', 'the'), ..]
Now, I want to count the frequency of each unique bigram:
bigrams_freq = [(b, bigrams.count(b))
for b in set(bigrams)]
Generally, this approach works, but it is far too slow. The list of bigrams is quite big, with ~5 million entries in total and ~300k unique bigrams. On my laptop, the current approach takes too much time for the analysis.
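For reference, the part that looks quadratic is `bigrams.count(b)` inside the comprehension: one full pass over the ~5 million entries per unique bigram. Counting everything in a single pass with collections.Counter should presumably bring this down to linear time (sketch with the toy documents from above):

```python
from collections import Counter

doc_clean = [['This', 'is', 'the', 'first', 'doc'],
             ['And', 'this', 'is', 'the', 'second']]

bigram_freq = Counter()
for doc in doc_clean:
    # zip pairs each word with its successor; one pass per document
    bigram_freq.update(zip(doc, doc[1:]))

print(bigram_freq[('is', 'the')])
```

`bigram_freq.most_common(10)` would then give the top bigrams directly.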
Thanks for helping me!
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to execute in parallel some machine learning algorithm.
When I use multiprocessing, it's slower than without. My wild guess is that the pickle serialization of the models I use slowing down the whole process. So the question is: how can I initialize the pool's worker with an initial state so that I don't need to serialize/deserialize for every single call the models?
Here is my current code:
import pickle
from pathlib import Path
from collections import Counter
from multiprocessing import Pool
from gensim.models.doc2vec import Doc2Vec
from wikimark import html2paragraph
from wikimark import tokenize
def process(args):
    doc2vec, regressions, filepath = args
    with filepath.open('r') as f:
        string = f.read()
    subcategories = Counter()
    for index, paragraph in enumerate(html2paragraph(string)):
        tokens = tokenize(paragraph)
        vector = doc2vec.infer_vector(tokens)
        for subcategory, model in regressions.items():
            prediction = model.predict([vector])[0]
            subcategories[subcategory] += prediction
    # compute the mean score for each subcategory
    for subcategory, prediction in subcategories.items():
        subcategories[subcategory] = prediction / (index + 1)
    # keep only the main category
    subcategory = subcategories.most_common(1)[0]
    return (filepath, subcategory)

def main():
    input = Path('./build')
    doc2vec = Doc2Vec.load(str(input / 'model.doc2vec.gz'))
    regressions = dict()
    for filepath in input.glob('./*/*/*.model'):
        with filepath.open('rb') as f:
            model = pickle.load(f)
        regressions[filepath.parent] = model
    examples = list(input.glob('../data/wikipedia/english/*'))
    with Pool() as pool:
        iterable = zip(
            [doc2vec] * len(examples),  # XXX!
            [regressions] * len(examples),  # XXX!
            examples
        )
        for filepath, subcategory in pool.imap_unordered(process, iterable):
            print('* {} -> {}'.format(filepath, subcategory))

if __name__ == '__main__':
    main()
The lines marked with XXX! point to the data that is serialized when I call pool.imap_unordered. There are at least 200 MB of data that get serialized.
How can I avoid serialization?
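The pattern I'm currently experimenting with: load the heavy models once per worker via the Pool's initializer, store them in a module-level global, and pass only the small file path through imap_unordered. A sketch with a plain dict standing in for the doc2vec/regression models (all names here are my own placeholders):

```python
from multiprocessing import Pool

_models = None  # populated once per worker process by the initializer

def init_worker():
    global _models
    # In the real code this would be Doc2Vec.load(...) plus the pickle.load
    # calls; here a plain dict stands in for the ~200 MB of models.
    _models = {"offset": 100}

def process(path):
    # only the small per-task argument (the file path) is pickled now
    return path, _models["offset"] + len(path)

def classify_all(paths):
    # each worker runs init_worker exactly once, not once per task
    with Pool(initializer=init_worker) as pool:
        return sorted(pool.imap_unordered(process, paths))
```

Under the default fork start method on Linux, an alternative is to load the models at module level before creating the Pool and rely on the workers inheriting them; the initializer pattern has the advantage of also working with the spawn start method.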
| 1 | 1 | 0 | 0 | 0 | 0 |
I wonder if I have correctly understood the idea of using word embeddings in natural language processing. I want to show you how I perceive it and ask whether my interpretation is correct.
Let's assume that we want to predict whether a sentence is positive or negative. We will use a pre-trained word embedding prepared on a very large text corpus, with dimension equal to 100. This means that for each word we have 100 values. Our file looks like this:
...
new -0.68538535 -0.08992791 0.8066535 other 97 values ...
man -0.6401568 -0.05007627 0.65864474 ...
many 0.18335487 -0.10728102 0.468635 ...
doesnt 0.0694685 -0.4131108 0.0052553082 ...
...
Obviously we have a test and a train set. We will use an sklearn model to fit and predict results. Our train set looks like this:
1 This is positive and very amazing sentence.
0 I feel very sad.
And test set contains sentences like:
In my opinion people are amazing.
My doubts are mainly related to the preprocessing of the input data. I wonder whether it should be done in this way:
For all sentences we do, for instance, tokenization, stop-word removal, lowercasing, etc. So for our example we get:
'this', 'is', 'positive', 'very', 'amazing', 'sentence'
'i', 'feel', 'very', 'sad'
'in', 'my', 'opinion', 'people', 'amazing'
We use pad_sequences:
1,2,3,4,5,6
7,8,4,9
10,11,12,13,5
Furthermore, we check the length of the longest sentence in both the train set and the test set. Let's assume that in our case the maximum length equals 10. We need all vectors to have the same length, so we fill the remaining fields with zeros.
1,2,3,4,5,6,0,0,0,0
7,8,4,9,0,0,0,0,0,0
10,11,12,13,5,0,0,0,0,0
Now the biggest doubt - we assign values from our word2vec embedding file to all words from the prepared vectors of the training set and the test set.
Our word2vec embedding file looks like this:
...
in -0.039903056 0.46479827 0.2576446 ...
...
opinion 0.237968 0.17199863 -0.23182874...
...
people 0.2037858 -0.29881874 0.12108547 ...
...
amazing 0.20736384 0.22415389 0.09953516 ...
...
my 0.46468195 -0.35753986 0.6069699 ...
...
And, for instance, for 'in', 'my', 'opinion', 'people', 'amazing', which equals 10,11,12,13,5,0,0,0,0,0, we get a table of tables like this:
[-0.039903056 0.46479827 0.2576446 ...],[0.46468195 -0.35753986 0.6069699 ...],[0.237968 0.17199863 -0.23182874...],[0.2037858 -0.29881874 0.12108547 ...],[0.20736384 0.22415389 0.09953516 ...],0,0,0,0,0
Finally, our train set looks like this:
x y
1 [0.237968 0.17199863 -0.23182874...],[next 100 values],[next 100 values],[...],[...],0,0,0,0,0,
0 [...],[...],[...],[...],[...],[...],[...],0,0,0
1 [...],[...],[...],[...],[...],0,0,0,0,0
...
And the test set looks like this:
y
[100 values],[...],[...],[...],0,0,0,0,0,0
...
In the last step we train our model using, for example, an sklearn model:
LogisticRegression().fit(values from y column of train set, values from x column of train set)
Then we predict data:
LogisticRegression().predict(values from y column of test set)
Above I described the whole process, with the steps that give me the most doubts. Please point out the mistakes I have made in my reasoning and explain them. I want to be sure that I understood it correctly. Thank you in advance for your help.
| 1 | 1 | 0 | 1 | 0 | 0 |
I am trying to read the data shown above.
I have tried various ways and still get errors of different types.
import codecs
f = codecs.open('sampledata.xlsx', encoding='utf-8')
for line in f:
print (repr(line))
the other way I tried is
f = open(fname, encoding="ascii", errors="surrogateescape")
Still no luck. Any help?
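If the file really is an .xlsx, I suspect no text codec can work: .xlsx is a binary ZIP container of XML parts, not a text file, so decoding its raw bytes as UTF-8 or ASCII fails regardless of the error handler. A tiny demonstration of both points with in-memory data (the usual fix, which I believe is pandas.read_excel or openpyxl, is shown only as a comment since it needs the real file):

```python
import io
import zipfile

# An .xlsx file is a ZIP archive of XML parts, not plain text.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("xl/workbook.xml", "<workbook/>")
print(zipfile.is_zipfile(io.BytesIO(buf.getvalue())))  # such archives report True

# Decoding arbitrary binary bytes as text is what raises the errors:
try:
    b"\x80\x81\x82".decode("utf-8")
except UnicodeDecodeError:
    print("binary bytes cannot be decoded as UTF-8 text")

# The usual route instead (needs pandas plus an engine such as openpyxl):
# import pandas as pd
# df = pd.read_excel("sampledata.xlsx")
```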
| 1 | 1 | 0 | 0 | 0 | 0 |
I have the phrase: I have 10 bla, 50 blo, 10 blu
I want to make a dict like this, using regex:
dictionary = {
"bla": 10,
"blo": 50,
"blu": 10
}
and if I receive the phrase I have 50 blo, 5 blu, I make the dict, but without the key bla. Like this:
dictionary = {
"blo": 50,
"blu": 5
}
Edit
The input can come in different formats, like: I want 50 haha, 20 xxx, 17 y, or I got 10 xxx, 17 hahaha, 3 xxx.
It also needs to accept decimal numbers: 30.5, 10,5
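A sketch of a regex that seems to cover the examples above, assuming the pattern is always "number then word" and that both "." and "," can act as the decimal separator (normalising "," to "." is my own assumption, and repeated words keep the last value):

```python
import re

# a number (with optional . or , decimal part), whitespace, then a word
PAIR = re.compile(r"(\d+(?:[.,]\d+)?)\s+([a-zA-Z]+)")

def phrase_to_dict(phrase):
    result = {}
    for number, word in PAIR.findall(phrase):
        value = float(number.replace(",", "."))
        # keep ints as ints so "10" maps to 10, not 10.0
        result[word] = int(value) if value.is_integer() else value
    return result

print(phrase_to_dict("I have 10 bla, 50 blo, 10 blu"))
print(phrase_to_dict("I want 50 haha, 20 xxx, 17 y"))
print(phrase_to_dict("I have 30.5 bla, 10,5 blo"))
```

Keys missing from a phrase simply never enter the dict, which matches the second example above.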
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm looking for a pythonic interface to load ARPA files (back-off language models) and use them to evaluate some text, e.g. get its log-probability, perplexity etc.
I don't need to generate the ARPA file in Python, only to use it for querying.
Does anybody have a recommended package?
I already saw kenlm and swig-srilm, but the first is very hard to set up on Windows and the second seems to be unmaintained.
| 1 | 1 | 0 | 0 | 0 | 0 |
I am currently trying to implement a neural network that uses a doc2vec vector, and then uses that to work further.
I have a machine which only allows me to use tensorflow (this is a requirement!), so I need a model to transform a sentence / paragraph to a vector.
I know about gensim's doc2vec and this implementation. I have experience with gensim's implementation, but it apparently does not use tensorflow in the backend. The latter link, however, does not seem to work without a few hours or days of debugging.
I would be grateful for any links and recommendations!
| 1 | 1 | 0 | 0 | 0 | 0 |
I have parallel translated corpus in English-French (text.en,text.fr),
each text includes around 500K of lines (sentences in source and target languge). what I want is to:
1- Remove the duplicated lines in both texts using a Python command, avoiding any alignment problem between the two files. E.g.: if the command deletes line 32 in text.en, it must of course also delete it in text.fr.
2- Then split both files into train/dev/test data: only 1K for dev, 1K for test, and the rest for train.
I need to split text.en and text.fr using the same command, so I can keep the alignment and corresponding sentences in both files.
It would be better if I could extract the test and dev data randomly; that would help get better results.
How can I do that? Please share the commands.
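A Python sketch of the idea, with a toy corpus standing in for the 500K-line files (the dev/test sizes, the random seed, and deduplicating on the sentence pair rather than on the source side alone are my own choices):

```python
import random

def dedup_and_split(src_lines, tgt_lines, n_dev=1000, n_test=1000, seed=42):
    # 1) drop duplicate pairs, deleting the SAME line index in both files
    seen = set()
    pairs = []
    for en, fr in zip(src_lines, tgt_lines):
        key = (en, fr)  # or just `en` to dedup on the source side only
        if key not in seen:
            seen.add(key)
            pairs.append((en, fr))

    # 2) shuffle once so dev/test are random but the pairs stay aligned
    random.Random(seed).shuffle(pairs)
    dev = pairs[:n_dev]
    test = pairs[n_dev:n_dev + n_test]
    train = pairs[n_dev + n_test:]
    return train, dev, test

# toy corpus instead of the real text.en / text.fr
en = ["hello", "good morning", "hello", "thanks"]
fr = ["bonjour", "bonjour", "bonjour", "merci"]
train, dev, test = dedup_and_split(en, fr, n_dev=1, n_test=1)
print(len(train), len(dev), len(test))
```

Writing the splits back out would just be one loop per split that writes `en` to the .en file and `fr` to the .fr file, so line i of each output file still corresponds.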
I appreciate any help, Thank you !
| 1 | 1 | 0 | 0 | 0 | 0 |
This is my complete text:
RETENTION
Liability in excess of the Retention
The Retention shall be borne by the Named Insured and the Insurer shall only be liable for Loss once the Retention has been fully eroded. The Retention shall apply until such time as it has been fully eroded after which no Retention shall apply.
Erosion of the Retention
The Retention shall be eroded by Loss for which the Insurer would be liable under this Policy but for the Retention.
I want to extract the whole RETENTION paragraph.
This was my code to extract the sentences which have a specific word (here: Retention).
abc3=([sentence + '.' for sentence in txt_trim_string.split('.') if 'RETENTION' in sentence])
But this gave the output as:
RETENTION
Liability in excess of the Retention
The Retention shall be borne by the Named Insured and the Insurer shall only be liable for Loss once the Retention has been fully eroded.
I also want to include:
Erosion of the Retention
The Retention shall be eroded by Loss for which the Insurer would be liable under this Policy but for the Retention.
How can I do that?
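What I am now considering instead of splitting on '.': capture everything from the RETENTION heading up to the next all-caps heading or the end of the text. A regex sketch of that idea (the "a heading is a line in ALL CAPS" rule is my own assumption, and I appended a fake EXCLUSIONS section to the sample just to test the boundary):

```python
import re

text = """RETENTION
Liability in excess of the Retention
The Retention shall be borne by the Named Insured and the Insurer shall only be liable for Loss once the Retention has been fully eroded.
Erosion of the Retention
The Retention shall be eroded by Loss for which the Insurer would be liable under this Policy but for the Retention.
EXCLUSIONS
Nothing in this section belongs to RETENTION."""

# capture from the heading up to the next ALL-CAPS heading line or end of text
match = re.search(r"^RETENTION\n(.*?)(?=^[A-Z ]{3,}$|\Z)", text,
                  flags=re.S | re.M)
section = match.group(1).strip()
print(section)
```

The lookahead `(?=^[A-Z ]{3,}$|\Z)` is what stops the match at the next heading instead of at the first '.'.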
| 1 | 1 | 0 | 0 | 0 | 0 |
I would need to find something like the opposite of model.most_similar()
While most_similar() returns an array of words most similar to the one given as input, I need to find a sort of "center" of a list of words.
Is there a function in gensim or any other tool that could help me?
Example:
Given {'chimichanga', 'taco', 'burrito'} the center would be maybe mexico or food, depending on the corpus that the model was trained on
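From what I can tell, gensim's most_similar already accepts a list of positive words: model.most_similar(positive=['chimichanga', 'taco', 'burrito']) averages their vectors and returns the neighbours of that mean, which sounds like the "center" described above. A sketch of the underlying computation with toy 3-d vectors in place of a trained model (the toy vocabulary and its values are made up):

```python
from math import sqrt

vocab = {
    "taco":        [0.9, 0.1, 0.0],
    "burrito":     [0.8, 0.2, 0.0],
    "chimichanga": [0.7, 0.3, 0.0],
    "food":        [0.8, 0.2, 0.1],
    "car":         [0.0, 0.1, 0.9],
}

def mean_vector(words):
    vecs = [vocab[w] for w in words]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

query = {"taco", "burrito", "chimichanga"}
center = mean_vector(sorted(query))
# rank every other word by similarity to the center of the query words
others = sorted((w for w in vocab if w not in query),
                key=lambda w: cosine(vocab[w], center), reverse=True)
print(others[0])
```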
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to generate Word2vec vectors.
I have pandas data frame.
I transformed it into tokens.
df["token"]
Used Word2vec from gensim.models
model = w2v.Word2Vec(
sentences=df["token"],
seed=seed,
workers=num_workers,
size=num_features,
min_count=min_word_count,
window=context_size,
sample=downsampling
)
How do I transform my dataframe df now?
That is, what is the equivalent of doing
model.transform(df)
| 1 | 1 | 0 | 0 | 0 | 0 |
I have 90 documents with around 40 pages each (raw text). I want to tokenize them with spacy.
nlp = spacy.load('de')
tokenized_list = []
for document in doc_collection:
    temp_doc = nlp(document)
    tokenized_list.append(temp_doc)
It works for a small number of documents, but when I try to tokenize all of them, I get a MemoryError:
"...site-packages\numpy\core\shape_base.py", line 234, in vstack
    return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)
MemoryError"
Does somebody know how I can fix it?
Update:
I can execute it over and over again without changing the documents, and it gets stuck sometimes on one document, sometimes on another - really weird... Has anybody seen a similar problem?
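One workaround I'm currently testing (a sketch with spacy.blank('de') standing in for spacy.load('de') so it runs standalone): stream the documents through nlp.pipe in batches and keep only the token texts, since holding 90 full Doc objects of 40 pages each in a list is itself a lot of memory.

```python
import spacy

nlp = spacy.blank('de')  # stand-in for spacy.load('de') in this sketch
doc_collection = ['Das ist der erste Text.', 'Und hier der zweite.']

tokenized_list = []
# nlp.pipe streams documents in batches instead of building all Docs up front
for temp_doc in nlp.pipe(doc_collection, batch_size=10):
    tokenized_list.append([token.text for token in temp_doc])
print(tokenized_list[0])
```

Would this kind of streaming be enough, or is the MemoryError coming from somewhere else?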
| 1 | 1 | 0 | 0 | 0 | 0 |
I want to parse a sentence to a binary parse of this form (Format used in the SNLI corpus):
sentence:"A person on a horse jumps over a broken down airplane."
parse: ( ( ( A person ) ( on ( a horse ) ) ) ( ( jumps ( over ( a ( broken ( down airplane ) ) ) ) ) . ) )
I'm unable to find a parser which does this.
Note: this question has been asked earlier (How to get a binary parse in Python), but the answers are not helpful, and I was unable to comment because I do not have the required reputation.
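In case it helps clarify what I mean by a binary parse: NLTK can at least binarize an existing labelled constituency tree in place via chomsky_normal_form(), e.g.:

```python
from nltk import Tree

# A tree with 3-ary nodes (S and VP each have three children)
t = Tree.fromstring(
    '(S (NP (DT A) (NN person)) (VP (VBZ jumps) (IN over) (NP (DT a) (NN fence))) (. .))')
t.chomsky_normal_form()  # binarize in place; every node now has at most two children
print(t)
```

What I'm still missing is a parser that produces the unlabeled, SNLI-style bracketing directly from a raw sentence.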
| 1 | 1 | 0 | 0 | 0 | 0 |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import warnings
warnings.filterwarnings(action='ignore', category=UserWarning, module='gensim')
import logging
import os.path
import sys
import multiprocessing
# from gensim.corpora import WikiCorpus
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence
if __name__ == '__main__':
    program = os.path.basename(sys.argv[0])
    logger = logging.getLogger(program)
    logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s', level=logging.INFO)
    logger.info("running %s" % ' '.join(sys.argv))
    min_count = 100
    data_dir = '/opt/mengyuguang/word2vec/'
    inp = data_dir + 'wiki.zh.simp.seg.txt'
    outp1 = data_dir + 'wiki.zh.min_count{}.model'.format(str(min_count))
    outp2 = data_dir + 'wiki.zh.min_count{}.vector'.format(str(min_count))
    # train cbow
    model = Word2Vec(LineSentence(inp), size=300,
                     workers=multiprocessing.cpu_count(), min_count=min_count)
    # save
    model.save(outp1)
    model.wv.save_word2vec_format(outp2, binary=False)
Firstly, I trained word embeddings with the code above; I don't think there is anything wrong with it. Then I created a list vocab to store the words in the vector file, and ran:
vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(max_document_length)
pretrain = vocab_processor.fit(vocab)
vocab is a list of 415657 words, yet I got a vocabulary of 412722. I know that vocab_processor.fit won't treat upper and lower case as two separate words. This is really strange. How is this happening?
I checked the vector file again. There are no overlapping words at all.
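Update: my current guess, sketched below, is that VocabularyProcessor's default tokenizer splits entries that contain punctuation, so several distinct vector-file entries can collapse to the same tokens. (The regex here is my assumption about what that tokenizer roughly does, not its actual implementation.)

```python
import re

# Assumed behaviour: split each entry on non-word characters
TOKENIZER_RE = re.compile(r'\w+', re.UNICODE)

vector_file_words = ["don't", 'don', 't', 'state-of-the-art', 'state', 'of', 'the', 'art']
vocab_tokens = set()
for w in vector_file_words:
    vocab_tokens.update(TOKENIZER_RE.findall(w))

# 8 entries in the "vector file", but fewer distinct tokens after splitting
print(len(vector_file_words), len(vocab_tokens))
```

Could something like this account for the 415657 vs 412722 difference, even without overlapping words in the vector file itself?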
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to fit a Word2Vec model. According to the documentation for gensim's Word2Vec, we do not need to call model.build_vocab before training.
Yet it is asking me to do it. I have tried calling this function and it has not worked. I also fitted a Word2Vec model before without needing to call model.build_vocab.
Am I doing something wrong? Here is my code:
from gensim.models import Word2Vec
import pandas as pd

dataset = pd.read_table('genemap_copy.txt', delimiter='\t', lineterminator='\n')

def row_to_sentences(dataframe):
    columns = dataframe.columns.values
    corpus = []
    for index, row in dataframe.iterrows():
        if index == 1000:
            break
        sentence = ''
        for column in columns:
            sentence += ' ' + str(row[column])
        corpus.append([sentence])
    return corpus

corpus = row_to_sentences(dataset)
clean_corpus = [[sentence[0].lower()] for sentence in corpus]

# model = Word2Vec()
# model.build_vocab(clean_corpus)
model = Word2Vec(clean_corpus, size=100, window=5, min_count=5, workers=4)
Help is greatly appreciated!
Also I am using macOS Sierra.
There is not much support online for using Gensim with Mac D: .
| 1 | 1 | 0 | 0 | 0 | 0 |
So I am fairly new to machine learning and have a few questions about keywords. Right now I'm trying to make a machine learning model using some movie data that I have previously collected. The data is made up of 4 attributes, one being keywords that describe the movie. Nonetheless, some movies have more keywords than others (for example, Spiderman's keywords would be superhero, spider, fight, etc.); each movie has from 50 to 400 keywords. Therefore I wanted to ask whether I should include each keyword as a separate attribute, or just add all of them under Keywords and separate them with commas.
To better illustrate my point here is two examples:
Including Movie Keywords as separate Attributes
Including Movie Keyword as one attribute
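To make the two options concrete in code (a sketch with made-up movies; MultiLabelBinarizer is just one common way to express the "separate attributes" option):

```python
from sklearn.preprocessing import MultiLabelBinarizer

movies = [{'title': 'Spiderman', 'keywords': ['superhero', 'spider', 'fight']},
          {'title': 'Up', 'keywords': ['balloon', 'adventure']}]

# Option 1: each keyword becomes its own binary (0/1) attribute
mlb = MultiLabelBinarizer()
keyword_matrix = mlb.fit_transform([m['keywords'] for m in movies])
print(mlb.classes_)    # one column per distinct keyword
print(keyword_matrix)  # rows of 0/1 indicators

# Option 2: all keywords as one comma-separated attribute
joined = [','.join(m['keywords']) for m in movies]
print(joined)
```

Which of these two shapes would be better for training?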
Thank you very much in advance for your help
| 1 | 1 | 0 | 1 | 0 | 0 |
I am a total newbie to nltk and Python. I have been given a task to extract all the text from a URL, and after reading the nltk documentation I am able to extract text from a specified URL. My main concern is how to remove the special characters (like ., -, "", '', !) from the extracted list. The code below is not working for the text inside the <li> </li> tags of an HTML web page: a dot . is always appended to the last word of the text inside the <li> tag. Any help is deeply appreciated. The source code is as follows.
from bs4 import BeautifulSoup
import urllib.request
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
response = urllib.request.urlopen('https://en.wikipedia.org/wiki/Electronics')
f=open('corpus.txt','w+')
html = response.read()
soup = BeautifulSoup(html,"html.parser")
text = soup.get_text(strip=True)
tokens = [t for t in text.split()]
clean_tokens = tokens[:]
sr = stopwords.words('english')
for token in tokens:
    if token in sr:
        clean_tokens.remove(token)

freq = nltk.FreqDist(clean_tokens)
for normalize, val in freq.items():
    lemmatizer = WordNetLemmatizer()
    corpus_refi = lemmatizer.lemmatize(str(normalize) + ':' + str(val), pos="a")
    corpus_refi = corpus_refi.lower()
    print(corpus_refi)
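For reference, this is the kind of cleaning step I have in mind (a sketch that strips punctuation characters from each token with a regex before counting; the token list is made up):

```python
import re

tokens = ['Electronics', 'circuits.', 'resistor,', '"diode"', 'chip!']
clean = [re.sub(r'[^\w\s]', '', t) for t in tokens]  # drop punctuation characters
clean = [t for t in clean if t]  # drop tokens that were only punctuation
print(clean)  # ['Electronics', 'circuits', 'resistor', 'diode', 'chip']
```

Would it be better to do this per token like above, or is there a standard nltk way?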
| 1 | 1 | 0 | 0 | 0 | 0 |
I have two terms, "vehicle" and "motor vehicle". Is there any way to compare the meaningfulness level or ambiguity level of these two in NLP? The outcome should be that "motor vehicle" is more meaningful than "vehicle", or that "vehicle" is more ambiguous than "motor vehicle". Thanks
| 1 | 1 | 0 | 0 | 0 | 0 |
I have about 90 documents that I have processed with spacy.
import spacy, os
nlp = spacy.load('de')
index = 1
for document in doc_collection:
    doc = nlp(document)
    doc.to_disk('doc_folder/' + str(index))
    index += 1
It seems to be working fine. After that I want to reload the doc files later as a generator object.
def get_spacy_doc_list():
    for file in os.listdir(directory):
        filename = os.fsdecode(file)
        yield spacy.tokens.Doc(spacy.vocab.Vocab()).from_disk('doc_folder/' + filename)

for doc in get_spacy_doc_list():
    for token in doc:
        print(token.lemma_)
If I try this, then I get the following error:
KeyError: "[E018] Can't retrieve string for hash '12397158900972795331'."
How can I store and load spacy's doc objects without getting this error?
Thanks for your help!
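Update: one thing I plan to try (sketched here with a blank pipeline so it runs standalone) is reusing the pipeline's own vocab when loading, instead of a fresh spacy.vocab.Vocab(), since I suspect the new empty vocab is missing the string-store entries the hashes point to:

```python
import os
import tempfile
import spacy
from spacy.tokens import Doc

nlp = spacy.blank('de')  # stand-in for spacy.load('de') in this sketch
doc = nlp('Das ist ein Test')

path = os.path.join(tempfile.mkdtemp(), '1')
doc.to_disk(path)

# Reuse the SAME vocab the doc was created with, so its string store
# contains the hashes (lemmas etc.) the serialized doc refers to
loaded = Doc(nlp.vocab).from_disk(path)
print([t.text for t in loaded])
```

Does that sound like the right explanation for the E018 KeyError?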
| 1 | 1 | 0 | 0 | 0 | 0 |
How would I replace all the sentences and paragraphs with a <string> tag in text files?
I want to keep spacing, tabs, and lists in the text document intact:
Example input:
Clause 1:
a) detail 1. some more about detail 1. Here is more information about this paragraph right here. There is more information that we think sometimes.
b) detail 2. some more about detail 2. and some more..
Example output:
<string>
a) <string>
b) <string>
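To show the direction I've been thinking in (a sketch assuming list items are always marked like "a)" with a lowercase letter, which matches my files): replace everything on each line after the leading whitespace and optional list marker.

```python
import re

text = '''Clause 1:
    a) detail 1. some more about detail 1.
    b) detail 2. some more about detail 2. and some more..'''

# Keep leading whitespace and markers like "a) ", replace the rest of the line
out = re.sub(r'^(\s*(?:[a-z]\)\s*)?).+$', r'\1<string>', text, flags=re.M)
print(out)
```

Would a regex like this be robust enough, or is there a cleaner way to detect the list structure?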
| 1 | 1 | 0 | 0 | 0 | 0 |
I have been searching for a while, but I didn't find anything about this.
I've got the following problem:
I want to train a model where, for an input, I get custom BIO tags. For instance, for the input "My dad lives in Manhattan, his name is Anthony Clark", and the classes LOC and PER, the output has to be:
[(My, O),(dad,O), (lives, O), (in,O), (Manhattan, B-LOC), (, , O), (his,O), (name,O), (is,O), (Anthony, B-PER), (Clark,I-PER)]
Is it possible to do this with NLTK? Which features should I include?
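For instance, I imagine a per-token feature function along these lines, which I could feed to one of NLTK's classifiers (a sketch; the feature names are my own invention):

```python
def bio_features(tokens, i):
    word = tokens[i]
    return {
        'word': word.lower(),
        'is_title': word.istitle(),  # capitalized words hint at names/places
        'suffix3': word[-3:],
        'prev': tokens[i - 1].lower() if i > 0 else '<s>',
        'next': tokens[i + 1].lower() if i < len(tokens) - 1 else '</s>',
    }

tokens = 'My dad lives in Manhattan'.split()
print(bio_features(tokens, 4))
```

Are features like these sensible for BIO tagging, or am I missing something important?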
| 1 | 1 | 0 | 1 | 0 | 0 |
I have the following two sentences:
I want to go home.
I would like to leave.
My goal is to quantify similarity between the two sentences using a kernel suggested in
this paper. I extract all the dependency triplets for each sentence. These are 3 item tuples containing all the relations between words in the sentence and look like (tail, relationship, head).
To calculate similarity, I need to loop through every possible combination of triplet across sentences and add a particular number to the similarity score based on how many nodes match and whether the relationship matches.
I attempted using list comprehensions inside a for loop, since I figured it would be more efficient than another nested for loop, but I am getting a syntax error. Here's my code:
sim = 0
theta = 2.5
for d1 in deps1:
    [sim += theta for d2 in deps2 if ((d1[0]==d2[0] or d1[2]==d2[2]) and d1[1]==d2[1])]
    [sim += 1 for d2 in deps2 if ((d1[0]==d2[0] or d1[2]==d2[2]) and d1[1]!=d2[1])]
For reference, here's what deps1 and deps2 look like when printed:
[('I', 'nsubj', 'want'), ('want', 'ROOT', 'want'), ('to', 'aux', 'go'), ('go', 'xcomp', 'want'), ('home', 'advmod', 'go')]
[('I', 'nsubj', 'like'), ('would', 'aux', 'like'), ('like', 'ROOT', 'like'), ('to', 'aux', 'leave'), ('leave', 'xcomp', 'like')]
Questions:
What's the correct syntax to do this with a list comprehension?
Is there a more efficient way, maybe using numpy(?), to do this computation?
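For question 1, this is the behaviour I'm trying to express. My understanding is that assignment statements like sim += aren't allowed inside a comprehension, so I assume the accumulation has to move into something like sum() (shown here with the triplets from above):

```python
theta = 2.5
deps1 = [('I', 'nsubj', 'want'), ('want', 'ROOT', 'want'), ('to', 'aux', 'go'),
         ('go', 'xcomp', 'want'), ('home', 'advmod', 'go')]
deps2 = [('I', 'nsubj', 'like'), ('would', 'aux', 'like'), ('like', 'ROOT', 'like'),
         ('to', 'aux', 'leave'), ('leave', 'xcomp', 'like')]

# theta when a node AND the relation match, 1 when only a node matches
sim = sum(theta if d1[1] == d2[1] else 1
          for d1 in deps1 for d2 in deps2
          if d1[0] == d2[0] or d1[2] == d2[2])
print(sim)  # 5.0 for these two sentences
```

Is this the idiomatic way to write it, and is there anything faster for larger dependency lists?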
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm doing text analysis of Italian texts (tokenization, lemmatization) for future use of TF-IDF techniques and for constructing clusters based on that. For preprocessing I use NLTK, and for one text file everything works fine:
import nltk
from nltk.stem.wordnet import WordNetLemmatizer
it_stop_words = nltk.corpus.stopwords.words('italian')
lmtzr = WordNetLemmatizer()
with open('3003.txt', 'r', encoding="latin-1") as myfile:
    data = myfile.read()
word_tokenized_list = nltk.tokenize.word_tokenize(data)
word_tokenized_no_punct = [str.lower(x) for x in word_tokenized_list if x not in string.punctuation]
word_tokenized_no_punct_no_sw = [x for x in word_tokenized_no_punct if x not in it_stop_words]
word_tokenized_no_punct_no_sw_no_apostrophe = [x.split("'") for x in word_tokenized_no_punct_no_sw]
word_tokenized_no_punct_no_sw_no_apostrophe = [y for x in word_tokenized_no_punct_no_sw_no_apostrophe for y in x]
word_tokenize_list_no_punct_lc_no_stowords_lemmatized = [lmtzr.lemmatize(x) for x in word_tokenized_no_punct_no_sw_no_apostrophe]
But the question is that I need to perform the following to bunch of .txt files in the folder. For that I'm trying to use possibilities of PlaintextCorpusReader():
from nltk.corpus.reader.plaintext import PlaintextCorpusReader
corpusdir = 'reports/'
newcorpus = PlaintextCorpusReader(corpusdir, r'.*\.txt')  # fileids is a regex; plain '.txt' would not match names like 'report1.txt'
Basically I cannot just feed newcorpus into the previous functions because it's an object and not a string. So my questions are:
How should the functions look (or how should I change the existing ones, written for a single file) to do tokenization and lemmatization for a corpus of files using PlaintextCorpusReader()?
What would the TF-IDF approach (the standard sklearn vectorizer = TfidfVectorizer() approach) look like with PlaintextCorpusReader()?
Many Thanks!
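Edit: to make the second question concrete, this is roughly what I expect the TF-IDF part to look like (a sketch with a throwaway temp directory and toy Italian text, since newcorpus.raw() gives back plain strings that TfidfVectorizer can consume):

```python
import os
import tempfile
from nltk.corpus.reader.plaintext import PlaintextCorpusReader
from sklearn.feature_extraction.text import TfidfVectorizer

corpusdir = tempfile.mkdtemp()
for name, text in [('a.txt', 'il gatto dorme sul divano'),
                   ('b.txt', 'il cane corre nel parco')]:
    with open(os.path.join(corpusdir, name), 'w') as f:
        f.write(text)

newcorpus = PlaintextCorpusReader(corpusdir, r'.*\.txt')
# raw() returns the file contents as one string per fileid
docs = [newcorpus.raw(fid) for fid in newcorpus.fileids()]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)
print(tfidf.shape)  # (number of files, vocabulary size)
```

Is pulling strings out with raw() like this the intended way, or should the tokenization/lemmatization pipeline be plugged in via the vectorizer's tokenizer argument?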
| 1 | 1 | 0 | 0 | 0 | 0 |
I am using gensim to load a pre-trained fastText model. I downloaded the English Wikipedia trained model from the fastText website.
here is the code I wrote to load the pre-trained model:
from gensim.models import FastText as ft
model=ft.load_fasttext_format("wiki.en.bin")
I try to check whether the following phrase exists in the vocabulary (which there is only a rare chance it would, as this is a pre-trained model):
print("internal executive" in model.wv.vocab)
print("internal executive" in model.wv)
False
True
So the phrase "internal executive" is not present in the vocabulary, but we still get a word vector corresponding to it.
model.wv["internal executive"]
Out[46]:
array([ 0.0210917 , -0.15233646, -0.1173932 , -0.06210957, -0.07288644,
-0.06304111, 0.07833624, -0.17026938, -0.21922196, 0.01146349,
-0.13639058, 0.17283678, -0.09251394, -0.17875175, 0.01339212,
-0.26683623, 0.05487974, -0.11843193, -0.01982722, 0.37037706,
-0.24370994, 0.14269598, -0.16363597, 0.00328478, -0.16560239,
-0.1450972 , -0.24787527, -0.01318423, 0.03277111, 0.16175713,
-0.19367714, 0.16955379, 0.1972683 , 0.09044111, 0.01731548,
-0.0034324 , -0.04834719, 0.14321515, 0.01422525, -0.08803893,
-0.29411593, -0.1033244 , 0.06278021, 0.16452256, 0.0650492 ,
0.1506474 , -0.14194389, 0.10778475, 0.16008648, -0.07853138,
0.2183501 , -0.25451994, -0.0345991 , -0.28843886, 0.19964759,
-0.10923116, 0.26665714, -0.02544454, 0.30637854, 0.04568949,
-0.04798719, -0.05769338, 0.25762403, -0.05158515, -0.04426906,
-0.19901046, 0.00894193, -0.17269588, -0.24747233, -0.19061406,
0.14322804, -0.10804397, 0.4002605 , 0.01409482, -0.04675362,
0.10039093, 0.07260711, -0.0938239 , -0.20434211, 0.05741301,
0.07592541, -0.02921724, 0.21137556, -0.23188967, -0.23164661,
-0.4569614 , 0.07434579, 0.10841205, -0.06514647, 0.01220404,
0.02679767, 0.11840229, 0.2247431 , -0.1946325 , -0.0990666 ,
-0.02524677, 0.0801085 , 0.02437297, 0.00674876, 0.02088535,
0.21464555, -0.16240154, 0.20670174, -0.21640894, 0.03900698,
0.21772243, 0.01954809, 0.04541844, 0.18990673, 0.11806394,
-0.21336791, -0.10871669, -0.02197789, -0.13249406, -0.20440844,
0.1967368 , 0.09804545, 0.1440366 , -0.08401451, -0.03715726,
0.27826542, -0.25195453, -0.16737154, 0.3561183 , -0.15756823,
0.06724873, -0.295487 , 0.28395334, -0.04908851, 0.09448399,
0.10877471, -0.05020981, -0.24595442, -0.02822314, 0.17862654,
0.06452435, -0.15105674, -0.31911567, 0.08166212, 0.2634299 ,
0.17043628, 0.10063848, 0.0687021 , -0.12210461, 0.10803893,
0.13644943, 0.10755012, -0.09816817, 0.11873955, -0.03881042,
0.18548298, -0.04769253, -0.01511982, -0.08552645, -0.05218676,
0.05387992, 0.0497043 , 0.06922272, -0.0089245 , 0.24790663,
0.27209425, -0.04925154, -0.08621719, 0.15918174, 0.25831223,
0.01654229, -0.03617229, -0.13490392, 0.08033483, 0.34922174,
-0.01744722, -0.16894792, -0.10506647, 0.21708378, -0.22582002,
0.15625793, -0.10860757, -0.06058934, -0.25798836, -0.20142137,
-0.06613475, -0.08779443, -0.10732629, 0.05967236, -0.02455976,
0.2229451 , -0.19476262, -0.2720119 , 0.03687386, -0.01220259,
0.07704347, -0.1674307 , 0.2400516 , 0.07338555, -0.2000631 ,
0.13897157, -0.04637206, -0.00874449, -0.32827383, -0.03435039,
0.41587186, 0.04643605, 0.03352945, -0.13700874, 0.16430037,
-0.13630766, -0.18546128, -0.04692861, 0.37308362, -0.30846512,
0.5535561 , -0.11573419, 0.2332801 , -0.07236694, -0.01018955,
0.05936847, 0.25877884, -0.2959846 , -0.13610311, 0.10905041,
-0.18220575, 0.06902339, -0.10624941, 0.33002165, -0.12087796,
0.06742091, 0.20762768, -0.34141317, 0.0884434 , 0.11247049,
0.14748637, 0.13261876, -0.07357208, -0.11968047, -0.22124515,
0.12290633, 0.16602683, 0.01055585, 0.04445777, -0.11142147,
0.00004863, 0.22543314, -0.14342701, -0.23209116, -0.00003538,
0.19272381, -0.13767233, 0.04850799, -0.281997 , 0.10343244,
0.16510887, 0.08671653, -0.24125539, 0.01201926, 0.0995285 ,
0.09807415, -0.06764816, -0.0206733 , 0.04697794, 0.02000999,
0.05817033, 0.10478792, 0.0974884 , -0.01756372, -0.2466861 ,
0.02877498, 0.02499748, -0.00370895, -0.04728201, 0.00107118,
-0.21848503, 0.2033032 , -0.00076264, 0.03828803, -0.2929495 ,
-0.18218371, 0.00628893, 0.20586628, 0.2410889 , 0.02364616,
-0.05220835, -0.07040054, -0.03744286, -0.06718048, 0.19264086,
-0.06490505, 0.27364203, 0.05527219, -0.27494466, 0.22256687,
0.10330909, -0.3076979 , 0.04852265, 0.07411488, 0.23980476,
0.1590279 , -0.26712465, 0.07580928, 0.05644221, -0.18824042],
Now my confusion is that fastText creates vectors for the character n-grams of a word too. So for the word "internal" it will create vectors for all its character n-grams, including the full word, and the final word vector is the sum of its character n-grams.
However, how is it still able to give me a vector for a phrase, or even a whole sentence? Isn't a fastText vector for a word and its n-grams? So what is this vector I am seeing for the phrase, when it is clearly two words?
| 1 | 1 | 0 | 0 | 0 | 0 |
I use the TF-IDF code from here in my corpus of documents, which is 3 PDF documents, each about 270 pages long.
# Calculating the Term Frequency, Inverse Document Frequency score
import os
import math
from textblob import TextBlob as tb
def tf(word, blob):
    return tb(blob).words.count(word) / len(tb(blob).words)

def n_containing(word, bloblist):
    return sum(1 for blob in bloblist if word in tb(blob).words)

def idf(word, bloblist):
    return math.log(len(bloblist) / (1 + n_containing(word, bloblist)))

def tfidf(word, blob, bloblist):
    return tf(word, blob) * idf(word, bloblist)

# Stemming the articles
from nltk.stem import PorterStemmer
port = PorterStemmer()
bloblist = []
doclist = [pdf1, pdf2, pdf3]  # Defined earlier, not shown here as it is not relevant to the question
for doc in doclist:
    bloblist.append(port.stem(str(doc)))

# TF-IDF calculation on the stemmed articles
for index, blob in enumerate(bloblist):
    print("Top words in document {}".format(index + 1))
    scores = {word: tfidf(word, blob, bloblist) for word in tb(blob).words}
    sorted_words = sorted(scores.items(), key=lambda x: x[1], reverse=True)
    i = 1
    for word, score in sorted_words[:5]:
        print("\tWord " + str(i) + ": {}, TF-IDF: {}".format(word, round(score, 5)))
        i += 1
The problem is, it just keeps running without displaying anything beyond Top words in document 1. Why is it taking so long to calculate the scores? I've kept it running for an hour now and the code hasn't terminated. Earlier I tried the code on 50-odd txt files which were much shorter (2-3 paragraphs each on average), and there it showed the TF-IDF scores instantaneously. What's wrong with 3 docs of 270 pages each?
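For comparison, this is the kind of restructuring I suspect is needed (a sketch using collections.Counter instead of TextBlob): tokenize each document once and reuse the counts, instead of re-running tb(blob).words inside tf() and n_containing() for every single word, which I think makes the original roughly quadratic in document length.

```python
import math
from collections import Counter

def tfidf_all(docs_tokens):
    # Document frequency computed once over the whole corpus
    df = Counter()
    for tokens in docs_tokens:
        df.update(set(tokens))
    n_docs = len(docs_tokens)

    results = []
    for tokens in docs_tokens:
        counts = Counter(tokens)  # term frequencies, computed once per document
        total = len(tokens)
        results.append({w: (c / total) * math.log(n_docs / (1 + df[w]))
                        for w, c in counts.items()})
    return results

docs = [['term', 'frequency', 'term'], ['inverse', 'document', 'frequency']]
scores = tfidf_all(docs)
print(sorted(scores[0], key=scores[0].get, reverse=True))
```

Would caching the tokenization like this be enough to explain the hour-long run on the original code?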
| 1 | 1 | 0 | 0 | 0 | 0 |