| text | python | DeepLearning or NLP | Other | Machine Learning | Mathematics | Trash |
|---|---|---|---|---|---|---|
I am working on NLP using Python and NLTK.
I was wondering whether there is any dataset of bags of words with keywords relating to emotions such as happiness, joy, anger, sadness, etc.
From what I dug up in the NLTK corpus collection, I see there are some sentiment analysis corpora which contain positive and negative reviews, which is not exactly the same as keywords expressing emotions.
Is there any way I could build my own dictionary containing words that express emotion for this purpose? If so, how do I do it, and is there any collection of such words?
Any help would be greatly appreciated.
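A hand-built lexicon can be sketched as a mapping from emotions to word sets plus a lookup function. The word lists below are purely illustrative placeholders, not from any published resource (curated lexicons such as the NRC Emotion Lexicon cover this ground properly):

```python
# Minimal sketch of a hand-built emotion lexicon; the word lists are
# illustrative stand-ins, not a real curated resource.
emotion_lexicon = {
    'joy': {'happy', 'delighted', 'cheerful', 'glad'},
    'anger': {'furious', 'annoyed', 'outraged', 'irate'},
    'sadness': {'unhappy', 'gloomy', 'miserable', 'tearful'},
}

def emotions_in(tokens):
    """Return the set of emotions whose keywords appear in the token list."""
    found = set()
    for emotion, words in emotion_lexicon.items():
        if any(t.lower() in words for t in tokens):
            found.add(emotion)
    return found

print(emotions_in(['I', 'was', 'glad', 'but', 'also', 'tearful']))
```

The same lookup works unchanged if the hand-written sets are replaced by lists loaded from a curated lexicon file.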
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to reproduce the results of this paper: https://arxiv.org/pdf/1607.06520.pdf
Specifically this part:
To identify the gender subspace, we took the ten gender pair difference vectors and computed its principal components (PCs). As Figure 6 shows, there is a single direction that explains the majority of variance in these vectors. The first eigenvalue is significantly larger than the rest.
I am using the same set of word vectors as the authors (Google News Corpus, 300 dimensions), which I load into word2vec.
The 'ten gender pair difference vectors' the authors refer to are computed from the following word pairs:
I've computed the differences between each normalized vector in the following way:
model = gensim.models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
model.init_sims()
pairs = [('she', 'he'),
('her', 'his'),
('woman', 'man'),
('Mary', 'John'),
('herself', 'himself'),
('daughter', 'son'),
('mother', 'father'),
('gal', 'guy'),
('girl', 'boy'),
('female', 'male')]
difference_matrix = np.array([model.word_vec(a[0], use_norm=True) - model.word_vec(a[1], use_norm=True) for a in pairs])
I then perform PCA on the resulting matrix, with 10 components, as per the paper:
from sklearn.decomposition import PCA
pca = PCA(n_components=10)
pca.fit(difference_matrix)
However, I get very different results when I look at pca.explained_variance_ratio_:
array([ 2.83391436e-01, 2.48616155e-01, 1.90642492e-01,
9.98411858e-02, 5.61260498e-02, 5.29706681e-02,
2.75670634e-02, 2.21957722e-02, 1.86491774e-02,
1.99108478e-32])
or with a chart:
The first component accounts for less than 30% of the variance when it should be above 60%!
The results I get are similar to what I get when I try to do the PCA on randomly selected vectors, so I must be doing something wrong, but I can't figure out what.
Note: I've tried without normalizing the vectors, but I get the same results.
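The random-vector comparison mentioned above can be sketched without gensim. This is my own baseline check, not the paper's procedure: PCA (via SVD on the centered matrix) applied to random "difference" vectors, where variance should spread over many components, so a dominant first component in the real difference matrix would indicate genuine structure:

```python
import numpy as np

# Baseline check: PCA via SVD on random "pair difference" vectors.
rng = np.random.RandomState(0)
random_diffs = rng.randn(10, 300)  # 10 fake difference vectors, 300 dims

# Center, then get singular values; squared singular values are
# proportional to per-component explained variance.
centered = random_diffs - random_diffs.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
explained_ratio = singular_values**2 / np.sum(singular_values**2)
print(explained_ratio)  # roughly spread out, first component well below 60%
```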
| 1 | 1 | 0 | 0 | 0 | 0 |
A homograph is a word that has the same spelling as another word but a different sound and a different meaning, for example, lead (to go in front of) / lead (a metal).
I was trying to use spaCy word vectors to compare documents with each other by summing the word vectors of each document and then computing cosine similarity. If, for example, spaCy has the same vector for the two senses of 'lead' listed above, the results will probably be bad.
In the code below, why does the similarity between the two 'bank' tokens come out as 1.0?
import spacy
nlp = spacy.load('en')
str1 = 'The guy went inside the bank to take out some money'
str2 = 'The house by the river bank.'
str1_tokenized = nlp(str1)
str2_tokenized = nlp(str2)
token1 = str1_tokenized[-6]
token2 = str2_tokenized[-2]
print('token1 = {} token2 = {}'.format(token1, token2))
print(token1.similarity(token2))
The output of the given program is
token1 = bank token2 = bank
1.0
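For context, here is a minimal sketch of that cosine comparison on toy vectors (not spaCy's actual vectors): static word vectors are looked up by the surface string alone, so two identical strings get the identical vector, and the cosine of a vector with itself is 1.0:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-in for a static word vector: both occurrences of 'bank' are the
# same surface string, hence the same vector, hence similarity 1.0.
vec_bank = np.array([0.2, -0.1, 0.7])
print(cosine(vec_bank, vec_bank))  # 1.0 (up to floating-point rounding)
```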
| 1 | 1 | 0 | 0 | 0 | 0 |
As the title says, I am trying to train a neural network to predict outcomes, and I can't figure out what is wrong with my model. I keep getting exactly the same accuracy level, and the loss is NaN. I'm so confused... I have looked at other similar questions and still can't seem to get it working. My code for the model and training is below:
import numpy as np
import pandas as pd
import tensorflow as tf
import urllib.request as request
import matplotlib.pyplot as plt
from FlowersCustom import get_MY_data
def get_data():
    IRIS_TRAIN_URL = "http://download.tensorflow.org/data/iris_training.csv"
    IRIS_TEST_URL = "http://download.tensorflow.org/data/iris_test.csv"
    names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'species']
    train = pd.read_csv(IRIS_TRAIN_URL, names=names, skiprows=1)
    test = pd.read_csv(IRIS_TEST_URL, names=names, skiprows=1)
    # Train and test input data
    Xtrain = train.drop("species", axis=1)
    Xtest = test.drop("species", axis=1)
    # Encode target values into binary ('one-hot' style) representation
    ytrain = pd.get_dummies(train.species)
    ytest = pd.get_dummies(test.species)
    return Xtrain, Xtest, ytrain, ytest

def create_graph(hidden_nodes):
    # Reset the graph
    tf.reset_default_graph()
    # Placeholders for input and output data
    X = tf.placeholder(shape=Xtrain.shape, dtype=tf.float64, name='X')
    y = tf.placeholder(shape=ytrain.shape, dtype=tf.float64, name='y')
    # Variables for two groups of weights between the three layers of the network
    print(Xtrain.shape, ytrain.shape)
    W1 = tf.Variable(np.random.rand(Xtrain.shape[1], hidden_nodes), dtype=tf.float64)
    W2 = tf.Variable(np.random.rand(hidden_nodes, ytrain.shape[1]), dtype=tf.float64)
    # Create the neural net graph
    A1 = tf.sigmoid(tf.matmul(X, W1))
    y_est = tf.sigmoid(tf.matmul(A1, W2))
    # Define a loss function
    deltas = tf.square(y_est - y)
    loss = tf.reduce_sum(deltas)
    # Define a train operation to minimize the loss
    # optimizer = tf.train.GradientDescentOptimizer(0.005)
    optimizer = tf.train.AdamOptimizer(0.001)
    opt = optimizer.minimize(loss)
    return opt, X, y, loss, W1, W2, y_est

def train_model(hidden_nodes, num_iters, opt, X, y, loss, W1, W2, y_est):
    # Initialize variables and run session
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    losses = []
    # Go through num_iters iterations
    for i in range(num_iters):
        sess.run(opt, feed_dict={X: Xtrain, y: ytrain})
        local_loss = sess.run(loss, feed_dict={X: Xtrain.values, y: ytrain.values})
        losses.append(local_loss)
        weights1 = sess.run(W1)
        weights2 = sess.run(W2)
        y_est_np = sess.run(y_est, feed_dict={X: Xtrain.values, y: ytrain.values})
        correct = [estimate.argmax(axis=0) == target.argmax(axis=0)
                   for estimate, target in zip(y_est_np, ytrain.values)]
        acc = 100 * sum(correct) / len(correct)
        if i % 10 == 0:
            print('Epoch: %d, Accuracy: %.2f, Loss: %.2f' % (i, acc, local_loss))
    print("loss (hidden nodes: %d, iterations: %d): %.2f" % (hidden_nodes, num_iters, losses[-1]))
    sess.close()
    return weights1, weights2

def test_accuracy(weights1, weights2):
    X = tf.placeholder(shape=Xtest.shape, dtype=tf.float64, name='X')
    y = tf.placeholder(shape=ytest.shape, dtype=tf.float64, name='y')
    W1 = tf.Variable(weights1)
    W2 = tf.Variable(weights2)
    A1 = tf.sigmoid(tf.matmul(X, W1))
    y_est = tf.sigmoid(tf.matmul(A1, W2))
    # Calculate the predicted outputs
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        y_est_np = sess.run(y_est, feed_dict={X: Xtest, y: ytest})
    # Calculate the prediction accuracy
    correct = [estimate.argmax(axis=0) == target.argmax(axis=0)
               for estimate, target in zip(y_est_np, ytest.values)]
    accuracy = 100 * sum(correct) / len(correct)
    print('final accuracy: %.2f%%' % accuracy)

def get_inputs_and_outputs(train, test, output_column_name):
    Xtrain = train.drop(output_column_name, axis=1)
    Xtest = test.drop(output_column_name, axis=1)
    ytrain = pd.get_dummies(getattr(train, output_column_name))
    ytest = pd.get_dummies(getattr(test, output_column_name))
    return Xtrain, Xtest, ytrain, ytest

if __name__ == '__main__':
    train, test = get_MY_data('output')
    Xtrain, Xtest, ytrain, ytest = get_inputs_and_outputs(train, test, 'output')  # get_data()
    # Xtrain, Xtest, ytrain, ytest = get_data()
    hidden_layers = 10
    num_epochs = 500
    opt, X, y, loss, W1, W2, y_est = create_graph(hidden_layers)
    w1, w2 = train_model(hidden_layers, num_epochs, opt, X, y, loss, W1, W2, y_est)
    # test_accuracy(w1, w2)
Here is a screenshot of what the training is printing out:
And this is a screenshot of the Pandas Dataframe that I am using for the input data (5 columns of floats):
And finally, here is the Pandas Dataframe that I am using for the expected outputs (1 column of either -1 or 1):
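One common cause of NaN loss is bad input data; this is a generic first check, an assumption rather than a confirmed diagnosis of the model above. A quick sketch of inspecting and standardizing a frame (the toy frame stands in for the real input DataFrame):

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the real input DataFrame of floats.
df = pd.DataFrame({'a': [1.0, 2.0, np.nan], 'b': [0.1, 0.2, 0.3]})

print(df.isnull().values.any())  # True -> NaN cells will poison the loss
print(df.abs().max().max())      # largest magnitude, to judge scaling

# Typical cleanup: drop NaNs and standardize each column.
clean = df.dropna()
standardized = (clean - clean.mean()) / clean.std()
```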
| 1 | 1 | 0 | 1 | 0 | 0 |
I have a list of reviews; each element of the list is a review from the IMDB dataset on Kaggle. There are 25,000 reviews in total, and I have the label of each review: +1 for positive and -1 for negative.
I want to train a Hidden Markov Model with these reviews and labels.
1- What is the sequence that I should give to the HMM? Is it something like bag of words, or something else, like probabilities that I need to calculate? What kind of feature extraction method is appropriate? I was told to use bag of words on the review list, but when I searched a little I found out that an HMM cares about order, while bag of words does not maintain the order of words in sequences. How should I prepare this list of reviews so I can feed it into an HMM model?
2- Is there a framework for this? I know hmmlearn, and I think I should use MultinomialHMM (correct me if I'm wrong), but it is not supervised: its models do not take labels as input for training, and I get some odd errors which I don't know how to solve, because of the first question I asked about the correct type of input. seqlearn is one I found recently; is it good, or is there a better one to use?
I appreciate any guidance, since I have almost zero knowledge of NLP.
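A possible encoding that keeps word order (my assumption of a reasonable setup, not a canonical recipe): map each word to an integer id, one sequence per review, then concatenate the sequences into a single column with per-sequence lengths, which is the input shape hmmlearn's fit expects:

```python
import numpy as np

# Two toy reviews; order is preserved, unlike a bag of words.
reviews = [['great', 'movie', 'loved', 'it'],
           ['boring', 'movie', 'hated', 'it']]

# Build a vocabulary and encode each review as a sequence of integer ids.
vocab = {}
encoded = []
for review in reviews:
    seq = []
    for word in review:
        seq.append(vocab.setdefault(word, len(vocab)))
    encoded.append(seq)

# hmmlearn-style input: one column of symbols plus per-sequence lengths.
X = np.concatenate(encoded).reshape(-1, 1)
lengths = [len(s) for s in encoded]
print(encoded, lengths)
```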
| 1 | 1 | 0 | 0 | 0 | 0 |
I have built my naive Bayes classifier model for NLP using bag of words. Now I want to predict the output for a single external input. How can I do it? Please see this GitHub link for correction. Thanks:
https://github.com/Kundan8296/Machine-Learning/blob/master/NLP.ipynb
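Without seeing the notebook's variable names, here is a hedged sketch with hypothetical names (vectorizer, classifier): the key point is to reuse the fitted vectorizer's transform, not fit_transform, on the new text:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training data; 'vectorizer' and 'classifier' are hypothetical names
# standing in for whatever the notebook actually fitted.
texts = ['good film great acting', 'bad film terrible acting']
labels = [1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
classifier = MultinomialNB().fit(X, labels)

# Single external input: transform (NOT fit_transform) with the SAME vectorizer.
new_doc = vectorizer.transform(['great film'])
print(classifier.predict(new_doc))
```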
| 1 | 1 | 0 | 0 | 0 | 0 |
I created some MNIST digits using a generative adversarial network and saved them in PNG format. I know that Keras has the MNIST dataset, but I want to combine the digit images that I created with the original MNIST dataset in Keras. Is this possible, and if so, how can I do it?
Thank you.
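A sketch of the combination step, assuming the generated PNGs have already been read into a (n, 28, 28) uint8 array (e.g. via PIL). The stand-in arrays below replace the real data; with Keras the originals would come from keras.datasets.mnist.load_data():

```python
import numpy as np

# Stand-ins: with Keras you would instead do
#   (x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = np.zeros((60000, 28, 28), dtype=np.uint8)
y_train = np.zeros((60000,), dtype=np.uint8)

# Stand-in for the generated PNGs, decoded to grayscale arrays.
generated = np.ones((100, 28, 28), dtype=np.uint8)
generated_labels = np.full((100,), 7, dtype=np.uint8)  # whichever digits you made

# Concatenate along the sample axis, keeping images and labels aligned.
x_combined = np.concatenate([x_train, generated], axis=0)
y_combined = np.concatenate([y_train, generated_labels], axis=0)
print(x_combined.shape)  # (60100, 28, 28)
```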
| 1 | 1 | 0 | 0 | 0 | 0 |
I have some code that converts a word to a vector. Below is my code:
# word_to_vec_demo.py
from gensim.models import word2vec
import logging
logging.basicConfig(format='%(asctime)s : \
%(levelname)s : %(message)s', level=logging.INFO)
sentences = [['In', 'the', 'beginning', 'Abba','Yahweh', 'created', 'the',
'heaven', 'and', 'the', 'earth.', 'And', 'the', 'earth', 'was',
'without', 'form,', 'and', 'void;', 'and', 'darkness', 'was',
'upon', 'the', 'face', 'of', 'the', 'deep.', 'And', 'the',
'Spirit', 'of', 'Yahweh', 'moved', 'upon', 'the', 'face', 'of',
'the', 'waters.']]
model = word2vec.Word2Vec(sentences, size=10, min_count=1)
print("Vector for 'earth' is:\n")
print(model.wv['earth'])
print("\nEnd demo")
The output is
Vector for 'earth' is:
[-0.00402722 0.0034133 0.01583795 0.01997946 0.04112177 0.00291858
-0.03854967 0.01581967 -0.02399057 0.00539708]
Is it possible to decode from an array of vectors back to words? If yes, how would I implement it in Python?
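A sketch of the reverse lookup on a toy word-to-vector table (the vectors are made up for illustration): pick the vocabulary word whose vector has the highest cosine similarity to the query; gensim exposes a similar lookup as model.wv.similar_by_vector:

```python
import numpy as np

# Toy word -> vector table standing in for the trained model's vocabulary.
vectors = {
    'earth': np.array([0.1, 0.9, 0.0]),
    'water': np.array([0.8, 0.1, 0.1]),
    'void':  np.array([0.0, 0.2, 0.9]),
}

def nearest_word(query):
    """Return the vocabulary word closest to the query vector (cosine)."""
    def cos(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(vectors, key=lambda w: cos(vectors[w], query))

print(nearest_word(np.array([0.1, 0.8, 0.1])))  # 'earth'
```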
| 1 | 1 | 0 | 1 | 0 | 0 |
I have trained a Bi-LSTM model to find NER on a set of sentences. For this I took the different words present, created a mapping between each word and a number, and then created the Bi-LSTM model using those numbers. I then create and pickle that model object.
Now I get a set of new sentences containing certain words that the trained model has not seen. These words do not yet have a numeric value, so when I test the sentences on my previously built model, it gives an error: it is not able to find the words or features, as the numeric values for them do not exist.
To circumvent this error I gave a new integer value to all the new words that I see.
However, when I load the model and test it, it gives this error:
InvalidArgumentError: indices[0,24] = 5444 is not in [0, 5442) [[Node: embedding_14_16/Gather = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, validate_indices=true,
_device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_14_16/embeddings/read, embedding_14_16/Cast)]]
The training data contains 5445 words including the padding word, i.e. indices [0, 5444]. 5444 is the index value I have given to the padding in the test sentences. It is not clear to me why it assumes the index values range over [0, 5442).
I have used the base code available on the following link: https://www.kaggle.com/gagandeep16/ner-using-bidirectional-lstm
The code:
input = Input(shape=(max_len,))
model = Embedding(input_dim=n_words, output_dim=50, input_length=max_len)(input)
model = Dropout(0.1)(model)
model = Bidirectional(LSTM(units=100, return_sequences=True, recurrent_dropout=0.1))(model)
out = TimeDistributed(Dense(n_tags, activation="softmax"))(model) # softmax output layer
model = Model(input, out)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])
#number of epochs - Also for output file naming
epoch_num=20
domain="../data/Laptop_Prediction_Corrected"
output_file_name=domain+"_E"+str(epoch_num)+".xlsx"
model_name="../models/Laptop_Prediction_Corrected"
output_model_filename=model_name+"_E"+str(epoch_num)+".sav"
history = model.fit(X_tr, np.array(y_tr), batch_size=32, epochs=epoch_num, validation_split=0.1, verbose=1)
max_len is the total number of words in a sentence and n_words is the vocab size. In the model the padding has been done using the following code where n_words=5441:
X = pad_sequences(maxlen=max_len, sequences=X, padding="post", value=n_words)
The padding in the new dataset:
max_len = 50
# this is to pad sentences to the maximum length possible
#-> so all records of X will be of the same length
#X = pad_sequences(maxlen=max_len, sequences=X, padding="post", value=res_new_word2idx["pad_blank"])
#X = pad_sequences(maxlen=max_len, sequences=X, padding="post", value=5441)
I am not sure which of these paddings is correct.
However, the vocab only includes the words in the training data. When I say:
p = loaded_model.predict(X)
How can I use predict for sentences which contain words that are not present in the initial vocab?
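One workaround sketch (my suggestion, not the kernel's code): reserve an UNK index inside the range the Embedding layer was built for, and map every unseen word to it instead of inventing new indices beyond input_dim:

```python
# Toy vocabulary standing in for the training-time word -> index mapping.
word2idx = {'laptop': 0, 'screen': 1, 'battery': 2}
UNK_IDX = len(word2idx)       # 3; must be < the Embedding layer's input_dim,
                              # so input_dim has to reserve it at build time
PAD_IDX = len(word2idx) + 1   # padding index, likewise reserved in advance

def encode(sentence):
    """Map each word to its training-time id, or UNK_IDX if unseen."""
    return [word2idx.get(w, UNK_IDX) for w in sentence]

print(encode(['laptop', 'overheats', 'badly']))  # [0, 3, 3]
```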
| 1 | 1 | 0 | 0 | 0 | 0 |
As the title says, I am supposed to do some exercises with Python GloVe. Most of it doesn't give me any problems, but now I am supposed to find the 5 most similar words to "norway - war + peace" using the "glove-wiki-gigaword-100" package. When I run my code, it just says that the word is not in the vocabulary. I'm guessing that this is some kind of formatting issue, but I don't know how to fix it.
import gensim.downloader as api
model = api.load("glove-wiki-gigaword-100") # download the model and return as object ready for use
bests = model.most_similar("norway - war + peace", topn= 5)
print("5 most similar words to 'norway - war + peace':")
for best in bests:
print(best)
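gensim's most_similar takes word lists rather than an arithmetic string, e.g. model.most_similar(positive=['norway', 'peace'], negative=['war'], topn=5). The underlying arithmetic can be sketched by hand on toy vectors (the vectors below are made up purely for illustration):

```python
import numpy as np

# Made-up toy vectors; a real model would supply 100-dimensional GloVe vectors.
vectors = {
    'norway': np.array([1.0, 0.0, 1.0]),
    'war':    np.array([0.0, 1.0, 0.0]),
    'peace':  np.array([0.0, -1.0, 0.0]),
    'oslo':   np.array([1.0, -2.0, 1.0]),
}

# The analogy arithmetic: norway - war + peace.
query = vectors['norway'] - vectors['war'] + vectors['peace']

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Rank remaining vocabulary by cosine similarity to the query vector.
ranked = sorted((w for w in vectors if w not in ('norway', 'war', 'peace')),
                key=lambda w: cos(vectors[w], query), reverse=True)
print(ranked[:5])
```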
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a large corpus (around 400k unique sentences). I just want to get the TF-IDF score for each word. I tried to calculate the score for each word by scanning each word and calculating its frequency, but it's taking too long.
I used :
X = TfidfVectorizer().fit_transform(corpus)
from sklearn, but it directly gives back the vector representation of each sentence. Is there any way I can get the TF-IDF scores for each word in the corpus?
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to transform PDFs of conference/journal papers into .txt files. I basically want a structure a bit cleaner than the current PDF: no line breaks before the end of a sentence, and highlighted sections of the paper. The problem I am currently dealing with is detecting sections automatically. That is, in the following image, I want to be able to find ABSTRACT, CCS CONCEPTS, 1 INTRODUCTION, 2 THE BODY OF THE PAPER, etc.
I currently use a simple idea which works-ish: I basically let pdfminer do its job and then use NLTK to find sentences.
def convert_pdf_to_txt(path, year):
    rsrcmgr = PDFResourceManager()
    retstr = StringIO()
    codec = 'utf-8'
    laparams = LAParams()
    device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)
    fp = open(path, 'rb')
    interpreter = PDFPageInterpreter(rsrcmgr, device)
    password = ""
    maxpages = 0
    caching = True
    pagenos = set()
    for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages, password=password,
                                  caching=caching, check_extractable=True):
        interpreter.process_page(page)
    text = retstr.getvalue()
    sentences = sent_tokenize(text)
    size = len(sentences)
    i = 0
    path = path[:-3]
    output = open("out.txt", 'w')
    for s in sentences:
        s = s.replace("-\n", '')  # remove hyphens
        lines = s.split("\n")
        for line in lines:
            if line.isupper():  # sections are only uppercase.
                # however, other things are also only uppercase, hence my errors
                line = "--SECTION-- " + line
                output.write("\n" + line + "\n")
            else:
                output.write(line)
        output.write("\n")
    fp.close()
    device.close()
    retstr.close()
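A possibly tighter heuristic than line.isupper() (an assumption about this template's layout, not a general rule): treat a line as a heading only if it is short and matches an optional section number followed by all-caps words:

```python
import re

# Optional section number ("2", "2.1", ...) followed by ALL-CAPS words.
HEADING_RE = re.compile(r'^(\d+(\.\d+)*\s+)?[A-Z][A-Z\s]+$')

def looks_like_heading(line):
    """Heuristic: short, all-caps line, optionally numbered."""
    line = line.strip()
    return bool(HEADING_RE.match(line)) and 2 <= len(line) <= 40

print(looks_like_heading('2 THE BODY OF THE PAPER'))  # True
print(looks_like_heading('ABSTRACT'))                 # True
print(looks_like_heading('ACM, New York, NY, USA'))   # False (punctuation)
```

The length cap and the punctuation-free character class filter out many of the false positives shown below (author blocks, address lines), though some short all-caps runs like "SIG" would still slip through.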
This gives me on the whole file the following output:
12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152535455565758SIG Proceedings Paper in LaTeX Format∗Extended Abstract†
--SECTION-- G.K.M.
Tobin§Dublin, OhioSean Fogartywebmaster@marysville-ohio.comLars Thørväld¶The Thørväld GroupHekla, Icelandlarst@affiliation.orgCharles PalmerInstitute for Clarity in DocumentationInstitute for Clarity in DocumentationBen Trovato‡Dublin, Ohiotrovato@corporation.comLawrence P. LeipunerBrookhaven Laboratorieslleipuner@researchlabs.orgJohn SmithPalmer Research LaboratoriesSan Antonio, Texascpalmer@prl.comJulius P. KumquatNASA Ames Research CenterMoffett Field, Californiafogartys@amesres.orgThe Thørväld Groupjsmith@affiliation.orgThe Kumquat Consortiumjpkumquat@consortium.netcolumns), a specified set of fonts (Arial or Helvetica and TimesRoman) in certain specified sizes, a specified live area, centeredon the page, specified size of margins, specified column width andgutter size.
--SECTION-- ABSTRACT
This paper provides a sample of a LATEX document which conforms,somewhat loosely, to the formatting guidelines for ACM SIG Proceedings.1Unpublishedworkingdraft.
Notfordistribution.
--SECTION-- CCS CONCEPTS
• Computer systems organization → Embedded systems; Redundancy; Robotics; • Networks → Network reliability;
--SECTION-- KEYWORDS
ACM proceedings, LATEX, text taggingACM Reference Format:Ben Trovato, G.K.M.
Tobin, Lars Thørväld, Lawrence P. Leipuner, SeanFogarty, Charles Palmer, John Smith, and Julius P. Kumquat.
1997.
--SECTION-- SIG
Proceedings Paper in LaTeX Format: Extended Abstract.
In Proceedings ofACM Woodstock conference (WOODSTOCK’97).
ACM, New York, NY, USA,5 pages.
https://doi.org/10.475/123_4
--SECTION-- 2 THE BODY OF THE PAPER
Typically, the body of a paper is organized into a hierarchical structure, with numbered or unnumbered headings for sections, subsections, sub-subsections, and even smaller sections.
The command\section that precedes this paragraph is part of such a hierarchy.3LATEX handles the numbering and placement of these headings foryou, when you use the appropriate heading commands aroundthe titles of the headings.
If you want a sub-subsection or smallerpart to be unnumbered in your output, simply append an asteriskto the command name.
Examples of both numbered and unnumbered headings will appear throughout the balance of this sampledocument.
Because the entire article is contained in the document environment, you can indicate the start of a new paragraph with a blankline in your input file; that is why this sentence forms a separateparagraph.
--SECTION-- 1 INTRODUCTION
The proceedings are the records of a conference.2 ACM seeks to givethese conference by-products a uniform, high-quality appearance.
To do this, ACM has some rigid requirements for the format of theproceedings documents: there is a specified format (balanced double∗Produces the permission block, and copyright information†The full version of the author’s guide is available as acmart.pdf document‡Dr.
Trovato insisted his name be first.
§The secretary disavows any knowledge of this author’s actions.
¶This author is the one who did all the really hard work.
1This is an abstract footnote2This is a footnotePermission to make digital or hard copies of part or all of this work for personal orUnpublished working draft.
Not for distribution.
classroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page.
Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
WOODSTOCK’97, July 1997, El Paso, Texas USA© 2016 Copyright held by the owner/author(s).
--SECTION-- ACM ISBN 123-4567-24-567/08/06.
https://doi.org/10.475/123_4Submission ID: 123-A12-B3.
2018-10-20 12:29.
Page 1 of 1–5.
2.1 Type Changes and Special CharactersWe have already seen several typeface changes in this sample.
You can indicate italicized words or phrases in your text with thecommand \textit; emboldening with the command \textbf andtypewriter-style (for instance, for computer code) with \texttt.
But remember, you do not have to indicate typestyle changes whensuch changes are part of the structural elements of your article;for instance, the heading of this subsection will be in a sans serif4typeface, but that is handled by the document class file.
Take carewith the use of5 the curly braces in typeface changes; they mark thebeginning and end of the text that is to be in the different typeface.
3This is a footnote.
4Another footnote here.
Let’s make this a rather long one to see how it looks.
5Another footnote.
5960616263646566676869707172737475767778798081828384858687888990919293949596979899100101102103104105106107108109110111112113114115116WOODSTOCK’97, July 1997, El Paso, Texas USAB. Trovato et al.
You can use whatever symbols, accented characters, or nonEnglish characters you need anywhere in your document; you canfind a complete list of what is available in the LATEX User’s Guide[26].
2.2 Math EquationsYou may want to display math equations in three distinct styles:inline, numbered or non-numbered display.
Each of the three arediscussed in the next sections.
Table 1: Frequency of Special CharactersNon-English or Math
--SECTION-- Ø
π$2
--SECTION-- Ψ
1Frequency Comments1 in 1,0001 in 54 in 5For Swedish namesCommon in mathUsed in business1 in 40,000 Unexplained usagemand, using \cite.
This article shows only the plainest form of the citation comInline (In-text) Equations.
A formula that appears in the2.2.1running text is called an inline or in-text formula.
It is producedby the math environment, which can be invoked with the usual\begin .
.
.
\end construction or with the short form $ .
.
.
$.
You can use any of the symbols and structures, from α to ω, availablein LATEX [26]; this section will simply show a few examples of intext equations in context.
Notice how this equation: limn→∞ x = 0,set here in in-line math style, looks slightly different when set indisplay style.
(See next section).
2.2.2 Display Equations.
A numbered display equation—one setoff by vertical space from the text and centered horizontally—isproduced by the equation environment.
An unnumbered displayequation is produced by the displaymath environment.
Again, in either environment, you can use any of the symbolsand structures available in LATEX; this section will just give a coupleof examples of display equations in context.
First, consider theequation, shown as an inline equation above:Some examples.
A paginated journal article [2], an enumeratedjournal article [11], a reference to an entire issue [10], a monograph(whole book) [25], a monograph/whole book in a series (see 2ain spec.
document) [18], a divisible-book such as an anthology orcompilation [13] followed by the same example, however we onlyoutput the series if the volume number is given [14] (so Editor00a’sseries should NOT be present since it has no vol.
no.
), a chapterin a divisible book [37], a chapter in a divisible book in a series[12], a multi-volume work as book [24], an article in a proceedings(of a conference, symposium, workshop for example) (paginatedproceedings article) [4], a proceedings article with all possible elements [36], an example of an enumerated proceedings article [16],an informally published work [17], a doctoral dissertation [9], amaster’s thesis: [5], an online document / world wide web resource[1, 30, 38], a video game (Case 1) [29] and (Case 2) [28] and [27] and(Case 3) a patent [35], work accepted for publication [31], ’YYYYb’test for prolific author [32] and [33].
Other cites might contain’duplicate’ DOI and URLs (some SIAM articles) [23].
Boris / BarbaraBeeton: multi-volume works as books [21] and [20].
Unpublishedworkingdraft.
Notfordistribution.
2.4 TablesBecause tables cannot be split across pages, the best placement forthem is typically the top of the page nearest their initial cite.
To ensure this proper “floating” placement of tables, use the environmenttable to enclose the table’s contents and the table caption.
The contents of the table itself must go in the tabular environment, to bealigned properly in rows and columns, with the desired horizontaland vertical rules.
Again, detailed instructions on tabular materialare found in the LATEX User’s Guide.
Immediately following this sentence is the point at which Table 1is included in the input file; compare the placement of the tablehere with the table in the printed output of this document.
Notice how it is formatted somewhat differently in the displaymath environment.
Now, we’ll enter an unnumbered equation:A couple of citations with DOIs: [22, 23].
Online citations: [38–40].
just to demonstrate LATEX’s able handling of numbering.
and follow it with another numbered equation:x + 1∫ π +2(1)(2)limn→∞ x = 0∞i =0∞i =0xi =0f2.3 CitationsCitations to articles [6–8, 19], conference proceedings [8] or maybebooks [26, 34] listed in the Bibliography section of your article willoccur throughout the text of your article.
You should use BibTeX toautomatically produce this bibliography; you simply need to insertone of several citation commands with a key of the item cited in theproper location in the .tex file [26].
The key is a short referenceyou invent to uniquely identify each work; in this sample document,the key is the first author’s surname and a word from the title.
Thisidentifying key is included with each item in the .bib file for yourarticle.
The details of the construction of the .bib file are beyond thescope of this sample document, but more information can be foundin the Author’s Guide, and exhaustive details in the LATEX User’sGuide by Lamport [26].
To set a wider table, which takes up the whole width of the page’slive area, use the environment table* to enclose the table’s contentsand the table caption.
As with a single-column table, this widetable will “float” to a location deemed more desirable.
Immediatelyfollowing this sentence is the point at which Table 2 is included inthe input file; again, it is instructive to compare the placement ofthe table here with the table in the printed output of this document.
It is strongly recommended to use the package booktabs [15]and follow its main principles of typography with respect to tables:(1) Never, ever use vertical rules.
(2) Never use double rules.
Submission ID: 123-A12-B3.
2018-10-20 12:29.
Page 2 of 1–5.
117118119120121122123124125126127128129130131132133134135136137138139140141142143144145146147148149150151152153154155156157158159160161162163164165166167168169170171172173174175176177178179180181182183184185186187188189190191192193194195196197198199200201202203204205206207208209210211212213214215216217218219220221222223224225226227228229230231232SIG Proceedings Paper in LaTeX FormatWOODSTOCK’97, July 1997, El Paso, Texas USATable 2: Some Typical CommandsCommand A Number Comments\author\table\table*100300400AuthorFor tablesFor wider tablesDefinition 2.2.
If z is irrational, then by ez we mean the uniquenumber that has logarithm z:f (x)д(x) = L.(cid:21)log ez = z.
The pre-defined theorem-like constructs are theorem, conjecture, proposition, lemma and corollary.
The pre-defined definition-like constructs are example and definition.
You can add yourown constructs using the amsthm interface [3].
The styles used inthe \theoremstyle command are acmplain and acmdefinition.
Another construct is proof, for example,Proof.
Suppose on the contrary there exists a real number Lsuch thatFigure 1: A sample black and white graphic.
It is also a good idea not to overuse horizontal rules.
Figure 2: A sample black and white graphic that has beenresized with the includegraphics command.
Unpublishedworkingdraft.
Notfordistribution.
lim(cid:20)x→∞дx · f (x)д(x)f (x) = limx→c2.5 FiguresLike tables, figures cannot be split across pages; the best placementfor them is typically the top or the bottom of the page nearest theirinitial cite.
To ensure this proper “floating” placement of figures,use the environment figure to enclose the figure and its caption.
This sample document contains examples of .eps files to bedisplayable with LATEX.
If you work with pdfLATEX, use files in the.pdf format.
Note that most modern TEX systems will convert .epsto .pdf for you on the fly.
More details on each of these are foundin the Author’s Guide.
As was the case with tables, you may want a figure that spans twocolumns.
To do this, and still to ensure proper “floating” placementof tables, use the environment figure* to enclose the figure and itscaption.
And don’t forget to end the environment with figure*, notfigure!
= limx→cд(x)· limx→cf (x)д(x) = 0·L = 0,Thenl = limx→cwhich contradicts our assumption that l (cid:44) 0.
--SECTION-- 3 CONCLUSIONS
This paragraph will end the body of this sample document.
Remember that you might still have Acknowledgments or Appendices;brief samples of these follow.
There is still the Bibliography to dealwith; and we will make a disclaimer about that here: with the exception of the reference to the LATEX book, the citations in this paperare to articles which have nothing to do with the present subjectand are used as examples only.
□
--SECTION-- A HEADINGS IN APPENDICES
The rules about hierarchical headings discussed above for the bodyof the article are different in the appendices.
In the appendix environment, the command section is used to indicate the start ofeach Appendix, with alphabetic order designation (i.e., the first isA, the second B, etc.)
and a title (if you include one).
So, if you needhierarchical structure within an Appendix, start with subsectionas the highest level.
Here is an outline of the body of this documentin Appendix-appropriate form:2.6 Theorem-like ConstructsOther common constructs that may occur in your article are theforms for logical constructs like theorems, axioms, corollaries andproofs.
ACM uses two types of these constructs: theorem-like anddefinition-like.
Here is a theorem:Theorem 2.1.
Let f be continuous on [a, b].
If G is an antiderivative for f on [a, b], then∫ baHere is a definition:f (t) dt = G(b) − G(a).
Submission ID: 123-A12-B3.
2018-10-20 12:29.
Page 3 of 1–5.
A.1 IntroductionA.2 The Body of the PaperA.2.1 Type Changes and Special Characters.
A.2.2 Math Equations.
Inline (In-text) Equations.
Display Equations.
A.2.3 Citations.
233234235236237238239240241242243244245246247248249250251252253254255256257258259260261262263264265266267268269270271272273274275276277278279280281282283284285286287288289290291292293294295296297298299300301302303304305306307308309310311312313314315316317318319320321322323324325326327328329330331332333334335336337338339340341342343344345346347348WOODSTOCK’97, July 1997, El Paso, Texas USAB. Trovato et al.
Figure 4: A sample black and white graphic that has beenresized with the includegraphics command.
A.2.4 Tables.
--SECTION-- A.2.5
Figures.
A.2.6 Theorem-like Constructs.
Figure 3: A sample black and white graphic that needs to span two columns of text.
Unpublished working draft. Not for distribution.
(Nov. 1996).
booktabs.
Mathematical Society.
http://www.ctan.org/pkg/amsthm.
[2] Patricia S. Abril and Robert Plant.
2007.
The patent holder’s dilemma: Buy, sell,or troll?
Commun.
ACM 50, 1 (Jan. 2007), 36–44.
https://doi.org/10.1145/1188913.
1188915[3] American Mathematical Society 2015.
Using the amsthm Package.
American[4] Sten Andler.
1979.
Predicate Path expressions.
In Proceedings of the 6th.
--SECTION-- ACM
SIGACT-SIGPLAN symposium on Principles of Programming Languages (POPL ’79).
ACM Press, New York, NY, 226–236.
https://doi.org/10.1145/567752.567774[5] David A. Anisi.
2003.
Optimal Motion Control of a Ground Vehicle.
Master’s thesis.
[6] Mic Bowman, Saumya K. Debray, and Larry L. Peterson.
1993.
Reasoning AboutNaming Systems.
ACM Trans.
Program.
Lang.
Syst.
15, 5 (November 1993), 795–825. https://doi.org/10.1145/161468.161471[7] Johannes Braams.
1991.
Babel, a Multilingual Style-Option System for Use withRoyal Institute of Technology (KTH), Stockholm, Sweden.
LaTeX’s Standard Document Styles.
TUGboat 12, 2 (June 1991), 291–301.
TeX Users Group, 84–89.
[8] Malcolm Clark.
1991.
Post Congress Tristesse.
In TeX90 Conference Proceedings.
[9] Kenneth L. Clarkson.
1985.
Algorithms for Closest-Point Problems (ComputationalGeometry).
Ph.D. Dissertation.
Stanford University, Palo Alto, CA.
UMI OrderNumber: AAT 8506171.
[10] Jacques Cohen (Ed.).
1996.
Special issue: Digital Libraries.
Commun.
--SECTION-- ACM 39, 11
[11] Sarah Cohen, Werner Nutt, and Yehoshua Sagic.
2007.
Deciding equivalancesamong conjunctive aggregate queries.
J. ACM 54, 2, Article 5 (April 2007),50 pages.
https://doi.org/10.1145/1219092.1219093[12] Bruce P. Douglass, David Harel, and Mark B. Trakhtenbrot.
1998.
Statecarts inuse: structured analysis and object-orientation.
In Lectures on Embedded Systems,Grzegorz Rozenberg and Frits W. Vaandrager (Eds.).
Lecture Notes in ComputerScience, Vol.
1494.
Springer-Verlag, London, 368–394.
https://doi.org/10.1007/3-540-65193-4_29[13] Ian Editor (Ed.).
2007.
The title of book one (1st.
ed.).
The name of the seriesone, Vol.
9.
University of Chicago Press, Chicago.
https://doi.org/10.1007/3-540-09237-4Chicago, Chapter 100. https://doi.org/10.1007/3-540-09237-4[14] Ian Editor (Ed.).
2008.
The title of book two (2nd.
ed.).
University of Chicago Press,[15] Simon Fear.
2005.
Publication quality tables in LATEX.
http://www.ctan.org/pkg/[16] Matthew Van Gundy, Davide Balzarotti, and Giovanni Vigna.
2007.
Catch me, ifyou can: Evading network signatures with web-based polymorphic worms.
InProceedings of the first USENIX workshop on Offensive Technologies (WOOT ’07).
USENIX Association, Berkley, CA, Article 7, 9 pages.
[17] David Harel.
1978.
LOGICS of Programs: AXIOMATICS and DESCRIPTIVE POWER.
MIT Research Lab Technical Report TR-200.
Massachusetts Institute of Technology, Cambridge, MA.
[18] David Harel.
1979.
First-Order Dynamic Logic.
Lecture Notes in Computer Science,Vol.
68.
Springer-Verlag, New York, NY.
https://doi.org/10.1007/3-540-09237-4[19] Maurice Herlihy.
1993.
A Methodology for Implementing Highly ConcurrentData Objects.
ACM Trans.
Program.
Lang.
Syst.
15, 5 (November 1993), 745–770.
https://doi.org/10.1145/161468.161469[20] Lars Hörmander.
1985.
The analysis of linear partial differential operators.
--SECTION-- III.
Grundlehren der Mathematischen Wissenschaften [Fundamental Principles ofMathematical Sciences], Vol.
275.
Springer-Verlag, Berlin, Germany.
viii+525pages.
Pseudodifferential operators.
[21] Lars Hörmander.
1985.
The analysis of linear partial differential operators.
--SECTION-- IV.
Grundlehren der Mathematischen Wissenschaften [Fundamental Principles ofMathematical Sciences], Vol.
275.
Springer-Verlag, Berlin, Germany.
vii+352pages.
Fourier integral operators.
A Caveat for the TEX Expert.
A.3 ConclusionsA.4 ReferencesGenerated by bibtex from your .bib file.
Run latex, then bibtex, thenlatex twice (to resolve references) to create the .bbl file.
Insert that.bbl file into the .tex source file and comment out the command\thebibliography.
--SECTION-- B MORE HELP FOR THE HARDY
Of course, reading the source code is always useful.
The file acmart.
pdf contains both the user guide and the commented code.
--SECTION-- ACKNOWLEDGMENTS
The authors would like to thank Dr. Yuhua Li for providing theMATLAB code of the BEPS method.
The authors would also like to thank the anonymous refereesfor their valuable comments and helpful suggestions.
The work issupported by the National Natural Science Foundation of Chinaunder Grant No.
: 61273304 and Young Scientists’ Support Program(http://www.nnsf.cn/youngscientists).
--SECTION-- REFERENCES
[1] Rafal Ablamowicz and Bertfried Fauser.
2007.
CLIFFORD: a Maple 11 Package forClifford Algebra Computations, version 11.
Retrieved February 28, 2008 fromhttp://math.tntech.edu/rafal/cliff11/index.htmlSubmission ID: 123-A12-B3.
SIG Proceedings Paper in LaTeX Format. WOODSTOCK’97, July 1997, El Paso, Texas USA. Algorithms (3rd.
ed.).
Addison Wesley Longman Publishing Co., Inc.New York, NY.
--SECTION-- [22] IEEE 2004.
IEEE TCSC Executive Committee.
In Proceedings of the IEEE International Conference on Web Services (ICWS ’04).
IEEE Computer Society, Washington,
--SECTION-- DC, USA, 21–22.
https://doi.org/10.1109/ICWS.2004.64[23] Markus Kirschmer and John Voight.
2010.
Algorithmic Enumeration of IdealClasses for Quaternion Orders.
SIAM J. Comput.
39, 5 (Jan. 2010), 1714–1747.
https://doi.org/10.1137/080734467[24] Donald E. Knuth.
1997.
The Art of Computer Programming, Vol.
1: Fundamental[25] David Kosiur.
2001.
Understanding Policy-Based Networking (2nd.
ed.).
Wiley,[26] Leslie Lamport.
1986.
LATEX: A Document Preparation System.
Addison-Wesley,[27] Newton Lee.
2005.
Interview with Bill Kinder: January 13, 2005.
Video.
Comput.
Entertain.
3, 1, Article 4 (Jan.-March 2005).
https://doi.org/10.1145/1057270.
1057278[28] Dave Novak.
2003.
Solder man.
Video.
In ACM SIGGRAPH 2003 Video Review onAnimation theater Program: Part I - Vol.
145 (July 27–27, 2003).
ACM Press, NewYork, NY, 4. https://doi.org/99.9999/woot07-S422[29] Barack Obama.
2008.
A more perfect union.
Video.
Retrieved March 21, 2008Reading, MA.
from http://video.google.com/videoplay?docid=6528042696351994555[30] Poker-Edge.Com.
2006.
Stats and Analysis.
Retrieved June 7, 2006 from http://www.poker-edge.com/stats.phpArticle 5 (July 2008).
To appear.
[31] Bernard Rous.
2008.
The Enabling of Digital Libraries.
Digital Libraries 12, 3,[32] Mehdi Saeedi, Morteza Saheb Zamani, and Mehdi Sedighi.
2010.
A library-basedsynthesis methodology for reversible logic.
Microelectron.
--SECTION-- J.
41, 4 (April 2010),185–194.
[33] Mehdi Saeedi, Morteza Saheb Zamani, Mehdi Sedighi, and Zahra Sasanian.
2010.
Synthesis of Reversible Circuit Using Cycle-Based Approach.
J. Emerg.
Technol.
Comput.
Syst.
6, 4 (Dec. 2010).
--SECTION-- [34] S.L.
Salas and Einar Hille.
1978.
Calculus: One and Several Variable.
John Wileyand Sons, New York.
[35] Joseph Scientist.
2009.
The fountain of youth.
Patent No.
12345, Filed July 1st.,
2008, Issued Aug.
9th., 2009.
[36] Stan W. Smith.
2010.
An experiment in bibliographic mark-up: Parsing metadatafor XML export.
In Proceedings of the 3rd.
annual workshop on Librarians andComputers (LAC ’10), Reginald N. Smythe and Alexander Noble (Eds.
), Vol.
3.
Paparazzi Press, Milan Italy, 422–431.
https://doi.org/99.9999/woot07-S422In DistributedSystems (2nd.
ed.
), Sape Mullender (Ed.).
ACM Press, New York, NY, 19–33.
https://doi.org/10.1145/90417.90738[38] Harry Thornburg.
2001.
Introduction to Bayesian Statistics.
Retrieved March 2,[37] Asad Z. Spector.
1990.
Achieving application requirements.
2005 from http://ccrma.stanford.edu/~jos/bayes/bayes.html
--SECTION-- [39] TUG 2017.
Institutional members of the TEX Users Group.
Retrieved May 27,[40] Boris Veytsman.
[n. d.].
acmart—Class for typesetting publications of ACM.
2017 from http://wwtug.org/instmem.htmlRetrieved May 27, 2017 from http://www.ctan.org/pkg/acmartSubmission ID: 123-A12-B3.
This output is partially right: all sections are correctly detected, but I also get a lot of false positives. Can you think of a better (less false-positive-prone) way to implement this?
PS: if you need the pdf, it is available here, filename is
sample-sigconf-authordraft.pdf
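For context, one direction I am considering is filtering candidate headings with a well-formedness check: most of the false positives above (`ACM 39, 11`, `J.`, `DC, USA, 21–22.`) contain digits or punctuation inside the title text, while real ACM headings do not. A rough sketch (the regex and thresholds are guesses to tune, not a definitive rule):

```python
import re

# Candidate headings should be short, almost entirely upper-case, and consist
# of an optional section/appendix designator ("3", "A.1") followed by letters
# only -- digits or punctuation in the title text reject the line.
HEADING_RE = re.compile(r"^(?:\d+(?:\.\d+)*|[A-Z](?:\.\d+)*)?\s*[A-Z][A-Z\s\-]{2,60}$")

def is_heading(line):
    line = line.strip()
    if not line or len(line) > 60:
        return False
    letters = [c for c in line if c.isalpha()]
    if not letters:
        return False
    upper_ratio = sum(c.isupper() for c in letters) / len(letters)
    return upper_ratio > 0.9 and bool(HEADING_RE.match(line))

print(is_heading("A HEADINGS IN APPENDICES"))  # True
print(is_heading("ACM 39, 11"))                # False
print(is_heading("J."))                        # False
```

Would something along these lines be robust enough, or is there a more principled approach (e.g. using font size from the PDF itself)?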
| 1 | 1 | 0 | 0 | 0 | 0 |
Imagine there is a column in a dataset representing universities. We need to group the values so that the number of groups after classification is as close as possible to the real number of universities. The problem is that the same university may appear under different names. An example: University of Stanford = Stanford University = Uni of Stanford. Is there any established NLP method/function/solution for this in Python 3?
Let's consider both cases: data might be tagged as well as untagged.
Thanks in advance.
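For what it's worth, a rough direction I have been considering (not a definitive solution): normalize the names and compare them with a fuzzy string ratio. The stopword list and threshold below are assumptions to tune against the data:

```python
from difflib import SequenceMatcher

# Drop generic tokens so "University of Stanford" and "Stanford University"
# normalize to the same key, then fuzzy-match what is left.
STOPWORDS = {"of", "the", "uni", "university"}

def normalize(name):
    tokens = [t for t in name.lower().split() if t not in STOPWORDS]
    return " ".join(sorted(tokens))

def same_university(a, b, threshold=0.8):
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

print(same_university("University of Stanford", "Stanford University"))  # True
print(same_university("Uni of Stanford", "Harvard University"))          # False
```

Grouping would then reduce to connecting names that pass same_university; if the data is tagged, the threshold could presumably be chosen by cross-validation instead of by hand.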
| 1 | 1 | 0 | 1 | 0 | 0 |
I know that in gensim's KeyedVectors model, one can access the embedding matrix through the attribute model.syn0. There is also a syn0norm, which doesn't seem to be populated for the GloVe model I recently loaded. I think I have also seen syn1 somewhere previously.
I haven't found a docstring for this and I'm just wondering what the logic behind it is.
So if syn0 is the embedding matrix, what is syn0norm? What would syn1 be, and generally, what does syn stand for?
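As an aside on what the `norm` part means (a conceptual sketch, not gensim's actual code): `syn0` holds one raw embedding row per word, and `syn0norm` caches the same rows scaled to unit L2 length, which is what similarity queries use. It only gets populated after `init_sims()` or the first similarity call, which may explain why it appears empty for a freshly loaded model.

```python
import numpy as np

# Toy 2-word, 2-dimensional "syn0": normalizing each row to unit length
# produces what gensim caches as syn0norm.
syn0 = np.array([[3.0, 4.0],
                 [1.0, 0.0]])
syn0norm = syn0 / np.linalg.norm(syn0, axis=1, keepdims=True)
print(syn0norm[0])  # [0.6 0.8]
```

If memory serves, syn1 holds the output-layer weights used during word2vec training (hierarchical softmax), i.e. a training artifact rather than the word vectors themselves, but I'd welcome a correction on that.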
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a sentence for which I need to identify the person names alone:
For example:
sentence = "Larry Page is an American business magnate and computer scientist who is the co-founder of Google, alongside Sergey Brin"
I have used the below code to identify the NERs.
from nltk import word_tokenize, pos_tag, ne_chunk
print(ne_chunk(pos_tag(word_tokenize(sentence))))
The output i received was:
(S
(PERSON Larry/NNP)
(ORGANIZATION Page/NNP)
is/VBZ
an/DT
(GPE American/JJ)
business/NN
magnate/NN
and/CC
computer/NN
scientist/NN
who/WP
is/VBZ
the/DT
co-founder/NN
of/IN
(GPE Google/NNP)
,/,
alongside/RB
(PERSON Sergey/NNP Brin/NNP))
I want to extract all the person names, such as
Larry Page
Sergey Brin
In order to achieve this, I referred to this link and tried this.
from nltk.tag.stanford import StanfordNERTagger
st = StanfordNERTagger('/usr/share/stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz','/usr/share/stanford-ner/stanford-ner.jar')
However, I continue to get this error:
LookupError: Could not find stanford-ner.jar jar file at /usr/share/stanford-ner/stanford-ner.jar
Where can I download this file?
As mentioned above, the result I am expecting, in the form of a list or dictionary, is:
Larry Page
Sergey Brin
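Since `ne_chunk` split "Larry Page" into a PERSON and an ORGANIZATION chunk above, collecting only PERSON subtrees would miss "Page". One workaround (a heuristic sketch, not guaranteed for other sentences) is to merge adjacent named-entity chunks whose tokens are all proper nouns. The chunk tree is modelled below as plain (label, tokens) pairs so the snippet is self-contained; with nltk you would iterate the Tree returned by ne_chunk and read `subtree.label()` and `subtree.leaves()` instead:

```python
# Hand-copied structure of the ne_chunk output shown above; None marks
# tokens that were not part of any named-entity subtree.
chunks = [
    ("PERSON", [("Larry", "NNP")]),
    ("ORGANIZATION", [("Page", "NNP")]),
    (None, [("is", "VBZ")]),
    ("GPE", [("Google", "NNP")]),
    (None, [(",", ",")]),
    ("PERSON", [("Sergey", "NNP"), ("Brin", "NNP")]),
]

NAME_LABELS = {"PERSON", "ORGANIZATION"}
names, current = [], []
for label, tokens in chunks:
    if label in NAME_LABELS and all(tag == "NNP" for _, tag in tokens):
        current.extend(tok for tok, _ in tokens)   # extend the current name run
    else:
        if current:
            names.append(" ".join(current))
        current = []
if current:
    names.append(" ".join(current))

print(names)  # ['Larry Page', 'Sergey Brin']
```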
| 1 | 1 | 0 | 0 | 0 | 0 |
I am wondering whether there is any efficient way to extract an expected target phrase or key phrase from a given sentence. So far I have tokenized the given sentence and obtained a POS tag for each word. Now I am not sure how to extract the target key phrase or keyword from the sentence. The way to do this is not intuitive to me.
Here is my input sentence list:
sentence_List= {"Obviously one of the most important features of any computer is the human interface.", "Good for everyday computing and web browsing.",
"My problem was with DELL Customer Service", "I play a lot of casual games online[comma] and the touchpad is very responsive"}
here is the tokenized sentence:
from nltk.tokenize import word_tokenize
tokenized_sents = [word_tokenize(i) for i in sentence_List]
tokenized=[i for i in tokenized_sents]
Here I used spaCy to get the POS tag of each word:
import spacy
nlp = spacy.load('en_core_web_sm')
res = []
for sent in sentence_List:
    doc = nlp(sent)
    for token in doc:
        res.append(token.pos_)
I could use NER (named entity recognition) from spaCy, but its output is not the same as my pre-defined expected target phrases. Does anyone know how to accomplish this task using either the spaCy or stanfordcorenlp module in Python? What is an efficient solution to make this happen? Any idea? Thanks in advance :)
desired output:
I want to get the list of target phrase from respective sentence list as follow:
target_phraseList={"human interface","everyday computing","DELL Customer Service","touchpad"}
so I concatenate my input sentence_list with an expected target phrase, my final desired output would be like this:
import pandas as pd
df=pd.Series(sentence_List, target_phraseList)
df=pd.DataFrame(df)
How can I get my expected target phrases from a given input sentence list by using spacy? Any idea?
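One hedged baseline worth trying: once each token's POS tag is available (from spaCy's `token.pos_` or any tagger), treat maximal runs of adjectives/nouns containing at least one noun as candidate key phrases. This surfaces candidates such as "everyday computing" and "touchpad", though it also yields extras that would need filtering. The tagged input is hard-coded here so the sketch stays self-contained:

```python
def candidate_phrases(tagged):
    """Collect maximal ADJ/NOUN/PROPN runs that contain at least one noun."""
    phrases, current = [], []

    def flush():
        if any(p in ("NOUN", "PROPN") for _, p in current):
            phrases.append(" ".join(w for w, _ in current))
        current.clear()

    for word, pos in tagged:
        if pos in ("ADJ", "NOUN", "PROPN"):
            current.append((word, pos))
        else:
            flush()
    flush()
    return phrases

tagged = [("Good", "ADJ"), ("for", "ADP"), ("everyday", "ADJ"),
          ("computing", "NOUN"), ("and", "CCONJ"), ("web", "NOUN"),
          ("browsing", "NOUN"), (".", "PUNCT")]
print(candidate_phrases(tagged))  # ['everyday computing', 'web browsing']
```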
| 1 | 1 | 0 | 0 | 0 | 0 |
It's a multi-step exercise that, I suspect, can be handled in various ways. Here is what I have and have done.
tableA contains Stores and Brands. tableB contains Customer and Stores. Tables can be joined on Stores.
tableA = [('Ikea', 'Adidas, Nike'),
          ('Target', 'Adidas, NB'),
          ('Sears', 'Puma')]
labels = ['Store', 'Brand']
dfA = pd.DataFrame.from_records(tableA, columns=labels)
tableB = [('Neil', 'Ikea'),
          ('Neil', 'Target'),
          ('Javal', 'Target'),
          ('Colleen', 'Ikea'),
          ('Colleen', 'Sears'),
          ('Javal', 'Target'),
          ('Neil', 'Target'),
          ('Colleen', 'Sears')]
labels = ['Customer', 'Store']
dfB = pd.DataFrame.from_records(tableB, columns=labels)
As an output, I want to have:
Customers as rows, brands as columns and count as values.
First, I want to deal with splitting the cells and counting. Later, I will join two tables.
Splitting
The best I am able to achieve is:
dfA['Adidas'], dfA['Nike'] = dfA['Brand'].str.split(', ').str
If I do:
dfA['Adidas'], dfA['Nike'], dfA['NB'], dfA['Puma'] = dfA['Brand'].str.split(', ').str
I get a mistake:
ValueError: not enough values to unpack (expected 4, got 2)
I understand the mistake's nature but haven't found an alternative yet.
Questions I have:
(1) Should I first deal with splitting and then join tables?
(2) How to properly split the column?
(3) How to add proper counts (Counter has nothing to do with it, right?)
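For the split-and-count step, a hedged sketch on a trimmed copy of the tables: pandas' `str.get_dummies` sidesteps the unpacking problem entirely because it does not need to know the brand names or their count up front, and a merge plus groupby then yields the customer-by-brand counts:

```python
import pandas as pd

dfA = pd.DataFrame({"Store": ["Ikea", "Target", "Sears"],
                    "Brand": ["Adidas, Nike", "Adidas, NB", "Puma"]})
dfB = pd.DataFrame({"Customer": ["Neil", "Neil", "Colleen"],
                    "Store": ["Ikea", "Target", "Sears"]})

# One indicator column per brand, discovered automatically from the data.
brands = dfA["Brand"].str.get_dummies(sep=", ")
merged = dfB.merge(pd.concat([dfA[["Store"]], brands], axis=1), on="Store")
counts = merged.groupby("Customer")[brands.columns.tolist()].sum()
print(counts)
```

So splitting first and joining after (question 1) works, and Counter is indeed not needed (question 3): the groupby sum does the counting.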
| 1 | 1 | 0 | 0 | 0 | 0 |
I am new to NLP. My requirement is to parse meaning from sentences.
Example
"Perpetually Drifting is haunting in all the best ways."
"When The Fog Rolls In is a fantastic song
From the above sentences, I need to extract the following phrases:
"haunting in all the best ways."
"fantastic song"
Is it possible to achieve this in spacy?
| 1 | 1 | 0 | 0 | 0 | 0 |
I am busy with a recurrent neural network for predicting cryptocurrency prices (it is a school project). I am pretty far along, but I ran into a problem. In my code I have a dataframe (df). The values in the dataframe are pretty big, so I scaled them to smaller values like this:
for col in df.columns:
if col != "target":
df[col] = df[col].pct_change()
df.dropna(inplace=True)
df[col] = preprocessing.scale(df[col].values)
But after I have put it into the model, I need the values scaled back to the original range. I have tried everything I could find on the internet, but couldn't find a solution. Can someone help me with this?
EDIT:
I want to scale the values back after model.fit! So when I train the model with this:
# Train model
model.fit(
train_x, train_y,
batch_size=64,
epochs=EPOCHS,
validation_split=0.05,
callbacks=[tensorboard])
How can I do that?
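One pattern worth considering (a sketch, with the caveat that `pct_change` itself is a separate, lossy step): `preprocessing.scale` cannot be undone because it throws away the mean and standard deviation it used, whereas keeping a fitted `StandardScaler` per column makes `inverse_transform` available after `model.fit`/`model.predict`:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

prices = np.array([[100.0], [102.0], [101.0], [105.0]])

scaler = StandardScaler()            # remembers mean_ and scale_
scaled = scaler.fit_transform(prices)
# ... train and predict in scaled space, then map predictions back:
restored = scaler.inverse_transform(scaled)
print(np.allclose(restored, prices))  # True
```

The same idea applies to the target column: fit a scaler on it before training, then call inverse_transform on the model's predictions.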
| 1 | 1 | 0 | 0 | 0 | 0 |
Is there an algorithm that can automatically calculate a numerical rating of the degree of abstractness of a word? For example, the algorithm rates purvey as 1, donut as 0, and immodestly as 0.5 (these are example values).
By abstract words I mean words that refer to ideas and concepts that are distant from immediate perception, such as economics, calculating, and disputable. On the other side, concrete words refer to things, events, and properties that we can perceive directly with our senses, such as trees, walking, and red.
| 1 | 1 | 0 | 0 | 0 | 0 |
There is only 1 feature dimension, but the result is unreasonable. The code and data are below. The purpose of the code is to judge whether two sentences are the same.
In fact, the final input to the model is: feature [1] with label 1, and feature [0] with label 0.
The data is quite simple:
sent1 sent2 label
我想听 我想听 1
我想听 我想说 0
我想说 我想说 1
我想说 我想听 0
我想听 我想听 1
我想听 我想说 0
我想说 我想说 1
我想说 我想听 0
我想听 我想听 1
我想听 我想说 0
我想说 我想说 1
我想说 我想听 0
我想听 我想听 1
我想听 我想说 0
我想说 我想说 1
我想说 我想听 0
我想听 我想听 1
我想听 我想说 0
我想说 我想说 1
我想说 我想听 0
import pandas as pd
import xgboost as xgb
d = pd.read_csv("data_small.tsv",sep=" ")
def my_test(sent1,sent2):
result = [0]
if "我想说" in sent1 and "我想说" in sent2:
result[0] = 1
if "我想听" in sent1 and "我想听" in sent2:
result[0] = 1
return result
fea_ = d.apply(lambda row: my_test(row['sent1'], row['sent2']), axis=1).tolist()
labels = d["label"].tolist()
fea = pd.DataFrame(fea_)
for i in range(len(fea_)):
print(fea_[i],labels[i])
labels = pd.DataFrame(labels)
from sklearn.model_selection import train_test_split
# train_x_pd_split, valid_x_pd, train_y_pd_split, valid_y_pd = train_test_split(fea, labels, test_size=0.2,
# random_state=1234)
train_x_pd_split = fea[0:16]
valid_x_pd = fea[16:20]
train_y_pd_split = labels[0:16]
valid_y_pd = labels[16:20]
train_xgb_split = xgb.DMatrix(train_x_pd_split, label=train_y_pd_split)
valid_xgb = xgb.DMatrix(valid_x_pd, label=valid_y_pd)
watch_list = [(train_xgb_split, 'train'), (valid_xgb, 'valid')]
params3 = {
'seed': 1337,
'colsample_bytree': 0.48,
'silent': 1,
'subsample': 1,
'eta': 0.05,
'objective': 'binary:logistic',
'eval_metric': 'logloss',
'max_depth': 8,
'min_child_weight': 20,
'nthread': 8,
'tree_method': 'hist',
}
xgb_trained_model = xgb.train(params3, train_xgb_split, 1000, watch_list, early_stopping_rounds=50,
verbose_eval=10)
# xgb_trained_model.save_model("predict/model/xgb_model_all")
print("feature importance 0:")
importance = xgb_trained_model.get_fscore()
temp1 = []
temp2 = []
for k in importance:
temp1.append(k)
temp2.append(importance[k])
print("-----")
feature_importance_df = pd.DataFrame({
'column': temp1,
'importance': temp2,
}).sort_values(by='importance')
# print(feature_importance_df)
feature_sort_list = feature_importance_df["column"].tolist()
feature_importance_list = feature_importance_df["importance"].tolist()
print()
for i,item in enumerate(feature_sort_list):
print(item,feature_importance_list[i])
train_x_xgb = xgb.DMatrix(train_x_pd_split)
train_predict = xgb_trained_model.predict(train_x_xgb)
print(train_predict)
train_predict_binary = (train_predict >= 0.5) * 1
print("TRAIN DATA SELF")
from sklearn import metrics
print('LogLoss: %.4f' % metrics.log_loss(train_y_pd_split, train_predict))
print('AUC: %.4f' % metrics.roc_auc_score(train_y_pd_split, train_predict))
print('ACC: %.4f' % metrics.accuracy_score(train_y_pd_split, train_predict_binary))
print('Recall: %.4f' % metrics.recall_score(train_y_pd_split, train_predict_binary))
print('F1-score: %.4f' % metrics.f1_score(train_y_pd_split, train_predict_binary))
print('Precesion: %.4f' % metrics.precision_score(train_y_pd_split, train_predict_binary))
print()
valid_xgb = xgb.DMatrix(valid_x_pd)
valid_predict = xgb_trained_model.predict(valid_xgb)
print(valid_predict)
valid_predict_binary = (valid_predict >= 0.5) * 1
print("TEST DATA PERFORMANCE")
from sklearn import metrics
print('LogLoss: %.4f' % metrics.log_loss(valid_y_pd, valid_predict))
print('AUC: %.4f' % metrics.roc_auc_score(valid_y_pd, valid_predict))
print('ACC: %.4f' % metrics.accuracy_score(valid_y_pd, valid_predict_binary))
print('Recall: %.4f' % metrics.recall_score(valid_y_pd, valid_predict_binary))
print('F1-score: %.4f' % metrics.f1_score(valid_y_pd, valid_predict_binary))
print('Precesion: %.4f' % metrics.precision_score(valid_y_pd, valid_predict_binary))
But the result shows that xgboost does not fit the data:
TRAIN DATA SELF
LogLoss: 0.6931
AUC: 0.5000
ACC: 0.5000
Recall: 1.0000
F1-score: 0.6667
Precesion: 0.5000
TEST DATA PERFORMANCE
LogLoss: 0.6931
AUC: 0.5000
ACC: 0.5000
Recall: 1.0000
F1-score: 0.6667
Precesion: 0.5000
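A plausible explanation, offered as a hypothesis to test rather than a certainty: for `binary:logistic`, xgboost compares `min_child_weight` against the sum of hessians p·(1−p) in a child, and with the initial prediction p = 0.5 the whole 16-row training set only carries a hessian sum of 4. The configured `min_child_weight: 20` would therefore forbid every split, leaving the model at its base score (all predictions 0.5, logloss 0.6931, exactly what is printed). Lowering it (e.g. to 1) should let the single perfect split happen.

```python
# Back-of-the-envelope check of the hypothesis above.
p = 0.5                               # initial prediction of binary:logistic
hessian_per_row = p * (1 - p)         # 0.25
total_hessian = 16 * hessian_per_row  # hessian sum of the whole training set
print(total_hessian)                  # 4.0 -- far below min_child_weight=20
```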
| 1 | 1 | 0 | 0 | 0 | 0 |
I have 9000 unlabeled article samples that I want to label with a binary class (0 and 1).
Additionally, I have 500 labeled samples belonging to the positive class (label=1) and no samples for the negative class (label=0).
I know it's impossible to label the 9000 samples with 0 and 1 using a model trained only on the 500 positive samples.
So I would like to implement a "similarity" approach: classify the 9000 samples on the basis of their word similarity with the 500 positive samples, extract the similar ones and label them 1, so the rest of the 9000 samples can be labeled 0.
So the question is: is it possible to filter this way? If so, how can I filter by word similarity in Python?
Thank you for your answer, I hope there is a solution :)
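One concrete way to implement the similarity filter, sketched on toy stand-ins for the 500 positive and 9000 unlabeled articles (the 0.3 threshold is an assumption to tune, e.g. by inspecting the similarity distribution):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

positives = ["machine learning model training", "deep neural network training"]
unlabeled = ["training a neural network", "cooking pasta recipes"]

# Fit one TF-IDF space over both collections, then label an unlabeled text 1
# when its best cosine similarity to any positive text clears the threshold.
vec = TfidfVectorizer()
matrix = vec.fit_transform(positives + unlabeled)
sims = cosine_similarity(matrix[len(positives):], matrix[:len(positives)])
labels = (sims.max(axis=1) >= 0.3).astype(int)
print(labels)  # [1 0]
```

Whether plain TF-IDF overlap is similar enough in meaning for real articles is an open question; embeddings would be the next thing to try.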
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a model based on doc2vec trained on multiple documents. I would like to use that model to infer the vectors of another document, which I want to use as the corpus for comparison. So, when I look for the most similar sentence to one I introduce, it uses this new document vectors instead of the trained corpus.
Currently, I am using infer_vector() to compute the vector for each of the sentences of the new document, but I can't use the most_similar() function with the list of vectors I obtain; it has to be a KeyedVectors instance.
I would like to know if there's any way that I can compute these vectors for the new document that will allow the use of the most_similar() function, or if I have to compute the similarity between each one of the sentences of the new document and the sentence I introduce individually (in this case, is there any implementation in Gensim that allows me to compute the cosine similarity between 2 vectors?).
I am new to Gensim and NLP, and I'm open to your suggestions.
I can not provide the complete code, since it is a project for the university, but here are the main parts in which I'm having problems.
After doing some pre-processing of the data, this is how I train my model:
documents = [TaggedDocument(doc, [i]) for i, doc in enumerate(train_data)]
assert gensim.models.doc2vec.FAST_VERSION > -1
cores = multiprocessing.cpu_count()
doc2vec_model = Doc2Vec(vector_size=200, window=5, workers=cores)
doc2vec_model.build_vocab(documents)
doc2vec_model.train(documents, total_examples=doc2vec_model.corpus_count, epochs=30)
I try to compute the vectors for the new document this way:
questions = [doc2vec_model.infer_vector(line) for line in lines_4]
And then I try to compute the similarity between the new document vectors and an input phrase:
text = str(input('Me: '))
tokens = text.split()
new_vector = doc2vec_model.infer_vector(tokens)
index = questions[i].most_similar([new_vector])
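In case it helps to make the last step concrete: `infer_vector` returns plain numpy arrays, so cosine similarity can be computed directly with numpy, sidestepping `most_similar`'s KeyedVectors requirement. Toy 2-d vectors stand in for the inferred ones here:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-d vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

questions = [np.array([1.0, 0.0]), np.array([0.6, 0.8])]  # stand-ins
new_vector = np.array([0.8, 0.6])

# Index of the stored sentence most similar to the input phrase.
best = max(range(len(questions)), key=lambda i: cosine(questions[i], new_vector))
print(best)  # 1
```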
| 1 | 1 | 0 | 0 | 0 | 0 |
I want to extract a certain part of a letter from a txt file with Python. The beginning and the end are marked by clear expressions (letter_begin / letter_end). My problem is that the "recording" of the text needs to start at the very first occurrence of any item in the letter_begin list and end at the very last occurrence of any item in the letter_end list (+3 lines of buffer). I want to write the output text to a file. Here is my sample text and my code so far:
sample_text = """Some random text right here
.........
Dear Shareholders: We are pleased to provide this report to our shareholders and fellow shareholders. we thank you for your continued support.
Best regards,
Douglas - Director
Other random text in this lines """
letter_begin = ["dear", "to our shareholders", "fellow shareholders"]
letter_end = ["best regards", "respectfully submitted", "thank you for your continued support"]
with open(filename, 'r', encoding="utf-8") as infile, open("xyz.txt", mode='w', encoding="utf-8") as f:
    text = infile.read()
    lines = text.strip().split("\n")
    target_start_idx = None
    target_end_idx = None
    for index, line in enumerate(lines):
        line = line.lower()
        # only the FIRST begin-marker sets the start index
        if target_start_idx is None and any(beg in line for beg in letter_begin):
            target_start_idx = index
            continue
        # no break here: keep updating, so the LAST end-marker wins
        if any(end in line for end in letter_end):
            target_end_idx = index + 3
    if target_start_idx is not None:
        target = "\n".join(lines[target_start_idx:target_end_idx])
        f.write(target)
my desired output should be:
output = "Dear Shareholders: We are pleased to provide this report to our shareholders and fellow shareholders. we thank you for your continued support.
Best regards,
Douglas - Director
"
| 1 | 1 | 0 | 0 | 0 | 0 |
I am doing sentiment analysis on given documents. My goal is to find the closest or surrounding adjective words with respect to a target phrase in my sentences. I have an idea of how to extract surrounding words with respect to target phrases, but how do I find the relatively close or closest adjective (or NNP or VBN or other POS tag) with respect to the target phrase?
Here is the sketch of the idea of how I may get words surrounding my target phrase.
sentence_List= {"Obviously one of the most important features of any computer is the human interface.", "Good for everyday computing and web browsing.",
"My problem was with DELL Customer Service", "I play a lot of casual games online[comma] and the touchpad is very responsive"}
target_phraseList={"human interface","everyday computing","DELL Customer Service","touchpad"}
Note that my original dataset was given as dataframe where the list of the sentence and respective target phrases were given. Here I just simulated data as follows:
import pandas as pd
df=pd.Series(sentence_List, target_phraseList)
df=pd.DataFrame(df)
Here I tokenize the sentence as follow:
from nltk.tokenize import word_tokenize
tokenized_sents = [word_tokenize(i) for i in sentence_List]
tokenized=[i for i in tokenized_sents]
Then I try to find surrounding words with respect to my target phrases by following the approach linked here. However, I want to find the relatively closer or closest adjective, verb, or VBN with respect to my target phrase. How can I make this happen? Any idea to get this done? Thanks
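To make the "closest POS tag" part concrete, here is a hedged sketch on hand-tagged tokens (with spaCy these would come from `token.text` / `token.pos_`): locate the target phrase's token span, then pick the word with a wanted tag at minimal distance from that span.

```python
def closest_tag(tokens, tags, target, wanted=("ADJ",)):
    """Word with a wanted POS tag nearest to the target phrase's token span."""
    words = target.split()
    start = next(i for i in range(len(tokens) - len(words) + 1)
                 if tokens[i:i + len(words)] == words)
    end = start + len(words) - 1
    candidates = [(min(abs(i - start), abs(i - end)), tokens[i])
                  for i, t in enumerate(tags) if t in wanted]
    return min(candidates)[1] if candidates else None

tokens = ["the", "touchpad", "is", "very", "responsive"]
tags   = ["DET", "NOUN", "AUX", "ADV", "ADJ"]
print(closest_tag(tokens, tags, "touchpad"))                          # 'responsive'
print(closest_tag(tokens, tags, "touchpad", wanted=("VERB", "AUX")))  # 'is'
```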
| 1 | 1 | 0 | 0 | 0 | 0 |
I am new to python and I have a dataset that looks like this
I am extracting the reviews from the dataset and trying to apply the VADER tool to check the sentiment weights associated with each review. I can successfully retrieve the reviews but am unable to apply VADER to each review as a whole. This is the code:
import nltk
import requirements_elicitation
from nltk.sentiment.vader import SentimentIntensityAnalyzer
c = requirements_elicitation.read_reviews("D:\\Python\\testml\\my-tracks-reviews.csv")
class SentiFind:
def __init__(self, review):
self.review = review
for review in c:
review = review.comment
print(review)
sid = SentimentIntensityAnalyzer()
for i in review:
print(i)
ss = sid.polarity_scores(i)
for k in sorted(ss):
print('{0}: {1}, '.format(k, ss[k]), end='')
print()
Sample output:
g
compound: 0.0, neg: 0.0, neu: 0.0, pos: 0.0,
r
compound: 0.0, neg: 0.0, neu: 0.0, pos: 0.0,
e
compound: 0.0, neg: 0.0, neu: 0.0, pos: 0.0,
a
compound: 0.0, neg: 0.0, neu: 0.0, pos: 0.0,
t
compound: 0.0, neg: 0.0, neu: 0.0, pos: 0.0,
compound: 0.0, neg: 0.0, neu: 0.0, pos: 0.0,
a
compound: 0.0, neg: 0.0, neu: 0.0, pos: 0.0,
p
compound: 0.0, neg: 0.0, neu: 0.0, pos: 0.0,
p
I also need to customize the labels for each review, to something like this:
"Total weight: {0}, Negative: {1}, Neutral: {2}, Positive: {3}".
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm building a RNN loosely based on the TensorFlow tutorial.
The relevant parts of my model are as follows:
input_sequence = tf.placeholder(tf.float32, [BATCH_SIZE, TIME_STEPS, PIXEL_COUNT + AUX_INPUTS])
output_actual = tf.placeholder(tf.float32, [BATCH_SIZE, OUTPUT_SIZE])
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(CELL_SIZE, state_is_tuple=False)
stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * CELL_LAYERS, state_is_tuple=False)
initial_state = state = stacked_lstm.zero_state(BATCH_SIZE, tf.float32)
outputs = []
with tf.variable_scope("LSTM"):
for step in xrange(TIME_STEPS):
if step > 0:
tf.get_variable_scope().reuse_variables()
cell_output, state = stacked_lstm(input_sequence[:, step, :], state)
outputs.append(cell_output)
final_state = state
And the feeding:
cross_entropy = tf.reduce_mean(-tf.reduce_sum(output_actual * tf.log(prediction), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(output_actual, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
numpy_state = initial_state.eval()
for i in xrange(1, ITERATIONS):
batch = DI.next_batch()
print i, type(batch[0]), np.array(batch[1]).shape, numpy_state.shape
if i % LOG_STEP == 0:
train_accuracy = accuracy.eval(feed_dict={
initial_state: numpy_state,
input_sequence: batch[0],
output_actual: batch[1]
})
print "Iteration " + str(i) + " Training Accuracy " + str(train_accuracy)
numpy_state, train_step = sess.run([final_state, train_step], feed_dict={
initial_state: numpy_state,
input_sequence: batch[0],
output_actual: batch[1]
})
When I run this, I get the following error:
Traceback (most recent call last):
File "/home/agupta/Documents/Projects/Image-Recognition-with-LSTM/RNN/feature_tracking/model.py", line 109, in <module>
output_actual: batch[1]
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 698, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 838, in _run
fetch_handler = _FetchHandler(self._graph, fetches)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 355, in __init__
self._fetch_mapper = _FetchMapper.for_fetch(fetches)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 181, in for_fetch
return _ListFetchMapper(fetch)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 288, in __init__
self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 178, in for_fetch
(fetch, type(fetch)))
TypeError: Fetch argument None has invalid type <type 'NoneType'>
Perhaps the weirdest part is that this error gets thrown on the second iteration; the first works completely fine. I'm tearing my hair out trying to fix this, so any help would be greatly appreciated.
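A hypothesis that matches the "works once, fails on iteration two" symptom: `sess.run` returns `None` for an operation fetch, and the line `numpy_state, train_step = sess.run([final_state, train_step], ...)` rebinds the Python name `train_step` to that `None`, so the second call fetches `None`. A stand-in reproduction of the name-shadowing (no TensorFlow needed):

```python
def fake_session_run(fetches):
    # tensors yield values; operations yield None, mirroring tf.Session.run
    return ["value" if f == "tensor" else None for f in fetches]

train_step = "op"                       # stands in for the training op
numpy_state, train_step = fake_session_run(["tensor", train_step])
print(train_step)  # None -- the next run call would be handed None
# The fix is to unpack into a throwaway name instead:
#     numpy_state, _ = sess.run([final_state, train_step], feed_dict=...)
```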
| 1 | 1 | 0 | 0 | 0 | 0 |
I am facing difficulty using a Keras embedding layer with one-hot encoding of my input data.
Following is the toy code.
Import packages
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
import matplotlib.pyplot as plt
import numpy as np
import openpyxl
import pandas as pd
from keras.callbacks import ModelCheckpoint
from keras.callbacks import ReduceLROnPlateau
The input data is text based as follows.
Train and Test data
X_train_orignal= np.array(['OC(=O)C1=C(Cl)C=CC=C1Cl', 'OC(=O)C1=C(Cl)C=C(Cl)C=C1Cl',
'OC(=O)C1=CC=CC(=C1Cl)Cl', 'OC(=O)C1=CC(=CC=C1Cl)Cl',
'OC1=C(C=C(C=C1)[N+]([O-])=O)[N+]([O-])=O'])
X_test_orignal=np.array(['OC(=O)C1=CC=C(Cl)C=C1Cl', 'CCOC(N)=O',
'OC1=C(Cl)C(=C(Cl)C=C1Cl)Cl'])
Y_train=np.array(([[2.33],
[2.59],
[2.59],
[2.54],
[4.06]]))
Y_test=np.array([[2.20],
[2.81],
[2.00]])
Creating dictionaries
Now I create two dictionaries, mapping characters to indices and vice versa. The number of unique characters is stored in len(charset), and the maximum string length plus 5 additional characters is stored in embed. The start of each string will be padded with ! and the end with E.
charset = set("".join(list(X_train_orignal))+"!E")
char_to_int = dict((c,i) for i,c in enumerate(charset))
int_to_char = dict((i,c) for i,c in enumerate(charset))
embed = max([len(smile) for smile in X_train_orignal]) + 5
print (str(charset))
print(len(charset), embed)
One hot encoding
I convert all the train data into one hot encoding as follows.
def vectorize(smiles):
one_hot = np.zeros((smiles.shape[0], embed , len(charset)),dtype=np.int8)
for i,smile in enumerate(smiles):
#encode the startchar
one_hot[i,0,char_to_int["!"]] = 1
#encode the rest of the chars
for j,c in enumerate(smile):
one_hot[i,j+1,char_to_int[c]] = 1
#Encode endchar
one_hot[i,len(smile)+1:,char_to_int["E"]] = 1
return one_hot[:,0:-1,:]
X_train = vectorize(X_train_orignal)
print(X_train.shape)
X_test = vectorize(X_test_orignal)
print(X_test.shape)
When the input train data is converted to one-hot encoding, the shape of the encoded data becomes (5, 44, 14) for train and (3, 44, 14) for test. For train, there are 5 examples, 44 is the maximum length, and 14 is the number of unique characters. Examples with fewer characters are padded with E up to the maximum length.
Verifying the correct padding
Following is the code to verify if we have done the padding rightly.
mol_str_train=[]
mol_str_test=[]
for x in range(5):
mol_str_train.append("".join([int_to_char[idx] for idx in np.argmax(X_train[x,:,:], axis=1)]))
for x in range(3):
mol_str_test.append("".join([int_to_char[idx] for idx in np.argmax(X_test[x,:,:], axis=1)]))
and let's see, how the train set looks like.
mol_str_train
['!OC(=O)C1=C(Cl)C=CC=C1ClEEEEEEEEEEEEEEEEEEEE',
'!OC(=O)C1=C(Cl)C=C(Cl)C=C1ClEEEEEEEEEEEEEEEE',
'!OC(=O)C1=CC=CC(=C1Cl)ClEEEEEEEEEEEEEEEEEEEE',
'!OC(=O)C1=CC(=CC=C1Cl)ClEEEEEEEEEEEEEEEEEEEE',
'!OC1=C(C=C(C=C1)[N+]([O-])=O)[N+]([O-])=OEEE']
Now it is time to build the model.
Model
model = Sequential()
model.add(Embedding(len(charset), 10, input_length=embed))
model.add(Flatten())
model.add(Dense(1, activation='linear'))
def coeff_determination(y_true, y_pred):
from keras import backend as K
SS_res = K.sum(K.square( y_true-y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
def get_lr_metric(optimizer):
def lr(y_true, y_pred):
return optimizer.lr
return lr
optimizer = Adam(lr=0.00025)
lr_metric = get_lr_metric(optimizer)
model.compile(loss="mse", optimizer=optimizer, metrics=[coeff_determination, lr_metric])
callbacks_list = [
ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=1e-15, verbose=1, mode='auto',cooldown=0),
ModelCheckpoint(filepath="weights.best.hdf5", monitor='val_loss', save_best_only=True, verbose=1, mode='auto')]
history =model.fit(x=X_train, y=Y_train,
batch_size=1,
epochs=10,
validation_data=(X_test,Y_test),
callbacks=callbacks_list)
Error
ValueError: Error when checking input: expected embedding_3_input to have 2 dimensions, but got array with shape (5, 44, 14)
The embedding layer expects a two-dimensional array. How can I deal with this issue so that it accepts the one-hot encoded data?
All the above code can be run.
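A Keras `Embedding` layer performs the one-hot lookup internally, so it wants integer indices of shape (samples, sequence_length), not one-hot tensors. One hedged way out is to convert the existing one-hot array back to indices with `argmax` — a sketch with a tiny hypothetical array (2 samples, length 4, 3 characters) standing in for the question's (5, 44, 14) data:

```python
import numpy as np

# build a toy one-hot array equivalent in structure to X_train above
idx = np.array([[0, 2, 1, 1],
                [1, 0, 2, 0]])
one_hot = np.zeros((2, 4, 3), dtype=np.int8)
for i in range(2):
    for j in range(4):
        one_hot[i, j, idx[i, j]] = 1

# an Embedding layer expects the integer indices, shape (samples, seq_len)
X_train_indices = np.argmax(one_hot, axis=-1)
```

Feeding `X_train_indices` (shape `(samples, seq_len)`) to `model.fit` then matches `Embedding(input_length=seq_len)`; alternatively, the `vectorize` function could be changed to emit indices directly and skip the one-hot step entirely.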
| 1 | 1 | 0 | 1 | 0 | 0 |
I am using countvectorizer to extract features, and I am wondering if I can scale the features. With the code below I am wondering if I can do some scaling using StandardScaler.
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
x_training=vectorizer.fit_transform(df ['var'])
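One point worth noting: `CountVectorizer` returns a sparse matrix, and `StandardScaler` accepts sparse input only with `with_mean=False` (mean-centering would destroy sparsity). A sketch with a made-up toy corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import StandardScaler

docs = ["red red blue", "blue green", "green green red"]  # toy corpus

vectorizer = CountVectorizer()
x_counts = vectorizer.fit_transform(docs)  # sparse count matrix

# with_mean=False is required for sparse input: it divides by the
# per-feature standard deviation but skips mean-centering
scaler = StandardScaler(with_mean=False)
x_scaled = scaler.fit_transform(x_counts)
```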
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a text file as follows:
Sentence:1 Polarity:N 5puan verdim o da anistonun güzel yüzünün hatırına.
Sentence:2 Polarity:N son derece sıkıcı bir filim olduğunu söyleyebilirim.
Sentence:3 Polarity:N ..saçma bir konuyu nasılda filim yapmışlar maşallah
Sentence:4 Polarity:P bence hoş vakit geçirmek için seyredilebilir.
Sentence:5 Polarity:P hoş ve sevimli bir film.
Sentence:6 Polarity:O eşcinsellere pek sempati duymamakla beraber bu filmde sanki onları sevimli göstermeye çalışmışlar gibi geldi.
Sentence:7 Polarity:O itici bir film değildi sonuçta.
Sentence:8 Polarity:N seyrederken bu kadar sinirlendiğim film hatırlamıyorum.
Sentence:9 Polarity:O J.Aniston ın hiç mi umut yok diye sorduğu sahnede kıracaktım televizyonu!
Sentence:10 Polarity:O kimse yazmamış ben yazıyım:)
Sentence:11 Polarity:P güzel bi pazar günü şirin bi film izlemek isteyenler için çok güzel.
I want to split this data in to a table like this:
Sentence_No - Sentence_Polarity - Sentence_txt
1 - N - 5puan verdim o da anistonun güzel yüzünün hatırına.
2 - N - son derece sıkıcı bir filim olduğunu söyleyebilirim.
3 - N - ..saçma bir konuyu nasılda filim yapmışlar maşallah
4 - P - bence hoş vakit geçirmek için seyredilebilir.
So I think I need to extract the part after "Sentence:", the part after "Polarity:", and the remaining text. I want it this way so I can classify the data.
I wrote the code below but it is not working for this purpose:
df = pd.read_csv('SU-Movie-Reviews-Sentences.txt', lineterminator='\n', names=['Sentence_No', 'Sentence_Polarity', 'Sentence_txt'])
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm using spaCy to do sentence segmentation on texts that using paragraph numbering, for example:
text = '3. English law takes a dim view of stealing stuff from the shops. Some may argue that this is a pity.'
I'm trying to force spaCy's sentence segmenter to not split the 3. into a sentence of its own.
At the moment, the following code returns three separate sentences:
nlp = spacy.load("en_core_web_sm")
text = """3. English law takes a dim view of stealing stuff from the shops. Some may argue that this is a pity."""
doc = nlp(text)
for sent in doc.sents:
print("****", sent.text)
This returns:
**** 3.
**** English law takes a dim view of stealing stuff from the shops.
**** Some may argue that this is a pity.
I've been trying to stop this from happening by passing a custom rule into the pipeline before the parser:
if token.text == r'\d\.':
doc[token.i+1].is_sent_start = False
This doesn't seem to have any effect. Has anyone come across this problem before?
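Note that `token.text == r'\d\.'` compares the token text against the literal string `\d\.`; string equality never runs a regex. A sketch of the predicate done with `re`, with the spaCy wiring indicated in comments (it needs a loaded model to run):

```python
import re

NUM_DOT = re.compile(r"^\d+\.$")  # matches tokens like "3."

def is_paragraph_number(text):
    """True for tokens such as '3.' that open a numbered paragraph."""
    return bool(NUM_DOT.match(text))

# In spaCy this predicate would drive a custom component added BEFORE
# the parser, roughly (spaCy 2.x style):
#   def set_boundaries(doc):
#       for token in doc[:-1]:
#           if is_paragraph_number(token.text):
#               doc[token.i + 1].is_sent_start = False
#       return doc
#   nlp.add_pipe(set_boundaries, before="parser")
```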
| 1 | 1 | 0 | 0 | 0 | 0 |
After tokenizing, my sentence contains many weird characters. How can I remove them?
This is my code:
def summary(filename, method):
list_names = glob.glob(filename)
orginal_data = []
topic_data = []
print(list_names)
for file_name in list_names:
article = []
article_temp = io.open(file_name,"r", encoding = "utf-8-sig").readlines()
for line in article_temp:
print(line)
if (line.strip()):
tokenizer =nltk.data.load('tokenizers/punkt/english.pickle')
sentences = tokenizer.tokenize(line)
print(sentences)
article = article + sentences
orginal_data.append(article)
topic_data.append(preprocess_data(article))
if (method == "orig"):
summary = generate_summary_origin(topic_data, 100, orginal_data)
elif (method == "best-avg"):
summary = generate_summary_best_avg(topic_data, 100, orginal_data)
else:
summary = generate_summary_simplified(topic_data, 100, orginal_data)
return summary
The print(line) prints a line of a txt. And print(sentences) prints the tokenized sentences in the line.
But sometimes the sentences contain weird characters after nltk's processing.
Assaly, who is a fan of both Pusha T and Drake, said he and his friends
wondered if people in the crowd might boo Pusha T during the show, but
said he never imagined actual violence would take place.
[u'Assaly, who is a fan of both Pusha T and Drake, said he and his
friends wondered if people in\xa0the crowd might boo Pusha\xa0T during
the show, but said he never imagined actual violence would take
place.']
As in the example above, where do the \xa0 and \xa0T come from?
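`\xa0` is a non-breaking space (U+00A0) that was already present in the source text; NLTK preserves it, and the repr of the list merely makes it visible. One hedged way to clean it is Unicode normalization before tokenizing:

```python
import unicodedata

line = "people in\xa0the crowd might boo Pusha\xa0T"

# NFKC normalization folds compatibility characters such as the
# non-breaking space (U+00A0) into their plain ASCII equivalents
clean = unicodedata.normalize("NFKC", line)
# a simple replacement works too: line.replace("\xa0", " ")
```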
| 1 | 1 | 0 | 0 | 0 | 0 |
I have 8 classes that I want to predict from input text. Here is my code for preprocessing the data:
num_max = 1000
tok = Tokenizer(num_words=num_max)
tok.fit_on_texts(x_train)
mat_texts = tok.texts_to_matrix(x_train,mode='count')
num_max = 1000
tok = Tokenizer(num_words=num_max)
tok.fit_on_texts(x_train)
max_len = 100
cnn_texts_seq = tok.texts_to_sequences(x_train)
print(cnn_texts_seq[0])
[12, 4, 303]
# padding the sequences
cnn_texts_mat = sequence.pad_sequences(cnn_texts_seq,maxlen=max_len)
print(cnn_texts_mat[0])
print(cnn_texts_mat.shape)
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 12 4 303]
(301390, 100)
Below is the structure of my model which contains an embedding layer:
max_features = 20000
max_features = cnn_texts_mat.shape[1]
maxlen = 100
embedding_size = 128
model = Sequential()
model.add(Embedding(max_features, embedding_size, input_length=maxlen))
model.add(Dropout(0.2))
model.add(Dense(5000, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(600, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(units=y_train.shape[1], activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy',
optimizer=sgd)
Below is the model summary:
model.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_5 (Embedding) (None, 100, 128) 12800
_________________________________________________________________
dropout_13 (Dropout) (None, 100, 128) 0
_________________________________________________________________
dense_13 (Dense) (None, 100, 5000) 645000
_________________________________________________________________
dropout_14 (Dropout) (None, 100, 5000) 0
_________________________________________________________________
dense_14 (Dense) (None, 100, 600) 3000600
_________________________________________________________________
dropout_15 (Dropout) (None, 100, 600) 0
_________________________________________________________________
dense_15 (Dense) (None, 100, 8) 4808
=================================================================
Total params: 3,663,208
Trainable params: 3,663,208
Non-trainable params: 0
After this, I get the following error when I try to run the model:
model.fit(x=cnn_texts_mat, y=y_train, epochs=2, batch_size=100)
ValueError Traceback (most recent call last)
<ipython-input-41-4b9da9914e7e> in <module>
----> 1 model.fit(x=cnn_texts_mat, y=y_train, epochs=2, batch_size=100)
~/.local/lib/python3.5/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
950 sample_weight=sample_weight,
951 class_weight=class_weight,
--> 952 batch_size=batch_size)
953 # Prepare validation data.
954 do_validation = False
~/.local/lib/python3.5/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
787 feed_output_shapes,
788 check_batch_axis=False, # Don't enforce the batch size.
--> 789 exception_prefix='target')
790
791 # Generate sample-wise weight values given the `sample_weight` and
~/.local/lib/python3.5/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
126 ': expected ' + names[i] + ' to have ' +
127 str(len(shape)) + ' dimensions, but got array '
--> 128 'with shape ' + str(data_shape))
129 if not check_batch_axis:
130 data_shape = data_shape[1:]
ValueError: Error when checking target: expected dense_15 to have 3 dimensions, but got array with shape (301390, 8)
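The shape mismatch arises because a `Dense` layer applied to a 3D (batch, steps, features) input maps only the last axis and keeps the steps axis, so the final layer emits (None, 100, 8) while the targets have shape (301390, 8). A `Flatten()` layer after the `Embedding` collapses the sequence into one vector per sample. A numpy sketch of the shape arithmetic (Keras itself is not needed to see it):

```python
import numpy as np

batch, steps, embed_dim, n_classes = 4, 100, 128, 8

# what the Embedding layer emits
embedded = np.zeros((batch, steps, embed_dim))

# Dense on a 3D tensor maps only the LAST axis -> (batch, steps, n_classes)
dense_on_3d_shape = embedded.shape[:-1] + (n_classes,)

# Flatten() first collapses (steps, embed_dim) into one vector per sample,
# so a following Dense gives the wanted (batch, n_classes)
flattened = embedded.reshape(batch, steps * embed_dim)
dense_after_flatten_shape = (flattened.shape[0], n_classes)
```

In the model this amounts to inserting `model.add(Flatten())` right after the `Embedding`; for an 8-class softmax output, `categorical_crossentropy` is also usually a better fit than `binary_crossentropy`.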
| 1 | 1 | 0 | 1 | 0 | 0 |
Actually, I do not quite understand tokens.
When reading the googleresearch/bert model, I see these comments.
# In the demo, we are doing a simple classification task on the entire
# segment.
#
# If you want to use the token-level output, use model.get_sequence_output() # instead.
Can anyone give an example of token-level versus segment-level classification?
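Roughly: `get_sequence_output()` gives one vector per token, shape (batch, seq_len, hidden) — the right input for token-level tasks such as NER, where every token gets its own label — while segment-level classification (e.g. sentiment of the whole input) uses a single vector per input, shape (batch, hidden). A numpy sketch of the shape difference; mean pooling stands in here for BERT's actual [CLS]-based pooling, and the sizes are toy values:

```python
import numpy as np

batch, seq_len, hidden = 2, 8, 4  # toy sizes, not BERT's real ones
sequence_output = np.random.rand(batch, seq_len, hidden)

# token-level classification: one prediction per token,
# so the classifier sees shape (batch, seq_len, hidden)
token_level_shape = sequence_output.shape

# segment-level classification: collapse the token axis into one vector
# per segment (BERT uses the [CLS] token; mean pooling is a stand-in)
segment_vector = sequence_output.mean(axis=1)  # shape (batch, hidden)
```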
| 1 | 1 | 0 | 0 | 0 | 0 |
I am looking to design a system that will essentially need to make decisions based on input. The input will be a person.
class Person:
def __init__(self, name, age, sex, weight, height, nationality):
self.name = name
self.age = age
self.sex = sex
self.weight = weight
self.height = height
self.nationality = nationality
We want to assign each person to a school class based on certain rules.
For example:
Women from the UK between 22-25 should go to class B.
Men over 75 should go to class A.
Women over 6ft should go to class C.
We will have approximately 400 different rules and the first rule that is met should be applied - we need to maintain the order of the rules.
I am thinking about how to store/represent the rules here. Obviously, you could just have a very long if/elif chain, but this isn't efficient. Another option would be storing the rules in a database and maybe having an in-memory table.
I would like to be able to edit the rules without doing a release - possibly having a front end to allow non-technical people to add, remove, and reorder rules.
Everything is on the table here - the only certain requirement is the actually programming language must be Python.
Added for further context
I suppose my question is how to store the rules. At the moment it is one huge if/elif statement, so any time there is a change to the business logic the PM writes up the new rules and I then convert them into the if statement.
All inputs to the system will be sent through the same list of rules and the first rule that matches will be applied. Multiple rules can apply to each input but it's always the first that is applied.
e.g.
Women over 25 go to Class B
Women go to Class A.
Any women over 25 will be sent to class B even though the second rule also applies.
Input will always contain the same format input - haven't decided where it will be an object or a dict but some of the values may be None. Some Persons may not have a weight associated with them.
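One common pattern (a sketch, with hypothetical rules) is an ordered list of (predicate, class) pairs where the first matching predicate wins. This preserves the ordering requirement, and because each row is just data, the list could equally be loaded from a database table that a front end edits:

```python
# Ordered rule table: the first predicate that returns True decides the
# class. Predicates are lambdas here for brevity; in practice each could
# be built from a row stored in a database so non-technical users can
# add, remove, and reorder them.
rules = [
    (lambda p: p["sex"] == "F" and p["age"] > 25, "B"),
    (lambda p: p["sex"] == "F", "A"),
]

def assign_class(person, rules, default=None):
    for predicate, school_class in rules:
        if predicate(person):
            return school_class  # first match wins, later rules ignored
    return default

older_woman = {"sex": "F", "age": 30}
younger_woman = {"sex": "F", "age": 20}
```

This reproduces the example in the question: a woman over 25 lands in class B even though the second rule also applies.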
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to assign the result of my function to a variable, but when I print the assigned variable it comes out as None. How do I save and print out page_content outside of the function? See code below:
def mpdf(pdf):
pdfName = pdf
read_pdf = PyPDF2.PdfFileReader(pdfName)
for i in range(read_pdf.getNumPages()):
page = read_pdf.getPage(i)
print ('Page No - ' + str(1+read_pdf.getPageNumber(page)))
page_content = page.extractText()
print ((page_content))
df=mpdf('sample.pdf')
print(df)
Output>>>None
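The function only prints; it never returns anything, so Python implicitly returns None. A sketch of the accumulate-and-return pattern — a simple list of page texts stands in for PyPDF2 here so the shape of the fix is visible without a PDF file:

```python
def mpdf(pages):
    """Collect the text of every page and RETURN it instead of printing.

    `pages` stands in for the PDF's pages; with PyPDF2 it would come from
    read_pdf.getPage(i).extractText() for each i in range(getNumPages()).
    """
    page_contents = []
    for page_text in pages:
        page_contents.append(page_text)
    return "\n".join(page_contents)  # the missing `return` was the bug

df = mpdf(["page one text", "page two text"])
```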
| 1 | 1 | 0 | 0 | 0 | 0 |
I have the following dataframe with data:
index field1 field2 field3
1079 COMPUTER long text.... 3
Field1 is a category and field2 is a description and field3 is just an integer representation of field1.
I am using the following code to learn field2 to category mappings with sklearn:
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
X_train, X_test, y_train, y_test = train_test_split(df['Text'], df['category_id'], random_state = 0)
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
clf = MultinomialNB().fit(X_train_tfidf, y_train)
After I trained the model I can use it to predict a category and it works well. However, I would like to evaluate the model using the test set.
X_test_counts = count_vect.fit_transform(X_test)
X_test_tfidf = tfidf_transformer.fit_transform(X_test_counts)
clf.score(X_test_tfidf, y_test)
It throws the following error:
ValueError: dimension mismatch
Is there a way test the model and get the score or accuracy with such dataset?
UPDATE: Adding similar transformation to the test set.
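The dimension mismatch comes from calling `fit_transform` again on the test set, which learns a new vocabulary of a different size. The vectorizer and transformer fitted on the training data should only `transform` the test data — a sketch with toy documents:

```python
from sklearn.feature_extraction.text import CountVectorizer

train_docs = ["computer broke again", "invoice is overdue"]  # toy stand-ins
test_docs = ["computer invoice"]

count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(train_docs)  # learn vocabulary

# transform (NOT fit_transform) reuses the training vocabulary, so the
# column count matches what the classifier was trained on
X_test_counts = count_vect.transform(test_docs)
```

The same applies to the TF-IDF step: `tfidf_transformer.transform(X_test_counts)`, not `fit_transform`.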
| 1 | 1 | 0 | 0 | 0 | 0 |
For example, the sentence is "The corporate balance sheets data are available on an annual basis", and I need to label "corporate balance sheets", which is a substring found in the given sentence.
So, the pattern that I need to find is:
"corporate balance sheets"
Given the string:
"The corporate balance sheets data are available on an annual basis".
The output label sequence I want will be:
[0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
There are a bunch of sentences (more than 2 GB) and a bunch of patterns I need to find. I have no idea how to do this efficiently in Python. Can someone suggest a good algorithm?
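A hedged sketch of the labeling itself, as token-window matching. For the many-patterns-over-gigabytes case, an Aho-Corasick automaton (e.g. the pyahocorasick package) that finds all patterns in a single pass scales much better, but the window version shows how the 0/1 sequence is produced:

```python
def label_phrase(sentence, phrase):
    """Return one 0/1 label per token, 1 where the phrase's tokens occur."""
    tokens = sentence.split()
    target = phrase.split()
    labels = [0] * len(tokens)
    # slide a window of len(target) over the token list
    for i in range(len(tokens) - len(target) + 1):
        if tokens[i:i + len(target)] == target:
            labels[i:i + len(target)] = [1] * len(target)
    return labels

sentence = "The corporate balance sheets data are available on an annual basis"
labels = label_phrase(sentence, "corporate balance sheets")
```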
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to get the output as a string using LexRankSummarizer in the sumy library.
I am using the following code (pretty straightforward)
parser = PlaintextParser.from_string(text,Tokenizer('english'))
summarizer = LexRankSummarizer()
sum_1 = summarizer(parser.document,10)
sum_lex=[]
for sent in sum_1:
sum_lex.append(sent)
Using the above code I get output that is a list of Sentence objects rather than plain strings. Consider the summary below, generated from an input text:
The Mahājanapadas were sixteen kingdoms or oligarchic republics that existed in ancient India from the sixth to fourth centuries BCE.
Two of them were most probably ganatantras (republics) and others had forms of monarchy.
Using the above code I am getting an output as
sum_lex = [<Sentence: The Mahājanapadas were sixteen kingdoms or oligarchic republics that existed in ancient India from the sixth to fourth centuries BCE.>,
<Sentence: Two of them were most probably ganatantras (republics) and others had forms of monarchy.>]
However, if I use print(sent) I am getting proper output as given above.
How to tackle this issue?
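sumy's summarizer returns Sentence objects; `print(sent)` looks right because print calls `str()` on them. Converting explicitly when building the list gives plain strings — sketched here with a dummy class standing in for sumy's Sentence, since running sumy needs its tokenizer data:

```python
class Sentence:
    """Minimal stand-in for sumy's Sentence: stores text, prints as text."""
    def __init__(self, text):
        self._text = text
    def __str__(self):
        return self._text
    def __repr__(self):
        return "<Sentence: %s>" % self._text

sum_1 = [Sentence("Two of them were most probably ganatantras.")]

# the fix: append str(sent) instead of the Sentence object itself
sum_lex = [str(sent) for sent in sum_1]
```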
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to come up with a function that takes entries like
"businessidentifier", "firstname", "streetaddress"
and outputs
"business identifier", "first name", "street address"
This seems to be a fairly complicated problem involving NLP, since the function will have to iterate over a string and test against a vocabulary to see when it arrives at a word in the vocabulary, but for the first example "businessidentifier" might be seen first as "bus I ness identifier". Has anyone come across a function that accomplishes this task?
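With a vocabulary in hand this is the classic word-segmentation problem, solvable with dynamic-programming-style backtracking rather than full NLP machinery. A sketch with a toy vocabulary; trying longer words first avoids the "bus i ness" trap, though a production system would instead rank candidate segmentations by word frequency:

```python
def segment(text, vocab):
    """Split `text` into vocabulary words, preferring longer words first.

    Returns a list of words, or None if no segmentation exists.
    Longest-prefix-first with backtracking via recursion on the remainder.
    """
    if not text:
        return []
    for end in range(len(text), 0, -1):  # try the longest candidate first
        word = text[:end]
        if word in vocab:
            rest = segment(text[end:], vocab)
            if rest is not None:
                return [word] + rest
    return None

# toy vocabulary; deliberately includes the "bus"/"i"/"ness" trap
vocab = {"business", "identifier", "first", "name", "street", "address",
         "bus", "i", "ness"}
```

Joining the result with spaces gives the desired output, e.g. `" ".join(segment("businessidentifier", vocab))` → `"business identifier"`.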
| 1 | 1 | 0 | 0 | 0 | 0 |
I have already trained gensim doc2Vec model, which is finding most similar documents to an unknown one.
Now I need to find the similarity value between two unknown documents (which were not in the training data, so they can not be referenced by doc id)
d2v_model = doc2vec.Doc2Vec.load(model_file)
string1 = 'this is some random paragraph'
string2 = 'this is another random paragraph'
vec1 = d2v_model.infer_vector(string1.split())
vec2 = d2v_model.infer_vector(string2.split())
In the code above, vec1 and vec2 are successfully initialized to vectors of size vector_size.
Looking through the gensim API and examples, I could not find a method that works for me; all of them expect a TaggedDocument.
Can I compare the feature vectors value by value, so that the closer they are, the more similar the texts?
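Yes — the two inferred vectors can be compared directly; the usual metric for document vectors is cosine similarity (closer to 1.0 means more similar). A sketch with stand-in vectors in place of the `infer_vector` outputs:

```python
import numpy as np

def cosine_similarity(vec1, vec2):
    """Cosine of the angle between two vectors; 1.0 = identical direction."""
    return float(np.dot(vec1, vec2) /
                 (np.linalg.norm(vec1) * np.linalg.norm(vec2)))

# stand-ins for d2v_model.infer_vector(...) outputs
vec1 = np.array([1.0, 2.0, 3.0])
vec2 = np.array([2.0, 4.0, 6.0])  # same direction -> similarity 1.0
sim = cosine_similarity(vec1, vec2)
```

Equivalently, `1 - scipy.spatial.distance.cosine(vec1, vec2)` gives the same number.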
| 1 | 1 | 0 | 1 | 0 | 0 |
I am trying to write a regular expression in Python that only matches text consisting of English letters and more than 3 letters long. I tried:
regex = r'[a-z][a-z][a-z]+'
but it can't filter out strings like
how@@
Any ideas would be appreciated:)
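The pattern `[a-z][a-z][a-z]+` matches a 3+ letter substring anywhere in the string, so `how@@` still passes because `how` matches. Anchoring the pattern to the whole string (and allowing upper case) filters those out — a sketch:

```python
import re

# {3,} means "three or more"; fullmatch anchors to the WHOLE string
pattern = re.compile(r"[A-Za-z]{3,}")

def is_english_word(s):
    return pattern.fullmatch(s) is not None

# equivalent with explicit anchors: re.match(r'^[A-Za-z]{3,}$', s)
```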
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a folder containing many files. I want to parse every file, preprocess it, and write the tokens back into the same file they came from. Please help me with that.
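Without knowing the exact preprocessing, a sketch of the read-process-write-back loop; `str.split` is a placeholder tokenizer (swap in e.g. `nltk.word_tokenize`), and a temporary folder stands in for the real one:

```python
import glob
import os
import tempfile

def process_file(path, tokenize=str.split):
    """Read a file, tokenize it, and write the tokens back to the SAME file."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    tokens = tokenize(text)  # placeholder; real preprocessing goes here
    with open(path, "w", encoding="utf-8") as f:
        f.write(" ".join(tokens))
    return tokens

# demo on a temporary folder standing in for the real one
folder = tempfile.mkdtemp()
with open(os.path.join(folder, "a.txt"), "w", encoding="utf-8") as f:
    f.write("Hello   world\nthis is   a test")

all_tokens = {}
for path in glob.glob(os.path.join(folder, "*.txt")):
    all_tokens[path] = process_file(path)
```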
| 1 | 1 | 0 | 0 | 0 | 0 |
I am making a classifier based on a CNN model in Keras.
I will use it in an application, where the user can load the application and enter input text and the model will be loaded from the weights and make predictions.
The thing is, I am using GloVe embeddings, and the CNN model uses padded text sequences.
I used Keras tokenizer as following:
tokenizer = text.Tokenizer(num_words=max_features, lower=True, char_level=False)
tokenizer.fit_on_texts(list(train_x))
train_x = tokenizer.texts_to_sequences(train_x)
test_x = tokenizer.texts_to_sequences(test_x)
train_x = sequence.pad_sequences(train_x, maxlen=maxlen)
test_x = sequence.pad_sequences(test_x, maxlen=maxlen)
I trained the model and predicted on test data, but now I want to do the same with the loaded model, which I loaded and have working.
But my problem here is: if I provide a single review, it has to be passed through tokenizer.texts_to_sequences(), which returns a 2D array of shape (num_chars, max_length) and hence produces num_chars predictions, but I need it in (1, max_length) shape.
I am using the following code for prediction:
review = 'well free phone cingular broke stuck not abl offer kind deal number year contract up realli want razr so went look cheapest one could find so went came euro charger small adpat made fit american outlet, gillett fusion power replac cartridg number count packagemay not greatest valu out have agillett fusion power razor'
xtest = tokenizer.texts_to_sequences(review)
xtest = sequence.pad_sequences(xtest, maxlen=maxlen)
model.predict(xtest)
Output is:
array([[0.29289 , 0.36136267, 0.6205081 ],
[0.362869 , 0.31441122, 0.539749 ],
[0.32059124, 0.3231736 , 0.5552745 ],
...,
[0.34428033, 0.3363668 , 0.57663095],
[0.43134686, 0.33979046, 0.48991954],
[0.22115968, 0.27314988, 0.6188136 ]], dtype=float32)
I need a single prediction here array([0.29289 , 0.36136267, 0.6205081 ]) as I have a single review.
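`texts_to_sequences` expects a list of texts; given a bare string it iterates over its characters, which is why there is one prediction per character. Passing `[review]` yields a single sequence and hence shape (1, max_length) after padding. A pure-Python sketch of the behavior (Keras itself is not needed to see it; the toy function mirrors how the real Tokenizer treats its input as a list of texts):

```python
# toy stand-in for tokenizer.texts_to_sequences with a hypothetical vocab
vocab = {"well": 1, "free": 2, "phone": 3}

def texts_to_sequences(texts):
    return [[vocab[w] for w in text.split() if w in vocab] for text in texts]

review = "well free phone"

wrong = texts_to_sequences(review)    # iterates the CHARACTERS of the string
right = texts_to_sequences([review])  # one sequence for the one review
```

So the fix is `xtest = tokenizer.texts_to_sequences([review])` before `pad_sequences`.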
| 1 | 1 | 0 | 0 | 0 | 0 |
I am a newbie in Python. I have a problem: how do we read a .txt file into Python?
I have a .txt file with a lot of text inside it that I want to analyze with NLTK.
Can you tell me how to start analyzing the texts?
Thank you in advance
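A minimal sketch: read the file into a string, then hand it to NLTK. A temporary file stands in for your own .txt path here, and a simple `split()` stands in for `nltk.word_tokenize` (which additionally needs the punkt data downloaded):

```python
import tempfile

# create a stand-in .txt file; use your own file path instead
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False,
                                 encoding="utf-8") as f:
    f.write("This is some text to analyze with NLTK.")
    path = f.name

with open(path, encoding="utf-8") as f:
    raw = f.read()

tokens = raw.split()  # placeholder; with NLTK: nltk.word_tokenize(raw)
```

From there, NLTK analysis typically starts with e.g. `nltk.Text(tokens)` or a frequency distribution via `nltk.FreqDist(tokens)`.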
| 1 | 1 | 0 | 0 | 0 | 0 |
Here is my code:
import itertools
import numpy as np
sentences = '''
sam is red
hannah not red
hannah is green
bob is green
bob not red
sam not green
sarah is red
sarah not green'''.strip().split('\n')
is_green = np.asarray([[0, 1, 1, 1, 1, 0, 0, 0]], dtype='int32').T
for s, g in zip(sentences, is_green):
print(s, '->', g)
tokenize = lambda x: x.strip().lower().split(' ')
sentences_tokenized = [tokenize(sentence) for sentence in sentences]
words = set(itertools.chain(*sentences_tokenized))
word2idx = dict((v, i) for i, v in enumerate(words))
idx2word = list(words)
print('Vocabulary:')
print(word2idx, end='\n')
to_idx = lambda x: [word2idx[word] for word in x] # convert a list of words to a list of indices
sentences_idx = [to_idx(sentence) for sentence in sentences_tokenized]
sentences_array = np.asarray(sentences_idx, dtype='int32')
print('Sentences:')
print(sentences_array)
sentence_maxlen = 3
n_words = len(words)
n_embed_dims = 2
print('%d words per sentence, %d in vocabulary, %d dimensions for embedding' % (sentence_maxlen, n_words, n_embed_dims))
from keras.layers import Input, Embedding, merge, Flatten, Reshape, Lambda
import keras.backend as K
from keras.models import Model
input_sentence = Input(shape=(sentence_maxlen,), dtype='int32')
input_embedding = Embedding(n_words, n_embed_dims)(input_sentence)
avepool = Lambda(lambda x: K.mean(x, axis=1, keepdims=True), output_shape=lambda x: (x[0], 1))
color_prediction = avepool(Reshape((sentence_maxlen * n_embed_dims,))
(input_embedding))
predict_green = Model(inputs=[input_sentence], outputs=[color_prediction])
predict_green.compile(optimizer='sgd', loss='binary_crossentropy')
predict_green.fit([sentences_array], [is_green], epochs=5000, verbose=1)
embeddings = predict_green.layers[0].W.get_values()
While running this code I am getting the following error:
AttributeError: 'InputLayer' object has no attribute 'W'
What does this error mean here? How to overcome this?
Python:3.6, Keras: 2.2.4 & 2.2.0, backend: Theano.
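In Keras 2, layer weights are read with `get_weights()`, not the old `.W` attribute — and `layers[0]` here is the InputLayer, which has no weights at all; the Embedding is `layers[1]`, so the likely fix is `predict_green.layers[1].get_weights()[0]`. A dummy-class sketch of that API difference (Keras itself is not imported so the snippet stays self-contained):

```python
class DummyLayer:
    """Stand-in showing the Keras 2 weights API: get_weights(), not .W."""
    def __init__(self, weights):
        self._weights = weights
    def get_weights(self):
        return list(self._weights)

input_layer = DummyLayer([])                  # InputLayer: no weights
embedding_layer = DummyLayer([[[0.1, 0.2]]])  # Embedding: weights[0] = matrix

layers = [input_layer, embedding_layer]
embeddings = layers[1].get_weights()[0]       # index 1, not 0
```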
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to find the most frequent words in each row of a tokenized DataFrame, as follows:
print(df.tokenized_sents)
['apple', 'inc.', 'aapl', 'reported', 'fourth', 'consecutive', 'quarter', 'record', 'revenue', 'profit', 'combination', 'higher', 'iphone', 'prices', 'strong', 'app-store', 'sales', 'propelled', 'technology', 'giant', 'best', 'year', 'ever', 'revenue', 'three', 'months', 'ended', 'sept.']
['brussels', 'apple', 'inc.', 'aapl', '-.', 'chief', 'executive', 'tim', 'cook', 'issued', 'tech', 'giants', 'strongest', 'call', 'yet', 'u.s.-wide', 'data-protection', 'regulation', 'saying', 'individuals', 'personal', 'information', 'been', 'weaponized', 'mr.', 'cooks', 'call', 'came', 'sharply', 'worded', 'speech', 'before', 'p…']
...
wrds = []
for i in range(0, len(df) ):
wrds.append( Counter(df["tokenized_sents"][i]).most_common(5) )
But it reports a list as:
print(wrds)
[('revenue', 2), ('apple', 1), ('inc.', 1), ('aapl', 1), ('reported', 1)]
...
I would like to create the following dataframe instead;
print(final_df)
KeyWords
revenue, apple, inc., aapl, reported
...
N.B. The rows of the final dataframe are not lists, but single text values, e.g. revenue, apple, inc., aapl, reported, NOT, [revenue, apple, inc., aapl, reported]
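One hedged approach: for each row, keep only the words from `most_common` (dropping the counts) and join them into a single string, then build the DataFrame from those strings:

```python
from collections import Counter

tokenized_sents = [
    ["apple", "inc.", "aapl", "revenue", "revenue"],  # toy stand-in rows
    ["brussels", "apple", "apple", "cook"],
]

keywords = []
for sent in tokenized_sents:
    top = Counter(sent).most_common(3)
    # keep only the words, drop the counts, join into one string value
    keywords.append(", ".join(word for word, count in top))

# final_df = pd.DataFrame({"KeyWords": keywords})
```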
| 1 | 1 | 0 | 0 | 0 | 0 |
I know similar questions have been asked before, but so far I wasn't able to solve my problem, so apologies in advance.
I have a JSON file ('test.json') with text in it. The text appears like this:
"... >>\r
>> This is a test.>\r
> \r
-- \r
Mit freundlichen Grüssen\r
\r
Mike Klence ..."
The overal output should be the plain text:
"... This is a test. Mit freundlichen Grüssen Mike Klence ..."
With BeautifulSoup I managed to remove the HTML tags, but the >, \r, \n and -- still remain in the text. So I tried the following code:
import codecs
from bs4 import BeautifulSoup
with codecs.open('test.json', encoding = 'utf-8') as f:
soup = BeautifulSoup(f, 'lxml')
invalid_tags = ['\r', '\n', '<', '>']
for tag in invalid_tags:
for match in soup.find_all(tag):
match.replace_with()
print(soup.get_text())
But it doesn't do anything to the text in the file. I tried different variations but nothing seems to change at all.
How can I get my code to work properly?
If there is another, easier or faster way, I would be thankful to read about those approaches as well.
By the way, I am using Python 3.6 on Anaconda.
Thank you very much in advance for your help.
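The reason nothing changes is that `soup.find_all('\r')` searches for a *tag named* `\r`, which never exists; characters like `>` and `\r` are text, not tags, so tag-based replacement cannot touch them. One hedged approach: take `get_text()` first, then clean the plain text with regular expressions — a sketch on the sample from the question:

```python
import re

text = (">>\r\n>> This is a test.>\r\n> \r\n-- \r\n"
        "Mit freundlichen Grüssen\r\n\r\nMike Klence")

def clean_quoted_text(s):
    # drop quote markers (>) and carriage returns, then signature
    # separators (--) on their own line, then collapse all remaining
    # runs of whitespace into single spaces
    s = re.sub(r"[>\r]", " ", s)
    s = re.sub(r"(?m)^\s*--\s*$", " ", s)
    return re.sub(r"\s+", " ", s).strip()

cleaned = clean_quoted_text(text)
```

In the question's code this would replace the `invalid_tags` loop: `cleaned = clean_quoted_text(soup.get_text())`.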
| 1 | 1 | 0 | 0 | 0 | 0 |
I have created a custom class to be an ML model, and it is working fine, but I would like to normalize the inputs as they have a wide range of values (e.g. 0, 20000, 500, 10, 8). Currently, as a way of normalizing the inputs, I'm applying lambda x: np.log(x + 1) to each input (the +1 is so it doesn't error out when 0 is passed in). Would a normalization layer be better than my current approach? If so, how would I go about implementing it? My code for the model is below:
class FollowModel:
def __init__(self, input_shape, output_shape, hidden_layers, input_labels, learning_rate=0.001):
tf.reset_default_graph()
assert len(input_labels) == input_shape[1], 'Incorrect number of input labels!'
# Placeholders for input and output data
self.input_labels = input_labels
self.input_shape = input_shape
self.output_shape = output_shape
self.X = tf.placeholder(shape=input_shape, dtype=tf.float64, name='X')
self.y = tf.placeholder(shape=output_shape, dtype=tf.float64, name='y')
self.hidden_layers = hidden_layers
self.learning_rate = learning_rate
# Variables for two group of weights between the three layers of the network
self.W1 = tf.Variable(np.random.rand(input_shape[1], hidden_layers), dtype=tf.float64)
self.W2 = tf.Variable(np.random.rand(hidden_layers, output_shape[1]), dtype=tf.float64)
# Create the neural net graph
self.A1 = tf.sigmoid(tf.matmul(self.X, self.W1))
self.y_est = tf.sigmoid(tf.matmul(self.A1, self.W2))
# Define a loss function
self.deltas = tf.square(self.y_est - self.y) # want this to be 0
self.loss = tf.reduce_sum(self.deltas)
# Define a train operation to minimize the loss
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
#initialize
self.model_init = tf.global_variables_initializer()
self.trained = False
def train(self, Xtrain, ytrain, Xtest, ytest, training_steps, batch_size, print_progress=True):
#intiialize session
self.trained = True
self.training_steps = training_steps
self.batch_size = batch_size
self.sess = tf.Session()
self.sess.run(self.model_init)
self.losses = []
self.accs = []
self.testing_accuracies = []
for i in range(training_steps*batch_size):
self.sess.run(self.optimizer, feed_dict={self.X: Xtrain, self.y: ytrain})
local_loss = self.sess.run(self.loss, feed_dict={self.X: Xtrain.values, self.y: ytrain.values})
self.losses.append(local_loss)
self.weights1 = self.sess.run(self.W1)
self.weights2 = self.sess.run(self.W2)
y_est_np = self.sess.run(self.y_est, feed_dict={self.X: Xtrain.values, self.y: ytrain.values})
correct = [estimate.argmax(axis=0) == target.argmax(axis=0)
for estimate, target in zip(y_est_np, ytrain.values)]
acc = 100 * sum(correct) / len(correct)
self.accs.append(acc)
if i % batch_size == 0:
batch_num = i / batch_size
if batch_num % 5 == 0:
self.testing_accuracies.append(self.test_accuracy(Xtest, ytest, False, True))
temp_table = pd.concat([Xtrain, ytrain], axis=1).sample(frac=1)
column_names = list(temp_table.columns.values)
X_columns, y_columns = column_names[0:len(column_names) - 2], column_names[len(column_names) - 2:]
Xtrain = temp_table[X_columns]
ytrain = temp_table[y_columns]
if print_progress: print('Step: %d, Accuracy: %.2f, Loss: %.2f' % (int(i/batch_size), acc, local_loss))
if print_progress: print("Training complete!\nloss: {}, hidden nodes: {}, steps: {}, epoch size: {}, total steps: {}".format(int(self.losses[-1]*100)/100, self.hidden_layers, training_steps, batch_size, training_steps*batch_size))
self.follow_accuracy = acc
return acc
def test_accuracy(self, Xtest, ytest, print_progress=True, return_accuracy=False):
if self.trained:
X = tf.placeholder(shape=Xtest.shape, dtype=tf.float64, name='X')
y = tf.placeholder(shape=ytest.shape, dtype=tf.float64, name='y')
W1 = tf.Variable(self.weights1)
W2 = tf.Variable(self.weights2)
A1 = tf.sigmoid(tf.matmul(X, W1))
y_est = tf.sigmoid(tf.matmul(A1, W2))
# Calculate the predicted outputs
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
y_est_np = sess.run(y_est, feed_dict={X: Xtest, y: ytest})
correctly_followed = 0
incorrectly_followed = 0
missed_follows = 0
correctly_skipped = 0
for estimate, actual in zip(y_est_np, ytest.values):
est = estimate.argmax(axis=0)
# print(estimate)
actual = actual.argmax(axis=0)
if est == 1 and actual == 0: incorrectly_followed += 1
elif est == 1 and actual == 1: correctly_followed += 1
elif est == 0 and actual == 1: missed_follows += 1
else: correctly_skipped += 1
# correct = [estimate.argmax(axis=0) == target.argmax(axis=0) for estimate, target in zip(y_est_np, ytest.values)]
total_followed = incorrectly_followed + correctly_followed
total_correct = correctly_followed + correctly_skipped
total_incorrect = incorrectly_followed + missed_follows
try: total_accuracy = int(total_correct * 10000 / (total_correct + total_incorrect)) / 100
except: total_accuracy = 0
total_skipped = correctly_skipped + missed_follows
try: follow_accuracy = int(correctly_followed * 10000 / total_followed) / 100
except: follow_accuracy = 0
try: skip_accuracy = int(correctly_skipped * 10000 / total_skipped) / 100
except: skip_accuracy = 0
if print_progress: print('Correctly followed {} / {} ({}%), correctly skipped {} / {} ({}%)'.format(
correctly_followed, total_followed, follow_accuracy, correctly_skipped, total_skipped, skip_accuracy))
self.follow_accuracy = follow_accuracy
if return_accuracy:
return total_accuracy
else:
print('The model is not trained!')
def make_prediction_on_normal_data(self, input_list):
assert len(input_list) == len(self.input_labels), 'Incorrect number of inputs (had {} should have {})'.format(len(input_list), len(self.input_labels))
# from ProcessData import normalize_list
# normalize_list(input_list)
input_array = np.array([input_list])
X = tf.placeholder(shape=(1, len(input_list)), dtype=tf.float64, name='X')
y = tf.placeholder(shape=(1, 2), dtype=tf.float64, name='y')
W1 = tf.Variable(self.weights1)
W2 = tf.Variable(self.weights2)
A1 = tf.sigmoid(tf.matmul(X, W1))
y_est = tf.sigmoid(tf.matmul(A1, W2))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
y_est_np = sess.run(y_est, feed_dict={X: input_array, y: self.create_blank_outputs()})
predicted_value = y_est_np[0].argmax(axis=0)
return predicted_value
def make_prediction_on_abnormal_data(self, input_list):
from ProcessData import normalize_list
normalize_list(input_list)
return self.make_prediction_on_normal_data(input_list)
def create_blank_outputs(self):
blank_outputs = np.zeros(shape=(1,2), dtype=int)
for i in range(len(blank_outputs[0])):
blank_outputs[0][i] = float(blank_outputs[0][i])
return blank_outputs
| 1 | 1 | 0 | 1 | 0 | 0 |
I am trying to extract key phrases from a given sentence with a TF-IDF scheme. To do that, I tried to find candidate words or candidate phrases in the sentence and then get the most frequent words. However, when I introduced a new CFG rule for finding possible key phrases in the sentence, I got an error.
Here is my script:
rm_punct=re.compile('[{}]'.format(re.escape(string.punctuation)))
stop_words=set(stopwords.words('english'))
def get_cand_words(sent, cand_type='word', remove_punct=False):
candidates=list()
sent=rm_punct.sub(' ', sent)
tokenized=word_tokenize(sent)
tagged_words=pos_tag(tokenized)
if cand_type=='word':
pos_tag_patt = set(['JJ', 'JJR', 'JJS', 'NN', 'NNP', 'NNS', 'NNPS'])
tagged_words=chain.from_iterable(tagged_words)
for word, tag in enumerate(tagged_words):
if tag in pos_tag_patt and word not in stop_words:
candidates.append(word)
elif cand_type == 'phrase':
grammar = r'KT: {(<JJ>* <NN.*>+ <IN>)? <JJ>* <NN.*>+}'
chunker = RegexpParser(grammar)
all_tag = chain.from_iterable([chunker.parse(tag) for tag in tagged_words])
for key, group in groupby(all_tag, lambda tag: tag[2] != 'O'):
candidate = ' '.join([word for (word, pos, chunk) in group])
if key is True and candidate not in stop_words:
candidates.append(candidate)
else:
print("return word or phrase as target phrase")
return candidates
Here is the error raised by Python:
sentence_1="Hillary Clinton agrees with John McCain by voting to give George Bush the benefit of the doubt on Iran."
sentence_2="The United States has the highest corporate tax rate in the free world"
get_cand_words(sent=sentence_1, cand_type='phrase', remove_punct=False)
ValueError: chunk structures must contain tagged tokens or trees
The above code was inspired by approaches for extracting key phrases from long text paragraphs; my goal is to find a unique key phrase in the given sentence, but the above implementation doesn't work well.
How can I fix this ValueError? How can I make the above implementation work for extracting key phrases from a given sentence or a list of sentences? Any better ideas to make this happen? Any more thoughts? Thanks
Goal:
I want to find the most relevant noun-adjective phrase or compound noun-adjective phrase in a given sentence. How can I get this done in Python? Does anyone know how to make this happen? Thanks in advance
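For what it's worth, the ValueError is typically raised when `RegexpParser.parse` receives something other than a full list of (word, tag) pairs, for example one pair at a time. The grouping step by itself can be tested on pre-chunked (word, POS, IOB) triples; the `group_candidates` helper below is my own sketch, and it assumes the chunker output has already been flattened with something like `nltk.chunk.tree2conlltags`:

```python
from itertools import groupby

def group_candidates(iob_tagged, stop_words=frozenset()):
    """Join consecutive chunk tokens (IOB tag != 'O') into candidate phrases."""
    candidates = []
    for is_chunk, group in groupby(iob_tagged, lambda t: t[2] != 'O'):
        if is_chunk:
            phrase = ' '.join(word for word, pos, chunk in group)
            if phrase.lower() not in stop_words:
                candidates.append(phrase)
    return candidates

# (word, POS, IOB-chunk) triples as tree2conlltags would produce them
tagged = [('Hillary', 'NNP', 'B-KT'), ('Clinton', 'NNP', 'I-KT'),
          ('agrees', 'VBZ', 'O'), ('with', 'IN', 'O'),
          ('John', 'NNP', 'B-KT'), ('McCain', 'NNP', 'I-KT')]
print(group_candidates(tagged))  # ['Hillary Clinton', 'John McCain']
```

This isolates the groupby logic from the chunking itself, so each half of the pipeline can be checked independently.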
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a Keras LSTM multitask model that performs two tasks. One is a sequence tagging task (so I predict a label per token). The other is a global classification task over the whole sequence using a CNN that is stacked on the hidden states of the LSTM.
In my setup (don't ask why) I only need the CNN task during training; the labels it predicts have no use in the final product. In Keras, one can train an LSTM model without specifying the input sequence length, like this:
l_input = Input(shape=(None,), dtype="int32", name=input_name)
However, if I add the CNN stacked on the LSTM hidden states I need to set a fixed sequence length for the model.
l_input = Input(shape=(timesteps_size,), dtype="int32", name=input_name)
The problem is that once I have trained the model with a fixed timestep_size I can no longer use it to predict longer sequences.
In other frameworks this is not a problem. But in Keras, I cannot get rid of the CNN and change the expected input shape of the model once it has been trained.
Here is a simplified version of the model
l_input = Input(shape=(timesteps_size,), dtype="int32")
l_embs = Embedding(len(input.keys()), 100)(l_input)
l_blstm = Bidirectional(GRU(300, return_sequences=True))(l_embs)
# Sequential output
l_out1 = TimeDistributed(Dense(len(labels.keys()),
activation="softmax"))(l_blstm)
# Global output
conv1 = Conv1D( filters=5 , kernel_size=10 )( l_embs )
conv1 = Flatten()(MaxPooling1D(pool_size=2)( conv1 ))
conv2 = Conv1D( filters=5 , kernel_size=8 )( l_embs )
conv2 = Flatten()(MaxPooling1D(pool_size=2)( conv2 ))
conv = Concatenate()( [conv1,conv2] )
conv = Dense(50, activation="relu")(conv)
l_out2 = Dense( len(global_labels.keys()) ,activation='softmax')(conv)
model = Model(inputs=l_input, outputs=[l_out1, l_out2])
optimizer = Adam()
model.compile(optimizer=optimizer,
loss="categorical_crossentropy",
metrics=["accuracy"])
I would like to know if anyone here has faced this issue, and if there are any solutions to delete layers from a model after training and, more important, how to reshape input layer sizes after training.
Thanks
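One way to see why the CNN branch pins the input length is the shape arithmetic: the `Conv1D`/`MaxPooling1D` stack produces a time axis that still depends on the input length, and `Flatten` then bakes that length into the weight matrix of the following `Dense` layer. A rough sketch (my own helpers, not Keras code) with the filter counts and kernel sizes from the model above:

```python
def conv1d_len(length, kernel_size, stride=1):
    # 'valid' convolution output length, as Conv1D computes it
    return (length - kernel_size) // stride + 1

def flattened_units(timesteps, kernel_size, filters=5, pool=2):
    conv_t = conv1d_len(timesteps, kernel_size)
    pooled_t = conv_t // pool          # MaxPooling1D(pool_size=2)
    return pooled_t * filters          # Flatten() output size

# With the two kernel sizes from the model, different input lengths
# yield different Flatten sizes, hence incompatible Dense weights:
for t in (50, 80):
    total = flattened_units(t, 10) + flattened_units(t, 8)
    print(t, total)
```

Since the Dense weights after `Flatten` have one row per flattened unit, any change in `timesteps` makes the trained weights unusable, which is exactly why the recurrent branch alone (whose weights are shared across time steps) has no such constraint.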
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a list of thousands of strings like this:
gabaybagxppppapppx5qvxdncxcyPcxvNcPNxPPPdPxgaBQaBag
gcyvgxpppvNppxab5nxdpvbxvBaPvqxBQPvPvxP5PPxgN5y
gabcygxpppaBpapxab6xnvPdxvpcqaxvQvNvxPdPPPxgNvgaya
gvnagyaxappbvppxapapdxcPpqanxvBcPaxvPdPxPPaNaPayxgvQagNa
cqagayxvpdpxapapBgpaxpvPpcxvPnPcx5PPaxPPaQvyax5gag
6yaxpppvpppx8xvnvyaPxvPvPaPxvBpgcxPPdgaxdggv
gncgyaxp5ppxvp5xcPpbvxvq5xaQ6xPPPBvPPxgcyaNg
NabydxppapaQppx8xvb5xcncqx8xPPPvgPPxgNBagBya
8xvpcNax6pax5PBaxppvgnvx7yxPapvyaPxcgd
gabayangxpvpapppxnvBdxapaNPNaPx6PaxcPaQvxPaPaycxq5ba
How can TensorFlow be trained to create a new string like these from what it has learned?
I'm using Jupyter Notebook with Python 3.
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm trying to understand how to prepare paragraphs for ELMo vectorization.
The docs only show how to embed multiple sentences/words at a time, e.g.:
sentences = [["the", "cat", "is", "on", "the", "mat"],
["dogs", "are", "in", "the", "fog", ""]]
elmo(
inputs={
"tokens": sentences,
"sequence_len": [6, 5]
},
signature="tokens",
as_dict=True
)["elmo"]
As I understand it, this will return 2 vectors, each representing a given sentence.
How would I go about preparing input data to vectorize a whole paragraph containing multiple sentences? Note that I would like to use my own preprocessing.
Can this be done like so?
sentences = [["<s>", "the", "cat", "is", "on", "the", "mat", ".", "</s>",
"<s>", "dogs", "are", "in", "the", "fog", ".", "</s>"]]
or maybe like so?
sentences = [["the", "cat", "is", "on", "the", "mat", ".",
"dogs", "are", "in", "the", "fog", "."]]
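Whichever of the two layouts turns out to be right, the `tokens` input has to be a rectangular batch, so shorter entries are padded with `""` and the true lengths passed in `sequence_len` (that is what the `""` in the docs' example is doing). A small helper like the hypothetical one below (plain Python, not part of the ELMo API) can prepare that:

```python
def pad_batch(token_lists, pad=""):
    """Pad token lists to equal length; return the padded batch and true lengths."""
    lengths = [len(toks) for toks in token_lists]
    max_len = max(lengths)
    padded = [toks + [pad] * (max_len - len(toks)) for toks in token_lists]
    return padded, lengths

batch, seq_len = pad_batch([["the", "cat", "is", "on", "the", "mat"],
                            ["dogs", "are", "in", "the", "fog"]])
print(seq_len)  # [6, 5]
```

The same helper would work for paragraph-length token lists, since nothing in it depends on sentence boundaries.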
| 1 | 1 | 0 | 0 | 0 | 0 |
I'd like to see basic statistics about my corpus like word/sentence counters, distributions etc.
I have a tokens_corpus_reader_ready.txt which contains 137,000 lines of tagged example sentences in this format:
Zur/APPRART Zeit/NN kostenlos/ADJD aber/KON auch/ADV nur/ADV 11/CARD kW./NN
Zur/APPRART Zeit/NN anscheinend/ADJD kostenlos/ADJD ./$.
...
I also have a TaggedCorpusReader() which I have a describe() method for:
class CSCorpusReader(TaggedCorpusReader):
def __init__(self):
TaggedCorpusReader.__init__(self, raw_corpus_path, 'tokens_corpus_reader_ready.txt')
def describe(self):
"""
Performs a single pass of the corpus and
returns a dictionary with a variety of metrics
concerning the state of the corpus.
modified method from https://github.com/foxbook/atap/blob/master/snippets/ch03/reader.py
"""
started = time.time()
# Structures to perform counting.
counts = nltk.FreqDist()
tokens = nltk.FreqDist()
# Perform single pass over paragraphs, tokenize and count
for sent in self.sents():
print(time.time())
counts['sents'] += 1
for word in self.words():
counts['words'] += 1
tokens[word] += 1
return {
'sents': counts['sents'],
'words': counts['words'],
'vocab': len(tokens),
'lexdiv': float(counts['words']) / float(len(tokens)),
'secs': time.time() - started,
}
If I run the describe method like this in IPython:
>> corpus = CSCorpusReader()
>> print(corpus.describe())
There is about a 7 second delay between each sentence:
1543770777.502544
1543770784.383989
1543770792.2057862
1543770798.992075
1543770805.819034
1543770812.599932
...
If I run the same thing with just a few sentences in the tokens_corpus_reader_ready.txt the output time is totally reasonable:
1543771884.739753
1543771884.74035
1543771884.7408729
1543771884.7413561
{'sents': 4, 'words': 212, 'vocab': 42, 'lexdiv': 5.0476190476190474, 'secs': 0.002869129180908203}
Where does this behavior come from and how can I fix it?
Edit 1
By not accessing the corpus itself on every iteration but operating on lists instead, the time went down to about 3 seconds per sentence, which is still very long, though:
sents = list(self.sents())
words = list(self.words())
# Perform single pass over paragraphs, tokenize and count
for sent in sents:
print(time.time())
counts['sents'] += 1
for word in words:
counts['words'] += 1
tokens[word] += 1
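A point worth noticing about both versions above: the inner loop iterates over *all* words of the corpus once per sentence, so the total work grows with sentences times words, which alone could explain the delay. A sketch of the counting restructured so that each pass only counts that sentence's own words (plain lists standing in for the corpus reader, `Counter` standing in for `FreqDist`):

```python
import time
from collections import Counter

def describe(sents):
    """Single pass over tokenized sentences: count sentences, words, vocabulary."""
    started = time.time()
    counts = Counter()
    tokens = Counter()
    for sent in sents:
        counts['sents'] += 1
        for word in sent:          # only this sentence's words, not the whole corpus
            counts['words'] += 1
            tokens[word] += 1
    return {'sents': counts['sents'], 'words': counts['words'],
            'vocab': len(tokens),
            'lexdiv': counts['words'] / len(tokens),
            'secs': time.time() - started}

stats = describe([['Zur', 'Zeit', 'kostenlos'], ['Zur', 'Zeit']])
print(stats['sents'], stats['words'], stats['vocab'])  # 2 5 3
```

With the corpus reader, the equivalent would be iterating over `sent` itself in the inner loop rather than calling `self.words()` again for every sentence.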
| 1 | 1 | 0 | 0 | 0 | 0 |
Hello, I am new to word2vec, so I was trying a simple program to read a file and get the vector of each word, but there's something wrong with the tokenization process: word2vec takes into account each letter, not each word!
For instance, my file contains "hello this is my first trial".
from gensim.models import Word2Vec
from nltk.tokenize import word_tokenize
F = open('testfile')
f=F.read()
doc= word_tokenize(f)
print(f)
print(doc)
model = Word2Vec(doc,min_count=1)
# summarize the loaded model
print(model)
words = list(model.wv.vocab)
print(model['hello'])
I get an error that 'hello' is not in the vocab, but when I use the letter 'h' it works.
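A likely cause (hedged, since I can't run the exact file): `Word2Vec` expects an iterable of *sentences*, each itself a list of tokens. Passing a single flat token list makes each word be treated as a "sentence", and iterating over a string yields its characters, which is why 'h' ends up in the vocabulary but 'hello' does not. Plain Python shows the effect:

```python
doc = ["hello", "this", "is", "my", "first", "trial"]

# What happens when the flat list is treated as a list of sentences:
# each "sentence" is a string, and iterating a string yields characters.
chars = [token for sentence in doc for token in sentence]
print(chars[:5])           # ['h', 'e', 'l', 'l', 'o']

# Wrapping the token list in another list gives one sentence of whole words.
sentences = [doc]
words = [token for sentence in sentences for token in sentence]
print(words[:2])           # ['hello', 'this']
```

So `Word2Vec([doc], min_count=1)` (note the extra brackets) should index whole words instead of letters.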
| 1 | 1 | 0 | 0 | 0 | 0 |
I am conducting research which requires me to know the memory used at run time when I run a deep learning model (CNN) in Google Colab. Is there any code I can use to find this out? Basically, I want to know how much memory has been used over the whole model run (after all epochs have completed). I am coding in Python.
Regards
Avik
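One standard-library option on Colab's Linux runtime is `resource.getrusage`, which reports the process's peak resident memory; reading it after `model.fit` gives a rough total. This is a sketch, not a profiler: it measures the whole Python process, not the model alone, and on Linux `ru_maxrss` is in kilobytes (macOS reports bytes):

```python
import resource

def peak_memory_mb():
    """Peak resident set size of this process, in MB (assuming Linux's KB units)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

# ... model.fit(...) would run here, after all epochs ...
print('peak memory used: about %.1f MB' % peak_memory_mb())
```

For GPU memory specifically, the framework's own counters (for example TensorFlow's device memory info) would be the thing to consult instead, since `getrusage` only sees host RAM.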
| 1 | 1 | 0 | 0 | 0 | 0 |
Greetings NLP Experts,
I am using the Stanford CoreNLP software package to produce constituency parses, using the most recent version (3.9.2) of the English language models JAR, downloaded from the CoreNLP Download page. I access the parser via the Python interface from the NLTK module nltk.parse.corenlp. Here is a snippet from the top of my main module:
import nltk
from nltk.tree import ParentedTree
from nltk.parse.corenlp import CoreNLPParser
parser = CoreNLPParser(url='http://localhost:9000')
I also fire up the server using the following (fairly generic) call from the terminal:
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer
-annotators "parse" -port 9000 -timeout 30000
The parser that CoreNLP selects by default (when the full English model is available) is the Shift-Reduce (SR) parser, which is sometimes claimed to be both more accurate and faster than the CoreNLP PCFG parser. Impressionistically, I can corroborate that with my own experience, where I deal almost exclusively with Wikipedia text.
However, I have noticed that often the parser will erroneously opt for parsing what is in fact a complete sentence (i.e., a finite, matrix clause) as a subsentential constituent instead, often an NP. In other words, the parser should be outputting an S label at root level (ROOT (S ...)), but something in the complexity of the sentence's syntax pushes the parser to say a sentence is not a sentence (ROOT (NP ...)), etc.
The parses for such problem sentences also always contain another (usually glaring) error further down in the tree. Below are a few examples. I'll just paste in the top few levels of each tree to save space. Each is a perfectly acceptable English sentence, and so the parses should all begin (ROOT (S ...)). However, in each case some other label takes the place of S, and the rest of the tree is garbled.
NP: An estimated 22–189 million school days are missed annually due to a cold. (ROOT (NP (NP An estimated 22) (: --) (S 189 million school days are missed annually due to a cold) (. .)))
FRAG: More than one-third of people who saw a doctor received an antibiotic prescription, which has implications for antibiotic resistance. (ROOT (FRAG (NP (NP More than one-third) (PP of people who saw a doctor received an antibiotic prescription, which has implications for antibiotic resistance)) (. .)))
UCP: Coffee is a brewed drink prepared from roasted coffee beans, the seeds of berries from certain Coffea species. (ROOT (UCP (S Coffee is a brewed drink prepared from roasted coffee beans) (, ,) (NP the seeds of berries from certain Coffea species) (. .)))
At long last, here is my question, which I trust the above evidence proves is a useful one: Given that my data contains a negligible number of fragments or otherwise ill-formed sentences, how can I impose a high-level constraint on the CoreNLP parser such that its algorithm gives priority to assigning an S node directly below ROOT?
I am curious to see whether imposing such a constraint when processing data (that one knows to satisfy it) will also cure other myriad ills observed in the parses produced. From what I understand, the solution would not lie in specifying a ParserAnnotations.ConstraintAnnotation. Would it?
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a dataframe:
train_review = train['review']
train_review
It looks like:
0 With all this stuff going down at the moment w...
1 \The Classic War of the Worlds\" by Timothy Hi...
2 The film starts with a manager (Nicholas Bell)...
3 It must be assumed that those who praised this...
4 Superbly trashy and wondrously unpretentious 8...
I concatenate the reviews into a single string:
train_review = train['review']
train_token = ''
for i in train['review']:
train_token +=i
What I want is to tokenize the reviews using spaCy.
Here is what I tried, but I get the following error:
Argument 'string' has incorrect type (expected str, got
spacy.tokens.doc.Doc)
How can I solve that? Thanks in advance!
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm learning text cleaning using Python online.
I have gotten rid of some stop words and lowercased the letters,
but when I execute this code, it doesn't show anything.
I don't know why.
# we add some words to the stop word list
texts, article = [], []
for w in doc:
# if it's not a stop word or punctuation mark, add it to our article!
if w.text != '\n' and not w.is_stop and not w.is_punct and not w.like_num and w.text != 'I':
# we add the lematized version of the word
article.append(w.lemma_)
# if it's a new line, it means we're onto our next document
if w.text == '\n':
texts.append(article)
article = []
when i try to output texts, it's just blank.
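One thing worth checking (an assumption about the cause, since the input document isn't shown): `texts` is only appended to when a newline token is seen, so if the document never contains such a token, or the last article isn't followed by one, `texts` stays empty. A stand-in sketch with plain strings instead of spaCy tokens shows the fix of flushing the final article after the loop:

```python
def split_articles(tokens, sep='\n'):
    """Group tokens into articles separated by `sep`; flush the last one."""
    texts, article = [], []
    for tok in tokens:
        if tok != sep:
            article.append(tok)
        else:
            texts.append(article)
            article = []
    if article:              # without this, a trailing article is silently lost
        texts.append(article)
    return texts

print(split_articles(['a', 'b', '\n', 'c', 'd']))  # [['a', 'b'], ['c', 'd']]
```

In the spaCy version, the same idea is one `texts.append(article)` after the `for w in doc:` loop ends.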
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm trying to perform sentiment analysis over a dataset composed of labeled english text labeled with a number between (0,4).
I've been following the tensorflow guide on this from here: https://www.tensorflow.org/tutorials/keras/basic_text_classification
adapted to suit my multiclass classification problem.
A sample of the dataset is here:
PhraseId,SentenceId,Phrase,Sentiment
21071,942,irony,1
63332,3205,Blue Crush ' swims away with the Sleeper Movie of the Summer award .,2
142018,7705,in the third row of the IMAX cinema,2
103601,5464,images of a violent battlefield action picture,2
12235,523,an engrossing story,3
77679,3994,should come with the warning `` For serious film buffs only !,2
58875,2969,enjoyed it,3
152071,8297,"A delicious , quirky movie with a terrific screenplay and fanciful direction by Michael Gondry .",4
Currently, my model performs very badly, with a constant accuracy of about 0.5, and this doesn't change across epochs.
I know how to tune the model's hyperparameters and all of the tricks I can try there, but nothing seems to help. I'm convinced that I've made a mistake somewhere in processing the data, since this is my first time doing deep learning with textual data.
My current preprocessing consists of:
Removing the PhraseID and SentenceID columns from the dataset
Removing punctuation and upper case letters
Shuffling the order of the dataset
Separating the data and labels into different dataframes
One-hot encoding the labels
Tokenizing the data using the Keras preprocessing Tokenizer
Padding the sequences to the same length
I think there's an issue in the tokenization stage, or maybe I just don't understand how the model takes the tokenized words as an input vector and can learn from it.
My relevant tokenization code is:
def tokenize_data(self, df, max_features=5000):
self.logger.log(f'Tokenizing with {max_features} features')
tokenizer = Tokenizer(num_words=max_features, split=' ')
tokenizer.fit_on_texts(df.values)
train_set = tokenizer.texts_to_sequences(df.values)
if self.logger.verbose_f : self.logger.verbose(train_set[:10])
return train_set
def pad_sequences(self, data, maxlen=5000):
result = keras.preprocessing.sequence.pad_sequences(data,
value=0,
padding='post',
maxlen=maxlen)
if self.logger.verbose_f:
df = pd.DataFrame(result)
df.to_csv("processed.csv")
return result
The output of the pad sequences looks like this:
7,821,3794,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
8,74,44,344,325,2904,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
etc etc for each instance.
These values get fed into the model like this to act as the training data.
Do I need to do some sort of normalisation before I train on this?
Or am I completely barking up the wrong tree?
Thanks
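To sanity-check the pipeline without Keras, the tokenize-then-pad steps can be mimicked in plain Python (my own stand-ins for `Tokenizer` and `pad_sequences`, not the Keras APIs). Doing so also makes one issue visible: with `maxlen=5000`, a phrase of a few words becomes a vector that is over 99% padding zeros, so a `maxlen` close to the longest real phrase is worth trying (an observation, not a guaranteed fix):

```python
from collections import Counter

def fit_tokenizer(texts, max_features=5000):
    """Map each word to an integer index, most frequent words first."""
    counts = Counter(w for t in texts for w in t.lower().split())
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common(max_features))}

def texts_to_padded(texts, index, maxlen):
    """Convert texts to index sequences, then post-pad with zeros to maxlen."""
    seqs = [[index[w] for w in t.lower().split() if w in index] for t in texts]
    return [s[:maxlen] + [0] * (maxlen - len(s)) for s in seqs]

index = fit_tokenizer(["an engrossing story", "enjoyed it"])
padded = texts_to_padded(["enjoyed it"], index, maxlen=8)
print(padded)
```

Each padded row is what the Embedding layer actually receives; embeddings handle the integer indices themselves, so no extra normalisation of these values should be needed.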
| 1 | 1 | 0 | 0 | 0 | 0 |
I am using the Cifar-10 dataset and I am trying to do transfer learning using the Keras library.
My code is here - https://github.com/YanaNeykova/Cifar-10
Upon running line
model.fit(X_train, y_train, batch_size=32, epochs=10,
verbose=1, callbacks=[checkpointer],validation_split=0.2, shuffle=True)
I get an error (visible in the file), and therefore I cannot proceed further.
I also tried additionally importing the Model function from Keras, but I again get the same result: the model function is not recognized.
Can someone advise how I can proceed ?
Many thanks in advance!
Error
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-11-977cb2a1e5d6> in <module>()
1 model.fit(X_train, y_train, batch_size=32, epochs=10,
----> 2 verbose=1, callbacks=[checkpointer],validation_split=0.2, shuffle=True)
/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
1008 else:
1009 ins = x + y + sample_weights
-> 1010 self._make_train_function()
1011 f = self.train_function
1012
/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in _make_train_function(self)
517 updates=updates,
518 name='train_function',
--> 519 **self._function_kwargs)
520
521 def _make_test_function(self):
/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in function(inputs, outputs, updates, **kwargs)
2742 msg = 'Invalid argument "%s" passed to K.function with TensorFlow backend' % key
2743 raise ValueError(msg)
-> 2744 return Function(inputs, outputs, updates=updates, **kwargs)
2745
2746
/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in __init__(self, inputs, outputs, updates, name, **session_kwargs)
2573 raise ValueError('Some keys in session_kwargs are not '
2574 'supported at this '
-> 2575 'time: %s', session_kwargs.keys())
2576 self._callable_fn = None
2577 self._feed_arrays = None
ValueError: ('Some keys in session_kwargs are not supported at this time: %s', dict_keys(['metric']))
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to preprocess a large .txt file, that is around 12GB.
The following code gives an "Invalid Argument" error.
Is there any way to read a document this big?
Do I need data this big to train the words to generate word vectors?
Or is there some other error?
with open('data/text8') as f:
text = f.read()
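One way to avoid loading the whole file at once (a general sketch, independent of whether all 12 GB are really needed for training): read it in fixed-size chunks and process each chunk as it arrives, instead of a single `f.read()`:

```python
import os
import tempfile

def read_in_chunks(path, chunk_size=1024 * 1024):
    """Yield the file's text in chunks of roughly chunk_size characters."""
    with open(path) as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# Demonstration on a small temporary file:
fd, tmp = tempfile.mkstemp(text=True)
with os.fdopen(fd, 'w') as f:
    f.write('word ' * 1000)
total = sum(len(c) for c in read_in_chunks(tmp, chunk_size=512))
os.remove(tmp)
print(total)   # 5000 characters, read 512 at a time
```

For word-vector training specifically, gensim-style trainers accept any iterable of sentences, so a generator like this (split on whitespace per chunk, being careful about words straddling chunk boundaries) lets training stream the corpus instead of holding it in memory.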
| 1 | 1 | 0 | 0 | 0 | 0 |
I've got a problem with online updating my Word2Vec model.
I have a document and build model by it. But this document can update with new words, and I need to update vocabulary and model in general.
I know that in gensim 0.13.4.1 we can do this
My code:
model = gensim.models.Word2Vec(size=100, window=10, min_count=5, workers=11, alpha=0.025, min_alpha=0.025, iter=20)
model.build_vocab(sentences, update=False)
model.train(sentences, epochs=model.iter, total_examples=model.corpus_count)
model.save('model.bin')
And after this I have new words. For example:
sen2 = [['absd', 'jadoih', 'sdohf'], ['asdihf', 'oisdh', 'oiswhefo'], ['a', 'v', 'b', 'c'], ['q', 'q', 'q']]
model.build_vocab(sen2, update=True)
model.train(sen2, epochs=model.iter, total_examples=model.corpus_count)
What's wrong and how can I solve my problem?
| 1 | 1 | 0 | 0 | 0 | 0 |
I am currently developing a Twitter content-based recommender system and have a word2vec model pre-trained on 400 million tweets.
How would I go about using those word embeddings to create a document/tweet-level embedding and then get the user embedding based on the tweets they had posted?
I was initially intending on averaging those words in a tweet that had a word vector representation and then averaging the document/tweet vectors to get a user vector but I wasn't sure if this was optimal or even correct. Any help is much appreciated.
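Averaging is indeed a common and reasonable baseline at both levels: it discards word order, but works surprisingly well for retrieval-style tasks. A plain-Python sketch of the two averaging steps, with toy 3-dimensional vectors standing in for the pre-trained tweet-corpus embeddings:

```python
def mean_vector(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Toy stand-in for the 400M-tweet word2vec lookup table
embeddings = {'good': [1.0, 0.0, 0.0], 'movie': [0.0, 1.0, 0.0],
              'bad':  [0.0, 0.0, 1.0]}

def tweet_vector(tokens):
    vecs = [embeddings[t] for t in tokens if t in embeddings]  # skip OOV words
    return mean_vector(vecs) if vecs else None

def user_vector(tweets):
    vecs = [v for v in (tweet_vector(t) for t in tweets) if v is not None]
    return mean_vector(vecs)

u = user_vector([['good', 'movie'], ['bad', 'movie']])
print(u)  # [0.25, 0.5, 0.25]
```

Possible refinements over the plain mean include weighting words by TF-IDF before averaging, or weighting tweets by recency; whether they help is an empirical question for the recommender.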
| 1 | 1 | 0 | 0 | 0 | 0 |
I am using both NLTK and scikit-learn to do some text processing. I have a data set of sentences, some of which describe the situation in both French and English (the French part is duplicated), and I want to delete the French part. The following is one of my sentences:
"quipage de Global Express en provenance deTokyo Japon vers Dorval a d effectuer une remise des gaz sur la piste cause d un probl me de volets Il fut autoris se poser sur la piste Les services d urgence n ont pas t demand s appareil s est pos sans encombre D lai d environ minutes sur l exploitation The crew of Global Express from Tokyo Japan to Dorval had to pull up on Rwy at because of a flap problem It was cleared to land on Rwy Emergency services were not requested The aircraft touched down without incident Delay of about minutes to operations Regional Report of m d y with record s "
I want to remove all words that are in French. I have tried the following code so far, but the result is not good enough.
x=sentence
x=x.split()
import langdetect
from langdetect import detect
for word in x:
lang=langdetect.detect(word)
if lang=='fr':
print(word)
x.remove(word)
The following is my output:
l
un
sur
une
oiseaux
avoir
un
le
du
un
est
Is this a good approach? How can I improve it to reach better results?
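Two things worth noting (hedged, since only one sample sentence is shown): per-word language detection is unreliable because single short words carry very little signal, and calling `x.remove(word)` while iterating over `x` skips elements, so some French words survive regardless of the detector. A sketch that avoids the mutation bug by building a new list with a comprehension; a small hand-made French-word set stands in here for the detector, and is intentionally incomplete:

```python
# Hypothetical stand-in for a language check; langdetect or a fuller
# stopword list would replace this set in practice.
FRENCH_HINTS = {'le', 'la', 'les', 'un', 'une', 'du', 'des', 'sur', 'est',
                'et', 'de', 'en', 'vers', 'pas', 'avoir'}

def drop_french_words(text):
    """Keep only tokens not flagged as French; never mutates while iterating."""
    return ' '.join(w for w in text.split() if w.lower() not in FRENCH_HINTS)

cleaned = drop_french_words('un bird strike sur une piste the crew landed')
print(cleaned)  # 'bird strike piste the crew landed' (note: 'piste' slips through)
```

Since the French and English halves appear to be contiguous blocks, detecting the language of whole clauses (or even sentence halves) and dropping the French block would likely be far more accurate than any per-word check.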
| 1 | 1 | 0 | 0 | 0 | 0 |
I train my doc2vec model:
data = ["Sentence 1",
"Sentence 2",
"Sentence 3",
"Sentence 4"]
tagged_data = [TaggedDocument(words=word_tokenize(_d.lower()), tags=[str(i)])
for i, _d in enumerate(data)]
training part:
model = Doc2Vec(size=100, window=10, min_count=1, workers=11, alpha=0.025,
min_alpha=0.025, iter=20)
model.build_vocab(tagged_data, update=False)
model.train(tagged_data,epochs=model.iter,total_examples=model.corpus_count)
Save model:
model.save("d2v.model")
And it's work. Than I want to add some sentence to my vocabulary and model. E.x.:
new_data = ["Sentence 5",
"Sentence 6",
"Sentence 7"]
new_tagged_data = [TaggedDocument(words=word_tokenize(_d.lower()), tags=[str(i + len(data))])
for i, _d in enumerate(new_data)]
And than update model:
model.build_vocab(new_tagged_data, update=True)
model.train(new_tagged_data,
epochs=model.iter,total_examples=model.corpus_count)
But it doesn't work: Jupyter abruptly shuts down with no error message. I used the same approach with a Word2Vec model and it works!
What could be the problem here?
| 1 | 1 | 0 | 0 | 0 | 0 |
I am using this code to train news article dataset.
https://github.com/borislavmavrin/stance-detection/blob/master/model_matchingLSTM_wdev.py
When I load GoogleNews word2vec file, it gives me error.
ValueError: Cannot create a tensor proto whose content is larger than 2GB.
The stacktrace starts from line https://github.com/borislavmavrin/stance-detection/blob/master/model_matchingLSTM_wdev.py#L614,
and then goes to https://github.com/borislavmavrin/stance-detection/blob/master/model_matchingLSTM_wdev.py#L154
Any help here would be appreciated. I don't want to change the structure of this code right now, I am just focused more on results for now as this is just a prototype I want to do on this dataset. If the results are good enough, I might write my own model or improve the existing one.
| 1 | 1 | 0 | 1 | 0 | 0 |
As hard as it is for me to explain my problem, here goes my best:
In a first step, I'm trying to check whether a certain position of a previously created array is 0 and, if so, replace that 0 with a string. After that, I want to check whether this same position is 0 again and, if it's not, join the previous string with a new one.
For better understanding I will show a piece of my code:
room1=np.array([["chair","table","book","computer","person"],[0,0,0,0,0]])
The above is the array(or matrix)
if int(room1[1,k])==0:
room1[1,k]=tipoObjF[1] #tipoObjF[1] being the string I want to replace the 0
else:
room1[1,k]=room1[1,k]+tipoObjF[1]
Here is where I want to do as mentioned before: check if a certain position is 0 and, if it is, replace it with a string; otherwise just join both strings.
When I run it, the following error appears:
ValueError: invalid literal for int() with base 10: 'chair1'
I hope I was able to explain my problem properly.
This error appears in a project I'm working on using ROS; 'chair1' is the first string that replaces the 0, and it is what should be joined in the else statement, making 'chair1chair' the result I'm expecting.
Thank you in advance for anyone willing to help
Edit:
In the end the array should look as follows:
room1=np.array([["chair","table","book","computer","person"],["chair1chair",0,0,0,0]])
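A likely explanation (an inference from the traceback, not something I can verify against the full project): a NumPy array mixing strings and numbers is stored under a single string dtype, so the 0s are really the string `'0'`, and `int(room1[1,k])` fails as soon as a cell holds `'chair1'`. Comparing against the string avoids the conversion entirely; converting to object dtype also prevents longer strings from being truncated to the fixed string width:

```python
import numpy as np

room1 = np.array([["chair", "table", "book", "computer", "person"],
                  [0, 0, 0, 0, 0]])
print(room1.dtype)             # a string dtype such as '<U8': the 0s became '0'

def record(room, k, label):
    if room[1, k] == '0':      # compare as a string, no int() needed
        room[1, k] = label
    else:
        room[1, k] = room[1, k] + label

room1 = room1.astype(object)   # object dtype, so strings can grow past 8 chars
record(room1, 0, 'chair1')
record(room1, 0, 'chair')
print(room1[1, 0])             # 'chair1chair'
```

An alternative design is a dict mapping object names to detection strings, which sidesteps the mixed-type array altogether.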
| 1 | 1 | 0 | 0 | 0 | 0 |
I started to do the medical image analysis for a project.
In this project I have images of human kidney(s) with and without stones. The aim is to predict if the given new image has stone or not.
I chose the KNN classifier model to do classification but I do not understand the image processing. I have some knowledge on segmentation. I can convert it into array for processing but I need some pointers to understand the process.
Image - https://i.stack.imgur.com/9FDUM.jpg
| 1 | 1 | 0 | 1 | 0 | 0 |
I am training an RNN on the following task: Given a sequence of thirty words, and then classify the sequence into binary class.
Is there a benefit to having more than 30 cells (LSTM, GRU or plain RNN) in my network?
I've seen many examples online where similar networks are trained with multiple layers that each have 100 cells, but this does not make sense to me.
How does it help to have more cells than the length of the sequence? (in my case this length is 30)
I'm confused because from my understanding, each cell takes in two inputs
1. A new element of the sequence
2. The output from the previous cell
So after 30 cells, there will be no new sequence elements to input into the cell. Each cell will just be processing the output of the previous cell (receiving no new info).
I am using LSTM cells for this task (however, I'm guessing the actual type of RNN cell used is irrelevant).
When GRU units are same as my sequence length
visible = Input(shape=(30,))
print(np.shape(visible ))
embed=Embedding(vocab_size,2)(visible)
print(np.shape(embed ))
x2=keras.layers.GRU(30, return_sequences=True)(embed)
print(np.shape(x2))
shapes:
(?, 30)
(?, 30, 2)
(?, ?, 30)
When GRU units are not the same as my sequence length
visible = Input(shape=(30,))
print(np.shape(visible ))
embed=Embedding(vocab_size,2)(visible)
print(np.shape(embed ))
x2=keras.layers.GRU(250, return_sequences=True)(embed)
print(np.shape(x2))
shapes:
(?, 30)
(?, 30, 2)
(?, ?, 250)
How does the shape change from (?, 30, 2) to (?, ?, 250), or even to (?, ?, 30)?
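The confusion may dissolve once "units" is read as the width of the hidden state vector, not the number of cells laid end to end: one cell (one set of weights) is applied at each of the 30 time steps, and `units` only sets how wide its output at each step is. That is why the output shape is `(batch, timesteps, units)` regardless of whether units equals 30. A small sketch (my own helpers, not a Keras API) makes the arithmetic explicit; the parameter count uses the classic GRU formulation, while Keras 2.x's default `reset_after=True` adds one extra bias per gate:

```python
def gru_output_shape(batch, timesteps, units, return_sequences=True):
    """Shape of a GRU layer's output: one `units`-wide vector per time step."""
    return (batch, timesteps, units) if return_sequences else (batch, units)

def gru_param_count(input_dim, units):
    """3 gates, each with input weights, recurrent weights and a bias."""
    return 3 * (input_dim * units + units * units + units)

print(gru_output_shape(None, 30, 30))    # (None, 30, 30)
print(gru_output_shape(None, 30, 250))   # (None, 30, 250)
print(gru_param_count(2, 30))            # independent of the 30 time steps
```

Since the weights never depend on the sequence length, more units means a larger state to carry information across the 30 steps, not more steps; whether 250 helps over 30 is an empirical capacity question.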
| 1 | 1 | 0 | 1 | 0 | 0 |
It might be that I am trying to work with data structures that don't fit my needs; however, given this:
import itertools
listOfFileData = [['[', 'Emma', 'by', 'Jane', 'Austen'] ,['[', 'Persuasion', 'by', 'Jane', 'Austen'] ,['[', 'Sense', 'and', 'Sensibility', 'by'] ,
['[', 'The', 'King', 'James', 'Bible'] ,['[', 'Poems', 'by', 'William', 'Blake'] ,['[', 'Stories', 'to', 'Tell', 'to'] ,
['[', 'The', 'Adventures', 'of', 'Buster'] ,['[', 'Alice', "'", 's', 'Adventures'] ,
['[', 'The', 'Ball', 'and', 'The'] ,['[', 'The', 'Wisdom', 'of', 'Father'] ,['[', 'The', 'Man', 'Who', 'Was'] ,
['[', 'The', 'Parent', "'", 's'] ,['[', 'Moby', 'Dick', 'by', 'Herman'] ,['[', 'Paradise', 'Lost', 'by', 'John'] ,
['[', 'The', 'Tragedie', 'of', 'Julius'] ,['[', 'The', 'Tragedie', 'of', 'Hamlet'] ,['[', 'The', 'Tragedie', 'of', 'Macbeth'] ,
['[', 'Leaves', 'of', 'Grass', 'by'] ]
#print(len(listOfFileData)) # should show 18 files, each is a list of tokens.
filesDataPairsList = list(itertools.combinations(listOfFileData, 2)) # requires itertools library file(s)
filesDataPairsListTesting = []
for i in range(2,19,2): # 2,4,6,8,...18
combinationOfPairsList = list(itertools.combinations(listOfFileData[:i], 2)) # make a list, of increasingly sized pairs
filesDataPairsListTesting.append(combinationOfPairsList)
#print(len(filesDataPairsListTesting)) # should have 9 lists
#print(len(filesDataPairsListTesting[8])) # should have 153 pairs
How do I get to each pair within a loop? I've been working on something like the following, but I'm not getting there.
for permutations in filesDataPairsListTesting:
# print(len(permutations)) # if uncommented should read, 1,6,15,28....153
for numOfPairs in range(len(permutations)):
for pair in permutations:
permutations[0]
permutations[1]
I would like to access each list pair [[],[]], with the intention of being able to process each of the documents from each pair within the for block.
So with element 0 in my filesDataPairsListTesting list, I could just get to each item easily, as
permutations[0]
permutations[1]
But the 2nd element then has 6 pairs. So I have to iterate through element 1 six times (how?) so that I can get to permutations[0] and permutations[1]. It's this part that is throwing me.
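Since every element of `filesDataPairsListTesting` is itself a list of 2-tuples, two nested loops with tuple unpacking reach both documents of every pair, and no index bookkeeping is needed. A sketch on a tiny stand-in list of tokenized "files":

```python
from itertools import combinations

docs = [['Emma', 'by', 'Austen'], ['Persuasion', 'by', 'Austen'],
        ['The', 'King', 'James', 'Bible']]

# One list per batch size, mirroring filesDataPairsListTesting: 1 pair, then 3.
pairs_lists = [list(combinations(docs[:i], 2)) for i in range(2, len(docs) + 1)]

for pairs in pairs_lists:
    for doc_a, doc_b in pairs:            # unpack each 2-tuple directly
        shared = set(doc_a) & set(doc_b)  # e.g. process the two documents here
        print(len(pairs), shared)
```

In the original code, the inner `for numOfPairs in range(len(permutations))` loop is what multiplies the work; `for doc_a, doc_b in permutations:` alone visits each pair exactly once.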
| 1 | 1 | 0 | 0 | 0 | 0 |
I want to make a personal assistant using artificial intelligence and machine learning techniques. I am using Python 3.7 and I have a question.
When software starts, first it will ask user's name. I want it to get user's name.
name = input('Hey, what is your name?')
#some classifier things
#...
print('Nice to meet you ' + name + '!')
But I want to extract the name correctly even if the user enters a whole sentence.
Here is an example:
Hey, what is your name?
John
Nice to meet you John!
But I want to get name even if person enters like this:
Hey, what is your name?
It's John.
Nice to meet you John!
But I couldn't figure out how to extract just the user's name. I think I should classify the words in the sentence, but I'm not sure. Can you help?
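Not an authoritative answer, but one cheap heuristic (before reaching for a classifier) is to strip common conversational lead-ins and keep the first capitalized word. extract_name and its prefix list below are invented for illustration and cover only a few phrasings:

```python
import re

def extract_name(reply):
    """Heuristically pull a name out of a free-form reply."""
    text = reply.strip().rstrip('.!')
    # Strip common conversational lead-ins (an illustrative, not exhaustive, list).
    text = re.sub(r"^(it'?s|i'?m|i am|my name is|call me)\s+", '', text,
                  flags=re.IGNORECASE)
    # Prefer the first capitalized token; fall back to the first token.
    for token in text.split():
        if token[0].isupper():
            return token.strip('.,!')
    parts = text.split()
    return parts[0] if parts else None

print(extract_name("John"))        # John
print(extract_name("It's John."))  # John
```

A proper solution would use named entity recognition (a PERSON entity), but a heuristic like this handles the two examples above.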
| 1 | 1 | 0 | 1 | 0 | 0 |
I have been working with Deep Q Learning on a Windows 10 machine. I have PyTorch 0.4.1 with an NVIDIA graphics card.
def select_action(self, state):
    probs = F.softmax(self.model(Variable(state, volatile = True))*7)
    action = probs.multinomial()
    return action.data[0,0]
From this section of the code, I keep getting this error:
TypeError: multinomial() missing 1 required positional arguments: "num_samples"
If any other information is needed, it will be provided quickly.
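For reference, a minimal sketch of how newer PyTorch (0.4+) spells this step. It is assumption-laden: torch.randn(1, 4) stands in for self.model(state), and the temperature factor 7 is kept from the question. multinomial() now requires num_samples, and volatile=True was replaced by the torch.no_grad() context:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 4)  # stand-in for self.model(state)

with torch.no_grad():                          # replaces Variable(..., volatile=True)
    probs = F.softmax(logits * 7, dim=1)       # dim should be given explicitly
    action = probs.multinomial(num_samples=1)  # num_samples is now required

print(action.shape)  # torch.Size([1, 1])
```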
| 1 | 1 | 0 | 0 | 0 | 0 |
from sklearn.feature_extraction.text import TfidfVectorizer
filename='train1.txt'
dataset=[]
with open(filename) as f:
    for line in f:
        dataset.append([str(n) for n in line.strip().split(',')])
print (dataset)
tfidf=TfidfVectorizer()
tfidf.fit(dataset)
dict1=tfidf.vocabulary_
print 'Using tfidfVectorizer'
for key in dict1.keys():
    print key + " " + str(dict1[key])
I'm reading strings from the file train1.txt, but executing the statement tfidf.fit(dataset) results in an error that I'm unable to fix. Looking for help.
Error Log:
Traceback (most recent call last):
File "Q1.py", line 52, in <module>
tfidf.fit(dataset)
File "/opt/anaconda2/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 1361, in fit
X = super(TfidfVectorizer, self).fit_transform(raw_documents)
File "/opt/anaconda2/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 869, in fit_transform
self.fixed_vocabulary_)
File "/opt/anaconda2/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 792, in _count_vocab
for feature in analyze(doc):
File "/opt/anaconda2/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 266, in <lambda>
tokenize(preprocess(self.decode(doc))), stop_words)
File "/opt/anaconda2/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 232, in <lambda>
return lambda x: strip_accents(x.lower())
AttributeError: 'list' object has no attribute 'lower'
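The traceback comes from the vectorizer receiving lists where it expects strings: fit wants an iterable of raw documents, and each inner list makes the analyzer call .lower() on a list. A hedged sketch of one fix (with a tiny stand-in dataset in place of the parsed file) is to join each token list back into a string first:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-in for the parsed file: a list of token lists, as in the question.
dataset = [['hello', 'world'], ['tfidf', 'example', 'hello']]

# TfidfVectorizer expects an iterable of strings, so re-join the tokens.
docs = [' '.join(tokens) for tokens in dataset]

tfidf = TfidfVectorizer()
tfidf.fit(docs)
print(sorted(tfidf.vocabulary_))  # ['example', 'hello', 'tfidf', 'world']
```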
| 1 | 1 | 0 | 0 | 0 | 0 |
From my code below:
def dot(docA, docB):
    the_sum = 0
    for (key, value) in docA.items():
        the_sum += value * docB.get(key, 0)
    return the_sum
def cos_sim(docA, docB):
    sim = dot(docA, docB) / (math.sqrt(dot(docA, docA) * dot(docB, docB)))
    return sim
def doc_freq(doclist):
    df = {}
    for doc in doclist:
        for feat in doc.keys():
            df[feat] = df.get(feat, 0) + 1
    return df
def idf(doclist):
    N = len(doclist)
    return {feat: math.log(N / v) for feat, v in doc_freq(doclist).items()}
tf_med=doc_freq(bow_collections["medline"])
tf_wsj=doc_freq(bow_collections["wsj"])
idf_med=idf(bow_collections["medline"])
idf_wsj=idf(bow_collections["wsj"])
print(tf_med)
print(idf_med)
So I've finally managed to get this far, though I can't find information on what I have to do next in terms of Python; the maths is there, but I'd rather not spend hours deciphering it. Just a quick reassurance, this is what I get from tf_med:
{'NUM': 37, 'early': 3, 'case': 3, 'organ': 1, 'transplantation': 1, 'section': 1,
'healthy': 1, 'ovary': 1, 'fertile': 1, 'woman': 1, 'unintentionally': 1,
'unknowingly': 1, 'subjected': 1, 'oophorectomy': 1, 'described': 4, .... , }
And here is what I get from idf_med:
{'NUM': 0.3011050927839216, 'early': 2.8134107167600364, 'case': 2.8134107167600364,
'organ': 3.912023005428146, 'transplantation': 3.912023005428146, 'section':
3.912023005428146, 'healthy': 3.912023005428146, 'ovary': 3.912023005428146, 'fertile':
3.912023005428146, .... , }
Now I don't know how to combine these two to get my TF-IDF, and from there my average cosine similarities. I understand they need to be multiplied, but how do I go about doing that?
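For reassurance rather than a definitive recipe: the multiplication is per-term, so given one document's bag-of-words counts and the idf dictionary, the tf-idf vector is just count times idf for each term. A toy sketch (tfidf_doc is a made-up helper name, and the idf numbers are invented for readability):

```python
# Per-document tf-idf: multiply each term's count by that term's idf weight.
def tfidf_doc(bow, idf):
    return {term: count * idf.get(term, 0.0) for term, count in bow.items()}

idf_weights = {'early': 2.0, 'case': 2.0, 'organ': 4.0}
doc = {'early': 3, 'organ': 1}

weights = tfidf_doc(doc, idf_weights)
print(weights)  # {'early': 6.0, 'organ': 4.0}
```

The resulting weighted dicts can then go straight into the cos_sim function already defined above.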
| 1 | 1 | 0 | 0 | 0 | 0 |
Is there a way to find similar docs like we do in word2vec
Like:
model2.most_similar(positive=['good','nice','best'],
negative=['bad','poor'],
topn=10)
I know we can use infer_vector and feed the result in to find similar documents, but I want to feed many positive and negative examples as we do in word2vec.
Is there any way we can do that? Thanks!
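As an illustration of what such a query does under the hood (a numpy sketch with made-up 2-d "document vectors", not gensim's actual implementation): average the positive vectors, subtract the average of the negatives, and rank everything else by cosine similarity. If memory serves, gensim's Doc2Vec exposes a comparable model.docvecs.most_similar(positive=..., negative=...), but verify that against the gensim docs for your version:

```python
import numpy as np

# Sketch of a word2vec-style analogy query over document vectors:
# average the positives, subtract the average of the negatives,
# then rank the remaining docs by cosine similarity to the query.
def most_similar(doc_vecs, positive, negative, topn=10):
    query = np.mean([doc_vecs[d] for d in positive], axis=0)
    if negative:
        query = query - np.mean([doc_vecs[d] for d in negative], axis=0)
    sims = {}
    for name, vec in doc_vecs.items():
        if name in positive or name in negative:
            continue  # exclude the query docs themselves
        sims[name] = float(np.dot(query, vec) /
                           (np.linalg.norm(query) * np.linalg.norm(vec)))
    return sorted(sims.items(), key=lambda kv: -kv[1])[:topn]

vecs = {'a': np.array([1.0, 0.0]), 'b': np.array([0.9, 0.1]),
        'c': np.array([0.0, 1.0]), 'd': np.array([-1.0, 0.0])}
ranked = most_similar(vecs, positive=['a'], negative=['d'], topn=2)
print(ranked[0][0])  # 'b' is closest to the combined query vector
```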
| 1 | 1 | 0 | 0 | 0 | 0 |
I am using a pretrained Word2Vec model for tweets to create vectors for each word. https://www.fredericgodin.com/software/. I will then compute the average of this and use a classifier to determine sentiment.
My training data is very large and the pretrained Word2Vec model has been trained on millions of tweets, with dimensionality = 400. My problem is that it is taking too long to give vectors to the words in my training data. Is there a way to reduce the time taken to build the word vectors?
Cheers.
| 1 | 1 | 0 | 0 | 0 | 0 |
I have an awfully large corpus as input to my doc2vec training, around 23 million documents streamed using an iterable. I was wondering if it is at all possible to see training progress, for example which iteration it's currently on, words per second, or some similar metric.
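On the progress question: gensim reports training progress (current epoch, words per second, percentage of examples processed) through Python's standard logging module, so raising the gensim logger to INFO before training surfaces it. A minimal sketch:

```python
import logging

# Route log records with timestamps, then let gensim's own
# INFO-level progress messages through during training.
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s')
logging.getLogger('gensim').setLevel(logging.INFO)

print(logging.getLogger('gensim').level == logging.INFO)  # True
```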
I was also wondering how to speed up the performance of doc2vec, other than reducing the size of the corpus. I discovered the workers parameter and I'm currently training on 4 processes; the intuition behind this number was that multiprocessing cannot take advantage of virtual cores. I was wondering if this was the case for the doc2vec workers parameter or if I could use 8 workers instead or even potentially higher (I have a quad-core processor, running Ubuntu).
I have to add that using the unix command top -H reports only around a 15% CPU usage per python process using 8 workers and around 27% CPU usage per process on 4 workers.
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a file with 3 million sentences (approx). Each sentence has around 60 words. I want to combine all the words and find unique words from them.
I tried the following code:
final_list = list()
for sentence in sentence_list:
    words_list = nltk.word_tokenize(sentence)
    words = [word for word in words_list if word not in stopwords.words('english')]
    final_list = final_list + list(set(words))
This code gives unique words but, it's taking too long to process. Around 50k sentences per hour. It might take 3 days to process.
I tried with lambda function too:
final_list = list(map(lambda x: list(set([word for word in x])), sentence_list))
But there is no significant improvement in execution time. Please suggest a better solution with an effective execution time. Parallel processing suggestions are welcome.
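One hedged direction (a sketch with stand-in data): accumulate into a single set, which has O(1) membership and insertion, and hoist the stopword set out of the loop, since stopwords.words('english') rebuilds a list on every call and list concatenation re-scans everything:

```python
# Stand-ins: a precomputed stopword set and a tiny sentence list.
stop_words = {'is', 'a', 'the'}   # in practice: set(stopwords.words('english'))
sentence_list = ['this is a test', 'the test is fast']

unique_words = set()
for sentence in sentence_list:
    for word in sentence.split():            # or nltk.word_tokenize(sentence)
        if word not in stop_words:
            unique_words.add(word)

print(sorted(unique_words))  # ['fast', 'test', 'this']
```

Membership tests against a set instead of a freshly built list, and set.add instead of list concatenation, usually remove most of the quadratic cost here.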
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to convert written numbers to numeric values.
For example, to extract millions from this string:
text = 'I need $ 150000000, or 150 million,1 millions, 15 Million, 15million, 15Million, 15 m, 15 M, 15m, 15M, 15 MM, 15MM, 5 thousand'
To:
'I need $ 150000000, or 150000000,1000000, 15000000, 15000000, 15000000, 15000000, 15000000, 15000000, 15000000, 15000000, 15000000, 5 thousand'
I use this function to remove any separators in the numbers first:
def foldNumbers(text):
    """ to remove "," or "." from numbers """
    text = re.sub('(?<=[0-9])\,(?=[0-9])', "", text) # remove commas
    text = re.sub('(?<=[0-9])\.(?=[0-9])', "", text) # remove points
    return text
And I have written this regex to find all of the possible patterns for common million notations. It 1) finds digits and 2) does a lookahead for common million notations; 3) the "[a-z]?" part handles an optional "s" on million or millions (I have already removed apostrophes).
re.findall(r'(?:[\d\.]+)(?= million[a-z]?|million[a-z]?| Million[a-z]?|Million[a-z]?|m| m|M| M|MM| MM)',text)
which correctly matches Million numbers and returns:
['150', '1', '15', '15', '15', '15', '15', '15', '15', '15', '15']
What I need to do now is to write a replacement pattern to insert "000000" after the digits, or to iterate through and multiply the digits by 100000. I have tried this so far:
re.sub(r'(?:[\d\.]+)(?= million[a-z]?|million[a-z]?| Million[a-z]?|Million[a-z]?|m| m|M| M|MM| MM)', "000000 ", text)
which returns:
'I need $ 150,000,000, or 000000 million,000000 millions, 000000 Million, 000000 million, 000000 Million, 000000 m, 000000 M, 000000 m, 000000 M, 000000 MM, 000000 MM, 5 thousand'
I think I need to do a lookbehind (?<=); however, I haven't worked with this before, and after several attempts I can't seem to work it out.
FYI: my plan is to tackle "millions" first and then replicate the solution for thousands (K), billions (B), trillions (T), and possibly other units such as distances, currencies, etc. I have searched SO and Google for solutions in NLP, text cleaning, and text mining articles but did not find anything.
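One way around needing a lookbehind at all (a sketch using a simplified pattern rather than the full alternation above): re.sub accepts a function as the replacement, so the matched digits can be captured and multiplied instead of overwritten:

```python
import re

text = 'I need 150 million, 15M and 2 Million for this'

def expand_millions(match):
    # Keep the captured digits and scale them by one million.
    return str(int(float(match.group(1)) * 1_000_000))

# Simplified pattern: digits, optional space, m/M, optional "illion(s)"/"M".
result = re.sub(r'([\d.]+)\s*[mM](?:illion[s]?|M)?\b', expand_millions, text)
print(result)  # I need 150000000, 15000000 and 2000000 for this
```

The same function-replacement idea generalizes to thousands, billions, and trillions by swapping the multiplier.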
| 1 | 1 | 0 | 0 | 0 | 0 |
I am struggling with this piece of code. I need to create 1-gram and 2-gram models and map the grams to their frequencies; after that I need to write the two models to one Excel file in two different sheets.
I have got as far as displaying the 2-gram model and its frequencies, but I'm struggling with how to collect the outcome and create the Excel file.
import nltk
nltk.download('punkt')
f = open('data.json','r')
raw = f.read()
tokens = nltk.word_tokenize(raw)
#Create your bigrams
bgs = nltk.bigrams(tokens)
#compute frequency distribution for all the bigrams in the text
fdist = nltk.FreqDist(bgs)
for k, v in fdist.items():
    print(k, v)
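For the Excel part, a hedged sketch (stand-in frequency dicts take the place of the real FreqDist objects, 'ngrams.xlsx' is a hypothetical filename, and pandas.ExcelWriter needs an engine such as openpyxl installed): put each model into a DataFrame and write both to one workbook, one sheet per model:

```python
import pandas as pd

unigram_freq = {('hello',): 2, ('world',): 1}  # stand-in for unigram fdist.items()
bigram_freq = {('hello', 'world'): 1}          # stand-in for bigram fdist.items()

uni_df = pd.DataFrame([(' '.join(k), v) for k, v in unigram_freq.items()],
                      columns=['gram', 'frequency'])
bi_df = pd.DataFrame([(' '.join(k), v) for k, v in bigram_freq.items()],
                     columns=['gram', 'frequency'])

with pd.ExcelWriter('ngrams.xlsx') as writer:
    uni_df.to_excel(writer, sheet_name='unigrams', index=False)
    bi_df.to_excel(writer, sheet_name='bigrams', index=False)
```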
Thank you
| 1 | 1 | 0 | 0 | 0 | 0 |
I have some code that calculates the softmax over time, but there is one line I can't understand. Can anyone explain it to me?
def softmax_over_time(x):
    assert(K.ndim(x) > 2)
    e = K.exp(x - K.max(x, axis=1, keepdims=True))
    s = K.sum(e, axis=1, keepdims=True)
    return e / s
Can anyone explain why we use "x - K.max(x, axis=1, keepdims=True)"?
I would have expected just "x" there; why subtract "K.max(x, axis=1, keepdims=True)"?
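The subtraction is the standard numerical-stability trick rather than a change to the maths: softmax(x) equals softmax(x - c) for any constant c (the factor exp(-c) cancels between numerator and denominator), but exp(x) overflows for large x, while exp(x - max) is at most exp(0) = 1. A numpy sketch of the same idea:

```python
import numpy as np

x = np.array([[1000.0, 1001.0, 1002.0]])

with np.errstate(over='ignore'):  # silence the expected overflow warning
    naive = np.exp(x)             # overflows to inf for large inputs

stable = np.exp(x - x.max(axis=1, keepdims=True))  # exponents are <= 0
softmax = stable / stable.sum(axis=1, keepdims=True)

print(np.isinf(naive).any())  # True: the naive version broke
print(softmax)                # a valid distribution, roughly [0.09, 0.24, 0.67]
```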
| 1 | 1 | 0 | 0 | 0 | 0 |
I've got a problem and don't know how to solve it.
For example, I have a dynamically expanding file which contains lines split by '\n'.
Each line is a message (a string) built from some pattern plus a value part which is specific to that line.
E.x.:
line 1: The temperature is 10 above zero
line 2: The temperature is 16 above zero
line 3: The temperature is 5 degree zero
So, as you see, the constant part (pattern) is
The temperature is zero
Value part:
For line 1 will be: 10 above
For line 2 will be: 16 above
For line 3 will be: 5 degree
Of course it's a very simple example.
In fact there are too many lines and about ~50 patterns in one file.
The value part may be anything: a number, a word, punctuation, etc.!
And my question is - how can I find all possible patterns from data?
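One naive sketch (it assumes lines of equal token length that share a single template, which real logs won't always satisfy, so treat it as a starting point only): compare the lines position by position and mark positions that vary as value slots:

```python
lines = [
    'The temperature is 10 above zero',
    'The temperature is 16 above zero',
    'The temperature is 5 degree zero',
]

tokens = [line.split() for line in lines]
pattern = []
for position in zip(*tokens):
    # Positions shared by every line are part of the pattern;
    # positions that vary are value slots.
    pattern.append(position[0] if len(set(position)) == 1 else '<*>')

print(' '.join(pattern))  # The temperature is <*> <*> zero
```

With ~50 mixed patterns per file, the lines would first need to be clustered (e.g. by token count and shared prefix) before applying this per cluster.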
| 1 | 1 | 0 | 0 | 0 | 0 |
Here is my input data:
data['text'].head()
0 process however afforded means ascertaining di...
1 never occurred fumbling might mere mistake
2 left hand gold snuff box which capered hill cu...
3 lovely spring looked windsor terrace sixteen f...
4 finding nothing else even gold superintendent ...
Name: text, dtype: object
And here is the one hot encoded label (multi-class classification where the number of classes = 3)
[[1 0 0]
[0 1 0]
[1 0 0]
...
[1 0 0]
[1 0 0]
[0 1 0]]
Here is what I think happens step by step, please correct me if I'm wrong:
Converting my input text data['text'] to a bag of indices (sequences)
vocabulary_size = 20000
tokenizer = Tokenizer(num_words = vocabulary_size)
tokenizer.fit_on_texts(data['text'])
sequences = tokenizer.texts_to_sequences(data['text'])
data = pad_sequences(sequences, maxlen=50)
What is happening is that my data['text'], which is of shape (19579,), is being converted into an array of indices of shape (19579, 50), where each word is replaced by the index found in tokenizer.word_index.items()
Loading the glove 100d word vector
embeddings_index = dict()
f = open('/Users/abhishekbabuji/Downloads/glove.6B/glove.6B.100d.txt')
for line in f:
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    embeddings_index[word] = coefs
f.close()
print(embeddings_index)
{'the': array([-0.038194, -0.24487 , 0.72812 , -0.39961 , 0.083172, 0.043953,
-0.39141 , 0.3344 , -0.57545 , 0.087459, 0.28787 , -0.06731 ,
0.30906 , -0.26384 , -0.13231 , -0.20757 , 0.33395 , -0.33848 ,
-0.31743 , -0.48336 , 0.1464 , -0.37304 , 0.34577 , 0.052041,
0.44946 , -0.46971 , 0.02628 , -0.54155 , -0.15518 , -0.14107 ,
-0.039722, 0.28277 , 0.14393 , 0.23464 , -0.31021 , 0.086173,
0.20397 , 0.52624 , 0.17164 , -0.082378, -0.71787 , -0.41531 ,
0.20335 , -0.12763 , 0.41367 , 0.55187 , 0.57908 , -0.33477 ,
-0.36559 , -0.54857 , -0.062892, 0.26584 , 0.30205 , 0.99775 ,
-0.80481 , -3.0243 , 0.01254 , -0.36942 , 2.2167 , 0.72201 ,
-0.24978 , 0.92136 , 0.034514, 0.46745 , 1.1079 , -0.19358 ,
-0.074575, 0.23353 , -0.052062, -0.22044 , 0.057162, -0.15806 ,
-0.30798 , -0.41625 , 0.37972 , 0.15006 , -0.53212 , -0.2055 ,
-1.2526 , 0.071624, 0.70565 , 0.49744 , -0.42063 , 0.26148 ,
-1.538 , -0.30223 , -0.073438, -0.28312 , 0.37104 , -0.25217 ,
0.016215, -0.017099, -0.38984 , 0.87424 , -0.72569 , -0.51058 ,
-0.52028 , -0.1459 , 0.8278 , 0.27062 ], dtype=float32),
So what we have now are the word vectors for every word of 100 dimensions.
Creating the embedding matrix using the glove word vector
vocabulary_size = 20000
embedding_matrix = np.zeros((vocabulary_size, 100))
for word, index in tokenizer.word_index.items():
    if index > vocabulary_size - 1:
        break
    else:
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None:
            embedding_matrix[index] = embedding_vector
So we now have a vector of 100 dimensions for EACH of the 20000 words.
And here is the architecture:
model_glove = Sequential()
model_glove.add(Embedding(vocabulary_size, 100, input_length=50, weights=[embedding_matrix], trainable=False))
model_glove.add(Dropout(0.5))
model_glove.add(Conv1D(64, 5, activation='relu'))
model_glove.add(MaxPooling1D(pool_size=4))
model_glove.add(LSTM(100))
model_glove.add(Dense(3, activation='softmax'))
model_glove.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model_glove.summary())
I get
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_7 (Embedding) (None, 50, 100) 2000000
_________________________________________________________________
dropout_7 (Dropout) (None, 50, 100) 0
_________________________________________________________________
conv1d_7 (Conv1D) (None, 46, 64) 32064
_________________________________________________________________
max_pooling1d_7 (MaxPooling1 (None, 11, 64) 0
_________________________________________________________________
lstm_7 (LSTM) (None, 100) 66000
_________________________________________________________________
dense_7 (Dense) (None, 3) 303
=================================================================
Total params: 2,098,367
Trainable params: 98,367
Non-trainable params: 2,000,000
_________________________________________________________________
The input to the above architecture will be the training data
array([[ 0, 0, 0, ..., 4867, 22, 340],
[ 0, 0, 0, ..., 12, 327, 2301],
[ 0, 0, 0, ..., 255, 388, 2640],
...,
[ 0, 0, 0, ..., 17, 15609, 15242],
[ 0, 0, 0, ..., 9517, 9266, 442],
[ 0, 0, 0, ..., 3399, 379, 5927]], dtype=int32)
of shape (19579, 50)
and labels as one hot encodings..
My trouble is understanding what exactly happens to my (19579, 50) input as it goes through each of the following lines:
model_glove = Sequential()
model_glove.add(Embedding(vocabulary_size, 100, input_length=50, weights=[embedding_matrix], trainable=False))
model_glove.add(Dropout(0.5))
model_glove.add(Conv1D(64, 5, activation='relu'))
model_glove.add(MaxPooling1D(pool_size=4))
I understand why we need model_glove.add(Dropout(0.5)): it shuts down some hidden units with probability 0.5 so the model doesn't become overly complex. But I have no idea why we need the Conv1D(64, 5, activation='relu') and MaxPooling1D(pool_size=4) layers, or how their output goes into the model_glove.add(LSTM(100)) unit.
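On the shape question, the layer summary above can be reproduced with plain arithmetic, no Keras needed: Conv1D with 64 filters of width 5 slides over the 50 embedded timesteps extracting local n-gram-like features ('valid' padding, so no zeros are added), MaxPooling1D(4) downsamples by keeping each window's max, and LSTM(100) then reads the remaining 11 timesteps of 64 features and emits its final 100-dimensional state. A sketch of just the length bookkeeping:

```python
def conv1d_out_len(steps, kernel_size):
    return steps - kernel_size + 1      # 'valid' padding: no zero-padding added

def maxpool1d_out_len(steps, pool_size):
    return steps // pool_size           # non-overlapping pooling windows

after_conv = conv1d_out_len(50, 5)
after_pool = maxpool1d_out_len(after_conv, 4)

print(after_conv)  # 46, matching conv1d_7's (None, 46, 64)
print(after_pool)  # 11, matching max_pooling1d_7's (None, 11, 64)
```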
| 1 | 1 | 0 | 0 | 0 | 0 |
In a pandas column I have lists of POS tags stored as strings. I figured these must be strings because print(dataset['text_posTagged'][0][0]) prints [.
dataset['text_posTagged']
['VBP', 'JJ', 'NNS', 'VBP', 'JJ', 'IN', 'PRP', 'VBP', 'TO', 'VB', 'PRP', 'RB', 'VBZ', 'DT', 'JJ', 'PRP$', 'NN', 'NN', 'NN', 'NN', 'VBZ', 'JJ']
['UH', 'DT', 'VB', 'VB', 'PRP$', 'NN', 'TO', 'JJ', 'IN', 'PRP', 'MD', 'VB', 'DT', 'VBZ', 'DT', 'NN', 'NN']
['NN', 'VBD', 'NN', 'NN', 'NN', 'DT', 'IN', 'IN', 'NN', 'IN', 'NN', 'NN', 'VBD', 'IN', 'JJ', 'NN', 'NN']
To convert this to an actual list I used the following.
dataset['text_posTagged'] = dataset.text_posTagged.apply(lambda x: literal_eval(x))
However, this gives ValueError: malformed node or string: nan
When I applied the same in a column that has list of words, it works fine.
dataset['text']
['are', 'red', 'violets', 'are', 'blue', 'if', 'you', 'want', 'to', 'buy', 'us', 'here', 'is', 'a', 'clue', 'our', 'eye', 'amp', 'cheek', 'palette', 'is', 'al']
['is', 'it', 'too', 'late', 'now', 'to', 'say', 'sorry']
['our', 'amazonian', 'clay', 'full', 'coverage', 'foundation', 'comes', 'in', '40', 'shades', 'of', 'creamy', 'goodness']
After applying literal_eval the same way, the following prints "are":
dataset['text'] = dataset.text.apply(lambda x: literal_eval(x))
print(dataset['text'][0][0])
What is wrong with applying literal_eval on list of POS tags? How to do it properly?
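A hedged guess at the difference: literal_eval is fine on these tag strings, but "ValueError: malformed node or string: nan" suggests some rows of text_posTagged are missing values (float NaN) rather than strings. A sketch of guarding against that (the empty-list fallback is one arbitrary choice):

```python
from ast import literal_eval

rows = ["['VBP', 'JJ']", float('nan'), "['NN', 'DT']"]  # NaN mimics a missing cell

# Parse only actual strings; substitute an empty list for missing values.
parsed = [literal_eval(r) if isinstance(r, str) else [] for r in rows]
print(parsed)  # [['VBP', 'JJ'], [], ['NN', 'DT']]
```

With pandas, the same guard can live inside the lambda, or the NaN rows can be dropped or filled before applying literal_eval.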
| 1 | 1 | 0 | 0 | 0 | 0 |
I need to lemmatize some words with Python
I have installed NLTK, but I get the following errors
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
I have installed nltk and imported the library beforehand.
I would like to know why I get this error.
Thanks
| 1 | 1 | 0 | 0 | 0 | 0 |
I tried to use ELMo embeddings (ElmoEmbedder) from the DeepPavlov library. It works really slowly: 64 seconds per 100 sentences.
I tried to increase mini_batch_size, but it didn't speed up the algorithm.
Is it possible to speed up ElmoEmbedder?
| 1 | 1 | 0 | 0 | 0 | 0 |
I am an NLP novice trying to learn, and would like to better understand how Named Entity Recognition (NER) is implemented in practice, for example in popular Python libraries such as spaCy.
I understand the basic concept behind it, but I suspect I am missing some details.
From the documentation, it is not clear to me for example how much preprocessing is done on the text and annotation data; and what statistical model is used.
Do you know if:
For the model to work, does the text have to go through chunking before the model is trained? Otherwise it wouldn't be able to do anything useful, right?
Are the text and annotations typically normalized prior to the training of the model? So that if a named entity is at the beginning or middle of a sentence it can still work?
Specifically in spaCy, how are things implemented concretely? Is it a HMM, CRF or something else that is used to build the model?
Apologies if this is all trivial, I am having some trouble finding easy to read documentation on NER implementations.
| 1 | 1 | 0 | 0 | 0 | 0 |
I am a novice in Python and NLP, and my problem is how to find out the intent of given questions. For example, I have sets of questions and answers like this:
question:What is NLP; answer: NLP stands for Natural Language Processing
I ran a basic POS tagger on the given questions; for the question above I get the entity [NLP]. I also did string matching using this algorithm.
Basically I face the following issues:
If user ask what is NLP then it will return exact answers
If user ask meaning of NLP then it fail
If user ask Definition of NLP then it fail
If user ask What is Natural Language Processing then it fail
So how should I identify the user's intent for a given question, given that in my case string matching or pattern matching does not work?
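Not a full solution, but a sketch of one step beyond exact matching: normalize the question, map synonym phrasings onto a canonical entity, and treat any of several trigger phrases as the same 'define' intent. All names below (SYNONYMS, TRIGGERS, get_intent) are invented for illustration:

```python
# Map synonym phrasings onto canonical entities, and several trigger
# phrases onto one canonical 'define' intent.
SYNONYMS = {'natural language processing': 'nlp'}
TRIGGERS = ('what is', 'meaning of', 'definition of', 'define')

answers = {('define', 'nlp'): 'NLP stands for Natural Language Processing'}

def get_intent(question):
    q = question.lower().strip('?!. ')
    for phrase, canonical in SYNONYMS.items():
        q = q.replace(phrase, canonical)
    for trigger in TRIGGERS:
        if trigger in q:
            entity = q.split(trigger)[-1].strip()
            return ('define', entity)
    return None

for q in ['What is NLP', 'Meaning of NLP?', 'What is Natural Language Processing']:
    print(answers[get_intent(q)])
```

All four failure cases above collapse to the same ('define', 'nlp') key; a real system would learn such paraphrases with an intent classifier instead of hand-listing them.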
| 1 | 1 | 0 | 1 | 0 | 0 |
Say I have
item : 0123456789
I need to find the word "item" in the text document and store "0123456789" in some variable.
Is there a way to do this in R or Python?
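In Python, one sketch (with hypothetical file content inlined as a string) is a regex that anchors on the word "item" and captures the value after the colon:

```python
import re

text = 'some header\nitem : 0123456789\nother : xyz'  # hypothetical file content

match = re.search(r'\bitem\s*:\s*(\S+)', text)
value = match.group(1) if match else None
print(value)  # 0123456789
```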
| 1 | 1 | 0 | 0 | 0 | 0 |
My goal is to identify whether two sentences are duplicates.
I'm trying to compare the parse trees of the two sentences.
I have extracted the tags from the parse trees in the following format:
['ROOT', 'SBARQ', 'WHADVP', 'WRB', 'SQ', 'VP', 'VBP', 'ADJP', 'RB', 'JJ', 'NP', 'NNP', 'NP', 'NP', 'NNS', 'VP', 'VBG', 'NP', 'NP', 'NNS', 'SBAR', 'WHNP', 'WDT', 'S', 'VP', 'VBP', 'ADVP', 'RB', 'VP', 'VBN', 'PP', 'IN', 'NP', 'NNP', '.']
['ROOT', 'SBARQ', 'WHADVP', 'WRB', 'SQ', 'VBP', 'NP', 'NNS', 'VP', 'VB', 'NP', 'NP', 'NNP', 'NNS', 'SBAR', 'WHNP', 'WDT', 'S', 'VP', 'MD', 'VP', 'VB', 'VP', 'VBN', 'ADVP', 'RB', 'PP', 'IN', 'NP', 'NNP', '.']
I want to get the length of common sublists of the two lists. In the above case, the results would be 4('ROOT', 'SBARQ', 'WHADVP', 'WRB')+5('SBAR', 'WHNP', 'WDT', 'S', 'VP')+2('ADVP', 'RB')+5('PP', 'IN', 'NP', 'NNP', '.').
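For the common-sublist lengths, difflib in the standard library already does this: SequenceMatcher.get_matching_blocks() returns the non-overlapping matching runs between two sequences, and summing their sizes gives the total described above. A small sketch on shortened tag lists:

```python
from difflib import SequenceMatcher

a = ['ROOT', 'SBARQ', 'WHADVP', 'WRB', 'SQ', 'VP']
b = ['ROOT', 'SBARQ', 'WHADVP', 'WRB', 'SQ', 'VBP']

# get_matching_blocks() returns non-overlapping matching runs (plus a
# zero-length terminator); summing their sizes gives the total overlap.
blocks = SequenceMatcher(None, a, b).get_matching_blocks()
common = sum(block.size for block in blocks)
print(common)  # 5
```

One caveat: for sequences of 200+ elements, SequenceMatcher's autojunk heuristic can skew results, so consider passing autojunk=False.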
Or do you have any other solutions can make use of the parse tree for the similarity of two sentences.
One more issue: what is the fastest way to get the parse tree? I have more than 300,000 sentence pairs to compare...
Thanks in advance!
| 1 | 1 | 0 | 0 | 0 | 0 |
When I try to install es_core_news_sm
with this command
python -m spacy download es_core_news_sm
with conda I get this error
No module named spacy.__main__; 'spacy' is a package and cannot be directly executed.
Thank you so much!
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to make a POS tagger for determiners and prepositions of Sorani Kurdish. I am using the following code to put a tag after each preposition or determiner in my Kurdish text.
import os
SOR = open("SOR-1.txt", "r+", encoding='utf-8')
old_text = SOR.read()
punkt = [".", "!", ",", ":", ";"]
text = ""
for i in old_text:
    if i in punkt:
        text += " " + i
    else:
        text += i
d = {"DET":["ئێمە" , "ئێوە" , "ئەم" , "ئەو" , "ئەوان" , "ئەوەی", "چەند" ], "PREP":["بۆ","بێ","بێجگە","بە","بەبێ","بەدەم","بەردەم","بەرلە","بەرەوی","بەرەوە","بەلای","بەپێی","تۆ","تێ","جگە","دوای","دەگەڵ","سەر","لێ","لە","لەبابەت","لەباتی","لەبارەی","لەبرێتی","لەبن","لەبەینی","لەبەر","لەدەم","لەرێ","لەرێگا","لەرەوی","لەسەر","لەلایەن","لەناو","لەنێو","لەو","لەپێناوی","لەژێر","لەگەڵ","ناو","نێوان","وەک","وەک","پاش","پێش","" ], "punkt":[".", ",", "!"]}
text = text.split()
for w in text:
    for pos in d:
        if w in d[pos]:
            SOR.write(w + "/" + pos + " ")
SOR.close()
What I want is to add the POS tags inside the text, after each of the words in the defined dictionary, but the result is a separate list of words and POS tags at the end of the file.
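What the loop above does instead is append only the matched words to the end of the file (after read(), the r+ handle sits at the end, and unmatched words are never written). A sketch of the in-memory alternative, with stand-in English word lists in place of the Kurdish ones: tag as you go, keep unmatched words unchanged, then write the whole tagged text back in one go:

```python
# Stand-in English word lists in place of the Kurdish DET/PREP lists.
d = {'DET': ['this', 'that'], 'PREP': ['in', 'on']}

def tag_text(text, tag_dict):
    out = []
    for word in text.split():
        # First matching tag wins; untagged words pass through unchanged.
        tag = next((pos for pos, words in tag_dict.items() if word in words), None)
        out.append(word + '/' + tag if tag else word)
    return ' '.join(out)

tagged = tag_text('look in this box', d)
print(tagged)  # look in/PREP this/DET box
```

The real version would read the file, build the tagged string, then reopen the file in 'w' mode (or seek(0) and truncate()) to write the result back, rather than appending with the 'r+' handle.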
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm new to LDA and doing some experiments with Python + LDA and some sample datasets.
I already got some very interesting results, and now I have a question that I couldn't find an answer to so far.
Since I worked with customer reviews/ratings of a certain app, the documents contain different topics (e.g. one review talks about the app's performance, price, and functionality). So to my understanding I have three topics within one document.
My question: Is LDA capable to assign more than one topic to one document?
Thank you for your answer!
| 1 | 1 | 0 | 0 | 0 | 0 |
Hello, I have a Python string like this:
s = "Hello world\n\n"
I want to count the number of trailing newline characters in the string; in this case it is 2.
If I use s.strip() it just removes the newlines and returns the string, but not how many newline characters it removed in the process.
How do I get the count as well as remove the trailing newline characters?
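A sketch of one way (assuming only trailing '\n' characters should count, so rstrip is limited to newlines): strip and compare lengths:

```python
s = "Hello world\n\n"

stripped = s.rstrip('\n')            # removes only trailing newlines
count = len(s) - len(stripped)       # how many characters were removed

print(count)           # 2
print(repr(stripped))  # 'Hello world'
```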
Thanks.
| 1 | 1 | 0 | 0 | 0 | 0 |
I tried to get the morphological attributes of a verb using spaCy like below:
import spacy
from spacy.lang.it.examples import sentences
nlp = spacy.load('it_core_news_sm')
doc = nlp('Ti è piaciuto il film?')
token = doc[2]
nlp.vocab.morphology.tag_map[token.tag_]
output was:
{'pos': 'VERB'}
But I want to extract
V__Mood=Cnd|Number=Plur|Person=1|Tense=Pres|VerbForm=Fin": {POS: VERB}
Is it possible to extract the mood, tense, number, and person information as specified in the tag map https://github.com/explosion/spacy/blob/master/spacy/lang/it/tag_map.py using spaCy?
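If the tag-map lookup only yields {'pos': 'VERB'} but the fine-grained token.tag_ string itself carries the features (as in the V__Mood=Cnd|... entries of that tag map), the features can be parsed straight out of the string. A sketch on a literal tag string, no model required:

```python
tag = 'V__Mood=Cnd|Number=Plur|Person=1|Tense=Pres|VerbForm=Fin'

# Split the coarse POS from the feature string, then the features into a dict.
pos, _, feats = tag.partition('__')
features = dict(f.split('=') for f in feats.split('|')) if feats else {}

print(pos)               # V
print(features['Mood'])  # Cnd
print(features['Tense']) # Pres
```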
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm using spaCy for POS tagging and getting the error below. I have a dataframe with the column "Description", from which I need to extract the POS for each word.
Dataframe :
No. Description
1 My net is not working
2 I will be out for dinner
3 Can I order food
4 Wifi issue
Code :
import pandas as pd
read_data = pd.read_csv('C:\\Users\\abc\\def\\pqr\\Data\\training_data.csv', encoding="utf-8")
entity = []
for parsed_doc in read_data['Description']:
    doc = nlp(parsed_doc)
    a = [(X.text, X.tag_) for X in doc.ents]
    entity.append(a)
The above code is throwing error:
Error : AttributeError: 'spacy.tokens.span.Span' object has no
attribute 'tag_'
However, the same code works fine for the label attribute, and also if I use a single sentence:
doc = nlp('can you please help me to install wifi')
for i in doc:
    print(i.text, i.tag_)
| 1 | 1 | 0 | 0 | 0 | 0 |
I trained a Gensim W2V model on 500K sentences (around 60K words) and I want to calculate the perplexity.
What would be the best way to do so?
And for 60K words, how can I check what a proper amount of data would be?
Thanks
| 1 | 1 | 0 | 0 | 0 | 0 |
Is there a way to iterate through each state, force the environment to go to that state, and then take a step and then use the "info" dictionary returned to see what are all the possible successor states?
Or an even easier way to recover all possible successor states for each state, perhaps somewhere hidden?
I saw online that MuJoCo (or something like that) has a set_state function, but I don't want to create a new environment; I just want to set the state of the ones already provided by OpenAI Gym.
Context: trying to implement topological order value iteration, which requires making a graph where each state has an edge to any state that any action could ever transition it to.
I realize that obviously in some games that's just not provided, but for the ones where it is, is there a way?
(Other than the brute force method of running the game and taking every step I haven't yet taken at whatever state I land at until I've reached all states and seen everything, which depending on the game could take forever)
This is my first time using OpenAI Gym, so please explain in as much detail as you can. For example, I have no idea what Wrappers are.
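For what it's worth, Gym's "toy text" environments (FrozenLake, Taxi, and similar) expose exactly this: env.unwrapped.P is a dict mapping state -> action -> list of (prob, next_state, reward, done), so a successor graph can be read off without stepping the environment at all. The block below uses a tiny hand-written P as a stand-in for the real table:

```python
# Stand-in transition table in the same shape as env.unwrapped.P
# for a gym "toy text" DiscreteEnv: P[state][action] -> [(prob, next_state, reward, done), ...]
P = {
    0: {0: [(1.0, 0, 0.0, False)], 1: [(0.5, 1, 0.0, False), (0.5, 2, 1.0, True)]},
    1: {0: [(1.0, 2, 1.0, True)], 1: [(1.0, 1, 0.0, False)]},
    2: {0: [(1.0, 2, 0.0, True)], 1: [(1.0, 2, 0.0, True)]},
}

# Successor set per state: every next_state reachable under any action.
successors = {s: {ns for a in P[s] for (_, ns, _, _) in P[s][a]} for s in P}
print(successors)  # {0: {0, 1, 2}, 1: {1, 2}, 2: {2}}
```

With a real env, replacing the hand-written P with env.unwrapped.P gives the edge set needed for topological-order value iteration; environments without such a table (e.g. Atari) really do require the brute-force approach.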
Thanks!
| 1 | 1 | 0 | 1 | 0 | 0 |
I want to convert text to sequences using Keras with the Indonesian language, but the Keras tokenizer only detects known words.
How do I add known words in Keras? Or is there any other solution for converting text to sequences?
from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(num_words=n_most_common_words, filters='!"#$%&()*+,-./:;<=>?@[\]^_`{|}~', lower=True)
tokenizer.fit_on_texts(concated['TITLE'].values)
txt = ["bisnis di indonesia sangat maju"]
seq = list(tokenizer.texts_to_sequences_generator(txt))
The "seq" variable results in an empty array if I use Indonesian words; it works perfectly if I use English words. How do I use Keras for different languages, or how can I add known words to Keras?
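A sketch of what is (I believe) actually happening: Tokenizer is not tied to any language; texts_to_sequences simply drops words that were never seen by fit_on_texts. The toy reimplementation below (vocabulary and texts_to_sequences here are stand-ins, not Keras internals) shows why fitting on the Indonesian titles too makes the words "known":

```python
# Toy stand-ins for tokenizer.word_index / texts_to_sequences (not Keras API).
vocabulary = {'business': 1, 'in': 2}

def texts_to_sequences(texts, vocab):
    # Words missing from the fitted vocabulary are silently dropped.
    return [[vocab[w] for w in t.split() if w in vocab] for t in texts]

print(texts_to_sequences(['bisnis di indonesia'], vocabulary))  # [[]]

vocabulary.update({'bisnis': 3, 'di': 4, 'indonesia': 5})       # "fit" on Indonesian text
print(texts_to_sequences(['bisnis di indonesia'], vocabulary))  # [[3, 4, 5]]
```

So the fix is to include the Indonesian texts in the corpus passed to fit_on_texts, rather than fitting only on concated['TITLE'].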
Thanks
| 1 | 1 | 0 | 0 | 0 | 0 |