| text (string, lengths 0–27.6k) | python (int64, 0 or 1) | DeepLearning or NLP (int64, 0 or 1) | Other (int64, 0 or 1) | Machine Learning (int64, 0 or 1) | Mathematics (int64, 0 or 1) | Trash (int64, 0 or 1) |
|---|---|---|---|---|---|---|
My dataset structure:
Text: 'Good service, nice view, location'
Tag: '{SERVICE#GENERAL, positive}, {HOTEL#GENERAL, positive}, {LOCATION#GENERAL, positive}'
And the point here is that I don't know how to structure my data frame. Any recommendations would be really helpful. Thank you.
| 1 | 1 | 0 | 0 | 0 | 0 |
I have been trying for a few days to install spaCy, and it keeps giving me different errors.
Now it gives me this error (link to the error attached):
(https://drive.google.com/file/d/1V_n1WB-HlVPTHHlsBJ0zpdQYYHlInM-W/view?usp=sharing)
| 1 | 1 | 0 | 1 | 0 | 0 |
I'm currently working on generating distractors for multiple-choice questions. The training set consists of a question, an answer and 3 distractors, and I need to predict 3 distractors for the test set. I have gone through many research papers on this, but the problem in my case is unique. Here the questions and answers are for a comprehension passage (usually a long text story), but the passage they are based on is not given, nor is any supporting text for the question. Moreover, the answers and distractors are not single words but sentences. The research papers I went through mostly worked with some kind of supporting text. Even the SciQ dataset had some supporting text, but the problem I'm working on is different.
This research paper was the one which I thought closely went by what I wanted and I'm planning to implement this. Below is an excerpt from the paper which the authors say worked better than NN models.
We solve DG as the following ranking problem. Problem: Given a candidate distractor set D and an MCQ dataset M = {(q_i, a_i, {d_i1, ..., d_ik})}_{i=1}^{N}, where q_i is the question stem, a_i is the key, D_i = {d_i1, ..., d_ik} ⊆ D are the distractors associated with q_i and a_i, find a point-wise ranking function r: (q_i, a_i, d) → [0, 1] for d ∈ D, such that distractors in D_i are ranked higher than those in D − D_i.
My questions are: a) From what I understood, the above lines say we first create a big list containing all the distractors in the dataset, and then we create a point-wise ranking function with respect to all distractors for every question? So if we have n questions and d distractors, we will have an (n x d) matrix where the point-wise function values range between 0 and 1. Also, a question's own distractors should be ranked higher than the rest. Right?
To learn the ranking function, we investigate two types of models: feature-based models and NN-based models.
Feature-based Models: Given a tuple (q, a, d), a feature-based model first transforms it to a feature vector φ(q, a, d) ∈ R^d with the function φ. We design the following features for DG, resulting in a 26-dimensional feature vector:
Emb Sim. Embedding similarity between q and d and the similarity between a and d.
POS Sim. Jaccard similarity between a and d’s POS tags.
ED. The edit distance between a and d.
Token Sim. Jaccard similarities between q and d’s tokens, a and d’s tokens, and q and a’s tokens.
Length. a and d’s character and token lengths and the difference of lengths.
Suffix. The absolute and relative length of a and d’s longest common suffix.
Freq. Average word frequency in a and d.
Single. Singular/plural consistency of a and d.
Wiki Sim.
My question: Does this feature generation idea apply to both word distractors and sentence distractors? (As per the paper, they claim it does.)
Apart from all of these, I have other simple questions such as should I remove stopwords here?
I'm new to NLP. So any suggestions about which SOTA implementation would work here would be very helpful. Thanks in advance.
| 1 | 1 | 0 | 1 | 0 | 0 |
I'm working on an NLP project using the Amazon digital music reviews dataset. I'm preprocessing all the reviews by lemmatizing, stemming, tokenizing, removing punctuation and stopwords...
However, I got stuck on a problem. Is there a way to preprocess the text by telling Python:
`if there are words like 'new york', 'los angeles', 'hip hop', then do not split them but merge them: 'new_york', 'los_angeles', 'hip_hop'`
?
I do not want to map all of them manually, and I tried to play with bigrams and with POS tags but with no success.
Can you help me?
| 1 | 1 | 0 | 0 | 0 | 0 |
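For the collocation question above, one possible approach (a sketch, not from the original post) is gensim's Phrases model, which learns frequent bigrams from the corpus itself and joins them with an underscore; it assumes the reviews are already tokenized, and the thresholds below are illustrative:

```python
from gensim.models.phrases import Phrases, Phraser

# Tokenized reviews: a list of token lists (tiny illustrative corpus).
tokenized_reviews = [['i', 'love', 'hip', 'hop', 'from', 'new', 'york'],
                     ['new', 'york', 'hip', 'hop', 'is', 'the', 'best']]

# Learn bigrams that co-occur often enough to count as collocations.
phrases = Phrases(tokenized_reviews, min_count=1, threshold=1)
bigram = Phraser(phrases)

print(bigram[tokenized_reviews[0]])
# e.g. ['i', 'love', 'hip_hop', 'from', 'new_york']
```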
I want to remove digits, except for the word '3d'.
I've tried some methods but failed.
Please look at my simple code below:
import re

s = 'd3 4 3d'
rep_ls = re.findall('([0-9]+[a-zA-Z]*)', s)
>> ['3', '4', '3d']
for n in rep_ls:
    if n == '3d':
        continue
    s = s.replace(n, '')  # replaces the substring everywhere, including inside '3d'
>> s = 'd d'
>> expected = 'd 3d'
| 1 | 1 | 0 | 0 | 0 | 0 |
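A possible fix for the question above (a sketch, not from the original post): work token by token, so a replacement cannot bleed into another token such as '3d'.

```python
import re

def strip_digits_except(text, keep=('3d',)):
    out = []
    for tok in text.split():
        if tok in keep:
            out.append(tok)                      # keep the whole token as-is
        else:
            cleaned = re.sub(r'\d+', '', tok)    # drop digits inside the token
            if cleaned:                          # drop tokens that were digits only
                out.append(cleaned)
    return ' '.join(out)

print(strip_digits_except('d3 4 3d'))            # -> 'd 3d'
```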
I am trying to break down the text column of a dataframe and get the top words per row/document. I have the top words for the whole dataframe; in this example they are 'machine' and 'learning', both with counts of 8. However, I'm unsure how to break down the top words per document instead of for the whole dataframe.
Below are the results for the top words for the dataframe as a whole:
machine 8
learning 8
important 2
think 1
significant 1
import pandas as pd
y = ['machine learning. i think machine learning rather significant machine learning',
'most important aspect is machine learning. machine learning very important essential',
'i believe machine learning great, machine learning machine learning']
x = ['a','b','c']
practice = pd.DataFrame(data=y,index=x,columns=['text'])
What I am expecting is next to the text column, is another column that indicates the top word. For Example for the word 'Machine' the dataframe should look like:
a / … / 3
b / … / 2
c / … / 3
| 1 | 1 | 0 | 0 | 0 | 0 |
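A sketch for the question above (not from the original post): scikit-learn's CountVectorizer gives per-row counts, which can then be looked up for one word of interest; `get_feature_names_out` assumes scikit-learn >= 1.0.

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

y = ['machine learning. i think machine learning rather significant machine learning',
     'most important aspect is machine learning. machine learning very important essential',
     'i believe machine learning great, machine learning machine learning']
practice = pd.DataFrame(data=y, index=['a', 'b', 'c'], columns=['text'])

vec = CountVectorizer()
counts = pd.DataFrame(vec.fit_transform(practice['text']).toarray(),
                      index=practice.index, columns=vec.get_feature_names_out())

practice['machine_count'] = counts['machine']   # per-row count of one word
practice['top_word'] = counts.idxmax(axis=1)    # most frequent word in each row
print(practice[['machine_count', 'top_word']])
```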
I'm assembling a twitter hashtag dictionary using Python. The keys are the hashtag itself and the corresponding entry is a large collection of tweets that contain this hashtag appended end-to-end. I've got a separate list of all hashtagless tweets and am adding them to dictionary entries according to cosine similarity. Everything is working but is VERY slow (a few hours for 4000 tweets). The nested for loops are giving me O(N^2) runtime. Does anyone have any ideas on how I could improve my runtime? Any suggestions will be greatly appreciated!
taglessVects = normalize(vectorizer.transform(needTags))
dictVects = normalize(vectorizer.transform(newDict))
# newDict contains: newDict[hashtag]: "tweets that used that hashtag"
# needTags is a list of all the tweets that didn't use a hashtag
for dVect, entry in zip(dictVects, newDict):
    for taglessVect, tweet in zip(taglessVects, needTags):
        if cosine_similarity(taglessVect, dVect) > .9:
            newDict[entry] = newDict[entry] + ' ' + tweet
return newDict
| 1 | 1 | 0 | 0 | 0 | 0 |
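One way to avoid the nested Python loops in the question above: compute all pairwise similarities in a single call and only loop over the matches. A sketch reusing the variable names from the question (not from the original post):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# One matrix of similarities: rows = tagless tweets, columns = hashtag entries.
sims = cosine_similarity(taglessVects, dictVects)     # shape (len(needTags), len(newDict))

hashtags = list(newDict)                              # same order used to build dictVects
for i, tweet in enumerate(needTags):
    for j in np.where(sims[i] > 0.9)[0]:              # only the entries above the threshold
        newDict[hashtags[j]] += ' ' + tweet
```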
I have a set of documents (3000), each of which contains a short description. I want to use a Word2Vec model to see if I can cluster these documents based on the description.
I'm doing it in the following way, but I am not sure if this is a "good" way to do it. I would love to get feedback.
I'm using Google's trained w2v model.
wv = gensim.models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz',binary=True,encoding="ISO-8859-1", limit = 100000)
Each document is split into words where stop words are removed, and I have used stemming as well.
My initial idea was to fetch the word vector for each word in each documents description, average it, and then cluster based on this.
doc2vecs = []
for i in range(0, len(documents_df['Name'])):
    vec = [0 for k in range(300)]
    for j in range(0, len(documents_df['Description'][i])):
        if documents_df['Description'][i][j] in wv:
            vec += wv[documents_df['Description'][i][j]]
    doc2vecs.append(vec/300)
I'm then finding similarities using
similarities = squareform(pdist(doc2vecs, 'cosine'))
Which returns a matrix of the cosine between each vector in doc2vec.
I then try to cluster the documents.
num_clusters = 2
km = cluster.KMeans(n_clusters=num_clusters)
km.fit(doc2vecs)
So basically what I am wondering is:
Is this method of clustering the average word vector for each word in the document a reasonable way to cluster the documents?
| 1 | 1 | 0 | 0 | 0 | 0 |
I am successfully converting documents using this module available on TensorFlow Hub.
The output for each document is a 512-dimensional vector; however, this is too large for my application, and I would like to reduce the dimensionality, which the module itself does not provide.
I can see a few options:
Use another package with a lower dimensionality output.
Use something such as PCA or tSNE to reduce the dimensions.
The problem with using PCA or t-SNE is that they need to be fitted on many example vectors. This would mean that as new documents arrived and were converted to 512-dim vectors, I would need to keep fitting another model and then updating the old document vectors, which would be a huge issue in my application.
Are there any other dimensionality reduction techniques which can operate on a single data point?
| 1 | 1 | 0 | 1 | 0 | 0 |
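One option for the question above (a sketch, not from the original post): a fixed random projection in the spirit of the Johnson-Lindenstrauss lemma needs no fitting to data, so it can be created once and applied to every new 512-dim vector individually; the target size of 64 is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 64                                     # target dimensionality (assumption)
# Fixed Gaussian projection matrix: create once, reuse for every incoming vector.
proj = rng.normal(size=(512, k)) / np.sqrt(k)

def reduce_dim(vec_512):
    return vec_512 @ proj                  # works on a single vector, no refitting

doc_vec = rng.normal(size=512)             # stand-in for one encoder output
print(reduce_dim(doc_vec).shape)           # (64,)
```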
The code reads data from a specific column in an Excel file (in my case I used columns='profile').
The result is a dataframe like this:
profile
0 https://scontent-lga3-1.xx.fbcdn.net/v/t1.0-1/...
1 https://scontent-lga3-1.xx.fbcdn.net/v/t1.0-1/...
2 https://scontent-lga3-1.xx.fbcdn.net/v/t1.0-1/...
So, I try to loop over the data in the dataframe. My problem is that the loop includes the header ('profile') as well, so it throws an error. Below is my work:
results = []
for result in df:
    result = CF.face.detect(result)
    if result == []:
        #do something
    else:
        #do something
print(results)
The error I got from this code (invalid because it loops over 'profile' as well) is:
status_code: 400
code: InvalidURL
code: InvalidURL
message: Invalid image URL.
My question is: how do I write the code so that it loops over all the data within the column (excluding the 'profile' header)? I am not sure whether putting df in `for result in df` is the correct way.
| 1 | 1 | 0 | 0 | 0 | 0 |
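For the question above, iterating over `df['profile']` yields the cell values rather than the column names; a sketch reusing the `CF.face.detect` call from the question (not from the original post):

```python
results = []
for url in df['profile']:            # iterates over the column's values, not its name
    detection = CF.face.detect(url)  # same call as in the question
    if detection:
        results.append(detection)
    else:
        results.append(None)         # no face found for this URL
print(results)
```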
I'm trying to learn NLP with Python. Although I work with a variety of programming languages, I'm looking for some kind of from-the-ground-up solution that I can put together to build a product with a high standard of spelling and grammar, like Grammarly.
I've tried some approaches with Python: https://pypi.org/project/inflect/
spaCy for parts of speech.
Could someone point me in the direction of some kind of fully fledged API that I can pull apart and try to work out how to get to a decent standard of English, like Grammarly?
Many thanks,
Vince.
| 1 | 1 | 0 | 0 | 0 | 0 |
I am working on a text dataset containing messages from users on a website. Please check the image in the link, as Stack is not allowing me to post the image directly.
dataframe of the first five rows
Reading those messages, I want to find out the intent of the users: whether they are buyers, sellers or neutral. I have tried topic modelling using both LDA and NMF, but it's not giving me answers. I am getting very different topics, and I cannot find a way to relate them to buyer, seller or neutral. And I cannot manually label the data because it's a huge dataset containing 200,000 rows. So which technique or algorithm can I use to solve this problem?
| 1 | 1 | 0 | 1 | 0 | 0 |
In text processing tasks, one of the first things to do is figure out how often each word appears in a given document. In this task, you will be completing a function that returns the unique word frequencies of a tokenized word document.
write code to complete the count_frequencies function. The input argument (arr), is a list of strings, representing a tokenized word document. An example input would look like this:
['the', 'dog', 'got', 'the', 'bone']
Your count_frequencies function should return a list of tuples, where the first element in the tuple is a unique word from arr and the second element in the tuple is the frequency with which it appears in arr. The returned list should be sorted in alphabetical order by the first element of each tuple. For the above example, the correct output would be the following list of tuples:
**[('bone', 1), ('dog', 1), ('got', 1), ('the', 2)]**
A couple more examples (with solutions) are shown below:
**Input: ['we', 'came', 'we', 'saw', 'we', 'conquered']**
**Solution: [('came', 1), ('conquered', 1), ('saw', 1), ('we', 3)]**
**Input: ['a', 'square', 'is', 'a', 'rectangle']**
**Solution: [('a', 2), ('is', 1), ('rectangle', 1), ('square', 1)]**
You can write your own test cases in the input text box.
In this case, your test case should be space-separated words, representing an input list for the count_frequencies function.
| 1 | 1 | 0 | 0 | 0 | 0 |
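A minimal sketch of the exercise above using collections.Counter (one possible solution, not the official one):

```python
from collections import Counter

def count_frequencies(arr):
    # Count each unique word, then sort the (word, count) pairs alphabetically.
    return sorted(Counter(arr).items())

print(count_frequencies(['the', 'dog', 'got', 'the', 'bone']))
# [('bone', 1), ('dog', 1), ('got', 1), ('the', 2)]
```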
I used Chris McCormick's tutorial on BERT with pytorch-pretrained-bert to get a sentence embedding as follows:
tokenized_text = tokenizer.tokenize(marked_text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [1] * len(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()
with torch.no_grad():
    encoded_layers, _ = model(tokens_tensor, segments_tensors)

# Holds the list of 12 layer embeddings for each token
# Will have the shape: [# tokens, # layers, # features]
token_embeddings = []
batch_i = 0  # only one sentence in the batch
# For each token in the sentence...
for token_i in range(len(tokenized_text)):
    # Holds 12 layers of hidden states for each token
    hidden_layers = []
    # For each of the 12 layers...
    for layer_i in range(len(encoded_layers)):
        # Lookup the vector for `token_i` in `layer_i`
        vec = encoded_layers[layer_i][batch_i][token_i]
        hidden_layers.append(vec)
    token_embeddings.append(hidden_layers)
Now, I am trying to get the final sentence embedding by summing the last 4 layers as follows:
summed_last_4_layers = [torch.sum(torch.stack(layer)[-4:], 0) for layer in token_embeddings]
But instead of getting a single torch vector of length 768 I get the following:
[tensor([-3.8930e+00, -3.2564e+00, -3.0373e-01, 2.6618e+00, 5.7803e-01,
-1.0007e+00, -2.3180e+00, 1.4215e+00, 2.6551e-01, -1.8784e+00,
-1.5268e+00, 3.6681e+00, ...., 3.9084e+00]), tensor([-2.0884e+00, -3.6244e-01, ....2.5715e+00]), tensor([ 1.0816e+00,...-4.7801e+00]), tensor([ 1.2713e+00,.... 1.0275e+00]), tensor([-6.6105e+00,..., -2.9349e-01])]
What did I get here? How do I pool the sum of the last four layers into a single sentence vector?
Thank you!
| 1 | 1 | 0 | 0 | 0 | 0 |
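For the question above: `summed_last_4_layers` is a list with one 768-dim tensor per token (each token's last four layers summed). To get a single sentence vector, one common option (a sketch, not from the original tutorial) is to pool across tokens, e.g. by averaging:

```python
import torch

# Stack the per-token vectors into a (num_tokens, 768) tensor, then average over tokens.
sentence_embedding = torch.mean(torch.stack(summed_last_4_layers), dim=0)
print(sentence_embedding.shape)   # torch.Size([768])
```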
I am practicing with building an article summarizer. I built something using the script below. I would like to export the model and use it for deployment but can't find a way around it.
Here is the script for the analyzer.
#import necessary libraries
import re
import gensim
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
import networkx as nx
file = open("somefile.txt","r")
data=file.readlines()
file.close()
#define preprocessing steps
#lower case
#remove everything inside []
#remove 's
#fetch only ascii characters
def preprocessor(text):
    newString = text.lower()
    newString = re.sub("[\(\[].*?[\)\]]", "", newString)
    newString = re.sub("'s","",newString)
    newString = re.sub("[^'0-9.a-zA-Z]", " ", newString)
    tokens=newString.split()
    return (" ".join(tokens)).strip()
#call above function
text=[]
for i in data:
    text.append(preprocessor(i))
all_sentences=[]
for i in text:
    sentences=i.split(".")
    for i in sentences:
        if(i!=''):
            all_sentences.append(i.strip())
# tokenizing the sentences for training word2vec
tokenized_text = []
for i in all_sentences:
    tokenized_text.append(i.split())
#define word2vec model
model_w2v = gensim.models.Word2Vec(
tokenized_text,
size=200, # desired no. of features/independent variables
window=5, # context window size
min_count=2,
sg = 0, # 1 for cbow model
hs = 0,
negative = 10, # for negative sampling
workers= 2, # no.of cores
seed = 34)
#train word2vec
model_w2v.train(tokenized_text, total_examples= len(tokenized_text), epochs=model_w2v.epochs)
#define function to obtain sentence embedding
def word_vector(tokens, size):
    vec = np.zeros(size).reshape((1, size))
    count = 0.
    for word in tokens:
        try:
            vec += model_w2v[word].reshape((1, size))
            count += 1.
        except KeyError:  # handling the case where the token is not in vocabulary
            continue
    if count != 0:
        vec /= count
    return vec
#call above function
wordvec_arrays = np.zeros((len(tokenized_text), 200))
for i in range(len(tokenized_text)):
    wordvec_arrays[i,:] = word_vector(tokenized_text[i], 200)
# similarity matrix
sim_mat = np.zeros([len(wordvec_arrays), len(wordvec_arrays)])
#compute similarity score
for i in range(len(wordvec_arrays)):
    for j in range(len(wordvec_arrays)):
        if i != j:
            sim_mat[i][j] = cosine_similarity(wordvec_arrays[i].reshape(1,200), wordvec_arrays[j].reshape(1,200))[0,0]
#Generate a graph
nx_graph = nx.from_numpy_array(sim_mat)
#compute pagerank scores
scores = nx.pagerank(nx_graph)
#sort the scores
sorted_x = sorted(scores.items(), key=lambda kv: kv[1],reverse=True)
sent_list=[]
for i in sorted_x:
    sent_list.append(i[0])
#extract top 10 sentences
num=10
summary=''
for i in range(num):
    summary=summary+all_sentences[sent_list[i]]+'. '
print(summary)
I want to have an exported model that I can pass to a flask API later. I need help with that.
| 1 | 1 | 0 | 0 | 0 | 0 |
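For the deployment question above, one observation (not from the original post): the only trained artifact in this pipeline is the word2vec model, while the remaining steps are deterministic code, so a sketch is to persist the word2vec model with gensim's native save/load and wrap the rest in a function called from the Flask endpoint:

```python
from gensim.models import Word2Vec

# At the end of the training script: persist the trained embeddings.
model_w2v.save('w2v_summarizer.model')

# Later, e.g. at Flask app start-up: load once and reuse for every request.
model_w2v = Word2Vec.load('w2v_summarizer.model')
```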
I am looking for the steps/process to extract information from an invoice using machine learning/NLP/deep learning techniques. What steps/process should be followed?
The approach needs clarification on the points below.
Suppose there are invoices from 2 vendors: how should a model be created to extract the values for the fields below? Will it involve keyword extraction? Does custom NER need to be implemented, and if so, how? How should the training data be created for this?
Invoice Number
Invoice Date
Invoice Amount
Address
| 1 | 1 | 0 | 0 | 0 | 0 |
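For the question above, a sketch of what custom-NER training data could look like in spaCy's (text, {"entities": [...]}) format; the example strings, label names and character offsets are hypothetical, not from the original post:

```python
# Each entity is (start_char, end_char, LABEL) into the raw string.
TRAIN_DATA = [
    ("Invoice No: 12345 dated 01/01/2020",
     {"entities": [(12, 17, "INVOICE_NUMBER"), (24, 34, "INVOICE_DATE")]}),
    ("Total amount due: 1,250.00 USD",
     {"entities": [(18, 30, "INVOICE_AMOUNT")]}),
]
```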
I am able to use universal dependencies parser from Stanford in NLTK, But is there any way to use universal dependencies, enhanced in NLTK? As shown here Stanford Parser
Thanks
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a df with a variable named url. Each URL string in url has a unique six-character alphanumeric ID in it. I've been trying to extract a specific part of each string, the article_id, from all the URLs, and then add it to the df as a new variable.
For example, xwpd7w is the article_id for https://www.vice.com/en_us/article/xwpd7w/how-a-brooklyn-gang-may-have-gotten-crazy-rich-dealing-for-el-chapo
How do I extract the article_ids from all the URLs in the df based on their position next to /article/, using any method, regex or not?
I have so far done the following:
df.url.str.split()
ex output: [https://www.vice.com/en_au/article/j539yy/smo...
df['cutcurls'] = df.url.str.join(sep=' ')
ex output: h t t p s : / / w w w . v i c e . c o m / e n
Any ideas?
| 1 | 1 | 0 | 0 | 0 | 0 |
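A sketch for the question above (not from the original post), capturing the path segment that follows /article/ with pandas' str.extract:

```python
# expand=False returns a Series, so it can be assigned directly as a new column.
df['article_id'] = df['url'].str.extract(r'/article/([^/]+)/', expand=False)
# e.g. 'https://www.vice.com/en_us/article/xwpd7w/how-a-...' -> 'xwpd7w'
```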
In Keras, I can have the following code:
docs
Out[9]:
['Well done!',
'Good work',
'Great effort',
'nice work',
'Excellent!',
'Weak',
'Poor effort!',
'not good',
'poor work',
'Could have done better.']
labels = array([1,1,1,1,1,0,0,0,0,0])
voc_size = 50
encoded = [one_hot(d, voc_size) for d in docs]
max_length = 4
padded_docs = pad_sequences(encoded, maxlen=max_length, padding='post')
My understanding is that the one_hot encoding already creates an equal-length encoding of each doc based on the vocabulary size. So why does each doc need to be padded again?
EDIT: another example for more clarification:
A one-hot encoding is a representation of categorical variables (e.g. cat, dog, rat) as binary vectors (e.g. [1,0,0], [0,1,0], [0,0,1]).
So in this case, cat, dog and rat are encoded as equal length of vector. How is this different from the example above?
| 1 | 1 | 0 | 1 | 0 | 0 |
I have built a binary text classifier and trained it to recognize sentences for clients as 'New' or 'Return'. My issue is that real data may not always have a clear distinction between new and return, even to an actual person reading the sentence.
My model was trained to 0.99 accuracy with supervised learning using logistic regression.
#train model
def train_model(classifier, feature_vector_train, label, feature_vector_valid,valid_y, is_neural_net=False):
    classifier.fit(feature_vector_train, label)
    predictions = classifier.predict(feature_vector_valid)
    if is_neural_net:
        predictions = predictions.argmax(axis=-1)
    return classifier , metrics.accuracy_score(predictions, valid_y)

# Linear Classifier on Count Vectors
model, accuracy = train_model(linear_model.LogisticRegression(), xtrain_count, train_y, xtest_count,test_y)
print ('::: Accuracy on Test Set :::')
print ('Linear Classifier, BoW Vectors: ', accuracy)
And this would give me an accuracy of 0.998.
I can now pass a whole list of sentences to test this model, and it will catch whether a sentence has a 'new' or 'return' word. But I need an evaluation metric, because some sentences have no chance of being new or return, since real data is messy as always.
My question is: What evaluation metrics can I use so that each new sentence that gets passed through the model shows a score?
Right now I only use the following code:
with open('realdata.txt', 'r') as f:
    samples = f.readlines()
vecs = count_vect.transform(samples)
visit = model.predict(vecs)
num_to_label= {0:'New', 1:'Return'}
for s, p in zip(samples, visit):
    #printing each sentence with the predicted label
    print(s + num_to_label[p])
For example I would expect
Sentence Visit (Metric X)
New visit 2nd floor New 0.95
Return visit Evening Return 0.98
Afternoon visit North New 0.43
Therefore I'd know not to trust results with metrics below a certain percentage, because the tool isn't reliable there.
| 1 | 1 | 0 | 0 | 0 | 0 |
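For the scoring question above, scikit-learn's LogisticRegression exposes per-class probabilities through predict_proba, which can serve as the per-sentence confidence; a sketch reusing the names from the question (not from the original post):

```python
probs = model.predict_proba(vecs)               # shape (n_sentences, 2)
for s, p, pr in zip(samples, visit, probs):
    confidence = pr.max()                       # probability of the predicted class
    print(f'{s.strip()}  {num_to_label[p]}  {confidence:.2f}')
```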
I'm using Keras for the layers, optimizer, and model, and my model is Sequential.
I've got two DQN networks and I'm making them duel each other in a simulated environment. However, after about 35 episodes (different each time) the script just stops without any errors. I've isolated my issue to somewhere around where the agent runs the prediction model on the current state to get the action. The process is called but never completes, and the script just stops without any error. How can I debug this issue?
| 1 | 1 | 0 | 1 | 0 | 0 |
I am building an LSTM for text classification with Keras, and am playing around with different input sentences to get a sense of what is happening, but I'm getting strange outputs. For example:
Sentence 1 = "On Tuesday, Ms. [Mary] Barra, 51, completed a remarkable personal odyssey when she was named as the next chief executive of G.M.--and the first woman to ascend to the top job at a major auto company."
Sentence 2 = "On Tuesday, Ms. [Mary] Barra, 51, was named as the next chief executive of G.M.--and the first woman to ascend to the top job at a major auto company."
The model predicts the class "objective" (0) with output 0.4242 when Sentence 2 is the only element in the input array. It predicts "subjective" (1) with output 0.9061 for Sentence 1. If both are fed (as separate strings) in the same input array, both are classified as "subjective" (1), but Sentence 1 outputs 0.8689 and Sentence 2 outputs 0.5607. It seems as though they are affecting each other's outputs. It does not matter which index in the input array each sentence has.
Here is the code:
max_length = 500
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(num_words=5000, lower=True,split=' ')
tokenizer.fit_on_texts(dataset["sentence"].values)
#print(tokenizer.word_index) # To see the dictionary
X = tokenizer.texts_to_sequences(dataset["sentence"].values)
X = pad_sequences(X, maxlen=max_length)
y = np.array(dataset["label"])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
X_train = sequence.pad_sequences(X_train, maxlen=max_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_length)
embedding_vector_length = 32
###LSTM
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
model = Sequential()
model.add(Embedding(5000, embedding_vector_length, input_length=max_length))
model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='sigmoid'))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
from keras import optimizers
sgd = optimizers.SGD(lr=0.9)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=3, batch_size=64)
# save model
model.save('LSTM.h5')
I then reloaded the model in a separate script and am feeding it hard-coded sentences:
model = load_model('LSTM.h5')
max_length = 500
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(num_words=5000, lower=True,split=' ')
tokenizer.fit_on_texts(article_sentences)
#print(tokenizer.word_index) # To see the dictionary
X = tokenizer.texts_to_sequences(article_sentences)
X = pad_sequences(X, maxlen=max_length)
prediction = model.predict(X)
print(prediction)
for i in range(len(X)):
    print('%s\nLabel:%d' % (article_sentences[i], prediction[i]))
I set the random seed before training the model and in the script where I load the model, am I missing something when loading the model? Should I be arranging my data differently?
| 1 | 1 | 0 | 1 | 0 | 0 |
I have trained a classifier model using logistic regression on a set of strings; it classifies strings into 0 or 1. Currently I can only test one string at a time. How can I have my model run through more than one sentence at a time, maybe from a .csv file, so I don't have to input each sentence individually?
def train_model(classifier, feature_vector_train, label, feature_vector_valid,valid_y, is_neural_net=False):
    classifier.fit(feature_vector_train, label)
    # predict the labels on validation dataset
    predictions = classifier.predict(feature_vector_valid)
    if is_neural_net:
        predictions = predictions.argmax(axis=-1)
    return classifier , metrics.accuracy_score(predictions, valid_y)
then
model, accuracy = train_model(linear_model.LogisticRegression(), xtrain_count, train_y, xtest_count,test_y)
Currently how I test my model
sent = ['here I copy a string']
# converting text to count bag of words vectors
count_vect = CountVectorizer(analyzer='word', token_pattern=r'\w{1,}',ngram_range=(1, 2))
x_feature_vector = count_vect.transform(sent)
pred = model.predict(x_feature_vector)
and I get the sentence and its prediction
I wanted the model to classify all my new sentences at once and give a classification to each sentence.
| 1 | 1 | 0 | 0 | 0 | 0 |
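A sketch for the question above (not from the original post); the file name and column name are assumptions, and it reuses the fitted count_vect and model from the question's code:

```python
import pandas as pd

new_df = pd.read_csv('new_sentences.csv')             # hypothetical file
vectors = count_vect.transform(new_df['sentence'])    # reuse the *fitted* vectorizer
new_df['prediction'] = model.predict(vectors)         # one label per row
print(new_df.head())
```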
I want to compare two lists (result, ground-truth). The output should contain 1 where there is a match and 0 otherwise, aligned with the positions in result. For example:
result= [1,2,3,4,5]
ground-truth=[2,4]
Output= [0,1,0,1,0]
I implemented python code for this:
import numpy as np

def comparedkeground(dke,grd):
    correct=np.zeros(len(dke))
    try:
        for i in range(len(grd)):
            a=dke.index(grd[i])
            correct[a]=1
    except:
        'ValueError'
    return correct
This code gives the correct result for some cases, for example:
d=[1,2,30,4,6, 8, 50, 90, 121]
e=[30, 2, 50, 90]
print(comparedkeground(d,e))
[0. 1. 1. 0. 0. 0. 1. 1. 0.]
cc=['word', 'flags', 'tv', 'nanjo', 'panjo']
ccc=['panjo', 'tv']
print(comparedkeground(cc,ccc))
[0. 0. 1. 0. 1.]
But the same code does not work here:
u=['Lyme-disease vaccine', 'United States', 'Lyme disease', 'Allen Steere']
u1= ['drugs', 'Lyme-disease vaccine', 'Lyme disease']
print(comparedkeground(u,u1))
[0. 0. 0. 0.]
| 1 | 1 | 0 | 0 | 0 | 0 |
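For the question above: the try/except wraps the whole loop, so as soon as one ground-truth item is missing from the result list (here 'drugs'), .index raises ValueError and the remaining items are never checked. A membership-test sketch that avoids this (not from the original post):

```python
import numpy as np

def comparedkeground(dke, grd):
    # 1 where the element of dke also appears in grd, 0 otherwise.
    return np.array([1 if item in grd else 0 for item in dke])

u = ['Lyme-disease vaccine', 'United States', 'Lyme disease', 'Allen Steere']
u1 = ['drugs', 'Lyme-disease vaccine', 'Lyme disease']
print(comparedkeground(u, u1))   # [1 0 1 0]
```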
I can not make Stanford Parser Version 3.5.1 work. I know that newer versions of this tool are available but I have tons of old code using this particular version. This is for an academic course.
I am using Windows 7, JDK 1.8.0_65, python 3.3.3 and NLTK 3.0.2
My environment variables are as follows:
CLASSPATH : C:\Program Files (x86)\stanford-parser-full-2015-01-30\jars\stanford-parser-3.5.1-models.jar;C:\Program Files (x86)\stanford-parser-full-2015-01-30\jars\stanford-parser-3.5.1-sources.jar;C:\Program Files (x86)\stanford-parser-full-2015-01-30\jars\stanford-parser.jar
JAVA_HOME : C:\Program Files\Java\jdk1.8.0_65\bin
Path : C:\ProgramData\Oracle\Java\javapath;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Common Files\Apple\Internet Services\;C:\Program Files\Git\cmd;C:\Program Files (x86)\stanford-parser-full-2015-01-30\jars\
I run this code:
from nltk.parse import stanford
parser = stanford.StanfordParser(model_path='C:\Program Files (x86)\stanford-parser-full-2015-01-30\edu\stanford\lp\models\lexparser\englishPCFG.ser.gz')
parser.raw_parse('I love apples')
And I am getting this error
Loading parser from serialized file C:\Program Files
(x86)\stanford-parser-full-2015-01-30\edu\stanford\lp\models\lexparser\englishPCFG.ser.gz
...
java.io.IOException: Unable to resolve "C:\Program Files
(x86)\stanford-parser-full-2015-01-30\edu\stanford\lp\models\lexparser\englishPCFG.ser.gz"
as either class path, filename or URL
at
edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:463)
at edu.stanford.nlp.io.IOUtils.readStreamFromString(IOUtils.java:396)
at
edu.stanford.nlp.parser.lexparser.LexicalizedParser.getParserFromSerializedFile(LexicalizedParser.java:599)
at
edu.stanford.nlp.parser.lexparser.LexicalizedParser.getParserFromFile(LexicalizedParser.java:394)
at
edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(LexicalizedParser.java:181)
at
edu.stanford.nlp.parser.lexparser.LexicalizedParser.main(LexicalizedParser.java:1395)
Loading parser from text file C:\Program Files
(x86)\stanford-parser-full-2015-01-30\edu\stanford\lp\models\lexparser\englishPCFG.ser.gz
java.io.IOException: Unable to resolve "C:\Program Files
(x86)\stanford-parser-full-2015-01-30\edu\stanford\lp\models\lexparser\englishPCFG.ser.gz"
as either class path, filename or URL
at
edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:463)
at edu.stanford.nlp.io.IOUtils.readerFromString(IOUtils.java:591)
at
edu.stanford.nlp.parser.lexparser.LexicalizedParser.getParserFromTextFile(LexicalizedParser.java:533)
at
edu.stanford.nlp.parser.lexparser.LexicalizedParser.getParserFromFile(LexicalizedParser.java:396)
at
edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(LexicalizedParser.java:181)
at
edu.stanford.nlp.parser.lexparser.LexicalizedParser.main(LexicalizedParser.java:1395)
Exception in thread "main" java.lang.NullPointerException
at
edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(LexicalizedParser.java:183)
at
edu.stanford.nlp.parser.lexparser.LexicalizedParser.main(LexicalizedParser.java:1395)
Traceback (most recent call last):
  File "C:\Users\Zimtyth\Desktop\PFE\Implémentation\Codes\Code final\Lib_Stanford_Parser.py", line 100, in <module>
    resultat = parse_sent("My name is Melroy and i want to win.")
  File "C:\Users\Zimtyth\Desktop\PFE\Implémentation\Codes\Code final\Lib_Stanford_Parser.py", line 10, in parse_sent
    return parser.raw_parse(sent)
  File "C:\Python33\lib\site-packages\nltk\parse\stanford.py", line 152, in raw_parse
    return next(self.raw_parse_sents([sentence], verbose))
  File "C:\Python33\lib\site-packages\nltk\parse\stanford.py", line 170, in raw_parse_sents
    return self._parse_trees_output(self._execute(cmd, '\n'.join(sentences), verbose))
  File "C:\Python33\lib\site-packages\nltk\parse\stanford.py", line 230, in _execute
    stdout=PIPE, stderr=PIPE)
  File "C:\Python33\lib\site-packages\nltk\internals.py", line 161, in java
    raise OSError('Java command failed : ' + str(cmd))
OSError: Java command failed : ['C:\Program Files\Java\jdk1.8.0_65\bin\java.exe', '-mx1000m', '-cp',
'C:\Program Files (x86)\stanford-parser-full-2015-01-30\jars\stanford-parser.jar;C:\Program Files (x86)\stanford-parser-full-2015-01-30\jars\stanford-parser-3.5.1-models.jar',
'edu.stanford.nlp.parser.lexparser.LexicalizedParser', '-model',
'C:\Program Files (x86)\stanford-parser-full-2015-01-30\edu\stanford\lp\models\lexparser\englishPCFG.ser.gz',
'-sentences', 'newline', '-outputFormat', 'penn', '-encoding', 'utf8',
'c:\users\zimtyth\appdata\local\temp\tmpbf5zdg']
I have already checked a couple of answers on SO about this, like this one, but I still could not make it work. It looks like a Java problem. Please tell me, what am I doing wrong here?
| 1 | 1 | 0 | 0 | 0 | 0 |
I am creating a text summarizer and using a basic model to work with, using the bag-of-words approach.
The code I am running uses the nltk library.
The file being read is a large file with over 2,500,000 words.
Below is the loop I am working with, but it takes over 2 hours to run and complete. Is there a way to optimize this code?
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
from collections import defaultdict
from heapq import nlargest

f= open('Complaints.csv', 'r')
raw = f.read()
len(raw)
tokens = nltk.word_tokenize(raw)
len(tokens)
freq = nltk.FreqDist(text)
top_words = [] # blank dictionary
top_words = freq.most_common(100)
print(top_words)
sentences = sent_tokenize(raw)
print(raw)
ranking = defaultdict(int)
for i, sent in enumerate(raw):  # note: raw is a string, so this iterates over single characters
    for word in word_tokenize(sent.lower()):
        if word in freq:
            ranking[i]+=freq[word]
top_sentences = nlargest(10, ranking, ranking.get)
print(top_sentences)
This is only one file, and the actual deployment has more than 10-15 files of similar size.
How can we improve this?
Please note these are texts from a chat bot and are actual sentences, hence there was no requirement to remove whitespace, do stemming or other text preprocessing.
| 1 | 1 | 0 | 0 | 0 | 0 |
I created a Doc object from a custom list of tokens according to documentation like so:
import spacy
from spacy.tokens import Doc
nlp = spacy.load("my_ner_model")
doc = Doc(nlp.vocab, words=["Hello", ",", "world", "!"])
How do I write named entities tags to doc with my NER model now?
I tried to do doc = nlp(doc), but that didn't work for me, raising a TypeError.
I can't just join my list of words into plain text and do doc = nlp(text) as usual, because in that case spaCy splits some words in my texts into two tokens, which I cannot accept.
| 1 | 1 | 0 | 0 | 0 | 0 |
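For the question above, one approach (a sketch, not from the original post; assumes a spaCy v2/v3-style pipeline): run the loaded pipeline's components directly on the pre-built Doc instead of calling nlp on raw text:

```python
import spacy
from spacy.tokens import Doc

nlp = spacy.load("my_ner_model")                         # model name from the question
doc = Doc(nlp.vocab, words=["Hello", ",", "world", "!"])

# Each pipeline component (including 'ner') is callable on an existing Doc.
for name, component in nlp.pipeline:
    doc = component(doc)

print([(ent.text, ent.label_) for ent in doc.ents])
```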
I would like to try out an idea for an autoencoder.
The model is like this:
input (pictures) - conv2d - pooling - dense - dense (supervised output) - dense - conv - upsampling - output (pictures)
Is it possible to train the NN with desired outputs for both the dense (supervised) output and the picture output? In other words, I want to make a classifier-and-back.
| 1 | 1 | 0 | 0 | 0 | 0 |
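For the question above, yes: a model can be trained against two targets at once. A minimal multi-output sketch with the Keras functional API (layer sizes, losses and the 28x28 input are arbitrary assumptions, not from the original post):

```python
from tensorflow.keras import layers, Model, Input

inp = Input(shape=(28, 28, 1))
x = layers.Conv2D(16, 3, activation='relu', padding='same')(inp)
x = layers.MaxPooling2D()(x)
code = layers.Dense(32, activation='relu')(layers.Flatten()(x))

class_out = layers.Dense(10, activation='softmax', name='class_out')(code)   # supervised head

d = layers.Dense(14 * 14 * 16, activation='relu')(code)
d = layers.Reshape((14, 14, 16))(d)
d = layers.UpSampling2D()(d)
recon_out = layers.Conv2D(1, 3, activation='sigmoid', padding='same', name='recon_out')(d)

model = Model(inp, [class_out, recon_out])
model.compile(optimizer='adam',
              loss={'class_out': 'sparse_categorical_crossentropy', 'recon_out': 'mse'})
# model.fit(x_train, {'class_out': y_train, 'recon_out': x_train}, epochs=10)
```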
I'm interested in using tf-idf with the FastText library, but I have not found a logical way to handle the n-grams. I have used tf-idf with spaCy vectors already, for which I have found several examples like these:
http://dsgeek.com/2018/02/19/tfidf_vectors.html
https://www.aclweb.org/anthology/P16-1089
http://nadbordrozd.github.io/blog/2016/05/20/text-classification-with-word2vec/
But for the FastText library this is not as clear to me, since it has a granularity that isn't that intuitive, e.g.:
For a general word2vec approach I will have one vector for each word; I can count the term frequency of that word and divide its value accordingly.
But for fastText the same word will have several n-grams:
"Listen to the latest news summary" will have n-grams generated by a sliding windows like:
lis ist ste ten tot het...
These n-grams are handled internally by the model so when I try:
model["Listen to the latest news summary"]
I get the final vector directly, hence what I have though is to split the text into n-grams before feeding the model like:
model['lis']
model['ist']
model['ten']
and compute the tf-idf from there, but that seems like an inefficient approach. Is there a standard way to apply tf-idf to vector n-grams like these?
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to use TF-IDF and CountVectorizer in one pipeline.
I did the following:
pipe = Pipeline([
('tfic', TfidfVectorizer()),
('cvec', CountVectorizer()),
('lr' ,LogisticRegression())
])
and the parameters:
pipe_parms = {
'cvec__max_features' : [100,500],
'cvec__ngram_range' : [(1,1),(1,2)],
'cvec__stop_words' : [ 'english', None]
}
gridSearch:
gs = GridSearchCV(pipe, param_grid= pipe_parms, cv=3)
I got an error:
lower not found.
Using either CountVectorizer or TfidfVectorizer works, but not both.
I read other questions on Stack Overflow, and they indicated that I should use TfidfTransformer instead if I want both to work in one pipeline.
Doing that, I got the error 'could not convert string to float'.
Is there a way to use the two vectorizers in one pipeline? Or what other methods do you suggest?
Thank you
Edit:
I found a solution to combine 2 parallel transformers (count and Tfidf vectorizers in this case) by using FeatureUnion.
I wrote a short blog post about it here:
https://link.medium.com/OPzIU0T3N0
| 1 | 1 | 0 | 1 | 0 | 0 |
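A sketch of the FeatureUnion approach mentioned in the edit above (the parameter prefixes and classifier are illustrations): both vectorizers run in parallel on the raw text, and their outputs are concatenated before the classifier.

```python
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.linear_model import LogisticRegression

pipe = Pipeline([
    ('features', FeatureUnion([
        ('tfidf', TfidfVectorizer()),
        ('cvec', CountVectorizer()),
    ])),
    ('lr', LogisticRegression()),
])

# Grid-search parameters are then addressed through the union, e.g.:
pipe_parms = {
    'features__cvec__max_features': [100, 500],
    'features__cvec__ngram_range': [(1, 1), (1, 2)],
}
```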
After training a classifier, I tried passing a few sentences to check whether it classifies them correctly.
During that testing, the results do not look right.
I suppose some variables are not correct.
Explanation
I have a dataframe called df that looks like this:
news type
0 From: mathew <mathew@mantis.co.uk>
Subject: ... alt.atheism
1 From: mathew <mathew@mantis.co.uk>
Subject: ... alt.space
2 From: I3150101@dbstu1.rz.tu-bs.de (Benedikt Ro... alt.tech
...
#each row in the news column is a document
#each row in the type column is the category of that document
Preprocessing:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn import metrics
vectorizer = TfidfVectorizer( stop_words = 'english')
vectors = vectorizer.fit_transform(df.news)
clf = SVC(C=10,gamma=1,kernel='rbf')
clf.fit(vectors, df.type)
vectors_test = vectorizer.transform(df_test.news)
pred = clf.predict(vectors_test)
Attempt to check how some sentences are classified
texts = ["The space shuttle is made in 2018",
"stars are shining",
"galaxy"]
text_features = vectorizer.transform(texts)
predictions = clf.predict(text_features)
for text, predicted in zip(texts, predictions):
    print('"{}"'.format(text))
    print(" - Predicted as: '{}'".format(df.type[pred]))
    print("")
The problem is that it returns this:
"The space shuttle is made in 2018"
- Predicted as: 'alt.atheism NaN
alt.atheism NaN
alt.atheism NaN
alt.atheism NaN
alt.atheism NaN
What do you think?
EDIT
Example
This is kind of how it should look like :
>>> docs_new = ['God is love', 'OpenGL on the GPU is fast']
>>> X_new_counts = count_vect.transform(docs_new)
>>> X_new_tfidf = tfidf_transformer.transform(X_new_counts)
>>> predicted = clf.predict(X_new_tfidf)
>>> for doc, category in zip(docs_new, predicted):
... print('%r => %s' % (doc, twenty_train.target_names[category]))
...
'God is love' => soc.religion.christian
'OpenGL on the GPU is fast' => comp.graphics
| 1 | 1 | 0 | 1 | 0 | 0 |
clean_train_reviews is a list of strings.
Each string is a review, an example is included below:
classic war worlds timothy hines entertaining film obviously goes
great effort lengths faithfully recreate h g wells classic book mr
hines succeeds watched film appreciated fact standard predictable
hollywood fare comes every year e g spielberg version tom cruise
slightest resemblance book obviously everyone looks different things
movie envision amateur critics look criticize everything others rate
movie important bases like entertained people never agree critics
enjoyed effort mr hines put faithful h g wells classic novel found
entertaining made easy overlook critics perceive shortcomings
Using the vectorizer initialized below, the above string is converted into a feature vector of the form:
(sentence_index, feature_index) count
An example is:
(0, 1905) 3
This means "a sentence with id of 0 and feature with id or index of 1905 occurs 3 times in this string.
vectorizer = CountVectorizer(analyzer = "word", \
tokenizer = None, \
preprocessor = None, \
stop_words = None, \
max_features = 5000)
train["sentiment"] is a string of 1's and 0's (1=positive sentiment, 0=negative sentiment)
train_data_features = vectorizer.fit_transform(clean_train_reviews)
forest = RandomForestClassifier(n_estimators = 100)
forest = forest.fit( train_data_features, train["sentiment"] )
My question is:
The random forest is trained on the feature vectors (all numeric values) and the sentiment (which is again numeric). But the test data set is plain English text. When the trained model is run on the test data, how does the model know what to make of the plain text, given that it was only trained on feature vectors, which were only numbers? Or does the forest object retain information about the plain text in the training data?
| 1 | 1 | 0 | 0 | 0 | 0 |
I have regex code
https://regex101.com/r/o5gdDt/8
As you see this code
(?<!\S)(?<![\d,])(?:(?!(?:1[2-9]\d\d|20[01]\d|2020))\d{4,}[\u00BC-\u00BE\u2150-\u215E]?|\d{1,3}(?:,\d{3})+)(?![\d,])[\u00BC-\u00BE\u2150-\u215E]?(?!x)(?!/)
can capture all comma-separated numbers (digits grouped in threes) in text like
"here is 100,100"
"23,456"
"1,435"
and all numbers with 4 or more digits that have no comma separators, like
2345
1234 " here is 123456"
also this kind of number
65,656½
65,656½,
23,123½
The only tiny issue here is that if there is a comma after the first two types, it cannot capture those. For example, it cannot capture
"here is 100,100,"
"23,456,"
"1,435,"
Unfortunately, there are a few numbers in the text which end with a comma... can someone give me an idea of how to modify this to capture those as well?
I have tried to modify it myself; the modified version is:
(?<!\S)(?<![\d,])(?:(?!(?:1[2-9]\d\d|20[01]\d|2020))\d{4,}[\u00BC-\u00BE\u2150-\u215E]?|\d{1,3}(?:,\d{3})+)(?![\d])[\u00BC-\u00BE\u2150-\u215E]?(?!x)(?!/)
Basically I deleted the comma in (?![\d,]), but it causes another problem in my context:
it captures part of a number that is part of an equation, like this:
4,310,747,475x2
57,349,565,416,398x.
see here:
https://regex101.com/r/o5gdDt/10
I know this is kind of a special question; I would be happy to hear your ideas.
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a use case where I want to match a list of words against a list of sentences and bring back the most relevant sentences.
I am working in Python. What I have already tried is using KMeans, where we cluster our set of documents into clusters and then predict which cluster a sentence resides in. But in my case I already have the list of words available.
def getMostRelevantSentences():
    Sentences = ["This is the most beautiful place in the world.",
                 "This man has more skills to show in cricket than any other game.",
                 "Hi there! how was your ladakh trip last month?",
                 "Isn’t cricket supposed to be a team sport? I feel people should decide first whether cricket is a team game or an individual sport."]
    words = ["cricket","sports","team","play","match"]
    #TODO: now this should return me the 2nd and last item from the Sentences list as the words list mostly matches with them
So from the above code I want to return the sentences which are closely matching with the words provided. I don't want to use the supervised machine learning here. Any help will be appreciated.
| 1 | 1 | 0 | 0 | 0 | 0 |
I have hundreds of images of handwritten notes. They were written by different people, but they are in sequence, so you know that, for example, person1 wrote img1.jpg -> img100.jpg. The style of handwriting varies a lot from person to person, but there are parts of the notes which are always fixed; I imagine that could help an algorithm (it helps me!).
I tried Tesseract and it failed pretty badly at recognizing the text. I'm thinking, since each person has about 100 images, is there an algorithm I can train by feeding it a small number of examples, like 5 or fewer, that can learn from that? Or would that not be enough data? From searching around, it looks like I need to implement a CNN (e.g. this paper).
My knowledge of AI is limited though; is this something that I could still do using a library and some studying? If so, what should I do going forward?
| 1 | 1 | 0 | 0 | 0 | 0 |
I run the language translator using TextBlob. It can translate from a single string. However, I tried to loop the TextBlob translator over the data in a dataframe, which might contain a mix of different languages (en and es).
The code I used is :
for content in data:
    blob = TextBlob(content)
for i in data:
    blob = TextBlob(i)
    blob.translate(from_lang = 'en', to = 'es')
The error is :
83 result = result.encode('utf-8')
84 if result.strip() == source.strip():
---> 85 raise NotTranslated('Translation API returned the input string unchanged.')
86
87 def _request(self, url, host=None, type_=None, data=None):
NotTranslated: Translation API returned the input string unchanged.
| 1 | 1 | 0 | 0 | 0 | 0 |
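For the error above: TextBlob raises NotTranslated when the API returns the input unchanged, which happens when a row is already in the target language. A sketch that skips such rows (not from the original post):

```python
from textblob import TextBlob
from textblob.exceptions import NotTranslated

translated = []
for content in data:
    try:
        translated.append(str(TextBlob(content).translate(from_lang='en', to='es')))
    except NotTranslated:
        translated.append(content)   # returned unchanged (e.g. already Spanish) - keep as-is
```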
I have a dataset with the following features
data = {
description:'the tea was amazing, had great taste.'
country:'Chile'
year: 1980
designation:'random'
points: 80
}
I am looking for a way to use these features to build a model to predict points.
The description seems to hold a lot of information about points.
How do I feed this data into a model, and which model should I use?
| 1 | 1 | 0 | 1 | 0 | 0 |
I have this plot, as you can see there are red and blue points.
The points have been randomly plotted. Basically, my task is to identify red and blue areas where there is a higher concentration of the same color.
By "concentration" I mean an area (or more than one area) where blue or red points outnumber the other color by more than 80%.
The problem is that I cannot use a clustering algorithm because I already know the classes; I only need a mechanism that discards areas where there is roughly the same concentration of both colors (about 50% each).
The rules I would use are:
an area where there are more than X points
the points of that area are at least 80% of the same color.
So my goal is to pass a "test point" and understand whether it is in a specific area or not.
Is there an algorithm to do something like that?
NOTE: The areas on the plot are (obviously) manually painted, just to give you a sense of what I need to do programmatically.
| 1 | 1 | 0 | 0 | 0 | 0 |
Hi!
I am trying to understand how BERT deals with text that has numbers within it.
More concretely, I'm trying to find the most similar line between a document (text + numbers) and a specific line (text + numbers).
I tried an example with BERT of 30 characters and cosine similarity:
sentence2 = "I have 2 apple"; score(between sentence1 & sentence2): 0.99000436
sentence3 = "I have 3 apple"; score(between sentence1 & sentence3): 0.98602057
sentence4 = "I have 0 apple"; score(between sentence1 & sentence4): 0.97923964
sentence5 = "I have 2.1 apple"; score(between sentence1 & sentence5): 0.95482975
I do not understand why sentence4 has a smaller score than sentence3 (0 is closer to 1 than 3 is), and why sentence5 scores lowest when 2.1 is closer to 1 than 3 is...
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm new to Python.
I wrote a function to build the bag-of-words representation.
DICT_SIZE = 5000
WORDS_TO_INDEX = words_counts
"""INDEX_TO_WORDS = ####### YOUR CODE HERE #######"""
ALL_WORDS = WORDS_TO_INDEX.keys()
It's the function:
def my_bag_of_words(text, words_to_index, dict_size):
    """
    text: a string
    dict_size: size of the dictionary

    return a vector which is a bag-of-words representation of 'text'
    """
    result_vector = np.zeros(dict_size)
    sentence_tokens = nltk.word_tokenize(text)
    attributes = []
    for i, k in words_to_index.items():
        if k<dict_size:
            attributes.append(i)
    for i in attributes:
        for k in sentence_tokens:
            if i==k:
                result_vector[attributes.index(i)]=+1
    return result_vector
I tried to test the function and it works too
def test_my_bag_of_words():
    words_to_index = {'hi': 0, 'you': 1, 'me': 2, 'are': 3}
    examples = ['hi how are you']
    answers = [[1, 1, 0, 1]]
    for ex, ans in zip(examples, answers):
        if (my_bag_of_words(ex, words_to_index, 4) != ans).any():
            print(my_bag_of_words(ex, words_to_index, 4))
            return "Wrong answer for the case: '%s'" % ex
    return 'Basic tests are passed.'

print(test_my_bag_of_words())
Basic tests are passed.
After I want to apply it to all text in the Dataset
X_train_mybag = sp_sparse.vstack([sp_sparse.csr_matrix(my_bag_of_words(text, WORDS_TO_INDEX, DICT_SIZE)) for text in X_train])
X_val_mybag = sp_sparse.vstack([sp_sparse.csr_matrix(my_bag_of_words(text, WORDS_TO_INDEX, DICT_SIZE)) for text in X_val])
X_test_mybag = sp_sparse.vstack([sp_sparse.csr_matrix(my_bag_of_words(text, WORDS_TO_INDEX, DICT_SIZE)) for text in X_test])
print('X_train shape ', X_train_mybag.shape)
print('X_val shape ', X_val_mybag.shape)
print('X_test shape ', X_test_mybag.shape)
And in this case appears the error:
IndexError Traceback (most recent call last)
<ipython-input-30-364e76658e6f> in <module>()
----> 1 X_train_mybag = sp_sparse.vstack([sp_sparse.csr_matrix(my_bag_of_words(text, WORDS_TO_INDEX, DICT_SIZE)) for text in X_train])
2 X_val_mybag = sp_sparse.vstack([sp_sparse.csr_matrix(my_bag_of_words(text, WORDS_TO_INDEX, DICT_SIZE)) for text in X_val])
3 X_test_mybag = sp_sparse.vstack([sp_sparse.csr_matrix(my_bag_of_words(text, WORDS_TO_INDEX, DICT_SIZE)) for text in X_test])
4 print('X_train shape ', X_train_mybag.shape)
5 print('X_val shape ', X_val_mybag.shape)
1 frames
<ipython-input-25-814e004d61c2> in my_bag_of_words(text, words_to_index, dict_size)
20 for k in sentence_tokens:
21 if i==k:
---> 22 result_vector[attributes.index(i)]=+1
23 return result_vector
IndexError: index 5000 is out of bounds for axis 0 with size 5000
Can anybody help me understand what mistake I made in the my_bag_of_words function, please?
| 1 | 1 | 0 | 0 | 0 | 0 |
I want to analyse some text on a Google Compute server on Google Cloud Platform (GCP) using the Word2Vec model.
However, the un-compressed word2vec model from https://mccormickml.com/2016/04/12/googles-pretrained-word2vec-model-in-python/ is over 3.5GB and it will take time to download it manually and upload it to a cloud instance.
Is there any way to access this (or any other) pre-trained Word2Vec model on a Google Compute server without uploading it myself?
| 1 | 1 | 0 | 0 | 0 | 0 |
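One option for the question above (a sketch, not from the original post): gensim's downloader can fetch the GoogleNews vectors directly on the Compute Engine instance, so no manual upload is needed (the large download still happens, but over the server's connection):

```python
import gensim.downloader as api

# Downloads (and caches) the pre-trained GoogleNews word2vec vectors on the instance.
wv = api.load('word2vec-google-news-300')
print(wv.most_similar('cloud', topn=3))
```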
Let's say I have a bag of keywords.
Ex :
['profit low', 'loss increased', 'profit lowered']
I have a PDF document and I parse the entire text from it;
now I want to get the sentences which match the bag of words.
Let's say one sentence is:
'The profit in the month of November lowered from 5% to 3%.'
This should match as in bag of words 'profit lowered' matches this sentence.
What will be the best approach to solve this problem in python?
| 1 | 1 | 0 | 0 | 0 | 0 |
I have cleaned and de-duplicated text data with a 'count_raw_id' column, which gives the number of raw ids that are mapped to one cleaned id.
A clean id represents a unique record and has some raw ids mapped to it.
Now, I don't want to split my cleaned text data ('clean_df') randomly.
I need some criteria-based sampling to create two datasets out of this cleaned file of about 2k rows: one to train the model and one to test the model.
I don't want to use sklearn's train_test_split, as it will split my data randomly. I want some way to query my data so that I can use some other sampling technique; I also can't use stratified sampling, as I don't have actual labels for these records.
import pandas as pd
data = {'clean_id': [1,2,3,4],
'all_terms': [['activation', 'brand', 'admin', 'sale', 'commission',
'administration', 'assistant', 'manager'],
['activation', 'brand', 'group', 'commission', 'mktg',
'marketing', 'manager'],
['activation', 'brand', 'info', 'specialist', 'service',
'manager', 'customer'],
['activation', 'brand', 'lead', 'greece', 'commission',
'mktg', 'mgr', 'marketing']],
'count_raw_id': [8,2,4,5]}
clean_df = pd.DataFrame(data)
len(clean_df)
#output : 2150
| 1 | 1 | 0 | 0 | 0 | 0 |
How do I detect what language a text is written in using NLTK?
The examples I've seen use nltk.detect, but when I installed NLTK on my Mac, I could not find this package.
| 1 | 1 | 0 | 0 | 0 | 0 |
We're implementing an NLP solution where we have a bunch of paragraphs of text and tables. We've used Google's BERT for NLP, and it works great on text. However, if we ask a question whose answer lies in a table value, our NLP solution doesn't work, because it only works on natural-language text (sentences, paragraphs, etc.).
So, in order to get the answer from a table (dataframe), we're thinking of converting the whole dataframe into natural-language text which preserves the relation of each cell with its corresponding column name and row. For example:
+------------+-----------+--------+--+
| First Name | Last Name | Gender | |
+------------+-----------+--------+--+
| Ali | Asad | Male | |
| Sara | Dell | Female | |
+------------+-----------+--------+--+
Will become:
First Name is Ali, Last Name is Asad, and Gender is Male
First Name is Sara, Last Name is Dell, and Gender is Female
This will help us to find the right answer, for example, if I ask 'What's the Gender of 'Ali', then our NLP solution will give us the answer 'Male'.
I'm wondering whether there is any library available in Python that converts a dataframe into natural-language text, or whether I have to do it manually.
Many thanks
| 1 | 1 | 0 | 0 | 0 | 0 |
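A manual sketch for the question above (not a ready-made library call; the column values come from the example table): join "column is value" pairs row by row.

```python
import pandas as pd

df = pd.DataFrame({'First Name': ['Ali', 'Sara'],
                   'Last Name': ['Asad', 'Dell'],
                   'Gender': ['Male', 'Female']})

sentences = [', and '.join(f'{col} is {val}' for col, val in row.items())
             for _, row in df.iterrows()]
print(sentences[0])
# First Name is Ali, and Last Name is Asad, and Gender is Male
```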
I'm trying to use low-rank-approximation for latent semantic indexing. I thought that doing low rank approximation reduces matrix dimensions but it contradicts the results I get.
Assume I have my dictionary with 40 000 words and 2000 documents. Then my term-by-document matrix is 40 000 x 2000.
According to wikipedia, I have to do SVD of a matrix and then apply
This is the code I use for SVD and low rank approximation (the matrix is sparse):
import scipy
import numpy as np
u, s, vt = scipy.sparse.linalg.svds(search_matrix, k=20)
search_matrix = u @ np.diag(s) @ vt
print('u: ', u.shape) # (40000, 20)
print('s: ', s.shape) # (20, )
print('vt: ', vt.shape) # (20, 2000)
The result matrix is: (40 000 x 20) * (20 x 20) * (20, 2000) = 40 000 x 2000, which is exactly what I started with.
So... how does the low-rank-approximation reduce the dimensions of the matrix exactly?
Also, I will be doing queries on this approximated matrix to find correlation between user vector and each document (naive search engine). The user vector has dimensions 40 000 x 1 to start with (bag of words). According to the same wikipedia page, this is what I should do:
The code:
user_vec = np.diag((1 / s)) @ u.T @ user_vec
And it produces a matrix 20 x 1 which is what I expected!
((20 x 20) * (20 x 40 000) * (40 000 x 1) = (20 x 1)). But now, it has dimensions that do not match the search_matrix I want to multiply it with.
So... What am I doing wrong and why?
Sources:
https://en.wikipedia.org/wiki/Latent_semantic_analysis
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a list of paragraphs and would like to check whether the words in them are valid English words or not. Sometimes, due to some external issues, I might not get valid English words in these paragraphs. I am aware of libraries like pyenchant and nltk, which have a set of dictionaries and provide some level of accuracy, but both of these have a few drawbacks. I wonder if there exists another library or procedure that can provide me with what I am looking for with the best accuracy possible.
| 1 | 1 | 0 | 0 | 0 | 0 |
I am posting this to get some ideas: I want to go through some text and figure out how to tag body parts and injuries. Any idea how I could do this?
For example if I had this text: "Wizards guard John Wall will undergo surgery to repair a ruptured left Achilles tendon. The procedure, which has yet to be scheduled, will be performed by Dr. Robert Anderson in Green Bay, WI. Wall is expected to return to full basketball activity in approximately 12 months from the time of the surgery."
And I wanted to extract John Wall and left Achilles tendon how do you guys think I could go about doing this?
| 1 | 1 | 0 | 0 | 0 | 0 |
I am getting the following error message when setting up a 3D-GAN for ModelNet10:
InvalidArgumentError: Input to reshape is a tensor with 27000 values, but the requested shape has 810000 [Op:Reshape]
In my opinion, the batch is not properly created and thereby the shape of the tensor is not valid. I have tried different things but can't get the batch set up.
I am more than thankful for any hints on how to clean up my code!
Thanks in advance!
import time
import numpy as np
import tensorflow as tf
np.random.seed(1)
from tensorflow.keras import layers
from IPython import display
# Load the data
modelnet_path = '/modelnet10.npz'
data = np.load(modelnet_path)
X, Y = data['X_train'], data['y_train']
X_test, Y_test = data['X_test'], data['y_test']
X = X.reshape(X.shape[0], 30, 30, 30, 1).astype('float32')
#Hyperparameters
BUFFER_SIZE = 3991
BATCH_SIZE = 30
LEARNING_RATE = 4e-4
BETA_1 = 5e-1
EPOCHS = 100
#Random seed for image generation
n_examples = 16
noise_dim = 100
seed = tf.random.normal([n_examples, noise_dim])
train_dataset = tf.data.Dataset.from_tensor_slices(X).batch(BATCH_SIZE)
# Build the network
def make_discriminator_model():
model = tf.keras.Sequential()
model.add(layers.Reshape((30, 30, 30, 1), input_shape=(30, 30, 30)))
model.add(layers.Conv3D(16, 6, strides=2, activation='relu'))
model.add(layers.Conv3D(64, 5, strides=2, activation='relu'))
model.add(layers.Conv3D(64, 5, strides=2, activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(10))
return model
discriminator = make_discriminator_model()
def make_generator_model():
model = tf.keras.Sequential()
model.add(layers.Dense(15*15*15*128, use_bias=False,input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.ReLU())
model.add(layers.Reshape((15,15,15,128)))
model.add(layers.Conv3DTranspose(64, (5,5,5), strides=(1,1,1), padding='valid', use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.ReLU())
model.add(layers.Conv3DTranspose(32, (5,5,5), strides=(2,2,2), padding='valid', use_bias=False, activation='tanh'))
return model
generator = make_generator_model()
#Optimizer & Loss function
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
def generator_loss(fake_output):
return cross_entropy(tf.ones_like(fake_output), fake_output)
optimizer = tf.keras.optimizers.Adam(lr=LEARNING_RATE, beta_1=BETA_1)
#Training
def train_step(shapes):
noise = tf.random.normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_shapes = generator(noise, training=True)
real_output = discriminator(shapes, training=True)
fake_output = discriminator(generated_shapes, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gen_gradients = gen_tape.gradient(gen_loss, generator.trainable_variables)
disc_gradients = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
optimizer.apply_gradients(zip(gen_gradients, generator.trainable_variables))
optimizer.apply_gradients(zip(disc_gradients, discriminator.trainable_variables))
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for shape_batch in dataset:
train_step(shape_batch)
display.clear_output(wait=True)
print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))
display.clear_output(wait=True)
train(X_test, EPOCHS)
| 1
| 1
| 0
| 1
| 0
| 0
|
I have a transportation dataset which contains 6 categorical variables (i.e. sender, reciver, truckername, fromcity, tocity, vehicletype) and one continuous variable (i.e. weight). I want to predict sale (which is a continuous variable). I have 13000 records in the dataset.
I have already tried one-hot encoding, but there are more than 300 categories in each variable, which means 300*6 = 1800 variables. So how can I encode the columns, or is there any other solution to this?
Here you can see sample dataset:
| 1
| 1
| 0
| 1
| 0
| 0
|
I'm working with about 24k text files and am splitting some lines on '-'. It works for some files; however, it fails to split for some other files.
company_participants is a list with N >= 1 elements, with each element consisting of a name followed by a hyphen ("-"), followed by the job title. To get the names, I use:
names_participants = [name.split('-')[0].strip() for name in company_participants]
On closer inspection, I found that it does not recognise "-" as "-" for some reason.
For example, the first element in company_participants is "robert isom - president"
Calling company_participants[0].split()[2] returns "-" since I've split on whitespace, and the hyphen is the third element (index 2).
When I then run a boolean on whether this is equal to "-", I get False.
company_participants[0].split()[2] == "-" # Item at index 2 is the hyphen
# Output = False
Any idea what's going on here? Is there something else that looks like a hyphen but isn't one?
Many thanks!
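If the culprit turns out to be a look-alike Unicode dash (an assumption, since the raw bytes aren't shown), a sketch like this can both reveal the character and normalize it before splitting:
import unicodedata

s = "robert isom \u2013 president"   # en dash, visually similar to a hyphen

# Show exactly which character sits at that position
print([unicodedata.name(ch) for ch in s.split()[2]])   # ['EN DASH']

# Map the common dash variants to a plain hyphen, then split as before
dash_map = dict.fromkeys(map(ord, "\u2010\u2011\u2012\u2013\u2014\u2015"), "-")
print(s.translate(dash_map).split("-")[0].strip())     # robert isom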
| 1
| 1
| 0
| 0
| 0
| 0
|
Rather than finding the similarity between two strings, I just want to find the similarity of the meaning of the two strings. For example:
what are the types of hyper threading
is there any categories in hyper threading
should have high similarity. Till now I have tried cosine similarity and Word Mover's Distance, but I am not getting accurate results for some of the strings.
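As one possible baseline (a sketch, not a guaranteed improvement over the measures already tried), averaged word vectors via spaCy's large English model give a rough meaning-level similarity; a dedicated sentence-encoder model is usually stronger:
import spacy

# Requires: python -m spacy download en_core_web_lg (the model with word vectors)
nlp = spacy.load("en_core_web_lg")

a = nlp("what are the types of hyper threading")
b = nlp("is there any categories in hyper threading")

# Doc.similarity uses the average of the word vectors
print(a.similarity(b))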
| 1
| 1
| 0
| 1
| 0
| 0
|
I'm trying to implement KNN using TensorFlow. After I get the K neighbours for N vectors, I have an N by K tensor. Now, for each of the N vectors, I need to use tf.unique_with_counts to find the majority vote. However, I cannot iterate over a tensor, and I cannot run tf.unique_with_counts on a multi-dimensional tensor. It keeps giving me InvalidArgumentError (see above for traceback): unique expects a 1D vector.
Example:
def knnVote():
'''
KNN using majority vote
'''
#nearest indices
A = tf.constant([1, 1, 2, 4, 4, 4, 7, 8, 8])
print(A.shape)
nearest_k_y, idx, votes = tf.unique_with_counts(A)
print("y", nearest_k_y.eval())
print("idx", idx.eval())
print("votes", votes.eval())
majority = tf.argmax(votes)
predict_res = tf.gather(nearest_k_y, majority)
print("majority", majority.eval())
print("predict", predict_res.eval())
return predict_res
Result:
y [1 2 4 7 8]
idx [0 0 1 2 2 2 3 4 4]
votes [2 1 3 1 2]
majority 2
predict 4
But how can I extend this to N by D input A, such as the case when A = tf.constant([[1, 1, 2, 4, 4, 4, 7, 8, 8],
[2, 2, 3, 3, 3, 4, 4, 5, 6]])
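One possible way (a sketch in TF 2.x eager style; the same idea works inside a TF 1.x session) is to keep the 1-D tf.unique_with_counts logic and map it over the rows with tf.map_fn:
import tensorflow as tf

A = tf.constant([[1, 1, 2, 4, 4, 4, 7, 8, 8],
                 [2, 2, 3, 3, 3, 4, 4, 5, 6]])

def row_majority(row):
    # Majority vote for a single 1-D row of neighbour labels
    values, _, counts = tf.unique_with_counts(row)
    return values[tf.argmax(counts)]

preds = tf.map_fn(row_majority, A)
print(preds)   # [4 3]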
| 1
| 1
| 0
| 1
| 0
| 0
|
Issue
I am trying to run the spaCy CLI but my training data and dev data seem somehow to be incorrect as seen when I run debug:
| => python3 -m spacy debug-data en
./CLI_train_randsplit_anno191022.json ./CLI_dev_randsplit_anno191022.json --pipeline ner --verbose
=========================== Data format validation ===========================
✔ Corpus is loadable
=============================== Training stats ===============================
Training pipeline: ner
Starting with blank model 'en'
0 training docs
0 evaluation docs
✔ No overlap between training and evaluation data
✘ Low number of examples to train from a blank model (0)
It's recommended to use at least 2000 examples (minimum 100)
============================== Vocab & Vectors ==============================
ℹ 0 total words in the data (0 unique)
10 most common words:
ℹ No word vectors present in the model
========================== Named Entity Recognition ==========================
ℹ 0 new labels, 0 existing labels
0 missing values (tokens with '-' label)
✔ Good amount of examples for all labels
✔ Examples without occurrences available for all labels
✔ No entities consisting of or starting/ending with whitespace
================================== Summary ==================================
✔ 5 checks passed
✘ 1 error
Trying to train anyway yields:
| => python3 -m spacy train en ./models/CLI_1 ./CLI_train_randsplit_anno191022.json ./CLI_dev_randsplit_anno191022.json -n 150 -p 'ner' --verbose
dropout_from = 0.2 by default
dropout_to = 0.2 by default
dropout_decay = 0.0 by default
batch_from = 100.0 by default
batch_to = 1000.0 by default
batch_compound = 1.001 by default
Training pipeline: ['ner']
Starting with blank model 'en'
beam_width = 1 by default
beam_density = 0.0 by default
beam_update_prob = 1.0 by default
Counting training words (limit=0)
learn_rate = 0.001 by default
optimizer_B1 = 0.9 by default
optimizer_B2 = 0.999 by default
optimizer_eps = 1e-08 by default
L2_penalty = 1e-06 by default
grad_norm_clip = 1.0 by default
parser_hidden_depth = 1 by default
subword_features = True by default
conv_depth = 4 by default
bilstm_depth = 0 by default
parser_maxout_pieces = 2 by default
token_vector_width = 96 by default
hidden_width = 64 by default
embed_size = 2000 by default
Itn NER Loss NER P NER R NER F Token % CPU WPS
--- --------- ------ ------ ------ ------- -------
✔ Saved model to output directory
models/CLI_1/model-final
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/spacy/cli/train.py", line 389, in train
scorer = nlp_loaded.evaluate(dev_docs, verbose=verbose)
File "/usr/local/lib/python3.7/site-packages/spacy/language.py", line 673, in evaluate
docs, golds = zip(*docs_golds)
ValueError: not enough values to unpack (expected 2, got 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/site-packages/spacy/__main__.py", line 35, in <module>
plac.call(commands[command], sys.argv[1:])
File "/usr/local/lib/python3.7/site-packages/plac_core.py", line 328, in call
cmd, result = parser.consume(arglist)
File "/usr/local/lib/python3.7/site-packages/plac_core.py", line 207, in consume
return cmd, self.func(*(args + varargs + extraopts), **kwargs)
File "/usr/local/lib/python3.7/site-packages/spacy/cli/train.py", line 486, in train
best_model_path = _collate_best_model(meta, output_path, nlp.pipe_names)
File "/usr/local/lib/python3.7/site-packages/spacy/cli/train.py", line 548, in _collate_best_model
bests[component] = _find_best(output_path, component)
File "/usr/local/lib/python3.7/site-packages/spacy/cli/train.py", line 567, in _find_best
accs = srsly.read_json(epoch_model / "accuracy.json")
File "/usr/local/lib/python3.7/site-packages/srsly/_json_api.py", line 50, in read_json
file_path = force_path(location)
File "/usr/local/lib/python3.7/site-packages/srsly/util.py", line 21, in force_path
raise ValueError("Can't read file: {}".format(location))
ValueError: Can't read file: models/CLI_1/model0/accuracy.json
My training and dev docs were generated using spacy.gold.docs_to_json(), saved as json files using the function:
def make_CLI_json(mock_docs, CLI_out_file_path):
CLI_json = docs_to_json(mock_docs)
with open(CLI_out_file_path, 'w') as json_file:
json.dump(CLI_json, json_file)
I verified them both to be valid json at http://www.jsonlint.com.
I created the docs from which these json originated using the function:
def import_from_doccano(jx_in_file_path, view=True):
annotations = load_jsonl(jx_in_file_path)
mock_nlp = English()
sentencizer = mock_nlp.create_pipe("sentencizer")
unlabeled = 0
DATA = []
mock_docs = []
for anno in annotations:
# get DATA (as used in spacy inline training)
if "label" in anno.keys():
ents = [tuple([label[0], label[1], label[2]])
for label in anno["labels"]]
else:
ents = []
DATUM = (anno["text"], {"entities": ents})
DATA.append(DATUM)
# mock a doc for viz in displacy
mock_doc = mock_nlp(anno["text"])
if "labels" in anno.keys():
entities = anno["labels"]
if not entities:
unlabeled += 1
ents = [(e[0], e[1], e[2]) for e in entities]
spans = [mock_doc.char_span(s, e, label=L) for s, e, L in ents]
mock_doc.ents = _cleanup_spans(spans)
sentencizer(mock_doc)
if view:
displacy.render(mock_doc, style='ent')
mock_docs.append(mock_doc)
print(f'Unlabeled: {unlabeled}')
return DATA, mock_docs
I wrote the function above to return the examples in both the format required for inline training (e.g. as shown at https://github.com/explosion/spaCy/blob/master/examples/training/train_ner.py) as well as to form these kind of “mock” docs so that I can use displacy and/or the CLI. For the latter purpose, I followed the code shown at https://github.com/explosion/spaCy/blob/master/spacy/cli/converters/jsonl2json.py with a couple of notable differences. The _cleanup_spans() function is identical to the one in the example. I did not use the minibatch() but made a separate doc for each of my labeled annotations. Also, (perhaps critically?) I found that using the sentencizer ruined many of my annotations, possibly because the spans get shifted in a way that the _cleanup_spans() function fails to repair properly. Removing the sentencizer causes the docs_to_json() function to throw an error. In my function (unlike in the linked example) I therefore run the sentencizer on each doc after the entities are written to them, which preserves my annotations properly and allows the docs_to_json() function to run without complaints.
The function load_jsonl called within import_from_doccano() is defined as:
def load_jsonl(input_path):
data = []
with open(input_path, 'r', encoding='utf-8') as f:
for line in f:
data.append(json.loads(line.replace('\n|\r', ''), strict=False))
print('Loaded {} records from {}'.format(len(data), input_path))
print()
return data
My annotations are each of length ~10000 characters or less. They are exported from doccano
(https://doccano.herokuapp.com/) as JSONL using the format:
{"id": 1, "text": "EU rejects ...", "labels": [[0,2,"ORG"], [11,17, "MISC"], [34,41,"ORG"]]}
{"id": 2, "text": "Peter Blackburn", "labels": [[0, 15, "PERSON"]]}
{"id": 3, "text": "President Obama", "labels": [[10, 15, "PERSON"]]}
...
The data are split into train and test sets using the function:
def test_train_split(DATA, mock_docs, n_train):
L = list(zip(DATA, mock_docs))
random.shuffle(L)
DATA, mock_docs = zip(*L)
DATA = [i for i in DATA]
mock_docs = [i for i in mock_docs]
TRAIN_DATA = DATA[:n_train]
train_docs = mock_docs[:n_train]
TEST_DATA = DATA[n_train:]
test_docs = mock_docs[n_train:]
return TRAIN_DATA, TEST_DATA, train_docs, test_docs
And finally each is written to json using the following function:
def make_CLI_json(mock_docs, CLI_out_file_path):
CLI_json = docs_to_json(mock_docs)
with open(CLI_out_file_path, 'w') as json_file:
json.dump(CLI_json, json_file)
I do not understand why the debug shows 0 training docs and 0 development docs, or why the train command fails. The JSON looks correct as far as I can tell. Is my data formatted incorrectly, or is there something else going on? Any help or insights would be greatly appreciated.
This is my first question on SE; apologies in advance if I've failed to follow some guideline or other. There are a lot of components involved, so I'm not sure how I might produce a minimal code example that would replicate my problem.
Environment
Mac OS 10.15 Catalina
Everything is pip3 installed into user path
No virtual environment
| => python3 -m spacy info --markdown
## Info about spaCy
* **spaCy version:** 2.2.1
* **Platform:** Darwin-19.0.0-x86_64-i386-64bit
* **Python version:** 3.7.4
| 1
| 1
| 0
| 0
| 0
| 0
|
I am doing rule-based phrase matching in spaCy. I am trying the following example, but it is not working.
Example
import spacy
from spacy.matcher import Matcher
nlp = spacy.load('en_core_web_sm')
doc = nlp('Hello world!')
pattern = [{"LOWER": "hello"}, {"IS_PUNCT": True}, {"LOWER": "world"}]
matcher = Matcher(nlp.vocab)
matcher.add('HelloWorld', None, pattern)
matches = matcher(doc)
print(matches)
The final matches list comes back empty. Would you please correct me?
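For comparison, a minimal sketch in which the same pattern does match "Hello world!" by making the punctuation token optional with "OP": "?" (using the spaCy 2.x matcher.add signature from the snippet above):
import spacy
from spacy.matcher import Matcher

nlp = spacy.load('en_core_web_sm')
doc = nlp('Hello world!')

# The '?' operator makes the punctuation token optional
pattern = [{"LOWER": "hello"}, {"IS_PUNCT": True, "OP": "?"}, {"LOWER": "world"}]

matcher = Matcher(nlp.vocab)
matcher.add('HelloWorld', None, pattern)

for match_id, start, end in matcher(doc):
    print(doc[start:end].text)   # Hello world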
| 1
| 1
| 0
| 0
| 0
| 0
|
I want to do something like this: if I have a textual transcript from a speech recognition system, I want to convert text like "Triple A" into "AAA". Can someone help?
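A minimal sketch of a rule-based approach, covering only the single pattern "triple <letter>" (real transcripts would need a larger set of rules):
import re

def expand_triples(text):
    # "triple a" / "Triple A" -> "AAA"
    return re.sub(r"\btriple\s+([A-Za-z])\b",
                  lambda m: m.group(1).upper() * 3,
                  text,
                  flags=re.IGNORECASE)

print(expand_triples("Please call triple A for roadside assistance"))
# Please call AAA for roadside assistance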
| 1
| 1
| 0
| 1
| 0
| 0
|
I am using Amazon Comprehend Medical for entity detection of injuries.
Let's say I have a piece of text as follows:
"John had surgery to repair a dislocated left knee and a full ACL tear."
Amazon Comprehend Medical (ACM) is able to recognize "dislocated" as a medical condition. However, consider the next piece of text:
"John is sidelined with a dislocated right kneecap."
In this piece of text ACM is not able to recognize "dislocated" as a medical condition. Similarly, if I were to put in a piece of text like "Left ankle sprain", ACM is able to recognize "ankle sprain" as a medical condition; however, if I were to put in "sprained left ankle", it does not catch on to the word "sprained" as a medical condition.
Is there any way in which I can clean my text or change the order of the words so that those entities can be tagged accurately?
| 1
| 1
| 0
| 0
| 0
| 0
|
Could anyone please help me fix this? I am trying to install pyenchant in Colab to get possible suggestions when a word is spelled incorrectly. I would like to use pyenchant.
This is what I tried;
!pip install pyenchant==1.6.8
but it outputs the following error:
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
My idea is to get possible suggestions when a word is wrong.
I plan to do the following:
import enchant
test = enchant.Dict("en_US")
test.suggest("Posible")
Could anyone suggest how I can achieve this? I am working in Colab. Please help me with how to install pyenchant in Colab, or with any other possible way to get suggestions when a word is wrong.
| 1
| 1
| 0
| 0
| 0
| 0
|
I was training a model in Colab, but I shut down my computer and the training stopped. Every 5 epochs I save the weights. I think resuming is possible, but I don't know how. How can I continue the training with the weights previously saved?
Thanks.
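For reference, a minimal, self-contained sketch (with a toy model and a hypothetical file name) of the usual Keras pattern: rebuild the same architecture, load the saved weights, and resume with initial_epoch:
import numpy as np
from tensorflow import keras

def build_model():
    model = keras.Sequential([keras.layers.Dense(4, activation="relu", input_shape=(8,)),
                              keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    return model

x, y = np.random.rand(32, 8), np.random.rand(32, 1)

model = build_model()
model.fit(x, y, epochs=5, verbose=0)
model.save_weights("weights_epoch_5.h5")        # what gets saved every 5 epochs

resumed = build_model()                         # same architecture as before
resumed.load_weights("weights_epoch_5.h5")
resumed.fit(x, y, epochs=10, initial_epoch=5)   # continues counting from epoch 5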
| 1
| 1
| 0
| 0
| 0
| 0
|
For example:
I have an input tensor (input), shaped (?, 10), dtype=float32; the first dimension is the batch size.
And a mask tensor (mask), shaped (?, 10). mask[sample_number] is like [True, True, False, ...] and marks which positions are kept.
And a label tensor (avg_label), shaped (?,), which holds the correct mean value of the masked positions for each sample.
I want to train the model, but I can't find a good way to get the output.
The tf.reduce_... functions (e.g. tf.reduce_mean) don't seem to support a masking argument.
I tried tf.boolean_mask, but it flattens the output into only one dimension, throwing away the sample_number dimension, so it cannot differentiate among the samples.
I considered tf.where, like:
masked=tf.where(mask,input,tf.zeros(tf.shape(input)))
avg_out=tf.reduce_mean(masked,axis=1)
loss=tf.pow(avg_out-avg_label,2)
But the code above is certainly not working, because setting the False positions to 0 changes the average. If I use np.nan, I always get nan. I wonder if there is a value that represents absence when doing reduce operations.
How can I do this?
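For reference, a minimal sketch of the usual workaround: zero out the unmasked positions, sum, and divide by the per-sample count of True positions, instead of looking for a special "absent" value:
import tensorflow as tf

inp = tf.constant([[1., 2., 3., 4.],
                   [10., 20., 30., 40.]])
mask = tf.constant([[True, True, False, False],
                    [True, False, True, False]])

mask_f = tf.cast(mask, inp.dtype)
masked_sum = tf.reduce_sum(inp * mask_f, axis=1)   # sum over kept positions only
masked_cnt = tf.reduce_sum(mask_f, axis=1)         # how many positions were kept
avg_out = masked_sum / masked_cnt
print(avg_out)   # [1.5 20.]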
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a Word2Vec model that I'm building, where I have a vocab_list of about 30k words and a list of sentences (sentences_list) about 150k long. I am trying to remove tokens (words) from the sentences that aren't included in vocab_list. The task seemed simple, but nesting for loops and reallocating memory is slow with the code below. It took approx. 1 hr to run, so I don't want to repeat it.
Is there a cleaner way to try this?
import numpy as np
import pandas as pd
from datetime import datetime
start=datetime.now()
timing=[]
result=[]
counter=0
for sent in sentences_list:
counter+=1
if counter %1000==0 or counter==1:
print(counter, 'row of', len(sentences_list), ' Elapsed time: ', datetime.now()-start)
timing.append([counter, datetime.now()-start])
final_tokens=[]
for token in sent:
if token in vocab_list:
final_tokens.append(token)
#if len(final_tokens)>0:
result.append(final_tokens)
print(counter, 'row of', len(sentences_list),' Elapsed time: ', datetime.now()-start)
timing.append([counter, datetime.now()-start])
sentences=result
del result
timing=pd.DataFrame(timing, columns=['Counter', 'Elapsed_Time'])
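For comparison, a sketch of the same filtering with the vocabulary held in a set, which turns each membership test from O(len(vocab_list)) into O(1) and is usually the main speed-up here (tiny made-up data stands in for the real lists):
# Tiny stand-ins for the real 30k-word vocabulary and 150k sentences
vocab_list = ["cat", "dog", "sat"]
sentences_list = [["the", "cat", "sat"], ["a", "dog", "ran"]]

vocab_set = set(vocab_list)
result = [[token for token in sent if token in vocab_set] for sent in sentences_list]
print(result)   # [['cat', 'sat'], ['dog']]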
| 1
| 1
| 0
| 0
| 0
| 0
|
The issue I face is that I want to match properties (houses/apartments etc.) that are similar to each other (e.g. longitude and latitude (numerical), bedrooms (numerical), district (categorical), condition (categorical), etc.) using deep learning. The data is heterogeneous because we mix numerical and categorical data, and the problem is unsupervised because we don't use any labels.
My goal is to get a measure for how similar properties are so I can find the top matches for each target property. I could use KNN, but I want to use something that allows me to find embeddings and that uses deep learning.
I suppose I could use a mixed distance measure such as the Gower distance as the loss function, but how would I go about setting up a model that determines, say, the top 10 matches for each target property in my sample?
Any help or pointers to similar problem sets (Kaggle, notebooks, GitHub) would be much appreciated.
Thanks
| 1
| 1
| 0
| 0
| 0
| 0
|
I am new to NLP and word embeddings and still need to learn many concepts within these topics, so any pointers would be appreciated. This question is related to this and this, and I think there may have been developments since those questions were asked. Facebook MUSE provides aligned, supervised word embeddings for 30 languages, and it can be used to calculate word similarity across different languages. As far as I understand, the embeddings provided by MUSE satisfy the requirement of coordinate-space compatibility. It seems that it is possible to load these embeddings into libraries such as Gensim, but I wonder:
Is it possible to load multiple-language word embeddings into Gensim (or other libraries)? And if so:
What type of similarity measure might fit this use case?
How can these loaded word embeddings be used to calculate a cross-lingual similarity score for phrases* instead of words?
*e.g., "ÖPNV" in German vs "Trasporto pubblico locale" in Italian for the English term "Public Transport".
I am open to any implementation (libraries/languages/embeddings), though I may need some time to learn this topic. Thank you in advance.
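A minimal sketch of loading two aligned MUSE vector files with Gensim and comparing phrases by averaging word vectors (the file names, the averaging step, and the use of cosine similarity are all assumptions, not a verified recipe):
import numpy as np
from gensim.models import KeyedVectors

# MUSE distributes its aligned vectors in word2vec text format (file names assumed)
de = KeyedVectors.load_word2vec_format("wiki.multi.de.vec")
it = KeyedVectors.load_word2vec_format("wiki.multi.it.vec")

def phrase_vec(kv, phrase):
    # Simple baseline: average the vectors of the in-vocabulary tokens
    vecs = [kv[w] for w in phrase.lower().split() if w in kv]
    return np.mean(vecs, axis=0)

a = phrase_vec(de, "öpnv")
b = phrase_vec(it, "trasporto pubblico locale")
print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))   # cosine similarity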
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to train my own address classifier model using Stanford CRF-NER, but the performance is very low. I am confused about the format of the training data I have trained with. The training data is essentially a list of districts, cities and provinces with their respective labels, but the model is not assigning the respective address tags to the tokens.
The format of the training data is as below:
BARAT PROVINCE
MALUKU PROVINCE
MALUKU PROVINCE
KABUPATEN REGENCY
SIMEULUE REGENCY
KABUPATEN REGENCY
ACEH REGENCY
This is just a sample of the training data in CSV format. There are 3 labels: PROVINCE, REGENCY and DISTRICT.
Here is the output of tagged tokens:
You can see that all tokens have been tagged as DISTRICT even though I have REGENCY, DISTRICT and PROVINCE as labelled data.
I wanted to know whether my training-data format is correct, or whether this only works on contextual data at the sentence level, since I saw Stanford NER working well at the sentence level.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am facing a problem with regex usage. I am using the following regex:
\\S*the[^o\\s]*(?<!theo)\\b
The sentence that I am using is:
If the world says that theo is not oreo cookies then thetatheoder theotatheder thetatheder is extratheaterly good.
What I want as output is the following matches: the, then, thetatheder, extratheaterly.
So in short, I am okay with 'the' ('The') as a complete string, or as a substring of a string that does not contain 'theo'.
How can I modify my regex to achieve this? What I am thinking of is applying a pipe operation or a question mark, but neither of them seems to be feasible.
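One regex that expresses this (a sketch that treats a word as \w+ rather than the \S* used above, which is a slight change of definition):
import re

text = ("If the world says that theo is not oreo cookies then thetatheoder "
        "theotatheder thetatheder is extratheaterly good.")

# Negative lookahead over the whole word: keep words containing "the" but not "theo"
pattern = r"\b(?!\w*theo)\w*the\w*\b"
print(re.findall(pattern, text))
# ['the', 'then', 'thetatheder', 'extratheaterly']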
| 1
| 1
| 0
| 1
| 0
| 0
|
I have a problem where I am tasked with creating three classifiers (two "out of the box", one "optimized") for sentiment analysis prediction using sklearn.
The instructions are to:
Ingest the training set, train classifiers
Save the classifiers to disk
In a separate program, load the classifiers from disk
Predict using the test set
Steps 1-3 are no problem and, quite frankly, work well. The issue is using model.predict(). I am using sklearn's TfidfVectorizer, which creates a feature vector from text. My issue is that the feature vector I create for the training set is different from the one that is created for the testing set, since the text being provided is different.
Below is an example from the train.tsv file...
4|z8DDztUxuIoHYHddDL9zQ|So let me set the scene first, My church social group took a trip here last saturday. We are not your mothers church. The churhc is Community Church of Hope, We are the valleys largest GLBT church so when we desended upon Organ stop Pizza, in LDS land you know we look a little out of place. We had about 50 people from our church come and boy did we have fun. There was a baptist church a couple rows down from us who didn't see it coming. Now we aren't a bunch of flamers frolicking around or anything but we do tend to get a little loud and generally have a great time. I did recognized some of the music so I was able to sing along with those. This is a great place to take anyone over 50. I do think they might be washing dirtymob money or something since the business is cash only.........which I think caught a lot of people off guard including me. The show starts at 530 so dont be late !!!!!!
2|BIeDBg4MrEd1NwWRlFHLQQ|Decent but terribly inconsistent food. I've had some great dishes and some terrible ones, I love chaat and 3 out of 4 times it was great, but once it was just a fried greasy mess (in a bad way, not in the good way it usually is.) Once the matar paneer was great, once it was oversalted and the peas were just plain bad. I don't know how they do it, but it's a coinflip between good food and an oversalted overcooked bowl. Either way, portions are generous.
4|NJHPiW30SKhItD5E2jqpHw|Looks aren't everything....... This little divito looks a little scary looking, but like I've said before "you can't judge a book by it's cover". Not necessarily the kind of place you will take your date (unless she's blind and hungry), but man oh man is the food ever good! We have ordered breakfast, lunch, & dinner, and it is all fantastico. They make home-made corn tortillas and several salsas. The breakfast burritos are out of this world and cost about the same as a McDonald's meal. We are a family that eats out frequently and we are frankly tired of pretty places with below average food. This place is sure to cure your hankerin for a tasty Mexican meal.
2|nnS89FMpIHz7NPjkvYHmug|Being a creature of habit anytime I want good sushi I go to Tokyo Lobby. Well, my group wanted to branch out and try something new so we decided on Sakana. Not a fan. And what's shocking to me is this place was packed! The restaurant opens at 5:30 on Saturday and we arrived at around 5:45 and were lucky to get the last open table. I don't get it... Messy rolls that all tasted the same. We ordered the tootsie roll and the crunch roll, both tasted similar, except of course for the crunchy captain crunch on top. Just a mushy mess, that was hard to eat. Bland tempura. No bueno. I did, however, have a very good tuna poke salad, but I would not go back just for that. If you want good sushi on the west side, or the entire valley for that matter, say no to Sakana and yes to Tokyo Lobby.
2|FYxSugh9PGrX1PR0BHBIw|I recently told a friend that I cant figure out why there is no good Mexican restaurants in Tempe. His response was what about MacAyo's? I responded with "why are there no good Mexican food restaurants in Tempe?" Seriously if anyone out there knows of any legit Mexican in Tempe let me know. And don't say restaurant Mexico!
Here is the train.py file:
import nltk, re, pandas as pd
from nltk.corpus import stopwords
import sklearn, string
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from itertools import islice
import time
from joblib import dump, load
def ID_to_Num(arr):
le = preprocessing.LabelEncoder()
new_arr = le.fit_transform(arr)
return new_arr
def Num_to_ID(arr):
le = preprocessing.LabelEncoder()
new_arr = le.inverse_transform(arr)
return new_arr
def check_performance(preds, acts):
preds = list(preds)
acts = pd.Series.tolist(acts)
right = 0
total = 0
for i in range(len(preds)):
if preds[i] == acts[i]:
right += 1
total += 1
return (right / total) * 100
# This function removes numbers from an array
def remove_nums(arr):
# Declare a regular expression
pattern = '[0-9]'
# Remove the pattern, which is a number
arr = [re.sub(pattern, '', i) for i in arr]
# Return the array with numbers removed
return arr
# This function cleans the passed in paragraph and parses it
def get_words(para):
# Create a set of stop words
stop_words = set(stopwords.words('english'))
# Split it into lower case
lower = para.lower().split()
# Remove punctuation
no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
# Remove integers
no_integers = remove_nums(no_punctuation)
# Remove stop words
dirty_tokens = (data for data in no_integers if data not in stop_words)
# Ensure it is not empty
tokens = [data for data in dirty_tokens if data.strip()]
# Ensure there is more than 1 character to make up the word
tokens = [data for data in tokens if len(data) > 1]
# Return the tokens
return tokens
def minmaxscale(data):
scaler = MinMaxScaler()
df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
return df_scaled
# This function takes the first n items of a dictionary
def take(n, iterable):
#https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
#Return first n items of the iterable as a dict
return dict(islice(iterable, n))
def main():
tsv_file = "filepath"
csv_table=pd.read_csv(tsv_file, sep='\t', header=None)
csv_table.columns = ['class', 'ID', 'text']
s = pd.Series(csv_table['text'])
new = s.str.cat(sep=' ')
vocab = get_words(new)
s = pd.Series(csv_table['text'])
corpus = s.apply(lambda s: ' '.join(get_words(s)))
csv_table['dirty'] = csv_table['text'].str.split().apply(len)
csv_table['clean'] = csv_table['text'].apply(lambda s: len(get_words(s)))
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)
df = pd.DataFrame(data=X.todense(), columns=vectorizer.get_feature_names())
result = pd.concat([csv_table, df], axis=1, sort=False)
Y = result['class']
result = result.drop('text', axis=1)
result = result.drop('ID', axis=1)
result = result.drop('class', axis=1)
X = result
mlp = MLPClassifier()
rf = RandomForestClassifier()
mlp_opt = MLPClassifier(
activation = 'tanh',
hidden_layer_sizes = (1000,),
alpha = 0.009,
learning_rate = 'adaptive',
learning_rate_init = 0.01,
max_iter = 250,
momentum = 0.9,
solver = 'lbfgs',
warm_start = False
)
print("Training Classifiers")
mlp_opt.fit(X, Y)
mlp.fit(X, Y)
rf.fit(X, Y)
dump(mlp_opt, "C:\\filepath\Models\\mlp_opt.joblib")
dump(mlp, "C:\\filepath\\Models\\mlp.joblib")
dump(rf, "C:\\filepath\\Models\\rf.joblib")
print("Trained Classifiers")
main()
And here is the Tester.py file:
from nltk.corpus import stopwords
import sklearn, string, nltk, re, pandas as pd, numpy, time
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split, KFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from joblib import dump, load
def ID_to_Num(arr):
le = preprocessing.LabelEncoder()
new_arr = le.fit_transform(arr)
return new_arr
def Num_to_ID(arr):
le = preprocessing.LabelEncoder()
new_arr = le.inverse_transform(arr)
return new_arr
def check_performance(preds, acts):
preds = list(preds)
acts = pd.Series.tolist(acts)
right = 0
total = 0
for i in range(len(preds)):
if preds[i] == acts[i]:
right += 1
total += 1
return (right / total) * 100
# This function removes numbers from an array
def remove_nums(arr):
# Declare a regular expression
pattern = '[0-9]'
# Remove the pattern, which is a number
arr = [re.sub(pattern, '', i) for i in arr]
# Return the array with numbers removed
return arr
# This function cleans the passed in paragraph and parses it
def get_words(para):
# Create a set of stop words
stop_words = set(stopwords.words('english'))
# Split it into lower case
lower = para.lower().split()
# Remove punctuation
no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
# Remove integers
no_integers = remove_nums(no_punctuation)
# Remove stop words
dirty_tokens = (data for data in no_integers if data not in stop_words)
# Ensure it is not empty
tokens = [data for data in dirty_tokens if data.strip()]
# Ensure there is more than 1 character to make up the word
tokens = [data for data in tokens if len(data) > 1]
# Return the tokens
return tokens
def minmaxscale(data):
scaler = MinMaxScaler()
df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
return df_scaled
# This function takes the first n items of a dictionary
def take(n, iterable):
#https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
#Return first n items of the iterable as a dict
return dict(islice(iterable, n))
def main():
tsv_file = "filepath\\dev.tsv"
csv_table=pd.read_csv(tsv_file, sep='\t', header=None)
csv_table.columns = ['class', 'ID', 'text']
s = pd.Series(csv_table['text'])
new = s.str.cat(sep=' ')
vocab = get_words(new)
s = pd.Series(csv_table['text'])
corpus = s.apply(lambda s: ' '.join(get_words(s)))
csv_table['dirty'] = csv_table['text'].str.split().apply(len)
csv_table['clean'] = csv_table['text'].apply(lambda s: len(get_words(s)))
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)
df = pd.DataFrame(data=X.todense(), columns=vectorizer.get_feature_names())
result = pd.concat([csv_table, df], axis=1, sort=False)
Y = result['class']
result = result.drop('text', axis=1)
result = result.drop('ID', axis=1)
result = result.drop('class', axis=1)
X = result
mlp_opt = load("C:\\filepath\\Models\\mlp_opt.joblib")
mlp = load("C:\\filepath\\Models\\mlp.joblib")
rf = load("C:\\filepath\\Models\\rf.joblib")
print("Testing Classifiers")
mlp_opt_preds = mlp_opt.predict(X)
mlp_preds = mlp.predict(X)
rf_preds = rf.predict(X)
mlp_opt_performance = check_performance(mlp_opt_preds, Y)
mlp_performance = check_performance(mlp_preds, Y)
rf_performance = check_performance(rf_preds, Y)
print("MLP OPT PERF: {}".format(mlp_opt_performance))
print("MLP PERF: {}".format(mlp_performance))
print("RF PERF: {}".format(rf_performance))
main()
What I end up with is an error:
Testing Classifiers
Traceback (most recent call last):
File "Reader.py", line 121, in <module>
main()
File "Reader.py", line 109, in main
mlp_opt_preds = mlp_opt.predict(X)
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn
eural_network\multilayer_perceptron.py", line 953, in predict
y_pred = self._predict(X)
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn
eural_network\multilayer_perceptron.py", line 676, in _predict
self._forward_pass(activations)
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn
eural_network\multilayer_perceptron.py", line 102, in _forward_pass
self.coefs_[i])
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\utils\extmath.py", line 173, in safe_sparse_dot
return np.dot(a, b)
**ValueError: shapes (2000,13231) and (12299,1000) not aligned: 13231 (dim 1) != 12299 (dim 0)**
I know the error is related to the difference in feature vector size, since the vectors are being created from the text in the data. I do not know enough about NLP or machine learning to devise a solution to work around this problem. How can I have the model predict using the feature sets in the test data?
I tried making edits per answers below to save the feature vector:
Train.py now looks like:
import nltk, re, pandas as pd
from nltk.corpus import stopwords
import sklearn, string
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from itertools import islice
import time
import pickle
from joblib import dump, load
def ID_to_Num(arr):
le = preprocessing.LabelEncoder()
new_arr = le.fit_transform(arr)
return new_arr
def Num_to_ID(arr):
le = preprocessing.LabelEncoder()
new_arr = le.inverse_transform(arr)
return new_arr
def check_performance(preds, acts):
preds = list(preds)
acts = pd.Series.tolist(acts)
right = 0
total = 0
for i in range(len(preds)):
if preds[i] == acts[i]:
right += 1
total += 1
return (right / total) * 100
# This function removes numbers from an array
def remove_nums(arr):
# Declare a regular expression
pattern = '[0-9]'
# Remove the pattern, which is a number
arr = [re.sub(pattern, '', i) for i in arr]
# Return the array with numbers removed
return arr
# This function cleans the passed in paragraph and parses it
def get_words(para):
# Create a set of stop words
stop_words = set(stopwords.words('english'))
# Split it into lower case
lower = para.lower().split()
# Remove punctuation
no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
# Remove integers
no_integers = remove_nums(no_punctuation)
# Remove stop words
dirty_tokens = (data for data in no_integers if data not in stop_words)
# Ensure it is not empty
tokens = [data for data in dirty_tokens if data.strip()]
# Ensure there is more than 1 character to make up the word
tokens = [data for data in tokens if len(data) > 1]
# Return the tokens
return tokens
def minmaxscale(data):
scaler = MinMaxScaler()
df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
return df_scaled
# This function takes the first n items of a dictionary
def take(n, iterable):
#https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
#Return first n items of the iterable as a dict
return dict(islice(iterable, n))
def main():
tsv_file = "filepath\\train.tsv"
csv_table=pd.read_csv(tsv_file, sep='\t', header=None)
csv_table.columns = ['class', 'ID', 'text']
s = pd.Series(csv_table['text'])
new = s.str.cat(sep=' ')
vocab = get_words(new)
s = pd.Series(csv_table['text'])
corpus = s.apply(lambda s: ' '.join(get_words(s)))
csv_table['dirty'] = csv_table['text'].str.split().apply(len)
csv_table['clean'] = csv_table['text'].apply(lambda s: len(get_words(s)))
vectorizer = TfidfVectorizer()
test = vectorizer.fit_transform(corpus)
df = pd.DataFrame(data=test.todense(), columns=vectorizer.get_feature_names())
result = pd.concat([csv_table, df], axis=1, sort=False)
Y = result['class']
result = result.drop('text', axis=1)
result = result.drop('ID', axis=1)
result = result.drop('class', axis=1)
X = result
mlp = MLPClassifier()
rf = RandomForestClassifier()
mlp_opt = MLPClassifier(
activation = 'tanh',
hidden_layer_sizes = (1000,),
alpha = 0.009,
learning_rate = 'adaptive',
learning_rate_init = 0.01,
max_iter = 250,
momentum = 0.9,
solver = 'lbfgs',
warm_start = False
)
print("Training Classifiers")
mlp_opt.fit(X, Y)
mlp.fit(X, Y)
rf.fit(X, Y)
dump(mlp_opt, "filepath\\Models\\mlp_opt.joblib")
dump(mlp, "filepath\\Models\\mlp.joblib")
dump(rf, "filepath\\Models\\rf.joblib")
pickle.dump(test, open("filepath\\tfidf_vectorizer.pkl", 'wb'))
print("Trained Classifiers")
main()
And Test.py now looks like:
from nltk.corpus import stopwords
import sklearn, string, nltk, re, pandas as pd, numpy, time
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split, KFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from joblib import dump, load
import pickle
def ID_to_Num(arr):
le = preprocessing.LabelEncoder()
new_arr = le.fit_transform(arr)
return new_arr
def Num_to_ID(arr):
le = preprocessing.LabelEncoder()
new_arr = le.inverse_transform(arr)
return new_arr
def check_performance(preds, acts):
preds = list(preds)
acts = pd.Series.tolist(acts)
right = 0
total = 0
for i in range(len(preds)):
if preds[i] == acts[i]:
right += 1
total += 1
return (right / total) * 100
# This function removes numbers from an array
def remove_nums(arr):
# Declare a regular expression
pattern = '[0-9]'
# Remove the pattern, which is a number
arr = [re.sub(pattern, '', i) for i in arr]
# Return the array with numbers removed
return arr
# This function cleans the passed in paragraph and parses it
def get_words(para):
# Create a set of stop words
stop_words = set(stopwords.words('english'))
# Split it into lower case
lower = para.lower().split()
# Remove punctuation
no_punctuation = (nopunc.translate(str.maketrans('', '', string.punctuation)) for nopunc in lower)
# Remove integers
no_integers = remove_nums(no_punctuation)
# Remove stop words
dirty_tokens = (data for data in no_integers if data not in stop_words)
# Ensure it is not empty
tokens = [data for data in dirty_tokens if data.strip()]
# Ensure there is more than 1 character to make up the word
tokens = [data for data in tokens if len(data) > 1]
# Return the tokens
return tokens
def minmaxscale(data):
scaler = MinMaxScaler()
df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
return df_scaled
# This function takes the first n items of a dictionary
def take(n, iterable):
#https://stackoverflow.com/questions/7971618/python-return-first-n-keyvalue-pairs-from-dict
#Return first n items of the iterable as a dict
return dict(islice(iterable, n))
def main():
tfidf_vectorizer = pickle.load(open("filepath\\tfidf_vectorizer.pkl", 'rb'))
tsv_file = "filepath\\dev.tsv"
csv_table=pd.read_csv(tsv_file, sep='\t', header=None)
csv_table.columns = ['class', 'ID', 'text']
s = pd.Series(csv_table['text'])
new = s.str.cat(sep=' ')
vocab = get_words(new)
s = pd.Series(csv_table['text'])
corpus = s.apply(lambda s: ' '.join(get_words(s)))
csv_table['dirty'] = csv_table['text'].str.split().apply(len)
csv_table['clean'] = csv_table['text'].apply(lambda s: len(get_words(s)))
print(type(corpus))
print(corpus.head())
X = tfidf_vectorizer.transform(corpus)
print(X)
df = pd.DataFrame(data=X.todense(), columns=tfidf_vectorizer.get_feature_names())
result = pd.concat([csv_table, df], axis=1, sort=False)
Y = result['class']
result = result.drop('text', axis=1)
result = result.drop('ID', axis=1)
result = result.drop('class', axis=1)
X = result
mlp_opt = load("filepath\\Models\\mlp_opt.joblib")
mlp = load("filepath\\Models\\mlp.joblib")
rf = load("filepath\\Models\\rf.joblib")
print("Testing Classifiers")
mlp_opt_preds = mlp_opt.predict(X)
mlp_preds = mlp.predict(X)
rf_preds = rf.predict(X)
mlp_opt_performance = check_performance(mlp_opt_preds, Y)
mlp_performance = check_performance(mlp_preds, Y)
rf_performance = check_performance(rf_preds, Y)
print("MLP OPT PERF: {}".format(mlp_opt_performance))
print("MLP PERF: {}".format(mlp_performance))
print("RF PERF: {}".format(rf_performance))
main()
But that yields:
Traceback (most recent call last):
File "Filepath\Reader.py", line 128, in <module>
main()
File "Filepath\Reader.py", line 95, in main
X = tfidf_vectorizer.transform(corpus)
File "C:\Users\Jerry\AppData\Local\Programs\Python\Python37\lib\site-packages\scipy\sparse\base.py", line 689, in __getattr__
raise AttributeError(attr + " not found")
AttributeError: transform not found
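For reference, a minimal, self-contained sketch of the scikit-learn pattern this error revolves around: persist the fitted vectorizer object itself (not the transformed matrix) and call only transform() on new text, so the test features get exactly the same columns as the training features (the file name and toy corpus are made up):
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer

corpus_train = ["good food great service", "terrible food awful service"]
corpus_test = ["great food"]

# Fit once on the training corpus and save the fitted vectorizer
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(corpus_train)
with open("tfidf_vectorizer.pkl", "wb") as f:
    pickle.dump(vectorizer, f)

# Later (e.g. in the test script): load it and only transform
with open("tfidf_vectorizer.pkl", "rb") as f:
    loaded = pickle.load(f)
X_test = loaded.transform(corpus_test)
print(X_train.shape, X_test.shape)   # same number of columns in both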
| 1
| 1
| 0
| 1
| 0
| 0
|
I am doing binary classification of news title sentences (to determine whether the news is politically biased).
I am using the BERT embedding from https://pypi.org/project/bert-embedding/ to embed the training sentences (one row = one title sentence) in DataFrames and then feed the vectorised data into logistic regression, but the output shape from the BERT embedding is not supported by the logistic regression model. How can I reshape this so it fits the logistic regression model?
Before, when I used TfidfVectorizer, it worked perfectly and the output was a numpy array like:
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
Each row is the vectorised data for one sentence, and it is an array of size 1903. I have 516 titles in the training data.
The output shapes are:
train_x.shape: (516, 1903) test_x.shape (129, 1903)
train_y.shape: (516,) test_y.shape (129,)
But after I switched to BertEmbedding, the output for ONE row is a list of numpy arrays like:
[list([array([ 9.79349554e-01, -7.06475616e-01 ...... ]dtype=float32),
array([ ........ ],dtype=float32), ......................
array([ ........ ],dtype=float32)]
the output shape is like:
train_x.shape: (516, 1) test_x.shape (129, 1)
train_y.shape: (516,) test_y.shape (129,)
def transform_to_Bert(articles_file: str, classified_articles_file: str):
df = get_df_from_articles_file(articles_file, classified_articles_file)
df_train, df_test, _, _ = train_test_split(df, df.label, stratify=df.label, test_size=0.2)
bert_embedding = BertEmbedding()
df_titles_values=df_train.title.values.tolist()
result_train = bert_embedding(df_titles_values)
result_test = bert_embedding(df_test.title.values.tolist())
train_x = pd.DataFrame(result_train, columns=['A', 'Vector'])
train_x = train_x.drop(columns=['A'])
test_x = pd.DataFrame(result_test, columns=['A', 'Vector'])
test_x=test_x.drop(columns=['A'])
test_x=test_x.values
train_x=train_x.values
print(test_x)
print(train_x)
train_y = df_train.label.values
test_y = df_test.label.values
return {'train_x': train_x, 'test_x': test_x, 'train_y': train_y, 'test_y': test_y, 'input_length': train_x.shape[1], 'vocab_size': train_x.shape[1]}
Column A is the original title string in the result. So I just drop it.
Below is the code where I use the TF-IDF vectoriser, which works with the logistic regression model.
def transform_to_tfid(articles_file: str, classified_articles_file: str):
df = get_df_from_articles_file(articles_file, classified_articles_file)
df_train, df_test, _, _ = train_test_split(df, df.label, stratify=df.label, test_size=0.2)
vectorizer = TfidfVectorizer(stop_words='english', )
vectorizer.fit(df_train.title)
train_x= vectorizer.transform(df_train.title)
train_x=train_x.toarray()
print(type(train_x))
print(train_x)
test_x= vectorizer.transform(df_test.title)
test_x=test_x.toarray()
print(test_x)
train_y = df_train.label.values
test_y = df_test.label.values
return {'train_x': train_x, 'test_x': test_x, 'train_y': train_y, 'test_y': test_y, 'input_length': train_x.shape[1], 'vocab_size': train_x.shape[1]}
model=LogisticRegression(solver='lbfgs')
model.fit(train_x, train_y)
The error is: ValueError: setting an array element with a sequence.
I expected the output shape from BERT (train_x.shape: (516, 1), test_x.shape: (129, 1)) to be like that from TF-IDF (train_x.shape: (516, 1903), test_x.shape: (129, 1903)) so that it fits the logistic regression model.
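For reference, a minimal sketch (with random stand-in vectors instead of real BertEmbedding output) of mean-pooling each sentence's token vectors into one fixed-length vector, which yields the 2-D float matrix that scikit-learn estimators expect:
import numpy as np

# Stand-in for bert_embedding output: per sentence, (tokens, list of 768-d token vectors)
fake_results = [
    (["good", "news"], [np.random.rand(768).astype("float32") for _ in range(2)]),
    (["bad"],          [np.random.rand(768).astype("float32")]),
]

def to_feature_matrix(bert_results):
    # Average the token vectors of each sentence -> one row per sentence
    return np.stack([np.mean(np.stack(vecs), axis=0) for _, vecs in bert_results])

X = to_feature_matrix(fake_results)
print(X.shape)   # (2, 768): a plain 2-D array a LogisticRegression can consume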
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to check whether a keyword occurs in the search term and then record that keyword. I managed to write this solution, but it only works if the search term is exactly one keyword. How can I improve it to work when the keyword occurs inside a sentence? Here is my code:
keyword = []
for i in keywords['keyword']:
keyword.append(i)  # this was in a dataframe after reading an xlsx file with Pandas, so I made it a list
hit = []
for i in phrase['Search term']:
if i in keyword:
hit.append(i)
else:
hit.append("blank")
phrase['Keyword'] = hit
This only works when the search term is exactly a single keyword, like "cat", but it won't work if the word "cat" is part of a sentence. Any pointers to improve it?
Thank you all in advance
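A minimal sketch of a substring-based check (made-up lists stand in for the two dataframe columns; it keeps the first keyword found in each search term):
keywords = ["cat", "dog food"]
search_terms = ["my cat is hungry", "buy dog food online", "something else"]

hits = []
for term in search_terms:
    # Keep the first keyword that appears anywhere in the search term, else "blank"
    matched = next((kw for kw in keywords if kw in term), "blank")
    hits.append(matched)

print(hits)   # ['cat', 'dog food', 'blank']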
| 1
| 1
| 0
| 0
| 0
| 0
|
I have installed Spacy using conda.
conda install -c conda-forge spacy
python -m spacy download en
The installed version was:
import spacy
nlp=spacy.load('en_core_web_sm')
doc = nlp(u"Let's visit St. Louis in the U.S. next year.")
len(doc)
len(doc.vocab)
len(nlp.vocab)
len(doc.vocab) and len(nlp.vocab) are showing only 486.
How can I load it so that it shows 57852?
Please help me with this.
Thanks,
Venkat
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to build a deep learning model to predict the top 5 most probable movie genres, using movies' synopses as input. There are 19 movie genres in the data, but regardless of the test input, the model always predicts the same 5 movie genres. Below is my code for building the model. However, the accuracy during fitting is 90%. Can you point me in the right direction as to what I'm doing wrong?
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers.core import Activation, Dropout, Dense
from keras.layers import Flatten, LSTM
from keras.layers import GlobalMaxPooling1D
from keras.models import Model
from keras.layers.embeddings import Embedding
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
from keras.preprocessing.text import Tokenizer
from keras.layers import Input
from keras.layers.merge import Concatenate
import pandas as pd
import numpy as np
import re
data = pd.read_csv('train.csv', encoding = 'utf-8')
#Create column with comma separated genres
data['genres_comma'] = data['genres'].str.split()
mlb = MultiLabelBinarizer()
#Create new dataframe with one hot encoded labels
train = pd.concat([
data.drop(['genres', 'genres_comma'], 1),
pd.DataFrame(mlb.fit_transform(data['genres_comma']), columns=mlb.classes_),
], 1)
genre_names = list(mlb.classes_)
genres = train.drop(['movie_id', 'synopsis'], 1)
def preprocess_text(sen):
# Remove punctuations and numbers
sentence = re.sub('[^a-zA-Z]', ' ', sen)
# Single character removal
sentence = re.sub(r"\s+[a-zA-Z]\s+", ' ', sentence)
# Removing multiple spaces
sentence = re.sub(r'\s+', ' ', sentence)
return sentence
X = []
sentences = list(train['synopsis'])
for sen in sentences:
X.append(preprocess_text(sen))
y = genres.values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
#Convert text inputs into embedded vectors.
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(X_train)
X_train = tokenizer.texts_to_sequences(X_train)
X_test = tokenizer.texts_to_sequences(X_test)
vocab_size = len(tokenizer.word_index) + 1
maxlen = 200
X_train = pad_sequences(X_train, padding='post', maxlen=maxlen)
X_test = pad_sequences(X_test, padding='post', maxlen=maxlen)
#GloVe word embeddings to convert text inputs to their numeric counterparts
from numpy import asarray
from numpy import zeros
embeddings_dictionary = dict()
glove_file = open('glove.6B.100d.txt', encoding="utf8")
for line in glove_file:
records = line.split()
word = records[0]
vector_dimensions = asarray(records[1:], dtype='float32')
embeddings_dictionary[word] = vector_dimensions
glove_file.close()
embedding_matrix = zeros((vocab_size, 100))
for word, index in tokenizer.word_index.items():
embedding_vector = embeddings_dictionary.get(word)
if embedding_vector is not None:
embedding_matrix[index] = embedding_vector
#Model Creation
deep_inputs = Input(shape=(maxlen,))
embedding_layer = Embedding(vocab_size, 100, weights=[embedding_matrix], trainable=False)(deep_inputs)
LSTM_Layer_1 = LSTM(128)(embedding_layer)
dense_layer_1 = Dense(19, activation='sigmoid')(LSTM_Layer_1)
model = Model(inputs=deep_inputs, outputs=dense_layer_1)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
print(model.summary())
history = model.fit(X_train, y_train, batch_size=128, epochs=5, verbose=1, validation_split=0.2)
score = model.evaluate(X_test, y_test, verbose=1)
| 1
| 1
| 0
| 0
| 0
| 0
|
I have tokenized the text in a column into a new column 'token_sentences' of sentence tokens.
I want to use the 'token_sentences' column to create a new column 'token_words' containing tokenized words.
The df I am using:
article_id article_text
1 Maria Sharapova has basically no friends as te...
2 Roger Federer advance...
3 Roger Federer has revealed that organisers of ...
4 Kei Nishikori will try to end his long losing ...
After adding the token_sentences column:
article_id article_text token_sentences
1 Maria Sharapova has basically no friends as te... [Maria Sharapova has basically no friends as te
2 Roger Federer advance... [Roger Federer advance...
3 Roger Federer has revealed that organisers of ... [Roger Federer has revealed that organisers of...
4 Kei Nishikori will try to end his long losing ... [Kei Nishikori will try to end his long losing...
Each row now holds a list of sentences. I am unable to flatten the list in the token_sentences column so that it can be used in the next step. I want to use the token_sentences column to make the df look like:
article_id article_text token_sentences token_words
1 Maria... ["Maria Sharapova..",["..."]] [Maria, Sharapova, has, basically, no, friends,...]
2 Roger... ["Roger Federer advanced ...",["..."]] [Roger,Federer,...]
3 Roger... ["Roger Federer...",["..."]] [Roger ,Federer,...]
4 Kei ... ["Kei Nishikori will try...",["..."]] [Kei,Nishikori,will,try,...]
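For reference, a minimal sketch of building such a token_words column by flattening each row's sentence list with a nested comprehension (a one-row made-up dataframe stands in for the real one):
import nltk
import pandas as pd
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download('punkt')

df = pd.DataFrame({"article_id": [1],
                   "article_text": ["Maria Sharapova has basically no friends. She says so."]})

df["token_sentences"] = df["article_text"].apply(sent_tokenize)
df["token_words"] = df["token_sentences"].apply(
    lambda sents: [w for s in sents for w in word_tokenize(s)])   # flatten, then tokenize

print(df[["token_sentences", "token_words"]])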
| 1
| 1
| 0
| 0
| 0
| 0
|
I feel like I have a dumb question, but here goes anyway..
I'm trying to go from data that looks something like this:
a word form lemma POS count of occurrence
same word form lemma Not the same POS another count
same word form lemma Yet another POS another count
to a result that looks like this:
the word form total count all possible POS and their individual counts
So for example I could have:
ring total count = 100 noun = 40, verb = 60
I have my data in a CSV file. I want to do something like this:
for row in all_rows:
if row[0] is the same as row[0] in the next row, add the values from row[3] together to get the total count
but I can't seem to figure out how to do that. Help?
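For reference, a minimal sketch with pandas groupby, which avoids the row-to-next-row comparison entirely (column names and rows are made up to match the description):
import pandas as pd

df = pd.DataFrame(
    [["ring", "ring", "NOUN", 40],
     ["ring", "ring", "VERB", 60]],
    columns=["word_form", "lemma", "pos", "count"])

totals = df.groupby("word_form")["count"].sum()           # total count per word form
by_pos = df.groupby(["word_form", "pos"])["count"].sum()  # count per word form and POS
print(totals)
print(by_pos)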
| 1
| 1
| 0
| 0
| 0
| 0
|
I am having trouble constructing a pandas DataFrame with sparse dtype. My input is a bunch of feature vectors stored as dicts or Counters. With sparse data like bag-of-words representation of text, it is often inappropriate and infeasible to store the data as a dense document x term matrix, and is necessary to maintain the sparsity of the data structure.
For example, say the input is:
docs = [{'hello': 1}, {'world': 1, '!': 2}]
Output should be equivalent to:
import pandas as pd
out = pd.DataFrame(docs).astype(pd.SparseDtype(float))
without creating dense arrays along the way. (We can check out.dtypes and out.sparse.density.)
Attempt 1:
out = pd.DataFrame(dtype=pd.SparseDtype(float))
out.loc[0, 'hello'] = 1
out.loc[1, 'world'] = 1
out.loc[1, '!'] = 2
But this produces a dense data structure.
Attempt 2:
out = pd.DataFrame({"hello": pd.SparseArray([]),
"world": pd.SparseArray([]),
"!": pd.SparseArray([])})
out.loc[0, 'hello'] = 1
But this raises TypeError: SparseArray does not support item assignment via setitem.
The solution I eventually found below did not work in earlier versions of Pandas where I tried it.
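One sketch that stays sparse end-to-end: build a scipy CSR matrix from the dicts first, then wrap it without densifying (DictVectorizer is from scikit-learn; pd.DataFrame.sparse.from_spmatrix needs pandas >= 0.25, and older scikit-learn uses get_feature_names() instead of get_feature_names_out()):
import pandas as pd
from sklearn.feature_extraction import DictVectorizer

docs = [{'hello': 1}, {'world': 1, '!': 2}]

vec = DictVectorizer()
csr = vec.fit_transform(docs)   # scipy sparse matrix, never densified

out = pd.DataFrame.sparse.from_spmatrix(csr, columns=vec.get_feature_names_out())
print(out.dtypes)
print(out.sparse.density)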
| 1
| 1
| 0
| 0
| 0
| 0
|
I use Python 3 and NLTK 3.0.0 with WordNet 3.0.
I would like to use this data (semeval2007) with WordNet 2.1.
Is it possible to use WordNet 2.1 with Python 3?
Is it possible to replace WordNet 3.0 with WordNet 2.1? How can I do that?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am running spaCy on a paragraph of text, and it is not tokenizing quoted text the same way in each case, and I don't understand why that is.
nlp = spacy.load("en_core_web_lg")
doc = nlp("""A seasoned TV exec, Greenblatt spent eight years as chairman of NBC Entertainment before WarnerMedia. He helped revive the broadcast network's primetime lineup with shows like "The Voice," "This Is Us," and "The Good Place," and pushed the channel to the top of the broadcast-rating ranks with 18-49-year-olds, Variety reported. He also drove Showtime's move into original programming, with series like "Dexter," "Weeds," and "Californication." And he was a key programming exec at Fox Broadcasting in the 1990s.""")
Here's the whole output:
A
seasoned
TV
exec
,
Greenblatt
spent
eight years
as
chairman
of
NBC Entertainment
before
WarnerMedia
.
He
helped
revive
the
broadcast
network
's
primetime
lineup
with
shows
like
"
The Voice
,
"
"
This
Is
Us
,
"
and
"The Good Place
,
"
and
pushed
the
channel
to
the
top
of
the
broadcast
-
rating
ranks
with
18-49-year-olds
,
Variety
reported
.
He
also
drove
Showtime
's
move
into
original
programming
,
with
series
like
"
Dexter
,
"
"
Weeds
,
"
and
"
Californication
.
"
And
he
was
a
key
programming
exec
at
Fox Broadcasting
in
the 1990s
.
The one that bothers me the most is The Good Place, which is extracted as "The Good Place, with the opening quotation mark attached to the token. Since the quotation mark is part of the token, I then can't extract the quoted text with a token Matcher later on. Any idea what's going on here?
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a dataframe with a 'description' column with details about the product. Each of the description in the column has long paragraphs. Like
"This is a superb product. I so so loved this superb product that I wanna gift to all. This is like the quality and packaging. I like it very much"
How do I locate/extract the sentence which has the phrase "superb product", and place it in a new column?
So for this case the result will be
expected output
I have used this,
searched_words = ['superb product', 'SUPERB PRODUCT']
print(df['description'].apply(lambda text: [sent for sent in sent_tokenize(text)
                                            if any(True for w in word_tokenize(sent)
                                                   if stemmer.stem(w.lower()) in searched_words)]))
The output for this is not what I want, though it works if I put just one word in the searched_words list.
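A sketch of an alternative that keeps the phrase intact instead of stemming token by token (a stemmed single token can never equal a two-word entry like 'superb product', which is why the current version only works for single words); the new column name is just an example:
from nltk.tokenize import sent_tokenize

searched_words = ['superb product']

def matching_sentences(text):
    # Keep every sentence that contains any of the searched phrases.
    return [sent for sent in sent_tokenize(text)
            if any(phrase.lower() in sent.lower() for phrase in searched_words)]

df['matched_sentence'] = df['description'].apply(matching_sentences)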
| 1
| 1
| 0
| 0
| 0
| 0
|
Generally speaking, after I have successfully trained a text RNN model with PyTorch, using torchtext to handle data loading from the original source, I would like to test it with other data sets (a sort of blink test) that come from different sources but have the same text format.
First I defined a class to handle the data loading.
class Dataset(object):
    def __init__(self, config):
        # init what I need

    def load_data(self, df: pd.DataFrame, *args):
        # implementation below
        # Data format like `(LABEL, TEXT)`

    def load_data_but_error(self, df: pd.DataFrame):
        # implementation below
        # Data format like `(TEXT)`
Here is the detail of load_data, which loads the data I trained on successfully.
TEXT = data.Field(sequential=True, tokenize=tokenizer, lower=True, fix_length=self.config.max_sen_len)
LABEL = data.Field(sequential=False, use_vocab=False)
datafields = [(label_col, LABEL), (data_col, TEXT)]
# split my data to train/test
train_df, test_df = train_test_split(df, test_size=0.33, random_state=random_state)
train_examples = [data.Example.fromlist(i, datafields) for i in train_df.values.tolist()]
train_data = data.Dataset(train_examples, datafields)
# split train to train/val
train_data, val_data = train_data.split(split_ratio=0.8)
# build vocab
TEXT.build_vocab(train_data, vectors=Vectors(w2v_file))
self.word_embeddings = TEXT.vocab.vectors
self.vocab = TEXT.vocab
test_examples = [data.Example.fromlist(i, datafields) for i in test_df.values.tolist()]
test_data = data.Dataset(test_examples, datafields)
self.train_iterator = data.BucketIterator(
(train_data),
batch_size=self.config.batch_size,
sort_key=lambda x: len(x.title),
repeat=False,
shuffle=True)
self.val_iterator, self.test_iterator = data.BucketIterator.splits(
(val_data, test_data),
batch_size=self.config.batch_size,
sort_key=lambda x: len(x.title),
repeat=False,
shuffle=False)
Next is my code (load_data_but_error) for loading the other source, which causes an error:
TEXT = data.Field(sequential=True, tokenize=tokenizer, lower=True, fix_length=self.config.max_sen_len)
datafields = [('title', TEXT)]
examples = [data.Example.fromlist(i, datafields) for i in df.values.tolist()]
blink_test = data.Dataset(examples, datafields)
self.blink_test = data.BucketIterator(
(blink_test),
batch_size=self.config.batch_size,
sort_key=lambda x: len(x.title),
repeat=False,
shuffle=True)
When executing this code, I get the error AttributeError: 'Field' object has no attribute 'vocab'. There is an existing question about this error, but it doesn't match my situation: here I already have a vocab from load_data and I want to reuse it for the blink tests.
My question is: what is the correct way to load and feed new data to a trained PyTorch model for testing?
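For reference, a sketch of how this is commonly handled with the torchtext API used above: keep the fitted field from load_data (e.g. by adding self.TEXT = TEXT there; that attribute is my assumption) and reuse it when building the blink-test dataset, so the new examples are numericalized with the vocabulary the model was trained on:
def load_data_but_error(self, df: pd.DataFrame):
    # Reuse the field whose vocab was built in load_data instead of a fresh Field.
    datafields = [('title', self.TEXT)]
    examples = [data.Example.fromlist(i, datafields) for i in df.values.tolist()]
    blink_test = data.Dataset(examples, datafields)
    self.blink_test = data.BucketIterator(
        blink_test,
        batch_size=self.config.batch_size,
        sort_key=lambda x: len(x.title),
        repeat=False,
        shuffle=False)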
| 1
| 1
| 0
| 0
| 0
| 0
|
I have an image dataset that I am extracting text data from. I have the text as a string but now want to separate this text into a more structured form.
The data looks like this:
Camden Row,Camberwell, S.E—A. Massey, M.D.4.
Campden Hill, Kensington.
(Hornton House).
Campden Hill Road, Kensington.
James, M.D. 6.
Canning Town. E—R. J. Carey (Widdicombe-
co ee
Cannon Street. E.C.—R. Cresswell, 151.
Cannon Street Road. E.—R. W. Lammiman, 106.
—J. R. Morrison, 57.—B. R. Rygate, 126.—
J. J. Rygate, M.B. 126.
Canonbury N. (see foot note)—J. Cheetham, M.D.
(Springjield House),
Canonbury Lane, N.—H. Bateman,
Roberts, 10.—J. Rose, 3.
As you can see, each entry involves a street name, followed by a letter representing a compass direction (N/S/E/W/NW/SE etc.), and then a person's name and house number.
So far I have been using python NLTK. I am able to extract streets, names and numbers as individual entities using:
tagged = nltk.pos_tag(tokens)
What I would like to achieve is a list of:
[street name, person, house_number]
For example:
[[Cannon Street Road, R. W. Lammiman, 106][Cannon Street Road, J. R. Morrison, 57]]
My plan was to use the street names as an anchor at the start and the digits as an anchor at the end, but this is a bit more complicated due to multiple house numbers on each street.
Can anyone suggest a method/regex that might work for this?
Thank you kindly if so!
James.
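For what it's worth, a rough sketch of the anchor idea with plain regexes: treat each street heading as the current anchor and collect the (name, number) pairs that follow it. Both patterns are guesses and would need tuning against the real OCR output:
import re

text = """Cannon Street Road. E.—R. W. Lammiman, 106.
—J. R. Morrison, 57.—B. R. Rygate, 126.—
J. J. Rygate, M.B. 126."""  # a few lines from the sample above

street_re = re.compile(r'^([A-Z][\w .]*?(?:Road|Street|Row|Hill|Lane|Town))[,.]')
person_re = re.compile(r'([A-Z]\.\s?(?:[A-Z]\.\s?)*[A-Z][a-z]+(?:,\s?M\.[BD]\.)?)\D*?(\d+)')

records = []
current_street = None
for line in text.splitlines():
    m = street_re.match(line)
    if m:
        current_street = m.group(1).strip()
    for name, number in person_re.findall(line):
        if current_street:
            records.append([current_street, name, number])

print(records)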
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a dataset of keywords and clicks.
I'm trying to build a model where it takes in a phrase of keyword ( not more than 5 words, eg: mechanical engineer ) and outputs a value (like clicks, eg: 56). I'm using the bag of words approach which resulted in about 40% accuracy which is not good enough. Can I get some opinions on what approach you would take to improve the accuracy?
Or perhaps my approach is wrong ?
After cleaning, here's my code:
words = []
for row in df['Keyword']:
    row = nltk.word_tokenize(row)
    for i in row:
        words.append(i)
words = sorted(list(set(words)))

training = []
for x in df['Keyword']:
    bag = []
    wrds = nltk.word_tokenize(x)
    for w in words:
        if w in wrds:
            bag.append(1)
        else:
            bag.append(0)
    training.append(bag)
model = keras.Sequential()
inputs = keras.Input(shape=(858,))
x = layers.Embedding(858, 8, input_length=5)(inputs)
x = layers.Flatten()(x)
outputs = layers.Dense(1, activation='relu')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='my_model')
model.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])
history = model.fit(X_train, Y_train,
batch_size=50,
epochs=20,
validation_split=0.2,
verbose = 1)
Here's a sample output of my X_train and Y_train.
X_train:
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Y_train:
257.43
I have about 330k samples.
Any input is appreciated. Thanks.
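For reference, a sketch of a plain regression setup on the same multi-hot vectors: an Embedding layer expects integer token indices rather than 858-dimensional 0/1 vectors, and 'accuracy' is not meaningful for a continuous click target, so this uses Dense layers with MSE loss and MAE as the reported metric. The 'Clicks' column name is an assumption:
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X_train = np.array(training, dtype='float32')  # (n_samples, 858) multi-hot bags
Y_train = df['Clicks'].values                   # hypothetical target column

model = keras.Sequential([
    keras.Input(shape=(858,)),
    layers.Dense(64, activation='relu'),
    layers.Dense(1),
])
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])
model.fit(X_train, Y_train, batch_size=50, epochs=20, validation_split=0.2, verbose=1)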
| 1
| 1
| 0
| 0
| 0
| 0
|
How can I match the longest 'and chain' available in some text?
For example, consider
"The forum had jam and berry and wine along with bread and butter and cheese and milk, even chocolate and pista!"
How can I match
'jam and berry and wine'
and
'bread and butter and cheese and milk'
without knowing the number of 'and'-separated terms?
This is what I tried.
import spacy
from spacy.matcher import Matcher
nlp = spacy.load('en_core_web_sm')
matcher = Matcher(nlp.vocab)
pattern = [{'IS_ASCII': True}, {'LOWER': 'and'}, {'IS_ASCII': True}]
matcher.add("AND_PAT", None, pattern)
doc = nlp("The forum had jam and berry and wine along with bread and butter and cheese and milk, even chocolate and pista!")
for match_id, start, end in matcher(doc):
    print(doc[start: end].text)
but this is not doing the 'lazy' kind of matching that I need.
I had a look at the documentation and it mentions the OP key for making rules but that seems to be useful only when the same token is repeated consecutively.
Also, the matches should be greedy: a result shouldn't be returned as soon as an acceptable pattern is found. In the above example, the desired result is not (as my program currently gives)
jam and berry
berry and wine
but as
jam and berry and wine
This is a problem which can probably be solved with regex but I was hoping for a solution using spaCy's rule matching. Preferably without even using the REGEX operator as mentioned here.
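For reference, here is one workaround sketch that stays inside the token Matcher (keeping the add() signature used above): the Matcher cannot repeat a group of tokens, so register patterns for chains up to some assumed maximum length and keep only the longest non-overlapping matches with filter_spans:
import spacy
from spacy.matcher import Matcher
from spacy.util import filter_spans

nlp = spacy.load('en_core_web_sm')
matcher = Matcher(nlp.vocab)

# One pattern per chain length, here up to 5 'and's (an arbitrary cap).
for n in range(1, 6):
    pattern = [{'IS_ASCII': True, 'IS_PUNCT': False}]
    pattern += [{'LOWER': 'and'}, {'IS_ASCII': True, 'IS_PUNCT': False}] * n
    matcher.add('AND_CHAIN_%d' % n, None, pattern)

doc = nlp("The forum had jam and berry and wine along with bread and butter and cheese and milk, even chocolate and pista!")
spans = [doc[start:end] for _, start, end in matcher(doc)]
for span in filter_spans(spans):  # keeps the longest span at each position
    print(span.text)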
| 1
| 1
| 0
| 0
| 0
| 0
|
I am attempting to save Doc data and attributes to binary using the new DocBin() class in spaCy.
I have saved data using pickle before but am looking for a more efficient method.
def serialize_to_disk():
    doc_bin = DocBin(attrs=["LEMMA", "ENT_IOB", "ENT_TYPE", "POS", "TAG"], store_user_data=True)
    for doc in nlp.pipe(ff):
        # print(doc.is_parsed) this DOES produce parsed docs
        doc_bin.add(doc)
    bytes_data = doc_bin.to_bytes()
    print(type(bytes_data))
    with open("bytes/test", "wb") as binary_file:
        binary_file.write(bytes_data)

def deserialize_from_disk():
    nlp = spacy.blank("en")
    with open("bytes/test", "rb") as f:
        data = f.read()
    doc_bin = DocBin().from_bytes(data)
    docs = list(doc_bin.get_docs(nlp.vocab))
    # this list does not have the tag data. Why?
    return docs
When I call doc.is_parsed on the docs in the deserialized list, it returns False; before serialization, it returns True.
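A guess at the cause, for what it's worth: Doc.is_parsed reflects the dependency parse, which DocBin stores through the HEAD and DEP attributes, and the custom attrs list above leaves both out, so the parse would be dropped on serialization. A sketch of the constructor with them included:
from spacy.tokens import DocBin

doc_bin = DocBin(
    attrs=["LEMMA", "ENT_IOB", "ENT_TYPE", "POS", "TAG", "HEAD", "DEP"],
    store_user_data=True,
)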
| 1
| 1
| 0
| 0
| 0
| 0
|
I want to create a text classifier that looks at research abstracts and determines whether they are focused on access to care, based on a labeled dataset I have. The data source is an Excel spreadsheet with three fields (project_number, abstract, and accessclass) and 326 rows of abstracts. The accessclass is 1 for access-related and 0 for not access-related (not sure if this is relevant). Anyway, I tried following along with a tutorial but wanted to make it relevant by adding my own data, and I'm having some issues with my X and Y arrays. Any help is appreciated.
import pandas as pd
import nltk
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn import naive_bayes
from sklearn.metrics import roc_auc_score
df = pd.read_excel("accessclasses.xlsx")
df.head()
#TFIDF vectorizer
stopset = set(stopwords.words('english'))
vectorizer = TfidfVectorizer(use_idf=True, lowercase=True,
strip_accents='ascii', stop_words=stopset)
y = df.accessclass
x = vectorizer.fit_transform(df)
print(x.shape)
print(y.shape)
#above and below seem to be where the issue is.
x_train, x_test, y_train, y_test = train_test_split(x, y)
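For reference, a sketch of the likely fix: vectorize just the text column rather than the whole DataFrame, so x has one row per abstract (326 x vocabulary) and lines up with y; the classifier lines simply continue the imports above:
x = vectorizer.fit_transform(df['abstract'])
y = df['accessclass']
print(x.shape, y.shape)  # both should now have 326 rows

x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)
clf = naive_bayes.MultinomialNB()
clf.fit(x_train, y_train)
print(roc_auc_score(y_test, clf.predict_proba(x_test)[:, 1]))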
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm new to text classification, but I get most of the concepts. In short, I have a list of restaurant reviews in an Excel dataset and I want to use them as my training data. Where I'm struggling is with the syntax for importing both the actual review and the classification (1 = pos, 0 = neg) as part of my training dataset. I understand how to do this if I create my dataset manually as tuples (i.e., what I currently have commented out under train). Any help is appreciated.
import nltk
from nltk.tokenize import word_tokenize
import pandas as pd
df = pd.read_excel("reviewclasses.xlsx")
customerreview = df.customerreview.tolist()  # I want this to be what's in "train" below (i.e., "this is a negative review")
reviewrating = df.reviewrating.tolist()      # I also want this to be what's in "train" below (e.g., 0)
#train = [("Great place to be when you are in Bangalore.", "1"),
#         ("The place was being renovated when I visited so the seating was limited.", "0"),
#         ("Loved the ambiance, loved the food", "1"),
#         ("The food is delicious but not over the top.", "0"),
#         ("Service - Little slow, probably because too many people.", "0"),
#         ("The place is not easy to locate", "0"),
#         ("Mushroom fried rice was spicy", "1"),
#         ]
dictionary = set(word.lower() for passage in train for word in word_tokenize(passage[0]))
t = [({word: (word in word_tokenize(x[0])) for word in dictionary}, x[1]) for x in train]
# Step 4 – the classifier is trained with sample data
classifier = nltk.NaiveBayesClassifier.train(t)
test_data = "The food sucked and I couldn't wait to leave the terrible restaurant."
test_data_features = {word.lower(): (word in word_tokenize(test_data.lower())) for word in dictionary}
print (classifier.classify(test_data_features))
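For reference, a one-line sketch that builds train in the same (text, label) shape as the commented-out example, straight from the two lists above (labels cast to strings to match that example):
train = [(review, str(rating)) for review, rating in zip(customerreview, reviewrating)]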
| 1
| 1
| 0
| 0
| 0
| 0
|
In a binary text classification with scikit-learn, using an SGDClassifier linear model on a TF-IDF representation of a bag-of-words, I want to obtain feature importances per class through the model's coefficients. I have heard diverging opinions on whether the columns (features) should be scaled with StandardScaler(with_mean=False) for this case.
With sparse data, centering of the data before scaling cannot be done anyway (the with_mean=False part). The TfidfVectorizer by default also L2 row normalizes each instance already. Based on empirical results such as the self-contained example below, it seems the top features per class make intuitively more sense when not using StandardScaler. For example 'nasa' and 'space' are top tokens for sci.space, and 'god' and 'christians' for talk.religion.misc etc.
Am I missing something? Should StandardScaler(with_mean=False) still be used for obtaining feature importances from a linear model coefficients in such NLP cases?
Are these feature importances without StandardScaler(with_mean=False) in cases like this still somehow unreliable from a theoretical point?
# load text from web
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'),
categories=['sci.space','talk.religion.misc'])
newsgroups_test = fetch_20newsgroups(subset='test', remove=('headers', 'footers', 'quotes'),
categories=['sci.space','talk.religion.misc'])
# setup grid search, optionally use scaling
from sklearn.pipeline import Pipeline
from sklearn.linear_model import SGDClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import StandardScaler
text_clf = Pipeline([
('vect', TfidfVectorizer(ngram_range=(1, 2), min_df=2, max_df=0.8)),
# remove comment below to use scaler
#('scaler', StandardScaler(with_mean=False)),
#
('clf', SGDClassifier(random_state=0, max_iter=1000))
])
from sklearn.model_selection import GridSearchCV
parameters = {
'clf__alpha': (0.0001, 0.001, 0.01, 0.1, 1.0, 10.0)
}
# find best model
gs_clf = GridSearchCV(text_clf, parameters, cv=8, n_jobs=-1, verbose=-2)
gs_clf.fit(newsgroups_train.data, newsgroups_train.target)
# model performance, very similar with and without scaling
y_predicted = gs_clf.predict(newsgroups_test.data)
from sklearn import metrics
print(metrics.classification_report(newsgroups_test.target, y_predicted))
# use eli5 to get feature importances, corresponds to the coef_ of the model, only top 10 lowest and highest for brevity of this posting
from eli5 import show_weights
show_weights(gs_clf.best_estimator_.named_steps['clf'], vec=gs_clf.best_estimator_.named_steps['vect'], top=(10, 10))
# Outputs:
No scaling:
Weight? Feature
+1.872 god
+1.235 objective
+1.194 christians
+1.164 koresh
+1.149 such
+1.147 jesus
+1.131 christian
+1.111 that
+1.065 religion
+1.060 kent
… 10616 more positive …
… 12664 more negative …
-0.922 on
-0.939 it
-0.976 get
-0.977 launch
-0.994 edu
-1.071 at
-1.098 thanks
-1.117 orbit
-1.210 nasa
-2.627 space
StandardScaler:
Weight? Feature
+0.040 such
+0.023 compuserve
+0.021 cockroaches
+0.017 how about
+0.016 com
+0.014 figures
+0.014 inquisition
+0.013 time no
+0.012 long time
+0.010 fellowship
… 11244 more positive …
… 14299 more negative …
-0.011 sherzer
-0.011 sherzer methodology
-0.011 methodology
-0.012 update
-0.012 most of
-0.012 message
-0.013 thanks for
-0.013 thanks
-0.028 ironic
-0.032 <BIAS>
| 1
| 1
| 0
| 1
| 0
| 0
|
Basically I have a Roman Urdu dataset (Urdu written with the English alphabet, e.g. sahi = right) which also includes some English words. I have to detect how many English words are included and what they are. In other words, I want to differentiate between two languages, English and Roman Urdu, that both use the same alphabet, e.g. "Prime Minister Wazeer-azam".
I have tried the spacy and spacy_langdetect packages in Colab using Python. They work well for other languages, but unfortunately they count the Roman Urdu words as English words. For example, for the text "This is English text sai kaha", in which "sai kaha" (well said) is Roman Urdu, my code below counts it as English.
import spacy
from spacy_langdetect import LanguageDetector
nlp = spacy.load("en")
nlp.add_pipe(LanguageDetector(), name="language_detector", last=True)
text = "This is English text Er lebt mit seinen Eltern und seiner Schwester in Berlin. Yo me divierto todos los días en el parque. Je m'appelle Angélica Summer, j'ai 12 ans et je suis canadienne."
doc = nlp(text)
# document level language detection. Think of it like average language of document!
print(doc._.language['language'])
# sentence level language detection
for i, sent in enumerate(doc.sents):
    print(sent, sent._.language)
OUTPUT:
This is English text sai kaha {'language': 'en', 'score': 0.9999982400559537}
Er lebt mit seinen Eltern und seiner Schwester in Berlin. {'language': 'de', 'score': 0.9999979601967207}
Yo me divierto todos los días en el parque. {'language': 'es', 'score': 0.9999976130316337}
Je m'appelle Angélica Summer, j'ai 12 ans et je suis canadienne. {'language': 'fr', 'score': 0.9999962796815557}
but my desired result is:
This English text {'language': 'en', 'score':
sai kaha {'language': 'roman-urdu', 'score':
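For what it's worth, a crude word-level sketch that sidesteps sentence-level detectors entirely: check each token against an English wordlist and treat everything else as Roman Urdu. It assumes NLTK's words corpus is available and it will misjudge names and rare words, so it is only a starting point:
import nltk
nltk.download('words')
from nltk.corpus import words

english_vocab = set(w.lower() for w in words.words())

def split_languages(text):
    english, roman_urdu = [], []
    for token in text.split():
        (english if token.lower() in english_vocab else roman_urdu).append(token)
    return english, roman_urdu

print(split_languages("This is English text sai kaha"))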
| 1
| 1
| 0
| 0
| 0
| 0
|
I have trained a spacy model on below sentences.
sent1 - STREET abc city: pqr COUNTY: STATE: qw ziP: 99999
sent2 - STREET qwe city: ewwe COUNTY: STATE: we ziP: 99990
I have annotated as shown below:
risk_street_label STREET
risk_street_value abc
risk_city_label city
risk_city_value pqr
risk_state_label STATE
risk_state_value qw
risk_zip_label ziP
risk_zip_value 99999
I have a training set of around 50 sentences, containing different values, but the labels and the order are the same.
For similar sentences the prediction is correct.
But when predicting on unrelated random sentences, it still predicts these classes.
For e.g. - Ram is a great
Prediction:
risk_street_value Ram is a great
I have also trained Watson Knowledge Studio and there it is predicting fine.
Below is an example of Watson Prediction:
RiskStreetLabel STREET RiskStreetValue abc
RiskCityLabel city RiskCityValue pqr
RiskStateLabel STATE RiskStateValue qw
RiskZipLabel ziP RiskZipValue 12345
Can someone please help me figure out where I am going wrong?
Below is the spacy standard code:
other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner']
with nlp.disable_pipes(*other_pipes):  # only train NER
    optimizer = nlp.begin_training()
    sizes = util.decaying(0.6, 0.2, 1e-4)
    for itn in range(iterations):
        print("Starting iteration " + str(itn))
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            nlp.update(
                [text],         # batch of texts
                [annotations],  # batch of annotations
                drop=0.5,       # dropout - make it harder to memorise data
                sgd=optimizer,  # callable to update weights
                losses=losses)
        print(losses)
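One common remedy, sketched below: mix negative examples into TRAIN_DATA, i.e. sentences that contain none of the entities, annotated with an empty entity list, so the model also learns when not to predict a label (the sentences here are made up):
negative_examples = [
    ("Ram is a great person.", {"entities": []}),
    ("The meeting was moved to next week.", {"entities": []}),
]
TRAIN_DATA.extend(negative_examples)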
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to use a shortest-path function to find the distance between strings in a graph. The problem is that sometimes there are close matches that I want to count. For example, I would like "communication" to count as "communications", or "networking device" to count as "network device". Is there a way to do this in Python (e.g., extract the root of words, compute a string distance, or use a Python library that already has word-form relationships like plural/gerund/misspelled/etc.)? My problem right now is that my process only works when there is an exact match for every item in my database, which is difficult to keep clean.
For example:
List_of_tags_in_graph = ['A', 'list', 'of', 'tags', 'in', 'graph']
given_tag = 'lists'
if min_fuzzy_string_distance_measure(given_tag, List_of_tags_in_graph) < threshold :
index_of_min = index_of_min_fuzzy_match(given_tag, List_of_tags_in_graph)
given_tag = List_of_tags_in_graph[index_of_min]
#... then use given_tag in the graph calculation because now I know it matches ...
Any thoughts on an easy or quick way to do this? Or perhaps a different way to think about accepting close-match strings, or just better error handling when strings don't match?
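For reference, one lightweight sketch using only the standard library: difflib.get_close_matches returns the closest existing tags above a similarity cutoff, so near-misses like 'lists' map onto 'list':
from difflib import get_close_matches

list_of_tags_in_graph = ['A', 'list', 'of', 'tags', 'in', 'graph']
given_tag = 'lists'

matches = get_close_matches(given_tag, list_of_tags_in_graph, n=1, cutoff=0.8)
if matches:
    given_tag = matches[0]  # 'lists' -> 'list'
print(given_tag)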
| 1
| 1
| 0
| 0
| 0
| 0
|
Since recently I have been getting this error whenever I run my notebook:
ModuleNotFoundError: No module named 'pytextrank'
Here is the link to my notebook:
https://colab.research.google.com/github/neomatrix369/awesome-ai-ml-dl/blob/master/examples/better-nlp/notebooks/jupyter/better_nlp_summarisers.ipynb#scrollTo=-dJrJ54a3w8S
Although checks show that the library is installed, the Python import fails. I have had this once before in a different scenario and fixed it using:
python -m pip install pytextrank
But this does not have any impact, the error still persists.
This wasn't a problem in the past and the same notebook worked well - I think it might be a regression.
Any thoughts? Any useful feedback will be highly appreciated.
Here is the code that I invoke:
import pytextrank
import sys
import networkx as nx
import pylab as plt
And I get this in the colab cell:
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Unzipping corpora/stopwords.zip.
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-3-7f30423e40f2> in <module>()
3 sys.path.insert(0, './awesome-ai-ml-dl/examples/better-nlp/library')
4
----> 5 from org.neomatrix369.better_nlp import BetterNLP
1 frames
/content/awesome-ai-ml-dl/examples/better-nlp/library/org/neomatrix369/summariser_pytextrank.py in <module>()
----> 1 import pytextrank
2 import sys
3 import networkx as nx
4 import pylab as plt
5
ModuleNotFoundError: No module named 'pytextrank'
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a large collection of UGC reviews and I want to find how strongly each is associated with an attraction, e.g. the Eiffel Tower.
I tried word count frequency but I got results like 'I stayed at a hotel and I could see the Eiffel tower from there' along with relevant reviews.
Is there a way with NLP to find reviews that are more closely associated with the Eiffel Tower, one that can rank 'The view from the Eiffel tower was breathtaking' higher than 'I went to Paris and I saw all the attractions like Eiffel tower'?
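For reference, one simple baseline sketch: rank reviews by TF-IDF cosine similarity to the attraction name, so reviews in which the Eiffel Tower terms carry more of the document's weight tend to score above passing mentions. It will not fully solve the problem, but it gives a ranking to iterate on:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviews = [
    "The view from the Eiffel tower was breathtaking",
    "I went to Paris and I saw all the attractions like Eiffel tower",
    "I stayed at a hotel and I could see the Eiffel tower from there",
]
query = "Eiffel tower"

vec = TfidfVectorizer(stop_words='english')
X = vec.fit_transform(reviews + [query])
scores = cosine_similarity(X[-1], X[:-1]).ravel()

for score, review in sorted(zip(scores, reviews), reverse=True):
    print(round(float(score), 3), review)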
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a .csv file and I want to separate Non-English Text and English Text in two different files. Below is the code, I tried:
import string
def isEnglish(s):
    return s.translate(None, string.punctuation).isalnum()

file = open('File1.csv', 'r', encoding='UTF-8')
outfile1 = open('Eng.csv', 'w', encoding='utf-8')
outfile2 = open('Noneng.csv', 'w', encoding='utf-8')
for line in file.readlines():
    r = isEnglish(line)
    if r:
        outfile1.write(line + "\n")
    else:
        outfile2.write(line + "\n")
The code is not producing the desired result. There is repetitive English text in both the files. I have attached a snapshot of one output file.
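For reference, a sketch of one common heuristic, assuming the non-English rows are written in a non-Latin script: a line counts as English if it encodes cleanly to ASCII (a proper language detector such as langdetect would be the next step up):
def is_english(line):
    try:
        line.encode('ascii')
    except UnicodeEncodeError:
        return False
    return True

with open('File1.csv', 'r', encoding='utf-8') as infile, \
     open('Eng.csv', 'w', encoding='utf-8') as eng, \
     open('Noneng.csv', 'w', encoding='utf-8') as noneng:
    for line in infile:
        (eng if is_english(line) else noneng).write(line)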
| 1
| 1
| 0
| 0
| 0
| 0
|
This is from a text analysis exercise using data from Rotten Tomatoes. The data is in critics.csv, imported as a pandas DataFrame, "critics".
This piece of the exercise is to
Construct the cumulative distribution of document frequencies (df).
The x-axis is a document count (x) and the y-axis is the
percentage of words that appear fewer than (x) times. For example,
at x = 5, plot a point representing the percentage or number of words
that appear in 5 or fewer documents.
From a previous exercise, I have a "Bag of Words"
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
# build the vocabulary and transform to a "bag of words"
X = vectorizer.fit_transform(critics.quote)
# Convert matrix to Compressed Sparse Column (CSC) format
X = X.tocsc()
Every sample I've found calculates a matrix of documents per word from that "bag of words" matrix in this way:
docs_per_word = X.sum(axis=0)
I buy that this works; I've looked at the result.
But I'm confused about what's actually happening and why it works, what is being summed, and how I might have been able to figure out how to do this without needing to look up what other people did.
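A sketch of what the sum is doing, plus the variant that matches the exercise more literally: X is documents x terms, so summing over axis=0 collapses the document axis into one number per term. With raw counts that is the total occurrences of each word; counting the non-zero entries instead gives the number of documents each word appears in:
counts_per_word = X.sum(axis=0)      # total occurrences of each word across all documents
docs_per_word = (X > 0).sum(axis=0)  # number of documents each word appears in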
| 1
| 1
| 0
| 0
| 0
| 0
|
I tried using w = Word(printables), but it isn't working. How should I give the spec for this? 'w' is meant to process Hindi characters (UTF-8).
The code specifies the grammar and parses accordingly.
671.assess :: अहसास ::2
x=number + "." + src + "::" + w + "::" + number + "." + number
If there are only English characters it works, so the code is correct for the ASCII format, but it does not work for the Unicode format.
I mean that the code works when we have something of the form
671.assess :: ahsaas ::2
i.e. it parses words in the English format, but I am not sure how to parse and then print characters in the Unicode format. I need this for English-Hindi word alignment purposes.
The python code looks like this:
# -*- coding: utf-8 -*-
from pyparsing import Literal, Word, Optional, nums, alphas, ZeroOrMore, printables, Group, alphas8bit
# grammar
src = Word(printables)
trans = Word(printables)
number = Word(nums)
x=number + "." + src + "::" + trans + "::" + number + "." + number
#parsing for eng-dict
efiledata = open('b1aop_or_not_word.txt').read()
eresults = x.parseString(efiledata)
edict1 = {}
edict2 = {}
counter=0
xx=list()
for result in eresults:
    trans = ""  # translation string
    ew = ""     # english word
    xx = result[0]
    ew = xx[2]
    trans = xx[4]
    edict1 = {ew: trans}
    edict2.update(edict1)
print len(edict2) #no of entries in the english dictionary
print "edict2 has been created"
print "english dictionary" , edict2
#parsing for hin-dict
hfiledata = open('b1aop_or_not_word.txt').read()
hresults = x.scanString(hfiledata)
hdict1 = {}
hdict2 = {}
counter=0
for result in hresults:
    trans = ""  # translation string
    hw = ""     # hin word
    xx = result[0]
    hw = xx[2]
    trans = xx[4]
    # print trans
    hdict1 = {trans: hw}
    hdict2.update(hdict1)
print len(hdict2) #no of entries in the hindi dictionary
print"hdict2 has been created"
print "hindi dictionary" , hdict2
'''
#######################################################################################################################
def translate(d, ow, hinlist):
if ow in d.keys():#ow=old word d=dict
print ow , "exists in the dictionary keys"
transes = d[ow]
transes = transes.split()
print "possible transes for" , ow , " = ", transes
for word in transes:
if word in hinlist:
print "trans for" , ow , " = ", word
return word
return None
else:
print ow , "absent"
return None
f = open('bidir','w')
#lines = ["'\
#5# 10 # and better performance in business in turn benefits consumers . # 0 0 0 0 0 0 0 0 0 0 \
#5# 11 # vHyaapaar mEmn bEhtr kaam upbhOkHtaaomn kE lIe laabhpHrdd hOtaa hAI . # 0 0 0 0 0 0 0 0 0 0 0 \
#'"]
data=open('bi_full_2','rb').read()
lines = data.split('!@#$%')
loc=0
for line in lines:
eng, hin = [subline.split(' # ') for subline in line.strip('\n').split('\n')]
for transdict, source, dest in [(edict2, eng, hin),
(hdict2, hin, eng)]:
sourcethings = source[2].split()
for word in source[1].split():
tl = dest[1].split()
otherword = translate(transdict, word, tl)
loc = source[1].split().index(word)
if otherword is not None:
otherword = otherword.strip()
print word, ' <-> ', otherword, 'meaning=good'
if otherword in dest[1].split():
print word, ' <-> ', otherword, 'trans=good'
sourcethings[loc] = str(
dest[1].split().index(otherword) + 1)
source[2] = ' '.join(sourcethings)
eng = ' # '.join(eng)
hin = ' # '.join(hin)
f.write(eng + '\n' + hin + '\n')
f.close()
'''
if an example input sentence for the source file is:
1# 5 # modern markets : confident consumers # 0 0 0 0 0
1# 6 # AddhUnIk baajaar : AshHvsHt upbhOkHtaa . # 0 0 0 0 0 0
!@#$%
the ouptut would look like this :-
1# 5 # modern markets : confident consumers # 1 2 3 4 5
1# 6 # AddhUnIk baajaar : AshHvsHt upbhOkHtaa . # 1 2 3 4 5 0
!@#$%
Output Explanation:-
This achieves bidirectional alignment.
It means the first word of the English sentence, 'modern', maps to the first word of the Hindi sentence, 'AddhUnIk', and vice versa. Here even punctuation characters are taken as words, as they are also an integral part of the bidirectional mapping. Thus, if you observe, the Hindi word '.' has a null alignment: it maps to nothing in the English sentence, which doesn't have a full stop.
The 3rd line in the output is a delimiter used when working with a number of sentences for which we are trying to achieve bidirectional mapping.
What modification should I make for this to work if I have the Hindi sentences in Unicode (UTF-8) format?
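For reference, a sketch using the Unicode range sets that newer pyparsing releases expose (pyparsing >= 2.4 provides pyparsing_unicode.Devanagari; older versions would need an explicit character range). The tail of the grammar is trimmed here so it matches the sample line quoted above:
# -*- coding: utf-8 -*-
from pyparsing import Word, nums, printables, pyparsing_unicode as ppu

number = Word(nums)
src = Word(printables)
trans = Word(ppu.Devanagari.printables)  # Hindi word in Devanagari script

x = number + "." + src + "::" + trans + "::" + number
print(x.parseString(u"671.assess :: अहसास ::2"))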
| 1
| 1
| 0
| 0
| 0
| 0
|
this is my code
with open('file.txt', 'r') as source:
    # Indentation
    polTerm = [line.strip().split()[0] for line in source.readlines()]
    polFreq = [int(line.strip().split()[1]) for line in source.readlines()]
this is inside file.txt
anak 1
aset 3
atas 1
bangun 1
bank 9
benar 1
bentuk 1
I got the polTerm just like what I want:
['anak', 'aset', 'atas', 'bangun', 'bank', 'benar', 'bentuk']
but for the polFreq, instead of this:
['1', '3', '1', '1', '9', '1', '1']
what I got is blank list like this:
[ ]
Does anyone know why this happened, and how to fix it so I get what I want?
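For reference, a sketch of the usual fix: readlines() consumes the file, so the second list comprehension iterates over nothing; read the lines once and split them:
with open('file.txt', 'r') as source:
    pairs = [line.split() for line in source if line.strip()]

polTerm = [term for term, freq in pairs]
polFreq = [int(freq) for term, freq in pairs]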
| 1
| 1
| 0
| 0
| 0
| 0
|
I want to run TextRank against a large corpus (my dev environment alone uses 17K sentences).
Hence I have used a scipy dok_matrix. However, when assigning the first value to my sparse matrix (i.e., similarity_matrix[1][0]), I get the following error, despite seeing in the PyCharm debugger that my dok_matrix is of size 17K by 17K.
IndexError: row index (1) out of range
What have I done wrong?
def _score_generator(self, sentences, sentence_vectors):
    sentence_count = len(sentences)
    similarity_matrix = dok_matrix((sentence_count, sentence_count), dtype=np.float32)
    for i in range(len(sentences)):
        for j in range(len(sentences)):
            if i != j:
                similarity_matrix[i][j] = cosine_similarity(sentence_vectors[i].reshape(1, 100), sentence_vectors[j].reshape(1, 100))[0, 0]
    nx_graph = nx.from_scipy_sparse_matrix(similarity_matrix)
    scores = nx.pagerank(nx_graph)
    return scores
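For what it's worth, the indexing style looks like the likely culprit: scipy sparse matrices want both indices in a single subscript, and m[i][j] first slices out a one-row matrix and then indexes that, which fits the row-index error above. A minimal check, after which the assignment in _score_generator would become similarity_matrix[i, j] = ...:
import numpy as np
from scipy.sparse import dok_matrix

m = dok_matrix((3, 3), dtype=np.float32)
m[1, 0] = 0.5    # single subscript with both indices: works
# m[1][0] = 0.5  # chained indexing: raises the IndexError quoted above
print(m.toarray())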
| 1
| 1
| 0
| 0
| 0
| 0
|
I am currently working in Python with spaCy, and there are different pre-trained models like en_core_web_sm or en_core_web_md. One of them uses word vectors to find word similarity and the other one uses context-sensitive tensors.
What is the difference between using context-sensitive tensors and using word vectors? And what are context-sensitive tensors exactly?
| 1
| 1
| 0
| 0
| 0
| 0
|
I have an NLP dataset, and following the official PyTorch tutorial, I converted the dataset into word_to_idx and tag_to_idx dictionaries, like:
word_to_idx = {'I': 0, 'have': 1, 'used': 2, 'transfers': 3, 'on': 4, 'three': 5, 'occasions': 6, 'now': 7, 'and': 8, 'each': 9, 'time': 10}
tag_to_idx = {'PRON': 0, 'VERB': 1, 'NOUN': 2, 'ADP': 3, 'NUM': 4, 'ADV': 5, 'CONJ': 6, 'DET': 7, 'ADJ': 8, 'PRT': 9, '.': 10, 'X': 11}
I want to complete the POS-Tagging task with BiLSTM. Here is my BiLSTM code:
class LSTMTagger(nn.Module):
    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super(LSTMTagger, self).__init__()
        self.hidden_dim = hidden_dim
        self.word_embeddings = nn.Embedding(vocab_size, tagset_size)
        # The LSTM takes word embeddings as inputs, and outputs hidden states
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, bidirectional=True)
        # The linear layer that maps from hidden state space to tag space
        self.hidden2tag = nn.Linear(in_features=hidden_dim * 2, out_features=tagset_size)

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)
        lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
        # tag_scores = F.softmax(tag_space, dim=1)
        tag_scores = F.log_softmax(tag_space, dim=1)
        return tag_scores
Then I run the training code in Pycharm, like:
EMBEDDING_DIM = 6
HIDDEN_DIM = 6
NUM_EPOCHS = 3
model = LSTMTagger(embedding_dim=EMBEDDING_DIM,
hidden_dim=HIDDEN_DIM,
vocab_size=len(word_to_idx),
tagset_size=len(tag_to_idx))
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
# See what the scores are before training
with torch.no_grad():
    inputs = prepare_sequence(training_data[0][0], word_to_idx)
    tag_scores = model(inputs)
    print(tag_scores)
    print(tag_scores.size())
However, it raises an error at the line tag_scores = model(inputs) and at the line lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1)).
The error is:
Traceback (most recent call last):
line 140, in <module>
tag_scores = model(inputs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
line 115, in forward
lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 559, in forward
return self.forward_tensor(input, hx)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 539, in forward_tensor
output, hidden = self.forward_impl(input, hx, batch_sizes, max_batch_size, sorted_indices)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 519, in forward_impl
self.check_forward_args(input, hx, batch_sizes)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 490, in check_forward_args
self.check_input(input, batch_sizes)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 153, in check_input
self.input_size, input.size(-1)))
RuntimeError: input.size(-1) must be equal to input_size. Expected 6, got 12
I don't know how to debug it. Could somebody help me fix this issue? Thanks in advance!
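For what it's worth, the 6-vs-12 in the error matches the constructor: nn.Embedding is built with tagset_size (12 tags) as its embedding width, while the LSTM was built to expect embedding_dim (6). A sketch of the corrected line in __init__:
self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)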
| 1
| 1
| 0
| 0
| 0
| 0
|
I've been doing POS tagging on the Bengali language, but I get an error: when I write print(word + tag), no data goes to the tagged file.
taggedOutput = doTag(tagger, untagged)
tagged = pd.read_csv("Tagged_bangla_hmm.csv", 'w', encoding="utf-8", header=None, delimiter=r'\s+', skip_blank_lines=False, engine='python')
for sentence in tagged:
    for word, tag in enumerate(sentence):
        tagged.write(word + tag)
        print(tagged)
        print('\n')
print('Finished Tagging')
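For reference, a sketch of one way to write the tagged output, assuming taggedOutput is a list of sentences where each sentence is a list of (word, tag) pairs (that structure is an assumption on my part):
import csv

with open("Tagged_bangla_hmm.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f)
    for sentence in taggedOutput:
        for word, tag in sentence:
            writer.writerow([word, tag])
        writer.writerow([])  # blank row between sentences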
| 1
| 1
| 0
| 0
| 0
| 0
|
Size of the dataset: 81,256.
Classes: 200.
The number of samples per class ranges from 2,757 for one class down to as few as 10 for another; it's highly unbalanced.
How should I balance this dataset, and what type of algorithm should be used to train the model?
Right now I have used a random over-sampler for sampling and LinearSVC to train the model.
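For reference, one low-effort alternative to oversampling (a sketch): let the classifier reweight classes itself via class_weight='balanced' and compare it against the random over-sampler on a stratified hold-out set:
from sklearn.svm import LinearSVC

clf = LinearSVC(class_weight='balanced')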
| 1
| 1
| 0
| 1
| 0
| 0
|
Hello everyone, I've tried searching this topic and haven't been able to find a good answer, so I was hoping someone could help me out.
Let's say I am trying to create an ML model using scikit-learn and Python. I have a data set as such:
| Features | Topic | Sub-Topic |
|----------|---------|------------------|
| ... | Science | Space |
| ... | Science | Engineering |
| ... | History | American History |
| ... | History | European History |
My features list is composed of just text such as a small paragraph from some essay. Now I want to be able to use ML to predict what the topic and sub-topic of that text will be.
I know I would need to use some sort of NLP, such as spaCy, to analyze the text. The part where I am confused is having two output variables: topic and sub-topic. I've read that scikit-learn has something called MultiOutputClassifier, but there is also something called multiclass classification, so I'm a little confused as to which route to take.
Could someone please point me in the right direction as to what regressor to use or how to achieve this?
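For reference, a sketch of the multi-output route: one shared text vectorizer feeding a classifier wrapped in MultiOutputClassifier, so topic and sub-topic are predicted together. The DataFrame and column names are assumptions based on the table above:
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression

pipe = make_pipeline(
    TfidfVectorizer(),
    MultiOutputClassifier(LogisticRegression(max_iter=1000)),
)
pipe.fit(df['Features'], df[['Topic', 'Sub-Topic']])
print(pipe.predict(["A short paragraph about rockets and orbital mechanics."]))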
| 1
| 1
| 0
| 1
| 0
| 0
|
I am working on sentiment analysis for a college project. I have an Excel file with a column named "comments" that has 1000 rows. The sentences in these rows have spelling mistakes, and for the analysis I need them corrected. I don't know how to process this so that I get a column with corrected sentences using Python code.
All the methods I found correct spelling mistakes for a single word, not a sentence, and not at the column level with hundreds of rows.
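For reference, a sketch using TextBlob's correct(), applied row by row to the column; it corrects word by word under the hood, is slow on hundreds of rows, and is not always right, so treat it as a starting point. The file name is an assumption:
import pandas as pd
from textblob import TextBlob

df = pd.read_excel("comments.xlsx")  # hypothetical file name
df['comments_corrected'] = df['comments'].apply(lambda s: str(TextBlob(str(s)).correct()))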
| 1
| 1
| 0
| 0
| 0
| 0
|