| text (string, 0 to 27.6k chars) | python (int64, 0/1) | DeepLearning or NLP (int64, 0/1) | Other (int64, 0/1) | Machine Learning (int64, 0/1) | Mathematics (int64, 0/1) | Trash (int64, 0/1) |
|---|---|---|---|---|---|---|
I am working on a project in which I am trying to calculate the percentage of inflectional morphology of multiple corpora in order to compare them.
I know how to use the nltk Porter Stemmer to get the root of a word, but it would be much more helpful for me if I could return the affix rather than the root. If I could do that, I could just count the number of affixes the stemmer cut off ("ly", "ed", etc.) and compare it to the total number of words. It might be a simple flip, but I can't figure out how to do this from the roots.
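What I have in mind (a rough sketch, not sure it is the right approach) is to compare each word with its Porter stem and keep whatever got cut off:
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def removed_suffix(word):
    """Return whatever the stemmer cut off the end of the word (may be '')."""
    stem = stemmer.stem(word)
    # The Porter stemmer sometimes rewrites the stem (e.g. 'happy' -> 'happi'),
    # so only treat the leftover as a clean suffix when the word starts with the stem.
    return word[len(stem):] if word.startswith(stem) else ""

words = ["quickly", "jumped", "cats", "running"]
affixes = [removed_suffix(w) for w in words]
print(affixes, sum(1 for a in affixes if a) / len(words))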
| 1
| 1
| 0
| 0
| 0
| 0
|
(Note: I am aware that there have been previous posts on this question (e.g. here or here), but they are rather old and I think there has been quite some progress in NLP in the past few years.)
I am trying to determine the tense of a sentence, using natural language processing in Python.
Is there an easy-to-use package for this? If not, how would I need to implement solutions in TextBlob, StanfordNLP or Google Cloud Natural Language API?
TextBlob seems easiest to use, and I manage to get the POS tags listed, but I am not sure how I can turn the output into a 'tense prediction value' or simply a best guess of the tense. Moreover, my text is in Spanish, so I would prefer to use Google Cloud or StanfordNLP (or any other easy-to-use solution) that supports Spanish.
I have not managed to work with the Python interface for StanfordNLP.
Google Cloud Natural Language API seems to offer exactly what I need (see here), but I have not managed to find out how I would get to this output. I have used Google Cloud NLP for other analyses (e.g. entity sentiment analysis) and it has worked, so I am confident I could set it up if I find the right example of use.
Example of textblob:
from textblob import TextBlob
from textblob.taggers import NLTKTagger
nltk_tagger = NLTKTagger()
blob = TextBlob("I am curious to see whether NLP is able to predict the tense of this sentence.", pos_tagger=nltk_tagger)
print(blob.pos_tags)
-> this prints the POS tags; how would I convert them into a prediction of the tense of this sentence?
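For what it is worth, the only heuristic I can think of so far is mapping the Penn Treebank verb tags to a coarse tense, something like this (a rough, English-only sketch):
def guess_tense(pos_tags):
    """Very rough tense guess from Penn Treebank tags (English only)."""
    tags = [tag for _, tag in pos_tags]
    if "VBD" in tags or "VBN" in tags:
        return "past"
    if "MD" in tags:                      # will / shall / would ...
        return "future"
    if any(t in tags for t in ("VBP", "VBZ", "VBG")):
        return "present"
    return "unknown"

print(guess_tense(blob.pos_tags))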
Example with Google Cloud NLP (after setting up credentials):
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
text = "I am curious to see how this works"
client = language.LanguageServiceClient()
document = types.Document(
content=text,
type=enums.Document.Type.PLAIN_TEXT)
tense = (WHAT NEEDS TO COME HERE?)
print(tense)
-> I am not sure about the code that needs to be entered to predict the tense (indicated in the code)
I am quite a newbie to Python so any help on this topic would be highly appreciated! Thanks!
| 1
| 1
| 0
| 0
| 0
| 0
|
Can AWS SageMaker handle binary classification using TF-IDF-vectorized text as the prediction basis?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to save a list of words that I have converted to a dataframe into a table in databricks so that I can view or refer to it later when my cluster restarts.
I have tried the code below, but it either gives me an error or runs without the table showing up in the database.
import pandas as pd

myWords_External = [['this', 'is', 'my', 'world'], ['this', 'is', 'the', 'problem']]
df1 = pd.DataFrame(myWords_External)
df1.write.mode("overwrite").saveAsTable("temp.eehara_trial_table_9_5_19")
the last line gives me the following error
AttributeError: 'DataFrame' object has no attribute 'write'
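If it helps, my current guess (untested) is that I need a Spark DataFrame rather than a pandas one before calling write, roughly:
# untested guess: convert the pandas DataFrame to a Spark DataFrame first
# (assumes a Databricks notebook where `spark` is already defined)
spark_df = spark.createDataFrame(df1)
spark_df.write.mode("overwrite").saveAsTable("temp.eehara_trial_table_9_5_19")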
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm fine tuning a gpt-2 model following this tutorial:
https://medium.com/@ngwaifoong92/beginners-guide-to-retrain-gpt-2-117m-to-generate-custom-text-content-8bb5363d8b7f
With its associated GitHub repository:
https://github.com/nshepperd/gpt-2
I have been able to replicate the examples; my issue is that I'm not finding a parameter to set the number of iterations.
Basically, the training script shows a sample every 100 iterations and saves a model version every 1000 iterations, but I'm not finding a parameter to train it for, say, 5000 iterations and then stop.
The script for training is here:
https://github.com/nshepperd/gpt-2/blob/finetuning/train.py
EDIT:
As suggested by cronoik, I'm trying to replace the while loop with a for loop.
I'm adding these changes:
Adding one additional argument:
parser.add_argument('--training_steps', metavar='STEPS', type=int, default=1000, help='a number representing how many training steps the model shall be trained for')
Changing the loop:
try:
    for iter_count in range(training_steps):
        if counter % args.save_every == 0:
            save()
Using the new argument:
python3 train.py --training_steps 300
But I'm getting this error:
File "train.py", line 259, in main
for iter_count in range(training_steps):
NameError: name 'training_steps' is not defined
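My guess is that I need to reference the parsed argument through the args namespace; the change I plan to test next:
# planned fix (untested): argparse stores the value on the args object
for iter_count in range(args.training_steps):
    if counter % args.save_every == 0:
        save()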
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to pad a text for a seq2seq model.
from keras_preprocessing.sequence import pad_sequences
x=[["Hello, I'm Bhaskar", "This is Keras"], ["This is an", "experiment"]]
pad_sequences(sequences=x, maxlen=5, dtype='object', padding='pre', value="<PAD>")
I encounter the following error:
ValueError: `dtype` object is not compatible with `value`'s type: <class 'str'>
You should set `dtype=object` for variable length strings.
However, when I try to do the same with integers, it works fine.
x=[[1, 2, 3], [4, 5, 6]]
pad_sequences(sequences=x, maxlen=5, padding='pre', value=0)
Output:
array([[0, 0, 1, 2, 3],
[0, 0, 4, 5, 6]], dtype=int32)
The output I hope to get is:
[["<PAD>", "<PAD>", "<PAD>", "Hello, I'm Bhaskar", "This is Keras"], ["<PAD>", "<PAD>","<PAD>", "This is an", "experiment"]]
| 1
| 1
| 0
| 0
| 0
| 0
|
I have created an LDA model using Gensim, for which I first iterated over num_topics in the range 3 to 10 and, based on the pyLDAvis plots, chose n = 3 for the final LDA model.
import glob
import sys
sys.path.append('/Users/tcssig/Documents/NLP_code_base/Doc_Similarity')
import normalization
from gensim.models.coherencemodel import CoherenceModel
datalist = []
for filename in glob.iglob('/Users/tcssig/Documents/Speech_text_files/*.*'):
    text = open(filename).readlines()
    text = normalization.normalize_corpus(text, only_text_chars=True, tokenize=True)
    datalist.append(text)
datalist = [datalist[i][0] for i in range(len(datalist))]
from gensim import models,corpora
import spacy
dictionary = corpora.Dictionary(datalist)
num_topics = 3
Lda = models.LdaMulticore
#lda= Lda(doc_term_matrix, num_topics=num_topics,id2word = dictionary, passes=20,chunksize=2000,random_state=3)
doc_term_matrix = [dictionary.doc2bow(doc) for doc in datalist]
dictionary = corpora.Dictionary(datalist)
import numpy as np
import pandas as pd
import spacy
import re
from tqdm._tqdm_notebook import tqdm_notebook,tnrange,tqdm
from collections import Counter,OrderedDict
from gensim import models,corpora
from gensim.summarization import summarize,keywords
import warnings
import pyLDAvis.gensim
import matplotlib.pyplot as plt
import seaborn as sns
Lda = models.LdaMulticore
coherenceList_umass = []
coherenceList_cv = []
num_topics_list = np.arange(3,10)
for num_topics in tqdm(num_topics_list):
    lda = Lda(doc_term_matrix, num_topics=num_topics, id2word=dictionary, passes=20, chunksize=4000, random_state=43)
    cm = CoherenceModel(model=lda, corpus=doc_term_matrix, dictionary=dictionary, coherence='u_mass')
    coherenceList_umass.append(cm.get_coherence())
    cm_cv = CoherenceModel(model=lda, corpus=doc_term_matrix, texts=datalist, dictionary=dictionary, coherence='c_v')
    coherenceList_cv.append(cm_cv.get_coherence())
    vis = pyLDAvis.gensim.prepare(lda, doc_term_matrix, dictionary)
    pyLDAvis.save_html(vis, 'pyLDAvis_%d.html' % num_topics)
plotData = pd.DataFrame({'Number of topics':num_topics_list,'CoherenceScore':coherenceList_umass})
f,ax = plt.subplots(figsize=(10,6))
sns.set_style("darkgrid")
sns.pointplot(x='Number of topics',y= 'CoherenceScore',data=plotData)
plt.axhline(y=-3.9)
plt.title('Topic coherence')
plt.savefig('Topic coherence plot.png')
#################################################################
#################################################################
lda_final= Lda(doc_term_matrix, num_topics=3,id2word = dictionary, passes=20,chunksize=4000,random_state=43)
lda_final.save('lda_final')
dictionary.save('dictionary')
corpora.MmCorpus.serialize('doc_term_matrix.mm', doc_term_matrix)
a = lda_final.show_topics(num_topics=3,formatted=False,num_words=10)
b = lda_final.top_topics(doc_term_matrix,dictionary=dictionary,topn=10)
topic2wordb = {}
topic2csb = {}
topic2worda = {}
topic2csa = {}
num_topics =lda_final.num_topics
cnt =1
for ws in b:
    wset = set(w[1] for w in ws[0])
    topic2wordb[cnt] = wset
    topic2csb[cnt] = ws[1]
    cnt += 1
for ws in a:
    wset = set(w[0] for w in ws[1])
    topic2worda[ws[0]+1] = wset
for i in range(1, num_topics+1):
    for j in range(1, num_topics+1):
        if topic2worda[i].intersection(topic2wordb[j]) == topic2worda[i]:
            topic2csa[i] = topic2csb[j]
print('the final data block')
finalData = pd.DataFrame([],columns=['Topic','words'])
finalData['Topic']=topic2worda.keys()
finalData['Topic'] = finalData['Topic'].apply(lambda x: 'Topic'+str(x))
finalData['words']=topic2worda.values()
finalData['cs'] = topic2csa.values()
finalData.sort_values(by='cs',ascending=False,inplace=True)
finalData.to_csv('CoherenceScore.csv')
print(finalData)
Now I have the trained model, but I want to know how to use it on the documents used for training, and also on a new unseen document, to assign topics.
I'm using the code below to do this but getting the error shown:
unseen_document = 'How a Pentagon deal became an identity crisis for Google'
text = normalization.normalize_corpus(unseen_document, only_text_chars=True, tokenize=True)
bow_vector = dictionary.doc2bow(text)
corpora.MmCorpus.serialize('x.bow_vector', bow_vector)
corpus = [dictionary.doc2bow(text)]
x = lda_final[corpus]
Error Message :
Topic words cs
2 Topic3 {senator, people, power, home, year, believe, ... -0.175486
1 Topic2 {friend, place, love, play, general, house, ye... -0.318839
0 Topic1 {money, doe, fucking, play, love, people, worl... -1.360688
Traceback (most recent call last):
File "LDA_test.py", line 141, in <module>
corpus = [dictionary.doc2bow(text)]
File "/Users/tcssig/anaconda/lib/python3.5/site-packages/gensim/corpora/dictionary.py", line 250, in doc2bow
counter[w if isinstance(w, unicode) else unicode(w, 'utf-8')] += 1
TypeError: coercing to str: need a bytes-like object, list found
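My suspicion is that doc2bow expects a flat list of tokens for one document, while normalize_corpus is handing it a nested list; the call I plan to try next (untested):
# untested guess: pass the unseen text as a one-document corpus and unwrap the token list
text = normalization.normalize_corpus([unseen_document], only_text_chars=True, tokenize=True)[0]
bow_vector = dictionary.doc2bow(text)
print(lda_final.get_document_topics(bow_vector))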
| 1
| 1
| 0
| 0
| 0
| 0
|
In the tutorial example of spaCy in Python, the result of apples.similarity(oranges) is
0.39289959293092641
instead of 0.7857989796519943
Any reasons for that?
Original docs of the tutorial
https://spacy.io/docs/
A tutorial with a different answer to the one I get:
http://textminingonline.com/getting-started-with-spacy
Thanks
| 1
| 1
| 0
| 0
| 0
| 0
|
I have an interesting problem. I have a list of billions of URLs. Something like:
www.fortune.com
www.newyorktimes.com
www.asdf.com
I also have an English dictionary as a JSON file. https://github.com/dwyl/english-words. How can I count the number of English words detected in the URL?
For example, for the URLs above, the counts should be 1, 3, 0 (for the words fortune; new, york, times; and none). The ideal output is a pandas dataframe with the URLs and the count of English words in each URL.
The problem is challenging because there isn't a delimiter between words in the URL. It's also kind of a brute force search.
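The rough brute-force idea I have so far (untested sketch, assuming the JSON file's keys are the words, as in the repo's words_dictionary.json) is a greedy longest-match scan over the domain part of each URL:
import json
import pandas as pd

with open("words_dictionary.json") as f:
    english = set(json.load(f))               # the dwyl file maps word -> 1

def count_english_words(url, min_len=3):
    """Greedy longest-match scan over the domain name (very rough)."""
    name = url.lower()
    if name.startswith("www."):
        name = name[4:]
    name = name.split(".")[0]
    count, i = 0, 0
    while i < len(name):
        for j in range(len(name), i + min_len - 1, -1):
            if name[i:j] in english:
                count += 1
                i = j
                break
        else:
            i += 1                            # no word starts here, move on
    return count

urls = ["www.fortune.com", "www.newyorktimes.com", "www.asdf.com"]
df = pd.DataFrame({"url": urls, "english_words": [count_english_words(u) for u in urls]})
print(df)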
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm new to NLP and am struggling to interpret the results I get from a simple example of NLP classification of most informative features. Specifically, in the common example shown below, I don't understand why the word "this" is informative when it appears in 3/5 negative-sentiment sentences and 3/5 positive sentences.
train = [('I love this sandwich.', 'pos'),
('This is an amazing place!', 'pos'),
('I feel very good about these beers.', 'pos'),
('This is my best work.', 'pos'),
("What an awesome view", 'pos'),
('I do not like this restaurant', 'neg'),
('I am tired of this stuff.', 'neg'),
("I can't deal with this", 'neg'),
('He is my sworn enemy!', 'neg'),
('My boss is horrible.', 'neg')]
from nltk.tokenize import word_tokenize # or use some other tokenizer
all_words = set(word.lower() for passage in train for word in word_tokenize(passage[0]))
t = [({word: (word in word_tokenize(x[0])) for word in all_words}, x[1]) for x in train]
import nltk
classifier = nltk.NaiveBayesClassifier.train(t)
classifier.show_most_informative_features()
Here are the results:
Most Informative Features
this = True neg : pos = 2.3 : 1.0
this = False pos : neg = 1.8 : 1.0
an = False neg : pos = 1.6 : 1.0
. = False neg : pos = 1.4 : 1.0
. = True pos : neg = 1.4 : 1.0
feel = False neg : pos = 1.2 : 1.0
of = False pos : neg = 1.2 : 1.0
not = False pos : neg = 1.2 : 1.0
do = False pos : neg = 1.2 : 1.0
very = False neg : pos = 1.2 : 1.0
Any ideas? I'd love an explanation of what the formula is that calculates the probability of a word / its informativeness.
I also did this super simple example:
train = [('love', 'pos'),
('love', 'pos'),
('love', 'pos'),
('bad', 'pos'),
("bad", 'pos'),
('bad', 'neg'),
('bad', 'neg'),
("bad", 'neg'),
('bad', 'neg'),
('love', 'neg')]
And get the following:
Most Informative Features
bad = False pos : neg = 2.3 : 1.0
love = True pos : neg = 2.3 : 1.0
love = False neg : pos = 1.8 : 1.0
bad = True neg : pos = 1.8 : 1.0
which, while directionally right, doesn't seem to match up with any likelihood-ratio calculation I can figure out.
| 1
| 1
| 0
| 0
| 0
| 0
|
A possibly very basic question about NLP best practices.
Does punctuation affect the behaviour of NLTK's Parts-of-Speech tagger? Or is it fine to remove punctuation from a sentence before passing it to the POS tagger?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am working to find nearly duplicates between short text fields. As an example, a text field looks like this:
TUBING,SHRINK: 3/8",4' LG,FLEXIBLE POLYOLEFIN,HEAT,2:1
In my case, these special characters and numbers are meaningful, and removing them might hurt finding the right duplicates. Any suggestions on how to deal with this kind of information for text similarity? Thanks in advance!
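The only idea I have so far is character n-gram TF-IDF with cosine similarity, so the symbols and numbers are kept instead of being stripped; a rough sketch (the second field is a made-up near-duplicate just for illustration):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

fields = [
    'TUBING,SHRINK: 3/8",4\' LG,FLEXIBLE POLYOLEFIN,HEAT,2:1',
    'TUBING SHRINK 3/8" 4\' LG FLEXIBLE POLYOLEFIN HEAT 2:1',
]
# character n-grams keep the digits and punctuation instead of throwing them away
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vec.fit_transform(fields)
print(cosine_similarity(X[0], X[1]))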
| 1
| 1
| 0
| 0
| 0
| 0
|
Can we extract dynamic entities that we have not defined in the NLU file or data file?
Below is my NLU File
## intent:benename
- ahsan
- ali
- ahsan
- mohsin
- ahmed
- qaseem
- yasir
- qaiser
- salman
- daniyal
For example, the bene_names above are easily extracted by the NLU engine, but what happens when a user enters a new name? How can we get that name?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am applying the WordNet lemmatizer to my corpus, and I need to define the POS tag for the lemmatizer:
stemmer = PorterStemmer()

def lemmitize(document):
    return stemmer.stem(WordNetLemmatizer().lemmatize(document, pos='v'))

def preprocess(document):
    output = []
    for token in gensim.utils.simple_preprocess(document):
        if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > 3:
            print("lemmitize: ", lemmitize(token))
            output.append(lemmitize(token))
    return output
Now, as you can see, I am setting the POS to verb (and I know WordNet's default POS is noun); however, when I lemmatize my document:
the left door closed at the night
I am getting the output:
output: ['leav', 'door', 'close', 'night']
which is not what I was expecting. In my sentence above, left indicates which door (i.e. right or left). If I choose pos='n' this problem may be solved, but then it behaves like the WordNet default and there is no effect on words like taken.
I found a similar issue here and modified the exception list in nltk_data/corpora/wordnet/verb.exc, changing left leave to left left, but I am still getting the same result, leav.
Now I am wondering if there is any solution to this problem or, in the best case, whether there is a way to add a custom dictionary of words (limited to my document) that WordNet should not lemmatize, like:
my_dict_list = [left, ...]
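Something along these lines is what I have in mind for the custom list (rough sketch):
# rough idea: skip lemmatization/stemming entirely for words I want left untouched
keep_as_is = {'left'}

def lemmitize(document):
    if document in keep_as_is:
        return document
    return stemmer.stem(WordNetLemmatizer().lemmatize(document, pos='v'))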
| 1
| 1
| 0
| 0
| 0
| 0
|
The Sentihood dataset is a dataset for targeted aspect-based sentiment analysis. Its test and train files are available in JSON format. However, when I try loading one using Python's json module, it gives the following error:
JSONDecodeError: Expecting value: line 7 column 1 (char 6)
Is there some other way of loading JSON files? I don't have much knowledge of JSON and would appreciate any help.
Link for Sentihood dataset : https://github.com/uclmr/jack/tree/master/data/sentihood
My code is simply:
with open("sentihood-train.json", "r") as read_it:
data = json.load(read_it)
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm working on a project for text similarity using FastText, the basic example I have found to train a model is:
from gensim.models import FastText
model = FastText(tokens, size=100, window=3, min_count=1, iter=10, sorted_vocab=1)
As I understand it, since I'm specifying the vector and n-gram size, the model is being trained from scratch here, and if the dataset is small I shouldn't expect great results.
The other option I have found is to load the original Wikipedia model which is a huge file:
from gensim.models.wrappers import FastText
model = FastText.load_fasttext_format('wiki.simple')
My question is, can I load the Wikipedia or any other model, and fine tune it with my dataset?
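What I was hoping would work, if continued training is supported at all, is something along these lines (untested sketch, assuming a recent gensim with load_facebook_model, that wiki.simple.bin is the matching .bin file, and that tokens is the same list of tokenized sentences as above):
from gensim.models.fasttext import load_facebook_model

# untested sketch: load the pretrained model and continue training on my own corpus
model = load_facebook_model('wiki.simple.bin')
model.build_vocab(tokens, update=True)          # add my corpus's words to the vocabulary
model.train(tokens, total_examples=len(tokens), epochs=model.epochs)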
| 1
| 1
| 0
| 0
| 0
| 0
|
I am currently trying to build an LSTM RNN using PyTorch. One input vector is represented as an array of 50 integers corresponding to a sequence of at most 50 tokens (with padding), where each integer corresponds to an element of my vocabulary and to the index of the 1 in the one-hot vector. I want an embedding layer that just uses a lookup table to one-hot encode the integers, kind of like TensorFlow's one-hot encoding.
Something like this "kind of" works
import torch
import numpy as np
import torch.nn as nn
# vocab_size is the number of words in your train, val and test set
# vector_size is the dimension of the word vectors you are using
vocab_size, vector_size = 5, 5
embed = nn.Embedding(vocab_size, vector_size)
# intialize the word vectors, pretrained_weights is a
# numpy array of size (vocab_size, vector_size) and
# pretrained_weights[i] retrieves the word vector of
# i-th word in the vocabulary
pretrained_weights = np.zeros((vocab_size, vector_size))
np.fill_diagonal(pretrained_weights, 1)
tmp =torch.from_numpy(pretrained_weights)
embed.weight = nn.Parameter(tmp,requires_grad=False )
# Then turn the word index into actual word vector
vocab = {"some": 0, "words": 1}
word_indexes = torch.from_numpy(np.array([vocab[w] for w in ["some", "words"]]))
word_vectors = embed(word_indexes)
word_vectors.data.numpy()
>>>output
array([[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.]])
but it is very hacky, and doesn't play nicely with batches of input vectors.
What is the correct way to declare a one-hot embedding layer at the beginning of an RNN?
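For comparison, the two alternatives I am weighing (not sure either is the idiomatic one) are a frozen identity-initialized embedding and torch.nn.functional.one_hot applied directly to the index batch:
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size = 5
batch = torch.tensor([[0, 1, 2], [3, 4, 0]])              # (batch, seq_len) of token indices

# option 1: an nn.Embedding whose weight is the identity matrix, frozen
onehot_embed = nn.Embedding.from_pretrained(torch.eye(vocab_size), freeze=True)
print(onehot_embed(batch).shape)                          # torch.Size([2, 3, 5])

# option 2: no layer at all, just one-hot the indices on the fly
print(F.one_hot(batch, num_classes=vocab_size).float().shape)   # torch.Size([2, 3, 5])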
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a dataset of 8500 rows of text. I want to apply a function pre_process on each of these rows. When I do it serially, it takes about 42 mins on my computer:
import pandas as pd
import time
import re
### constructing a sample dataframe of 10 rows to demonstrate
df = pd.DataFrame(columns=['text'])
df.text = ["The Rock is destined to be the 21st Century 's new `` Conan '' and that he 's going to make a splash even greater than Arnold Schwarzenegger , Jean-Claud Van Damme or Steven Segal .",
"The gorgeously elaborate continuation of `` The Lord of the Rings '' trilogy is so huge that a column of words can not adequately describe co-writer/director Peter Jackson 's expanded vision of J.R.R. Tolkien 's Middle-earth .",
'Singer/composer Bryan Adams contributes a slew of songs -- a few potential hits , a few more simply intrusive to the story -- but the whole package certainly captures the intended , er , spirit of the piece .',
"You 'd think by now America would have had enough of plucky British eccentrics with hearts of gold .",
'Yet the act is still charming here .',
"Whether or not you 're enlightened by any of Derrida 's lectures on `` the other '' and `` the self , '' Derrida is an undeniably fascinating and playful fellow .",
'Just the labour involved in creating the layered richness of the imagery in this chiaroscuro of madness and light is astonishing .',
'Part of the charm of Satin Rouge is that it avoids the obvious with humour and lightness .',
"a screenplay more ingeniously constructed than `` Memento ''",
"`` Extreme Ops '' exceeds expectations ."]
def pre_process(text):
    '''
    function to pre-process and clean text
    '''
    stop_words = ['in', 'of', 'at', 'a', 'the']
    # lowercase
    text = str(text).lower()
    # remove special characters except spaces, apostrophes and dots
    text = re.sub(r"[^a-zA-Z0-9.']+", ' ', text)
    # remove stopwords
    text = [word for word in text.split(' ') if word not in stop_words]
    return ' '.join(text)
t = time.time()
for i in range(len(df)):
    df.text[i] = pre_process(df.text[i])
print('Time taken for pre-processing the data = {} mins'.format((time.time()-t)/60))
>>> Time taken for pre-processing the data = 41.95724259614944 mins
So, I want to make use of multiprocessing for this task. I took help from here and wrote the following code:
import pandas as pd
import multiprocessing as mp
pool = mp.Pool(mp.cpu_count())
def func(text):
    return pre_process(text)
t = time.time()
results = pool.map(func, [df.text[i] for i in range(len(df))])
print('Time taken for pre-processing the data = {} mins'.format((time.time()-t)/60))
pool.close()
But the code just keeps on running, and doesn't stop.
How can I get it to work?
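In case the structure is the problem, this is the layout I plan to try next (a guess: a __main__ guard, the pool created after pre_process is defined, and the whole column passed at once):
import multiprocessing as mp

if __name__ == "__main__":
    t = time.time()
    with mp.Pool(mp.cpu_count()) as pool:
        results = pool.map(pre_process, df.text.tolist())
    df.text = results
    print('Time taken for pre-processing the data = {} mins'.format((time.time() - t) / 60))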
| 1
| 1
| 0
| 0
| 0
| 0
|
I am using spacy to create vectors of a sentence. If the sentence is 'I am working', it gives me a vector of shape (3, 300). Is there any way to get back the text in the sentence using those vectors?
Thanks in advance,
Harathi
| 1
| 1
| 0
| 0
| 0
| 0
|
I am looking for a tokenizer that is expanding contractions.
Using nltk to split a phrase into tokens, the contraction is not expanded.
nltk.word_tokenize("she's")
-> ['she', "'s"]
However, when using a dictionary with contraction mappings only, and therefore not taking any information provided by surrounding words into account, it's not possible to decide whether "she's" should be mapped to "she is" or to "she has".
Is there a tokenizer that provides contraction expansion?
| 1
| 1
| 0
| 0
| 0
| 0
|
In Python, I already have a list of words and a list of stems. How do I create a dictionary where the key is a stem and the value is a list of words with that stem, like this:
{'achiev': ['achieved', 'achieve'], 'accident': ['accidentally', 'accidental'], ... }
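A minimal sketch of what I am after (here I recompute the stems with the NLTK Porter stemmer; zipping my existing stem list with the word list would work the same way):
from collections import defaultdict
from nltk.stem import PorterStemmer

words = ['achieved', 'achieve', 'accidentally', 'accidental']
stemmer = PorterStemmer()

stem_to_words = defaultdict(list)
for word in words:
    stem_to_words[stemmer.stem(word)].append(word)

print(dict(stem_to_words))
# e.g. {'achiev': ['achieved', 'achieve'], 'accident': ['accidentally', 'accidental']}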
| 1
| 1
| 0
| 0
| 0
| 0
|
I am designing a text processing program and need to stem the words for exploratory analysis later. One of my processes is to stem the words and I have to use Porter Stemmer.
I have designed a DataFrame structure to store my data, and I have also written a function to apply to the DataFrame. When I apply the function to the DataFrame, the stemming works, but it does not keep the capitalisation of (proper noun) words.
A snippet of my code:
from nltk.stem.porter import PorterStemmer

def stemming(word):
    stemmer = PorterStemmer()
    word = str(word)
    if word.title():
        stemmer.stem(word).capitalize()
    elif word.isupper():
        stemmer.stem(word).upper()
    else:
        stemmer.stem(word)
    return word

dfBody['body'] = dfBody['body'].apply(lambda x: [stemming(y) for y in x])
This is my result, which has no capitalised words (see the output screenshot).
Sample of dataset (my dataset is very large):
file body
PP3169 ['performing', 'Maker', 'USA', 'computer', 'Conference', 'NIPS']
Expected output (after applying stemming function):
file body
PP3169 ['perform', 'Make', 'USA', 'comput', 'Confer', 'NIPS']
Any advice will be greatly appreciated!
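My current suspicion is that I never assign the stemmer's return value back to word, and that word.title() is always truthy; the variant I plan to test:
# variant I plan to test: actually keep the stemmer's return value
def stemming(word):
    stemmer = PorterStemmer()
    word = str(word)
    if word.isupper():
        return stemmer.stem(word).upper()
    elif word.istitle():
        return stemmer.stem(word).capitalize()
    return stemmer.stem(word)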
| 1
| 1
| 0
| 0
| 0
| 0
|
I am working on extracting positive, negative and neutral keywords in Python. There are 10,000 comments in my remarks.txt file (encoded UTF-8). I want to import the text file, read each row of comments, extract (tokenize) the words from the comment text in column c2, and store them in the next adjacent column. I have written a small program calling a get_keywords function in Python. I have created the get_keywords() function, but I am facing issues passing each row of the dataframe as an argument and calling it iteratively so that it produces keywords and stores them in adjacent columns.
The code is not producing the expected "tokens" column with all the processed words in the df dataframe.
import nltk
import pandas as pd
import re
import string
from nltk import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
remarks = pd.read_csv('/Users/ZKDN0YU/Desktop/comments/New comments/ccomments.txt')
df = pd.DataFrame(remarks, columns=['c2'])
df.head(50)
df.tail(50)
filename = 'ccomments.txt'
file = open(filename, 'rt', encoding="utf-8")
text = file.read()
file.close()

def get_keywords(row):
    # split into tokens by white space
    tokens = text.split(str(row))
    # prepare regex for char filtering
    re_punc = re.compile('[%s]' % re.escape(string.punctuation))
    # remove punctuation from each word
    tokens = [re_punc.sub('', w) for w in tokens]
    # remove remaining tokens that are not alphabetic
    tokens = [word for word in tokens if word.isalpha()]
    # filter out stop words
    stop_words = set(stopwords.words('english'))
    tokens = [w for w in tokens if not w in stop_words]
    # stemming of words
    porter = PorterStemmer()
    stemmed = [porter.stem(word) for word in tokens]
    # filter out short tokens
    tokens = [word for word in tokens if len(word) > 1]
    return tokens

df['tokens'] = df.c2.apply(lambda row: get_keywords(row['c2']), axis=1)

for index, row in df.iterrows():
    print(index, row['c2'], "tokens : {}".format(row['tokens']))
Expected output: a Comments_modified file containing columns 1) index, 2) c2 (comments) and 3) tokenized words, for all 10,000 rows of the dataframe.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am working on an NLP project and I need the following functionality illustrated by an example. Say there is a sentence
Tell Sam that he will have to leave without Arthur, as he is sick.
In this statement, the first he has to be tagged to Sam and the second he to Arthur. I work in Python. Any suggestions on what I can use to get this functionality?
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm using the following code to click on the 'Show more reviews' button, but it is not working.
Code:
link = 'https://www.capterra.com/p/5938/Oracle-Database/'
driver.get(link)

while True:
    try:
        driver.find_element_by_partial_link_text('Show more reviews').click()
        # Wait till the container of the recipes gets loaded
        # after load more is clicked.
        time.sleep(5)
    except (NoSuchElementException, WebDriverException) as e:
        break

page_source = driver.page_source

# BEAUTIFUL SOUP OPTION
soup = BeautifulSoup(page_source, "lxml")
Error Statement
NoSuchElementException: no such element: Unable to locate element: {"method":"partial link text","selector":"Show more reviews"}
(Session info: headless chrome=76.0.3809.132)
Thanks in advance.
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a data in the following form:
author text
0 garyvee A lot of people misunderstand Gary’s message o...
1 jasonfried "I can’t remember having a goal. An actual goa...
2 biz "Tools that can create media that looks and so...
I tried the following to clean the text:
text_data.loc[:,"text"] = text_data.text.apply(lambda x : str.lower(x))
text_data.loc[:,"text"] = text_data.text.apply(lambda x : " ".join(re.findall('[\w]+',x)))
I got output, but it contains digits, which I don't want for the text analysis:
0 a lot of people misunderstand gary s message o...
1 i can t remember having a goal an actual goal ...
2 tools that can create media that looks and sou...
Name: text, dtype: object
but while removing numbers in the text string:
text_data.loc[:,"text"] = text_data.text.apply(lambda x : " ".join(re.sub('^[0-9\.]*$','',x)))
I got the output:
0 a l o t o f p e o p l e m i s u n d e r s t a ...
1 i c a n t r e m e m b e r h a v i n g a g o a ...
2 t o o l s t h a t c a n c r e a t e m e d i a ...
Name: text, dtype: object
How can I avoid this? And how should I then implement CountVectorizer?
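My guess (untested) is that " ".join is iterating over the characters of the string re.sub returns, and that the anchored pattern never matches digits inside the text anyway; the variant I want to try, plus the CountVectorizer step:
from sklearn.feature_extraction.text import CountVectorizer

# untested guess: re.sub already returns a string, so no join is needed,
# and \d+ removes digits anywhere in the text
text_data.loc[:, "text"] = text_data.text.apply(lambda x: re.sub(r'\d+', '', x))

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(text_data["text"])
print(counts.shape)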
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to test out TensorFlow tf.estimator.DNNClassifier with some simple data
X = [[1,2], [1,12], [1,17], [9,33], [48,49], [48,50]]
Y = [ 1, 1, 1, 1, 2, 3 ]
The classifier takes 2 inputs, x1 and x2, and has this shape:
#these 4 layers supposed to be able to do even 4-time linear separation
hidden_units = [2000,1000,500,100]
n_classes = 4
However, things didn't go as hoped: the network couldn't fit. Accuracy quickly got to 83.3% (= 5/6) but then got stuck, and the loss converged to a horizontal line rather than to zero.
The data provided above are separable with two linear cuts:
Even when the network runs for 10,000 steps it is still stuck. I guess it is stuck because it fails to separate the two values Y=2 and Y=3, is that so? And how can I make the network fit the data above?
| 1
| 1
| 0
| 1
| 0
| 0
|
I am trying to build an Arduino bot whose job will be just to recognise voice commands given only by me. I have Python code for it, but one line is giving me a syntax error. That particular line is a print statement which goes like this: print len(data), samplerate
data, samplerate = sf.read(b) #reading audio file using soundfile library
print len(data), samplerate
x= len(data)
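I am wondering whether this is a Python 2 vs Python 3 print issue; the version I plan to try:
# Python 3 syntax: print is a function
print(len(data), samplerate)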
| 1
| 1
| 0
| 0
| 0
| 0
|
I am a total rookie in computer vision. I am looking to build a model without using models pre-trained on the COCO dataset or any other open-source image dataset. Any articles or references on building such a model would be appreciated. I would like to build this model from scratch, so suggestions of pre-existing trained models or APIs are not relevant to this question. Thanks in advance for any suggestions; the programming language of preference for this project is Python.
| 1
| 1
| 0
| 0
| 0
| 0
|
I have more than 2000 samples for an ANN. I have applied MLPRegressor to them and my code is working fine. But for testing, I want to fix my test set: for instance, if I have 50 samples, I want to test on the first 20 of them. How do I fix this in the code? I have used the following code.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor
df = pd.read_csv("0.5-1.csv")
df.head()
X = df[['wavelength', 'phase velocity']]
y = df['shear wave velocity']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2)
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import mean_absolute_error
mlp = MLPRegressor(hidden_layer_sizes=(30,30,30))
mlp.fit(X_train,y_train)
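What I think I need (untested) is to slice the data myself instead of using a random split, something like:
# untested idea: use the first 20 rows as a fixed test set and the rest for training
X_test, y_test = X.iloc[:20], y.iloc[:20]
X_train, y_train = X.iloc[20:], y.iloc[20:]

mlp = MLPRegressor(hidden_layer_sizes=(30, 30, 30))
mlp.fit(X_train, y_train)
print(mlp.score(X_test, y_test))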
| 1
| 1
| 0
| 0
| 0
| 0
|
I am working on a requirement where I have a history of previous requests. Requests may be like "Send me a report of .." or "Get me this doc"; each one gets assigned to someone and that person responds.
I need to build an app which analyses the previous requests, and if a new request arrives that matches any of the previous requests, it should recommend the previous request's solution.
I am trying to implement the above using Python, and after some research I found that doc2vec is one approach: convert the previous requests to vectors and match them with the vector of the new request. I want to know, is this the right approach or are better approaches available?
| 1
| 1
| 0
| 1
| 0
| 0
|
I am trying to calculate the word embeddings using fasttext for the following sentence.
a = 'We are pencil in the hands'
I don't have any pretrained model, so how do I go about it?
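What I have pieced together so far (a rough sketch, assuming gensim 4.x where the parameters are vector_size and epochs) is to train on my own tiny corpus and then look up each word:
from gensim.models import FastText

sentences = [a.lower().split()]                 # my one tokenized sentence
model = FastText(sentences, vector_size=100, window=3, min_count=1, epochs=10)
vectors = [model.wv[word] for word in sentences[0]]
print(len(vectors), vectors[0].shape)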
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to run a binary SVM on the 20_newsgroups dataset. I seem to be getting a ValueError: Found input variables with inconsistent numbers of samples: [783, 1177]. Can anyone suggest why this is happening?
from sklearn.datasets import fetch_20newsgroups
from nltk.corpus import names
from nltk.stem import WordNetLemmatizer
# from sklearn.feature_extraction.text import CountVectorizer
import numpy as np
import pandas as pd
categories = ["comp.graphics", 'sci.space']
data_train = fetch_20newsgroups(subset='train', categories=categories, random_state=42)
data_test = fetch_20newsgroups(subset='test', categories=categories, random_state=42)
def is_letter_only(word):
    return word.isalpha()

all_names = set(names.words())
lemmatizer = WordNetLemmatizer()

def clean_text(docs):
    docs_cleaned = []
    for doc in docs:
        doc = doc.lower()
        doc_cleaned = ' '.join(lemmatizer.lemmatize(word)
                               for word in doc.split() if is_letter_only(word)
                               and word not in all_names)
        docs_cleaned.append(doc_cleaned)
    return docs_cleaned
cleaned_train = clean_text(data_train.data)
label_train = data_train.target
cleaned_test = clean_text(data_train.data)
label_test = data_test.target
len(label_train),len(label_test)
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(stop_words='english', max_features=None)
term_docs_train = tfidf_vectorizer.fit_transform(cleaned_train)
term_docs_test = tfidf_vectorizer.transform(cleaned_test)
from sklearn.svm import SVC
svm = SVC(kernel='linear', C=1.0, random_state=42)
svm.fit(term_docs_train, label_train)
accuracy = svm.score(term_docs_test, label_test)
print(accuracy)
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a huge list of text files to tokenize. I have the following code, which works for a small dataset; however, I am having trouble using the same procedure with a huge dataset. Here is the example with a small dataset.
In [1]: text = [["It works"], ["This is not good"]]
In [2]: tokens = [(A.lower().replace('.', '').split(' ') for A in L) for L in text]
In [3]: tokens
Out [3]:
[<generator object <genexpr> at 0x7f67c2a703c0>,
<generator object <genexpr> at 0x7f67c2a70320>]
In [4]: list_tokens = [tokens[i].next() for i in range(len(tokens))]
In [5]: list_tokens
Out [5]:
[['it', 'works'], ['this', 'is', 'not', 'good']]
While all works well with a small dataset, I encounter a problem processing a huge list of lists of strings (more than 1,000,000 lists of strings) with the same code. While I can still build the generators as in In [3], it fails at In [4] (i.e. the process gets killed in the terminal). I suspect it is simply because the body of text is too big.
I am therefore looking for suggestions on improving this procedure to obtain lists of strings in a list, as in In [5].
My actual purpose, however, is to count the words in each list. For instance, in the example of the small dataset above, I will have things as below.
[[0,0,1,0,0,1], [1, 1, 0, 1, 1, 0]] (note: each integer denotes the count of each word)
If I don't have to convert generators to lists to get the desired results (i.e. word counts), that would also be good.
Please let me know if my question is unclear. I would love to clarify as best as I can. Thank you.
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a pandas data frame like the one below. I want to convert all the text to lowercase. How can I do this in Python?
Sample of data frame
[Nah I don't think he goes to usf, he lives around here though]
[Even my brother is not like to speak with me., They treat me like aids patent.]
[I HAVE A DATE ON SUNDAY WITH WILL!, !]
[As per your request 'Melle Melle (Oru Minnaminunginte Nurungu Vettam)' has been set as your callertune for all Callers., Press *9 to copy your friends Callertune]
[WINNER!!, As a valued network customer you have been selected to receivea £900 prize reward!, To claim call 09061701461., Claim code KL341., Valid 12 hours only.]
What I tried
def toLowercase(fullCorpus):
    lowerCased = [sentences.lower() for sentences in fullCorpus['sentTokenized']]
    return lowerCased
I get this error
lowerCased = [sentences.lower()for sentences in fullCorpus['sentTokenized']]
AttributeError: 'list' object has no attribute 'lower'
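Since each cell seems to be a list of sentences rather than a single string, I suspect I need a nested loop; the variant I plan to try:
def toLowercase(fullCorpus):
    # each cell is a list of sentences, so lowercase every sentence inside every list
    return [[sentence.lower() for sentence in sentences]
            for sentences in fullCorpus['sentTokenized']]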
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a Python dictionary containing lists of values. When I try to pos_tag the values inside a list, it shows an error. Is there any way to fix it?
RuleSet = {1: ['drafts', 'duly', 'signed', 'beneficiary', 'drawn', 'issuing', 'bank', 'quoting', 'lc', ''], 2: ['date', ''], 3: ['signed', 'commerical', 'invoices', 'quadruplicate', 'gross', 'cifvalue', 'goods', '']}
for key in RuleSet:
    value = RuleSet[key]
    Tagged = nltk.pos_tag(value)
    print(Tagged)
IndexError: string index out of range
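I notice each list ends with an empty string ''; the workaround I am thinking of trying (untested) is filtering those out before tagging:
for key in RuleSet:
    value = [w for w in RuleSet[key] if w]   # drop the empty strings before tagging
    Tagged = nltk.pos_tag(value)
    print(Tagged)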
| 1
| 1
| 0
| 0
| 0
| 0
|
Suppose I have a string s = SU 3180 and (CMG 3200 or SU 3210). I need to split this string into a tree diagram such as this:
X
/ \
SU 3180 ()
/ - \
CMG 3200 SU 3210
The main goal is to show the difference between an and split and an or split, as shown in the diagram. For example, I have shown the or split with a hyphen. I have no idea how I should proceed with this. Any ideas are welcome!
| 1
| 1
| 0
| 0
| 0
| 0
|
I am working on a project; it is almost complete, and I am now working on its GUI.
I want to show a transparent image for 5 seconds while the program starts, in Python.
| 1
| 1
| 0
| 0
| 0
| 0
|
So I need to capture substrings in a string that are in between two single apostrophes.
For this example I have string:
the real question this movie poses is not 'who ? ' but 'why ? '
The output I am currently getting is:
[[" 'who ? ' "], [], []]
I would like for the regex to capture 'why ? ' as well but I do not know why it is not working.
This is my regex
pattern = re.compile(r"(\s+[']{1}\D{2,}[^']+[']{1} | ^[']{1}\D{2,}[^']+[']{1}$)")
The reason I have the \D is that I do not want to capture, say, '70s, and I need at least 2 characters because I do not want to capture the 'n in rock 'n roll.
I added [^'] because before that the regex was capturing the full
'who ? ' but 'why ? '
whereas I need
'who ? ' and
'why ? '
to be separate matches.
Any advice will help, thanks in advance.
| 1
| 1
| 0
| 0
| 0
| 0
|
After transforming my predicted labels from images into a list all_tags and later splitting them, I store them in word_list, which holds all the labels in a sentence-like structure.
All I want to do is use Google's pretrained Word2Vec model (https://mccormickml.com/2016/04/12/googles-pretrained-word2vec-model-in-python/) to generate and print the Word2Vec values of my predicted labels. I imported and mapped the pretrained weights of the model, yet I'm getting the error
KeyError: "word '['cliff'' not in vocabulary"
However, the word 'cliff' is available in the dictionary. Any insight will be much appreciated.
Please check the code snippets below for reference.
execution_path = os.getcwd()
TEST_PATH = '/home/guest/Documents/Aikomi'
prediction = ImagePrediction()
prediction.setModelTypeAsDenseNet()
prediction.setModelPath(os.path.join(execution_path, "/home/guest/Documents/Test1/ImageAI-master/imageai/Prediction/Weights/DenseNet.h5"))
prediction.loadModel()
pred_array = np.empty((0,6), dtype=object)
predictions, probabilities = prediction.predictImage(os.path.join(execution_path, "1.jpg"), result_count=5)
for img in os.listdir(TEST_PATH):
    if img.endswith('.jpg'):
        image = Image.open(os.path.join(TEST_PATH, img))
        image = image.convert("RGB")
        image = np.array(image, dtype=np.uint8)
        predictions, probabilities = prediction.predictImage(os.path.join(TEST_PATH, img), result_count=5)
        temprow = np.zeros((1, pred_array.shape[1]), dtype=object)
        temprow[0, 0] = img
        for i in range(len(predictions)):
            temprow[0, i+1] = predictions[i]
        pred_array = np.append(pred_array, temprow, axis=0)
all_tags = list(pred_array[:,1:].reshape(1,-1))
_in_sent = ' '.join(list(map(str, all_tags)))
import gensim
from gensim.models import Word2Vec
from nltk.tokenize import sent_tokenize, word_tokenize
import re
import random
import nltk
nltk.download('punkt')
word_list = _in_sent.split()
from gensim.corpora.dictionary import Dictionary
# be sure to split sentence before feed into Dictionary
word_list_2 = [d.split() for d in word_list]
dictionary = Dictionary(word_list_2)
print("\n", dictionary, "\n")
corpus_bow = [dictionary.doc2bow(doc) for doc in word_list_2]
model = Word2Vec(word_list_2, min_count= 1)
model = gensim.models.KeyedVectors.load_word2vec_format('/home/guest/Downloads/Google.bin', binary=True)
print(*map(model.most_similar, word_list))
| 1
| 1
| 0
| 1
| 0
| 0
|
I am looking at creating a simple chatbot which can use a pdf file as it's source.
For example, the input to the chatbot can be a bank's terms and conditions document and the chatbot would respond to a question which are related to the contents of the document.
Sample Q&A.
Q : What is my monthly fee for my savings account?
A : Your monthly fees is $5 for the savings account if no deposit is made above $2000, else free.
I used pdfminer to read the PDF document and convert it into processed data, and spaCy to identify NER, POS, etc.
I learnt about RASA, but all the links I saw use predefined text responses rather than a PDF document as the source.
Can someone provide guidance on which approach I could follow?
I don't want to use Dialogflow or Lex and want to be in the open source world.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to do sentiment analysis on a review dataset. Since I care more about identifying (extracting) negative sentiment in reviews (unlabeled for now, but I will try to manually label a few hundred or use the Alchemy API), if a review is overall neutral or positive but a part of it has negative sentiment, I'd like my model to lean towards considering it a negative review. Could someone give me advice on how to do this? I'm thinking about using bag-of-words or word2vec with supervised (random forest, SVM) or unsupervised (k-means) learning models.
| 1
| 1
| 0
| 1
| 0
| 0
|
I am rather new to both machine learning, NLP, and LDA, so I'm not sure if I'm even approaching my problem entirely correctly; but I am attempting to do unsupervised topic modelling with known topics and multiple topic selections.
Based on Topic modelling, but with known topics?
I can label every single one of my documents with every single topic, and my unsupervised set effectively becomes supervised (LLDA is a supervised technique).
Reading this paper I've come across some other potential issues -
First, my data is organized with categories and sub-categories. According to the paper LLDA is more effective with significant semantic distinction between texts - which I won't particularly have with my relatively close sub-categories. Additionally, the paper notes that LLDA was not designed to be a multi-label classifier.
I'm hoping to remedy these weakness by including the guided part of GuidedLDA (I haven't read a paper on this, but I did read https://medium.freecodecamp.org/how-we-changed-unsupervised-lda-to-semi-supervised-guidedlda-e36a95f3a164 ).
So is there any algorithm (I would assume a modification of LLDA, but again I'm not super well read in this area) that allows one to use some form of intuition to aid an unsupervised topic-model with known topic classes that selects multiple topics?
As for why I don't just use GuidedLDA: well, I am planning to test it out and see how well it does (alongside LLDA), but it's also not designed for multiple labels.
Slight note if it matters - I am actually using documents and words for my data, I've read about LDA being used with other data types.
Further note - I have a fair amount of experience with Python, though I've heard there is a good topic modelling tool called Mallet that I might explore but have yet to look into (maybe it has something for this?)
| 1
| 1
| 0
| 1
| 0
| 0
|
I am reading the book "Deep Learning and the Game of Go" and I have not gotten far into it; I wrote the foundations (rules, helper classes) and a Qt GUI interface. Everything works, and I decided to write the book's example minimax program to see if I can beat it ;-)
But it's too slow: it takes minutes to play one move on an initial 9x9 board. With a default depth of 3 moves, I think the computation of the first move would evaluate about (9x9) x (9x9-1) x (9x9-2) ≈ 500,000 positions. OK, it's Python, not C, but I think this could be computed in one minute at most.
I removed one call to copy.deepcopy(), which seemed to consume a lot of time, but it stays too slow.
Here is some stuff:
the computing thread:
class BotPlay(QThread):
    """
    Thread that computes the bot's next move
    """
    def __init__(self, bot, bots, game):
        """
        Constructor, for the next move to play
        :param bot: the bot that has to play
        :param bots: both bots
        :param game: the current state of the game (before the move to play)
        """
        QThread.__init__(self)
        self.bot = bot
        self.bots = bots
        self.game = game

    played = pyqtSignal(Move, dict, GameState)

    def __del__(self):
        self.wait()

    def run(self):
        self.msleep(300)
        bot_move = self.bot.select_move(self.game)
        self.played.emit(bot_move, self.bots, self.game)
the select move method and its class:
class DepthPrunedMinimaxAgent(Agent):
    @bot_thinking(associated_name="minimax prof. -> LONG")
    def select_move(self, game_state: GameState):
        PonderedMove = namedtuple('PonderedMove', 'move outcome')
        best_move_so_far = None
        for possible_move in game_state.legal_moves():
            next_state = game_state.apply_move(possible_move)
            our_best_outcome = -1 * self.best_result(next_state, capture_diff)
            if best_move_so_far is None or our_best_outcome > best_move_so_far.outcome:
                best_move_so_far = PonderedMove(possible_move, our_best_outcome)
        return best_move_so_far.move

    def best_result(self, game_state: GameState, eval_fn, max_depth: int = 2):
        if game_state.is_over():
            if game_state.next_player == game_state.winner():
                return sys.maxsize
            else:
                return -sys.maxsize
        if max_depth == 0:
            return eval_fn(game_state)
        best_so_far = -sys.maxsize
        for candidate_move in game_state.legal_moves():
            next_state = game_state.apply_move(candidate_move)
            opponent_best_result = self.best_result(next_state, eval_fn, max_depth - 1)
            our_result = -opponent_best_result
            if our_result > best_so_far:
                best_so_far = our_result
        return best_so_far
I am nearly sure the problem does not come from the GUI, because the initial version of the program given by the book, entirely in console mode, is as slow as mine.
What is my request? To confirm whether this slow behaviour is normal or not, and maybe to get a clue about what is going wrong. The minimax algorithm comes from the book, so it should be OK.
Thank you.
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to use the FastText Python API (https://pypi.python.org/pypi/fasttext). From what I've read, this API can't load the newer .bin model files at https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md, as suggested in https://github.com/salestock/fastText.py/issues/115.
I've tried everything suggested in that issue, and furthermore https://github.com/Kyubyong/wordvectors doesn't have the .bin for English, otherwise the problem would be solved. Does anyone know of a work-around for this?
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm using spaCy with Python and it's working fine for tagging each word, but I was wondering whether it is possible to find the most common words in a string. Also, is it possible to get the most common nouns, verbs, adverbs and so on?
There's a count_by function included, but I can't seem to get it to run in any meaningful way.
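The fallback I am considering instead of count_by (which feels like it sidesteps the function entirely) is a plain Counter over the tokens:
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")
doc = nlp("This is a sentence. This is another sentence about spaCy.")

# most common lowercased tokens, ignoring punctuation and whitespace
words = [t.text.lower() for t in doc if not t.is_punct and not t.is_space]
print(Counter(words).most_common(5))

# most common nouns only (the same idea works for VERB, ADV, ...)
nouns = [t.text.lower() for t in doc if t.pos_ == "NOUN"]
print(Counter(nouns).most_common(5))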
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm new to pyspark and I was trying to do some tokenization on my data.
I have my first dataframe:
reviewID|text|stars
I made a tokenization on "text" according to the pyspark documentation:
tokenizer = Tokenizer(inputCol="text", outputCol="words")
countTokens = udf(lambda words: len(words), IntegerType())
tokenized = tokenizer.transform(df2)
tokenized.select("text", "words") \
.withColumn("howmanywords", countTokens(col("words"))).show(truncate=False)
I got my tokens, but now I would like to have a transformed dataframe that looks like this:
words|stars
"words" are my tokens.
So I need to join my first dataframe and the tokenized dataframe to get something like that.
Could you please help me? How can I add a column to another dataframe?
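My current guess (untested) is that transform already keeps the original columns, so a select might be enough and no join is needed:
# untested guess: the transformed dataframe still carries "stars"
tokenized = tokenizer.transform(df2)
tokenized.select("words", "stars").show(truncate=False)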
| 1
| 1
| 0
| 0
| 0
| 0
|
Say I have two columns in my data set, State and Comments; these are comments given by people from different states. I want to analyse the Comments column, e.g. find the most-used word by a particular state, such as the comments of people from Texas. I want to create a bar chart or a word cloud for this data, and I want it to update when I click on or choose a particular state.
For example, say there is a word cloud showing responses from the entire data set. If I click on Texas, the word cloud should change to show responses from Texas alone.
So what is the best way of doing this? Can it be done in Power BI or Python? If so, kindly tell me how to go about it.
| 1
| 1
| 0
| 0
| 0
| 0
|
I want to use spaCy's Matcher class on a new language (Hebrew) for which spaCy does not yet have a working model.
I found a working tokenizer + POS tagger (from Stanford NLP), yet I would prefer spaCy as its Matcher can help me do some rule-based NER.
Can the rule-based Matcher be fed with POS-tagged text instead of the standard NLP pipeline?
| 1
| 1
| 0
| 0
| 0
| 0
|
Newish to Python and even newer to StackOverflow. Still trying to suss out the best way to ask questions and receive constructive feedback. If I'm doing something wrong or need to provide more info, please let me know.
my_words = []

for i in range(0, 26):
    def predict_more_words(first_word):
        bimodel = build_bigram_model()
        second_word = bimodel[first_word]
        top10words = collections.Counter(second_word).most_common(10)
        predicted_words = list(zip(*top10words))[0]
        prob_score = list(zip(*top10words))[1]
        x_pos = predicted_words
        my_words.append(x_pos[0])
        return x_pos[0]
    predict_more_words("is")

print(my_words)
I have the above code that I am trying to call recursively, so that every time predict_more_words is called it takes the word at x_pos[0] and feeds it back into the function until it reaches a length of 26. I am storing these words into a list that I will concatenate with another list I already generated. It does not accept x_pos[0] as an argument and gives me a NameError: name not defined.
Any help is appreciated! Thanks in advance!
| 1
| 1
| 0
| 0
| 0
| 0
|
Using the Naive Bayes algorithm:
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
The code works up to this line, but when I fit the model it shows an error.
nb.fit(X_train, y_train)
Output:
ValueError: could not convert string to float: 'My fiance and
I tried the place because of a Groupon. We live in the same neighborhood
and see the place all the time but the look of the place was never enough
to draw us in. There is nothing eye catching about the business front at
all. It's in a strip mall and looks old..........
I'm using the yelp.csv dataset for natural language processing.
The expected output should be like this:
MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
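My understanding (possibly wrong) is that MultinomialNB needs numeric features, so the review text has to be vectorized first; the step I plan to add (assuming X_train/X_test hold the raw text):
from sklearn.feature_extraction.text import CountVectorizer

# planned step: turn the raw review text into token counts before fitting
vect = CountVectorizer()
X_train_dtm = vect.fit_transform(X_train)
X_test_dtm = vect.transform(X_test)
nb.fit(X_train_dtm, y_train)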
| 1
| 1
| 0
| 1
| 0
| 0
|
Here is the CSV table. There are two columns in the CSV table: one is summaries and the other is texts. Both columns were lists before I combined them, converted them to a data frame, and saved them as a CSV file. By the way, the texts in the table have already been cleaned (all marks removed and converted to lower case):
I want to loop through each cell in the table, split summaries and texts into words, and tokenize each word. How can I do that?
I tried the Python CSV reader and df.apply(word_tokenize). I also tried newList = set(summaries + texts), but then I could not tokenize them.
Any solution is welcome, whether it uses the CSV file, a data frame, or a list. Thanks for your help in advance!
Note: the real table has more than 50,000 rows.
=== Some updates ===
Here is the code I have tried:
import pandas as pd
data= pd.read_csv('test.csv')
data.head()
newTry=data.apply(lambda x: " ".join(x), axis=1)
type(newTry)
print (newTry)
import nltk
for sentence in newTry:
    new = sentence.split()
    print(new)
    print(set(new))
Please refer to the output in the screenshot. There are duplicate words in the lists, and some square brackets. How should I remove them? I tried set, but it gives only one sentence's values.
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm working on an NLP task and I need to calculate the co-occurrence matrix over documents. The basic formulation is as below:
Here I have a matrix with shape (n, length), where each row represents a sentence composed by length words. So there are n sentences with same length in all. Then with a defined context size, e.g., window_size = 5, I want to calculate the co-occurrence matrix D, where the entry in the cth row and wth column is #(w,c), which means the number of times that a context word c appears in w's context.
An example can be referred here. How to calculate the co-occurrence between two words in a window of text?
I know it can be calculated with nested loops, but I want to know whether there is a simpler way or an existing function. I have found some answers, but they do not work with a window sliding through the sentence. For example: word-word co-occurrence matrix.
So could anyone tell me whether there is a function in Python that deals with this problem concisely? I think this task is quite common in NLP.
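For reference, my most vectorized attempt so far (it still loops over the window offsets and assumes all rows have the same length), mainly to make the definition of #(w,c) concrete:
import numpy as np

def cooccurrence(mat, vocab_size, window_size=5):
    """D[c, w] = number of times word c occurs within window_size of word w.

    mat is an (n, length) array of word ids, one sentence per row.
    """
    D = np.zeros((vocab_size, vocab_size), dtype=np.int64)
    for d in range(1, window_size + 1):
        left = mat[:, :-d].ravel()        # word ids
        right = mat[:, d:].ravel()        # ids of the words d positions to the right
        np.add.at(D, (left, right), 1)    # left is a context word of right
        np.add.at(D, (right, left), 1)    # right is a context word of left
    return D

mat = np.array([[0, 1, 2, 1],
                [2, 2, 0, 1]])
print(cooccurrence(mat, vocab_size=3, window_size=2))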
| 1
| 1
| 0
| 1
| 0
| 0
|
For example...
Chicken is an animal.
Burrito is a food.
WordNet allows you to do "is-a"...the hiearchy feature.
However, how do I know when to stop travelling up the tree? I want a LEVEL that is consistent.
For example, if presented with a bunch of words, I want WordNet to categorize all of them, but at a certain level, so it doesn't go too far up. Categorizing "burrito" as a "thing" is too broad, yet "mexican wrapped food" is too specific. I want to go up or down the hierarchy until I hit the right LEVEL.
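The only concrete handle I have found so far is hypernym_paths() in NLTK's WordNet interface, where I could cut every word's path at a fixed depth from the root (not sure this is the right notion of LEVEL):
from nltk.corpus import wordnet as wn

synset = wn.synsets('burrito')[0]
paths = synset.hypernym_paths()          # each path runs from the root down to the synset
level = 4                                # a fixed depth from the root, chosen by hand
print([p[level].name() if len(p) > level else p[-1].name() for p in paths])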
| 1
| 1
| 0
| 0
| 0
| 0
|
I have created a sparse-matrix dataframe that takes the values in a list and sets them as column headers. A number of columns have headers such as "000 bank". I want to remove the "000 " so that it is just "bank", for example.
000 bank 000 claim 000 confirmed 000 debit 000 delete 000 frequent 000 hashed ...
0 0.000000 0.0 0.0 0.0 0.0 0.0 0.00000 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1 0.052024 0.0 0.0 0.0 0.0 0.0 0.00000 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 kddi
2 0.000000 0.0 0.0 0.0 0.0 0.0 0.00000 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 e
3 0.000000 0.0 0.0 0.0 0.0 0.0 0.00000 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2
Index(['000', '000 000', '000 3rd', '000 bank', '000 claim', '000 confirmed',
'000 debit', '000 delete', '000 frequent', '000 hashed',
...
'years multiple', 'yet', 'yet confirm', 'yet evidence', 'yet expired',
'yet many', 'yet published', 'zarefarid', 'zarefarid wrote', 'Keyword'],
dtype='object', length=3831)
How can I get rid of the '000 '? Not all column headers have '000' in them, as you can see in the index above.
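The kind of rename I have in mind (untested, assuming the dataframe is called df and the headers live in df.columns):
# strip a leading "000 " from any column name that has it, leave the rest alone
df.columns = [c[4:] if c.startswith('000 ') else c for c in df.columns]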
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to modify an example from this post that applies tf-idf.
from sklearn.datasets import fetch_20newsgroups
from gensim.corpora import Dictionary
from gensim.models.tfidfmodel import TfidfModel
from gensim.matutils import sparse2full
import numpy as np
import spacy
nlp = spacy.load('en_core_web_md')
def keep_token(t):
    return (t.is_alpha and
            not (t.is_space or t.is_punct or
                 t.is_stop or t.like_num))

def lemmatize_doc(doc):
    return [t.lemma_ for t in doc if keep_token(t)]
sentences = ['Pro USB and Analogue Microphone']
docs = [lemmatize_doc(nlp(doc)) for doc in sentences]
docs_dict = Dictionary(docs)
docs_dict.filter_extremes(no_below=20, no_above=0.2)
docs_dict.compactify()
docs_corpus = [docs_dict.doc2bow(doc) for doc in docs]
model_tfidf = TfidfModel(docs_corpus, id2word=docs_dict)
docs_tfidf = model_tfidf[docs_corpus]
docs_vecs = np.vstack([sparse2full(c, len(docs_dict)) for c in docs_tfidf])
tfidf_emb_vecs = np.vstack([nlp(docs_dict[i]).vector for i in range(len(docs_dict))])
docs_emb = np.dot(docs_vecs, tfidf_emb_vecs)
But I'm getting this error:
282 _warn_for_nonsequence(tup)
--> 283 return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)
284
285
ValueError: need at least one array to concatenate
The reason is that this line is returning an empty list:
docs_corpus = [docs_dict.doc2bow(doc) for doc in docs]
docs_corpus
This is because the dictionary is empty, even though I'm feeding the Dictionary with a non-empty list.
That's the part I can't find the reason for.
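My next debugging step (just a guess at where it goes wrong) is to print the dictionary before and after filter_extremes:
docs_dict = Dictionary(docs)
print(docs_dict)                                   # unique tokens before filtering
docs_dict.filter_extremes(no_below=20, no_above=0.2)
print(docs_dict)                                   # my suspicion: this is where it ends up empty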
| 1
| 1
| 0
| 0
| 0
| 0
|
Edit 2: I thought more about my question and realized it was far too general; it really comes down to something basic:
creating a new array from the GloVe file (glove.6B.300d.txt) that contains ONLY the words that appear in my document.
I'm aware that this actually has nothing to do with this specific GloVe file and that I should really learn how to do it for any two lists of words...
I assume I just don't know how to search for this properly in order to learn how to execute this part, i.e. what library/functions/buzzwords I should look for.
Edit 1: I'm adding the code I used, which works for the whole GloVe vocabulary:
from __future__ import division
from sklearn.cluster import KMeans
from numbers import Number
from pandas import DataFrame
import sys, codecs, numpy
class autovivify_list(dict):
def __missing__(self, key):
value = self[key] = []
return value
def __add__(self, x):
if not self and isinstance(x, Number):
return x
raise ValueError
def __sub__(self, x):
if not self and isinstance(x, Number):
return -1 * x
raise ValueError
def build_word_vector_matrix(vector_file, n_words):
numpy_arrays = []
labels_array = []
with codecs.open(vector_file, 'r', 'utf-8') as f:
for c, r in enumerate(f):
sr = r.split()
labels_array.append(sr[0])
numpy_arrays.append( numpy.array([float(i) for i in sr[1:]]) )
if c == n_words:
return numpy.array( numpy_arrays ), labels_array
return numpy.array( numpy_arrays ), labels_array
def find_word_clusters(labels_array, cluster_labels):
cluster_to_words = autovivify_list()
for c, i in enumerate(cluster_labels):
cluster_to_words[ i ].append( labels_array[c] )
return cluster_to_words
if __name__ == "__main__":
input_vector_file =
'/Users/.../Documents/GloVe/glove.6B/glove.6B.300d.txt'
n_words = 1000
reduction_factor = 0.5
n_clusters = int( n_words * reduction_factor )
df, labels_array = build_word_vector_matrix(input_vector_file,
n_words)
kmeans_model = KMeans(init='k-means++', n_clusters=n_clusters,
n_init=10)
kmeans_model.fit(df)
cluster_labels = kmeans_model.labels_
cluster_inertia = kmeans_model.inertia_
cluster_to_words = find_word_clusters(labels_array,
cluster_labels)
for c in cluster_to_words:
print cluster_to_words[c]
print "
"
Original question:
Let's say I have a specific text (say of 500 words).
I want to do the following:
Create an embedding of all the words in this text (i.e have a list of the GloVe vectors only of this 500 words)
Cluster it (*this I know how to do)
How do I do such a thing?
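A hedged sketch of the subsetting step: stream the GloVe text file once and keep only the rows whose first field is in your document's vocabulary (the path and word list below are placeholders):
import numpy as np

def load_glove_subset(glove_path, wanted_words):
    # keep only the vectors whose word appears in wanted_words
    wanted = set(w.lower() for w in wanted_words)
    vectors, labels = [], []
    with open(glove_path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            if parts[0] in wanted:
                labels.append(parts[0])
                vectors.append(np.asarray(parts[1:], dtype=float))
    return np.vstack(vectors), labels

# X, labels = load_glove_subset('glove.6B.300d.txt', my_500_words)  # then run KMeans on X as above
(The buzzwords to search for are roughly "loading pretrained GloVe embeddings" and "vocabulary lookup".)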
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm doing a NLP project with my university, collecting data on words in Icelandic that exist both spelled with an i and with a y (they sound the same in Icelandic fyi) where the variants are both actual words but do not mean the same thing. Examples of this would include leyti (an approximation in time) and leiti (a grassy hill), or kirkja (church) and kyrkja (choke). I have a dataset of 2 million words. I have already collected two wordlists, one of which includes words spelled with a y and one includes the same words spelled with a i (although they don't seem to match up completely, as the y-list is a bit longer, but that's a separate issue). My problem is that I want to end up with pairs of words like leyti - leiti, kyrkja - kirkja, etc. But, as y is much later in the alphabet than i, it's no good just sorting the lists and pairing them up that way. I also tried zipping the lists while checking the first few letters to see if I can find a match but that leaves out all words that have y or i as the first letter. Do you have a suggestion on how I might implement this?
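One hedged approach, assuming the pairs differ only in i/y (and í/ý): normalise every word by mapping y to i, index the i-list by that key, and look the normalised y-words up in it. A minimal sketch with placeholder lists:
i_words = ['leiti', 'kirkja']
y_words = ['leyti', 'kyrkja', 'yfir']          # placeholder lists

def normalise(word):
    return word.replace('y', 'i').replace('ý', 'í')

i_by_key = {normalise(w): w for w in i_words}
pairs = [(y, i_by_key[normalise(y)]) for y in y_words if normalise(y) in i_by_key]
print(pairs)   # [('leyti', 'leiti'), ('kyrkja', 'kirkja')]; 'yfir' finds no partner and drops out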
| 1
| 1
| 0
| 0
| 0
| 0
|
I wonder how to deploy a doc2vec model in production to create word vectors as input features to a classifier. To be specific, let say, a doc2vec model is trained on a corpus as follows.
dataset['tagged_descriptions'] = dataset.apply(lambda x: doc2vec.TaggedDocument(
words=x['text_columns'], tags=[str(x.ID)]), axis=1)
model = doc2vec.Doc2Vec(vector_size=100, min_count=1, epochs=150, workers=cores,
window=5, hs=0, negative=5, sample=1e-5, dm_concat=1)
corpus = dataset['tagged_descriptions'].tolist()
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)
and then it is dumped into a pickle file. The word vectors are used to train a classifier such as random forests to predict movies sentiment.
Now suppose that in production, there is a document entailing some totally new vocabularies. That being said, they were not among the ones present during the training of the doc2vec model. I wonder how to tackle such a case.
As a side note, I am aware of Updating training documents for gensim Doc2Vec model and Gensim: how to retrain doc2vec model using previous word2vec model. However, I would appreciate more lights to be shed on this matter.
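For unseen documents, gensim's Doc2Vec provides infer_vector, which builds a vector from the tokens it recognises and simply ignores words that were not in the training vocabulary. A hedged sketch (the model path and tokenisation are assumptions):
from gensim.models.doc2vec import Doc2Vec

model = Doc2Vec.load('doc2vec.model')            # or unpickle the dumped model

new_doc = "a review with totally new vocabularies".lower().split()
vec = model.infer_vector(new_doc, epochs=50)     # out-of-vocabulary words are skipped
# `vec` (length 100 here) is what gets fed to the random-forest classifier
If the new vocabulary matters a lot, the usual remedy is periodic retraining of the Doc2Vec model on an enlarged corpus, as discussed in the questions linked above.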
| 1
| 1
| 0
| 0
| 0
| 0
|
I trained a model hand position classifier with Keras and I ended up saving the model with the code (model.save('model.h5') )
Now I'm trying to predict an image using this model. Is it doable? If yes, could you give me some examples please?
PS:my data is provided as a CSV file
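Yes, it is doable. A hedged sketch: load the saved model and call predict on one preprocessed sample; the CSV layout (features in all but the last column) and the input shape are assumptions you will need to adapt:
import numpy as np
from tensorflow.keras.models import load_model   # `from keras.models import load_model` in older setups

model = load_model('model.h5')

data = np.loadtxt('hand_positions.csv', delimiter=',', skiprows=1)   # placeholder file name
x = data[0, :-1].reshape(1, -1)       # must match the shape the model was trained on
probs = model.predict(x)
print(np.argmax(probs, axis=1))       # index of the predicted hand position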
| 1
| 1
| 0
| 1
| 0
| 0
|
Is there any way to find out whether the meanings of two strings are similar or not, even though the words in the strings are different?
So far I have tried fuzzywuzzy, Levenshtein distance and cosine similarity to match the strings, but all of them match the words, not the meaning of the words.
Str1 = "what are types of negotiation"
Str2 = "what are advantages of negotiation"
Str3 = "what are categories of negotiation"
Ratio = fuzz.ratio(Str1.lower(),Str2.lower())
Partial_Ratio = fuzz.partial_ratio(Str1.lower(),Str2.lower())
Token_Sort_Ratio = fuzz.token_sort_ratio(Str1,Str2)
Ratio1 = fuzz.ratio(Str1.lower(),Str3.lower())
Partial_Ratio1 = fuzz.partial_ratio(Str1.lower(),Str3.lower())
Token_Sort_Ratio1 = fuzz.token_sort_ratio(Str1,Str3)
print("fuzzywuzzy")
print(Str1," ",Str2," ",Ratio)
print(Str1," ",Str2," ",Partial_Ratio)
print(Str1," ",Str2," ",Token_Sort_Ratio)
print(Str1," ",Str3," ",Ratio1)
print(Str1," ",Str3," ",Partial_Ratio1)
print(Str1," ",Str3," ",Token_Sort_Ratio1)
print("levenshtein ratio")
Ratio = levenshtein_ratio_and_distance(Str1,Str2,ratio_calc = True)
Ratio1 = levenshtein_ratio_and_distance(Str1,Str3,ratio_calc = True)
print(Str1," ",Str2," ",Ratio)
print(Str1," ",Str3," ",Ratio)
output:
fuzzywuzzy
what are types of negotiation what are advantages of negotiation 86
what are types of negotiation what are advantages of negotiation 76
what are types of negotiation what are advantages of negotiation 73
what are types of negotiation what are categories of negotiation 86
what are types of negotiation what are categories of negotiation 76
what are types of negotiation what are categories of negotiation 73
levenshtein ratio
what are types of negotiation what are advantages of negotiation
0.8571428571428571
what are types of negotiation what are categories of negotiation
0.8571428571428571
expected output:
"what are the types of negotiation skill?"
"what are the categories in negotiation skill?"
output:similar
"what are the types of negotiation skill?"
"what are the advantages of negotiation skill?"
output:not similar
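Character-level scores like these cannot see meaning, only spelling. One hedged option is to compare sentence embeddings instead, e.g. with spaCy's vectors, and put a threshold on the similarity (the cut-off value is an assumption you would tune on examples):
import spacy

nlp = spacy.load('en_core_web_md')   # a model that ships with word vectors

s1 = nlp("what are the types of negotiation skill?")
s2 = nlp("what are the categories in negotiation skill?")
s3 = nlp("what are the advantages of negotiation skill?")

print(s1.similarity(s2))   # types vs. categories
print(s1.similarity(s3))   # types vs. advantages
# if the embeddings capture the distinction, the first score should come out higher;
# map scores above a tuned cut-off to "similar" and the rest to "not similar"
Averaged word vectors are still fairly blunt; dedicated sentence-encoder models are the next step up if this is not discriminative enough.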
| 1
| 1
| 0
| 0
| 0
| 0
|
I encountered a coding problem. In my dataset, an instance includes several sentences (a different number in each instance). They cannot be concatenated to serve as a single one. How can I effectively process this kind of data with PyTorch? Or do I have to process the instances one by one?
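One common pattern (a sketch, not the only option) is to encode each sentence to a fixed-size tensor and then pad each instance's stack of sentence vectors to a common length with pad_sequence, keeping a mask so the model can ignore the padding:
import torch
from torch.nn.utils.rnn import pad_sequence

# each instance = a variable number of sentence vectors (random placeholders here)
inst_a = torch.randn(3, 128)   # 3 sentences, 128-dim each
inst_b = torch.randn(5, 128)   # 5 sentences

batch = pad_sequence([inst_a, inst_b], batch_first=True)   # shape (2, 5, 128), zero-padded
lengths = torch.tensor([3, 5])
mask = torch.arange(batch.size(1))[None, :] < lengths[:, None]   # True where data is real
print(batch.shape, mask)
A custom collate_fn in the DataLoader is the usual place to do this batching.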
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to remove stopwords from a string but the condition I want to achieve is that the named entities in the string should not be removed.
import spacy
nlp = spacy.load('en_core_web_sm')
text = "The Bank of Australia has an agreement according to the Letter Of Offer which states that the deduction should be made at the last date of each month"
doc = nlp(text)
If i check the named entities in the text, i get the below
print(doc.ents)
(The Bank of Australia, the Letter Of Offer, the last date of each month)
The usual way to remove the stopwords would be like below
[token.text for token in doc if not token.is_stop]
['Bank',
'Australia',
'agreement',
'according',
'Letter',
'Offer',
'states',
'deduction',
'date',
'month']
The normal way completely takes away the meaning that is needed for my task.
I want to retain the named entities.
I tried adding the named entities with the same list.
list1 = [token.text for token in doc if not token.is_stop]
list2 = [str(a) for a in doc.ents]
list1 + list2
['Bank',
'Australia',
'agreement',
'according',
'Letter',
'Offer',
'states',
'deduction',
'date',
'month',
'The Bank of Australia',
'the Letter Of Offer',
'the last date of each month']
Is there any other approach to this?
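One alternative (a sketch that reuses the nlp and doc objects built above): filter token by token and keep anything that is part of a named entity, detected via token.ent_type_ being non-empty, so only stop words outside entities are dropped:
# `doc` is the spaCy Doc created from `text` above
kept = [t.text for t in doc if t.ent_type_ or not t.is_stop]
print(kept)
# tokens inside "The Bank of Australia" or "the Letter Of Offer" survive intact,
# while stop words outside any entity ('has', 'an', 'to', ...) are removed
This keeps the original word order, so the entities stay readable in context rather than being appended at the end.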
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm looking for an answer like this but in python. How can I do text preprocessing on multiple columns? I have two text columns see screenshots. To do the cleaning work, I have to do twice to each column (see my code). Is there any clever way to do a similar task? Thanks!
import requests
from bs4 import BeautifulSoup #html.parser'
df['Summary'] = [BeautifulSoup(text).get_text() for text in df['Summary']]
df['Text'] = [BeautifulSoup(text).get_text() for text in df['Text']]
df.loc[:,"Text"] = df.Text.apply(lambda x : str.lower(x))
df.loc[:,"Summary"] = df.Summary.apply(lambda x : str.lower(x))
#remove punctuation.
df["Text"] = df['Text'].str.replace('[^\w\s]','')
df["Summary"] = df['Summary'].str.replace('[^\w\s]','')
| 1
| 1
| 0
| 0
| 0
| 0
|
Imagine I have a fasttext model that had been trained thanks to the Wikipedia articles (like explained on the official website).
Would it be possible to train it again with another corpus (scientific documents) that could add new / more pertinent links between words, especially for the scientific ones?
To summarize, I need the classic links that exist between all the English words coming from Wikipedia, but I would like to enhance this model with new documents about specific sectors. Is there a way to do that? And if yes, is there a way to weight the trainings so that relations coming from my custom documents would be 'more important'?
My final wish is to compute cosine similarity between documents that can be very scientific (that's why to have better results I thought about adding more scientific documents)
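With gensim's FastText implementation, continued training is possible in principle via build_vocab(..., update=True) followed by train (loading Facebook's pretrained .bin for further training would go through gensim.models.fasttext.load_facebook_model). There is no built-in per-corpus weighting, but giving the scientific corpus more epochs or repeating its sentences is a common workaround. A hedged sketch with placeholder paths and data:
from gensim.models import FastText

model = FastText.load('wiki_fasttext.model')           # placeholder path

scientific_sentences = [
    ['protein', 'folding', 'dynamics'],
    ['bayesian', 'inference', 'for', 'genomics'],
]                                                      # your tokenized scientific corpus

model.build_vocab(scientific_sentences, update=True)   # add the new vocabulary
model.train(scientific_sentences,
            total_examples=len(scientific_sentences),
            epochs=10)                                  # more epochs ~ more weight on this data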
| 1
| 1
| 0
| 0
| 0
| 0
|
I am working on a NLP project and I have two formats of input texts.
Format 1:
Some line
Some line
Name is <name> random text and numbers. age is <age> random text and numbers
Some line
Format 2:
Some line
Name
<name>. Random text and numbers
Some random line
Age
<age>. random text and numbers
What I want to do is to extract the name and age from the text. I want to write one tagger/regex that works on both formats. The name and age could be in any of the lines.
At the moment, I want to understand the technique or library that I can use. I am using python-3.6 and I am happy to use any library.
My current strategy is:
- I am planning to split the text by the newline character.
- Then for each line, I look for (?:names is) (\w). The first match is the name. This works for the first format.
My current code for name is :
import re
pattern = '(?:names is) (\w)'
text = '...'.split('\n')
for t in text:
    match = re.match(pattern, t, re.I)
    if match and match.group(1) is not None:  # guard against non-matching lines
        name = match.group(1)
However, it doesn't work for the second format. Can you please let me know any ideas?
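A hedged sketch of a single regex that covers both layouts: one alternative matches "name is <x>" on a line, the other matches a line that is just "Name" with the value at the start of the next line (the sample texts and placeholder names below are made up):
import re

text_fmt1 = """Some line
Name is Alice random text and 123. age is 30 random text
Some line"""

text_fmt2 = """Some line
Name
Bob. Random text and numbers
Some random line
Age
42. random text"""

# "name is X" on one line, OR a line that is exactly "Name" with the value on the next line
name_re = re.compile(r"name\s+is\s+(\w+)|^name\s*\n(\w+)", re.IGNORECASE | re.MULTILINE)

for text in (text_fmt1, text_fmt2):
    m = name_re.search(text)
    if m:
        print(m.group(1) or m.group(2))   # Alice, then Bob
The same pattern with "age" in place of "name" handles the age field.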
| 1
| 1
| 0
| 0
| 0
| 0
|
According to this link, target_vocab_size: int, approximate size of the vocabulary to create. The statement is pretty ambiguous for me. As far as I can understand, the encoder will map each vocabulary to a unique ID. What will happen if the corpus has vocab_size larger than the target_vocab_size?
| 1
| 1
| 0
| 0
| 0
| 0
|
Given a list of predefined terms that can be formed by one, two or even three words, the problem is to count their ocurrences in a set of documents with a free vocabulary (ie, much many words).
terms= [
[t1],
[t2, t3],
[t4, t5, t6],
[t7],...]
and the documents where this terms needs to be recognized are in the form of:
docs = [
[w1, w2, t1, w3, w4, t7], #d1
[w1, w4, t4, t5, t6, wi, ...], #d2
[wj, t7, ..] ..] #d3
The desired output should be
[2, 1, 1, ...]
This is, the first doc has two terms of interest, the second has 1 (formed of three words) and so on.
If the terms to be counted were all one word long, then I could easily order each document alphabetically, remove repeated terms (set) and then intersect with the one-word terms; counting the matches gives the searched-for result.
But with terms of length >= 2, things get tricky.
I've been using gensim to form a bag of words and detect the indexes when using a new phrase
e.g.
dict_terms = corpora.Dictionary(phrases)
sentence = unseen_docs[0]
idxs = dict_terms[sentence]
And then count the seen idxs, checking whether the indexes are sequential, which would mean that a single multi-word term has been seen and not 2 or 3 separate ones.
Any suggestions?
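A hedged sketch of a direct approach: store every term as a tuple and slide windows of each term length over the documents, counting exact matches (placeholder tokens as in the question):
terms = [['t1'], ['t2', 't3'], ['t4', 't5', 't6'], ['t7']]
docs = [
    ['w1', 'w2', 't1', 'w3', 'w4', 't7'],
    ['w1', 'w4', 't4', 't5', 't6', 'wi'],
    ['wj', 't7'],
]

term_set = {tuple(t) for t in terms}
max_len = max(len(t) for t in terms)

counts = []
for doc in docs:
    n = 0
    for size in range(1, max_len + 1):
        for i in range(len(doc) - size + 1):
            if tuple(doc[i:i + size]) in term_set:
                n += 1
    counts.append(n)

print(counts)   # [2, 1, 1]
An alternative with the gensim toolchain is to pre-merge known multi-word terms into single tokens (e.g. 't4_t5_t6') before building the dictionary, so each term maps to one id.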
| 1
| 1
| 0
| 0
| 0
| 0
|
I want to store the result in a dataframe, in the form of a tuple (prediction, probability) in each tag column. I can print the result fine at the line:
print(eachPrediction , " : " , eachProbability)
I'm getting the error for the line :
Error message:
temprow[i+1] = (predictions[i],probabilities[i])
IndexError: index 1 is out of bounds for axis 0 with size 1
from imageai.Prediction import ImagePrediction
import os
import pandas as pd
import numpy as np
from PIL import Image
execution_path = os.getcwd()
pred_array = np.empty((0,6),dtype=object)
TEST_PATH = '/home/guest/Documents/Aikomi'
for img in os.listdir(TEST_PATH):
if img.endswith('.jpg'):
image = Image.open(os.path.join(TEST_PATH, img))
image = image.convert("RGB")
image = np.array(image, dtype=np.uint8)
prediction = ImagePrediction()
prediction.setModelTypeAsDenseNet()
prediction.setModelPath(os.path.join(execution_path, "DenseNet.h5"))
prediction.loadModel()
predictions, probabilities = prediction.predictImage(os.path.join(TEST_PATH, img), result_count=5 )
temprow = np.zeros((1,pred_array.shape[1]),dtype=object)
temprow[0] = img
for i in range(len(predictions)):
temprow[i+1] = (predictions[i],probabilities[i])
for eachPrediction, eachProbability in zip(predictions, probabilities):
#print(eachPrediction , " : " , eachProbability)
pred_array = np.append(pred_array,temprow,axis=0)
df = pd.DataFrame(data=pred_array,columns=['File_name','Tag_1','Tag_2','Tag_3','Tag_4','Tag_5'])
print(df)
df.to_csv('Image_tags.csv')
| 1
| 1
| 0
| 1
| 0
| 0
|
I have got a list of about 300 image_ids and bounding box positions in a csv file. I also have a folder of about 300 images, with each image_id matching the name of an image. How do I compare the name of each image with the image_id and, if it matches, crop the image?
I use the python language and ubuntu os.
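A hedged sketch, assuming the CSV has columns image_id, x_min, y_min, x_max, y_max (the column and folder names are assumptions), using pandas and Pillow:
import os
import pandas as pd
from PIL import Image

boxes = pd.read_csv('boxes.csv')      # assumed columns: image_id, x_min, y_min, x_max, y_max
image_dir = 'images'
out_dir = 'cropped'
os.makedirs(out_dir, exist_ok=True)

for _, row in boxes.iterrows():
    path = os.path.join(image_dir, str(row['image_id']) + '.jpg')   # filename matches image_id
    if os.path.exists(path):
        img = Image.open(path)
        crop = img.crop((row['x_min'], row['y_min'], row['x_max'], row['y_max']))
        crop.save(os.path.join(out_dir, str(row['image_id']) + '_crop.jpg'))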
| 1
| 1
| 0
| 0
| 0
| 0
|
I am working on a project to analyse the previous requests and if a new request comes, I need to match the earlier request and use the solution provided for the same.
For Example: if these are previous requests "Risk rating for Microsoft Inc", "Report for the month of September", etc and if new request is "Report for the month of September", I need to find the similarities and use the solution provided for one of the matching previous requests.
I am planning to implement this in Python. I came across these approaches for implementation - topic modelling and word2vec. Am I going in the right direction?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am working on this dataset [https://archive.ics.uci.edu/ml/datasets/Reuter_50_50] and trying to analyze text features.
I read the files and store it as follows in the documents variable:
documents=author_labels(raw_data_dir)
documents.to_csv(documents_filename,index_label="document_id")
documents=pd.read_csv(documents_filename,index_col="document_id")
documents.head()
Subsequently, I am trying to generate tf-idf vectors using sublinear growth and storing it in a variable called vectorizer.
vectorizer = TfidfVectorizer(input="filename",tokenizer=tokenizer,stop_words=stopwords_C50)
Then, I try to generate a matrix, X, of tfidf representations for each document in the corpus, using:
X = vectorizer.fit_transform(documents["filename"])
However, I am getting the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-152-8c01204baf0e> in <module>
----> 1 X = vectorizer.fit_transform(documents["filename"])
~\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py in fit_transform(self, raw_documents, y)
1611 """
1612 self._check_params()
-> 1613 X = super(TfidfVectorizer, self).fit_transform(raw_documents)
1614 self._tfidf.fit(X)
1615 # X is already a transformed view of raw_documents so
~\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py in fit_transform(self, raw_documents, y)
1029
1030 vocabulary, X = self._count_vocab(raw_documents,
-> 1031 self.fixed_vocabulary_)
1032
1033 if self.binary:
~\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py in _count_vocab(self, raw_documents, fixed_vocab)
941 for doc in raw_documents:
942 feature_counter = {}
--> 943 for feature in analyze(doc):
944 try:
945 feature_idx = vocabulary[feature]
~\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py in <lambda>(doc)
327 tokenize)
328 return lambda doc: self._word_ngrams(
--> 329 tokenize(preprocess(self.decode(doc))), stop_words)
330
331 else:
TypeError: 'list' object is not callable
How do I resolve this issue?
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a text file with 30,000 sentences. How can I pad each sentence of this file with start and end symbols such as (s) and (/s) in Python?
A part of data is the following:
The jury further said in term-end presentments that the City Executive Committee , which had over-all charge of the election , `` deserves the praise and thanks of the City of Atlanta '' for the manner in which the election was conducted .
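A minimal sketch, assuming one sentence per line in the file and using <s> / </s> as the markers (swap in whatever symbols you prefer):
with open('sentences.txt', encoding='utf-8') as fin, \
     open('sentences_padded.txt', 'w', encoding='utf-8') as fout:
    for line in fin:
        sentence = line.strip()
        if sentence:
            fout.write('<s> ' + sentence + ' </s>\n')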
| 1
| 1
| 0
| 1
| 0
| 0
|
When I calculate Binary Crossentropy by hand I apply sigmoid to get probabilities, then use Cross-Entropy formula and mean the result:
logits = tf.constant([-1, -1, 0, 1, 2.])
labels = tf.constant([0, 0, 1, 1, 1.])
probs = tf.nn.sigmoid(logits)
loss = labels * (-tf.math.log(probs)) + (1 - labels) * (-tf.math.log(1 - probs))
print(tf.reduce_mean(loss).numpy()) # 0.35197204
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
loss = cross_entropy(labels, logits)
print(loss.numpy()) # 0.35197204
How to calculate Categorical Cross-Entropy when logits and labels have different sizes?
logits = tf.constant([[-3.27133679, -22.6687183, -4.15501118, -5.14916372, -5.94609261,
-6.93373299, -5.72364092, -9.75725174, -3.15748906, -4.84012318],
[-11.7642536, -45.3370094, -3.17252636, 4.34527206, -17.7164974,
-0.595088899, -17.6322937, -2.36941719, -6.82157373, -3.47369862],
[-4.55468369, -1.07379043, -3.73261762, -7.08982277, -0.0288562477,
-5.46847963, -0.979336262, -3.03667569, -3.29502845, -2.25880361]])
labels = tf.constant([2, 3, 4])
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True,
reduction='none')
loss = loss_object(labels, logits)
print(loss.numpy()) # [2.0077195 0.00928135 0.6800677 ]
print(tf.reduce_mean(loss).numpy()) # 0.8990229
I mean how can I get the same result ([2.0077195 0.00928135 0.6800677 ]) by hand?
@OverLordGoldDragon's answer is correct. In TF 2.0 it looks like this:
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
loss = loss_object(labels, logits)
print(f'{loss.numpy()}\n{tf.math.reduce_sum(loss).numpy()}')
one_hot_labels = tf.one_hot(labels, 10)
preds = tf.nn.softmax(logits)
preds /= tf.math.reduce_sum(preds, axis=-1, keepdims=True)
loss = tf.math.reduce_sum(tf.math.multiply(one_hot_labels, -tf.math.log(preds)), axis=-1)
print(f'{loss.numpy()}\n{tf.math.reduce_sum(loss).numpy()}')
# [2.0077195 0.00928135 0.6800677 ]
# 2.697068691253662
# [2.0077198 0.00928142 0.6800677 ]
# 2.697068929672241
For language models:
vocab_size = 9
seq_len = 6
batch_size = 2
labels = tf.reshape(tf.range(batch_size*seq_len), (batch_size,seq_len)) # (2, 6)
logits = tf.random.normal((batch_size,seq_len,vocab_size)) # (2, 6, 9)
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
loss = loss_object(labels, logits)
print(f'{loss.numpy()}\n{tf.math.reduce_sum(loss).numpy()}')
one_hot_labels = tf.one_hot(labels, vocab_size)
preds = tf.nn.softmax(logits)
preds /= tf.math.reduce_sum(preds, axis=-1, keepdims=True)
loss = tf.math.reduce_sum(tf.math.multiply(one_hot_labels, -tf.math.log(preds)), axis=-1)
print(f'{loss.numpy()}\n{tf.math.reduce_sum(loss).numpy()}')
# [[1.341706 3.2518263 2.6482694 3.039099 1.5835983 4.3498387]
# [2.67237 3.3978183 2.8657475 nan nan nan]]
# nan
# [[1.341706 3.2518263 2.6482694 3.039099 1.5835984 4.3498387]
# [2.67237 3.3978183 2.8657475 0. 0. 0. ]]
# 25.1502742767334
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a BERT multilanguage model from Google. And I have a lot of text data in my language (Korean). I want BERT to make better vectors for texts in this language. So I want to additionally train BERT on that text corpus I have. Like if I would have w2v model trained on some data and would want to continue training it. Is it possible with BERT?
There are a lot of examples of "fine-tuning" BERT on some specific tasks like even the original one from Google where you can train BERT further on your data. But as far as I understand it (I might be wrong) we do it within our task-specified model (for classification task for example). So... we do it at the same time as training our classifier (??)
What I want is to train BERT further separately and then get fixed vectors for my data. Not to build it into some task-specified model. But just get vector representation for my data (using get_features function) like they do in here. I just need to train the BERT model additionally on more data of the specific language.
I would be endlessly grateful for any suggestions/links on how to train the BERT model further (preferably TensorFlow). Thank you.
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a function to get tfidf feature like this:
def get_tfidf_features(data, tfidf_vectorizer=None, ngram_range=(1,2)):
""" Creates tfidf features and returns them as sparse matrix. If no tfidf_vectorizer is given,
the function will train one."""
if tfidf_vectorizer is not None:
tfidf = tfidf_vectorizer.transform(data.Comment_text)
else:
# only add words to the vocabulary that appear at least 200 times
tfidf_vectorizer = TfidfVectorizer(min_df=700, ngram_range=ngram_range, stop_words='english')
tfidf = tfidf_vectorizer.fit_transform(data.Comment_text)
tfidf = pd.SparseDataFrame(tfidf.toarray()).to_sparse()
tfidf.applymap(lambda x: round(x, 4))
tfidf_features = ['tfidf_' + word for word in tfidf_vectorizer.get_feature_names()]
tfidf.columns = tfidf_features
data = data.reset_index().join(tfidf).set_index('index')
return data, tfidf_vectorizer, tfidf_features
X_train, tfidf_vectorizer, tfidf_features = get_tfidf_features(X_train)
I applied a simple logistic regression like this:
logit = LogisticRegression(random_state=0, solver='lbfgs', multi_class='ovr')
logit.fit(X_train.loc[:, features].fillna(0), X_train['Hateful_or_not'])
preds = logit.predict(X_test.loc[:, features].fillna(0))
I am getting feature importance like this:
logit.coef_
But this is giving me the feature importance of column indices, not of the actual words.
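The coefficients line up with the order of the feature columns passed to fit, so keeping that column list lets you map each coefficient back to a word. A sketch under the assumption that `features` is the list of column names used in logit.fit above:
import numpy as np

coefs = logit.coef_[0]                     # one row per class; [0] for the binary case
order = np.argsort(np.abs(coefs))[::-1]    # strongest coefficients first

for idx in order[:20]:
    print(features[idx], round(coefs[idx], 4))   # e.g. 'tfidf_<word>' and its weight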
| 1
| 1
| 0
| 0
| 0
| 0
|
this might be a little naive question but bear with me.
I have a dataset like this.
Pretty O
bad O
storm O
here O
last O
evening O
. O
From O
Green O
Newsfeed O
: O
AHFA B-group
extends O
deadline O
for O
Sage O
Award O
to O
Nov O
. O
where O is the tag for a non-entity and B-group is the tag for a group; there are some other entity tags as well.
I am trying to build a named entity recognition model. All the models I have come across start from sentences and then go on to build a model, for example getting PoS tags for all the words directly from an API by processing them.
But I want to train a model on this data.
Can someone suggest an approach, or direct me towards a resource? Thanks in advance.
| 1
| 1
| 0
| 0
| 0
| 0
|
Currently I am working on a project using NLP and Python. I have some content and need to find its language. I am using spaCy to detect the language, but the libraries only report the language as English. I need to find out whether it is British or American English. Any suggestions?
I tried spaCy, NLTK and langdetect, but these libraries only report English; I need to display en-GB for British and en-US for American English.
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a dataset like this.
The 1st column is the word and 2nd column is the tag.
Pretty O
bad O
storm O
here O
last O
evening O
. O
From O
Green O
Newsfeed O
: O
AHFA B-group
extends O
deadline O
for O
Sage O
Award O
to O
Nov O
. O
I want to reconstruct the sentences,
so the output will be like
[[('Pretty', 'O'), ('bad', 'O'), ('storm','O'), ('here', 'O'), ('last', 'O'), ('evening', 'O'), ('.', 'B-geo')][(From, 'O'), ('Green', 'O'), ('Newsfeed', 'O'), ('storm:,'O'), ('AHFA', 'B-group'), ('extends', 'O'), ('deadline', 'O'), ('for', 'O'),('Sage', 'O'), ('Award', 'B-geo')][(to, 'O'), ('Nov', 'O'), ('.','O']]
Can someone help me make the sentences from this?
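A hedged sketch: read the two-column lines, collect (word, tag) pairs and close a sentence whenever the token is '.' (the boundary rule and file name are assumptions based on the sample):
sentences, current = [], []
with open('dataset.txt', encoding='utf-8') as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        word, tag = line.rsplit(' ', 1)    # split on the last space only
        current.append((word, tag))
        if word == '.':                    # treat '.' as the sentence boundary
            sentences.append(current)
            current = []
if current:
    sentences.append(current)
print(sentences[0])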
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm working on code to extract misspelled words from a text, using Python with the "textblob" library. In this library there is a function correct(), but it just returns the corrected phrase based on the wrong phrase, for example:
in: b = TextBlob("I havv goood speling!")
in: print(b.correct())
out: I have good spelling!
I would like to calculate the accuracy of the correction, i.e. obtain the percentage of corrected words relative to the original text, or just the number of wrong words in the text.
Can someone help me with that?
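One hedged way to quantify it: compare the original and corrected text word by word and count the positions that changed (this assumes correct() keeps the word count, which it generally does since it corrects word by word):
from textblob import TextBlob

original = TextBlob("I havv goood speling!")
corrected = original.correct()

changed = sum(1 for o, c in zip(original.words, corrected.words) if o.lower() != c.lower())
total = len(original.words)

print(changed, "of", total, "words were changed")
print("changed fraction: {:.0%}".format(changed / total))
Note this measures how many words the corrector altered, not whether each alteration was actually right; for true accuracy you would need a gold-standard corrected text.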
| 1
| 1
| 0
| 0
| 0
| 0
|
Datasets: I have two different text datasets(large text files for train and test that each one includes 30,000 sentences). a part of data is like the following:
"
the fulton county grand jury said friday an investigation of atlanta's recent primary election produced `` no evidence '' that any irregularities took place .
"
Question: How can I replace every word in the test data not seen in training with the word "unk" in Python?
My idea: should I use nested for-loops to compare all words of the train data with all words of the test data, and an if-statement saying that if any word in the test data is not in the train data it is replaced with "unk"?
#open text file and assign it to varaible with the name "readfile"
readfile1= open('train.txt','r')
#create the new empty text file with the new name and then assign it to variable
# with the name "writefile". now this file is ready for writing in that
writefile=open('test.txt','w')
for word1 in readfile1:
for word2 in readfile2:
if (word1!=word2):
word2='unk'
writefile.close()
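Nested loops over two 30,000-sentence files are very slow (every test word against every train word). A hedged sketch of the usual approach: build a set of the training vocabulary once, then stream the test file and swap anything unseen for "unk", writing to a new file:
# build the training vocabulary once
train_vocab = set()
with open('train.txt', encoding='utf-8') as f:
    for line in f:
        train_vocab.update(line.split())

# rewrite the test data with unseen words replaced
with open('test.txt', encoding='utf-8') as fin, \
     open('test_unk.txt', 'w', encoding='utf-8') as fout:
    for line in fin:
        words = [w if w in train_vocab else 'unk' for w in line.split()]
        fout.write(' '.join(words) + '\n')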
| 1
| 1
| 0
| 1
| 0
| 0
|
I'm building a Word2Vec model for a category-recommendation on a dataset consisting of ~35.000 sentences for a total of ~500.000 words but only ~3.000 distinct ones.
I build the model basically like this :
def train_w2v_model(df, epochs):
w2v_model = Word2Vec(min_count=5,
window=100,
size=230,
sample=0,
workers=cores-1,
batch_words=100)
vocab = df['sentences'].apply(list)
w2v_model.build_vocab(vocab)
w2v_model.train(vocab, total_examples=w2v_model.corpus_count, total_words=w2v_model.corpus_total_words, epochs=epochs, compute_loss=True)
return w2v_model.get_latest_training_loss()
I tried to find the right number of epochs for such a model like this :
print(train_w2v_model(1))
=>> 86898.2109375
print(train_w2v_model(100))
=>> 5025273.0
I find the results very counterintuitive.
I do not understand how increasing the number of epochs could lower the performance.
It does not seem to be a misunderstanding of the function get_latest_training_loss, since I observe that the results from most_similar are also way better with only 1 epoch:
100 epochs :
w2v_model.wv.most_similar(['machine_learning'])
=>> [('salesforce', 0.3464601933956146),
('marketing_relationnel', 0.3125850558280945),
('batiment', 0.30903393030166626),
('go', 0.29414454102516174),
('simulation', 0.2930642068386078),
('data_management', 0.28968319296836853),
('scraping', 0.28260597586631775),
('virtualisation', 0.27560457587242126),
('dataviz', 0.26913416385650635),
('pandas', 0.2685554623603821)]
1 epoch :
w2v_model.wv.most_similar(['machine_learning'])
=>> [('data_science', 0.9953729510307312),
('data_mining', 0.9930223822593689),
('big_data', 0.9894922375679016),
('spark', 0.9881765842437744),
('nlp', 0.9879133701324463),
('hadoop', 0.9834049344062805),
('deep_learning', 0.9831978678703308),
('r', 0.9827396273612976),
('data_visualisation', 0.9805369973182678),
('nltk', 0.9800992012023926)]
Any insight on why it behaves like this? I would have thought that increasing the number of epochs would surely have a positive effect on the training loss.
| 1
| 1
| 0
| 1
| 0
| 0
|
Datasets: Two Large text files for train and test that all words of them are tokenized. a part of data is like the following: " the fulton county grand jury said friday an investigation of atlanta's recent primary election produced `` no evidence '' that any irregularities took place . "
Question: How can I replace every word in the test data not seen in training with the word "unk" in Python?
So far, I made the dictionary by the following codes to count the frequency of each word in the file:
#open text file and assign it to varible with the name "readfile"
readfile= open('C:/Users/amtol/Desktop/NLP/Homework_1/brown-train.txt','r')
writefile=open('C:/Users/amtol/Desktop/NLP/Homework_1/brown-trainReplaced.txt','w')
# Create an empty dictionary
d = dict()
# Loop through each line of the file
for line in readfile:
# Split the line into words
words = line.split(" ")
# Iterate over each word in line
for word in words:
# Check if the word is already in dictionary
if word in d:
# Increment count of word by 1
d[word] = d[word] + 1
else:
# Add the word to dictionary with count 1
d[word] = 1
#replace all words occurring in the training data once with the token<unk>.
for key in list(d.keys()):
line= d[key]
if (line==1):
line="<unk>"
writefile.write(str(d))
else:
writefile.write(str(d))
#close the file that we have created and we wrote the new data in that
writefile.close()
Honestly, the above code doesn't work: writefile.write(str(d)), which I intended to use to write the result into the new text file, does not do what I want, whereas print(key, ":", line) works and shows the frequency of each word, but only in the console, which doesn't create the new file. If you also know the reason for this, please let me know.
| 1
| 1
| 0
| 1
| 0
| 0
|
I am using spacy library to build a chat bot. How do I check if a document is a question with a certain confidence? I know how to do relevance, but not sure how to filter statements from questions.
I am looking for something like below:
spacy.load('en_core_web_lg')('Is this a question?').is_question
| 1
| 1
| 0
| 0
| 0
| 0
|
I've tried to implement a Best First Search algorithm on the 8-puzzle problem, but I get the same path as in my A* code no matter which matrices I take. Also, can someone help me print the heuristic under each matrix? I only get "1" in the output.
Best First Search Code-
from copy import deepcopy
from collections import deque
import heapq  # used by the search functions below (heappush/heappop)
class Node:
def __init__(self, state=None, parent=None, cost=0, depth=0, children=[]):
self.state = state
self.parent = parent
self.cost = cost
self.depth = depth
self.children = children
def is_goal(self, goal_state):
return is_goal_state(self.state, goal_state)
def expand(self):
new_states = operator(self.state)
self.children = []
for state in new_states:
self.children.append(Node(state, self, self.cost + 1, self.depth + 1))
def parents(self):
current_node = self
while current_node.parent:
yield current_node.parent
current_node = current_node.parent
def gn(self):
costs = self.cost
for parent in self.parents():
costs += parent.cost
return costs
def is_goal_state(state, goal_state):
for i in range(len(state)):
for j in range(len(state)):
if state[i][j] != goal_state[i][j]:
return False
return True
def operator(state):
states = []
zero_i = None
zero_j = None
for i in range(len(state)):
for j in range(len(state)):
if state[i][j] == 0:
zero_i = i
zero_j = j
break
def add_swap(i, j):
new_state = deepcopy(state)
new_state[i][j], new_state[zero_i][zero_j] = new_state[zero_i][zero_j], new_state[i][j]
states.append(new_state)
if zero_i != 0:
add_swap(zero_i - 1, zero_j)
if zero_j != 0:
add_swap(zero_i, zero_j - 1)
if zero_i != len(state) - 1:
add_swap(zero_i + 1, zero_j)
if zero_j != len(state) - 1:
add_swap(zero_i, zero_j + 1)
return states
R = int(input("Enter the number of rows:"))
C = int(input("Enter the number of columns:"))
# Initialize matrix
inital = []
print("Enter the entries rowwise:")
# For user input
for i in range(R): # A for loop for row entries
a =[]
for j in range(C): # A for loop for column entries
a.append(int(input()))
inital.append(a)
# For printing the matrix
for i in range(R):
for j in range(C):
print(inital[i][j], end = " ")
print()
R = int(input("Enter the number of rows:"))
C = int(input("Enter the number of columns:"))
# Initialize matrix
final = []
print("Enter the entries rowwise:")
# For user input
for i in range(R): # A for loop for row entries
a =[]
for j in range(C): # A for loop for column entries
a.append(int(input()))
final.append(a)
# For printing the matrix
for i in range(R):
for j in range(C):
print(final[i][j], end = " ")
print()
def search(state, goal_state):
def gn(node):
return node.gn()
tiles_places = []
for i in range(len(goal_state)):
for j in range(len(goal_state)):
heapq.heappush(tiles_places, (goal_state[i][j], (i, j)))
def hn(node):
cost = 0
for i in range(len(node.state)):
for j in range(len(node.state)):
tile_i, tile_j = tiles_places[node.state[i][j]][1]
if i != tile_i or j != tile_j:
cost += abs(tile_i - i) + abs(tile_j - j)
return cost
def fn(node):
return 1
return bfs_search(state, goal_state, fn)
def bfs_search(state, goal_state, fn):
queue = []
entrance = 0
node = Node(state)
while not node.is_goal(goal_state):
node.expand()
for child in node.children:
#print(child)
#print(fn(child))
queue_item = (fn(child), entrance, child)
heapq.heappush(queue, queue_item)
entrance += 1
node = heapq.heappop(queue)[2]
output = []
output.append(node.state)
for parent in node.parents():
output.append(parent.state)
output.reverse()
return (output,fn)
l , n = search(inital,final)
for i in l:
for j in i:
print(j)
print(n(Node(i)))
print("
")
Here's the output-
Enter the number of columns:3
Enter the entries rowwise:
2
8
3
1
6
4
7
0
5
2 8 3
1 6 4
7 0 5
Enter the number of rows:3
Enter the number of columns:3
Enter the entries rowwise:
8
0
3
2
6
4
1
7
5
8 0 3
2 6 4
1 7 5
[2, 8, 3]
[1, 6, 4]
[7, 0, 5]
1
[2, 8, 3]
[1, 6, 4]
[0, 7, 5]
1
[2, 8, 3]
[0, 6, 4]
[1, 7, 5]
1
[0, 8, 3]
[2, 6, 4]
[1, 7, 5]
1
[8, 0, 3]
[2, 6, 4]
[1, 7, 5]
1
Though I reach the correct goal node with all the intermediate steps, I'm unable to understand what heuristic it is actually using.
| 1
| 1
| 0
| 0
| 0
| 0
|
I would be interested to extract the weights, biases, number of nodes and number of hidden layers from an MLP/neural network built in pytorch. I wonder if anyone may be able to point me in the right direction?
Many thanks,
Max
| 1
| 1
| 0
| 1
| 0
| 0
|
I want to use the concept of spam classification and apply it to a business problem where we identify if a vision statement for a company is good or not. Here's a rough outline of what I've come up with for the project. Does this seem feasible?
Prepare dataset by collecting vision statements from top leading companies (i.e. Fortune 5000)
Let features = most frequent words (excluding non-alphanumerics, to, the, etc)
Create feature vector (dictionary) x of all words listed above
Use supervised learning algorithm (logistic regression) to train and test data
Let y = good vision statement and return the value 1; y = 0 if not good
| 1
| 1
| 0
| 0
| 0
| 0
|
I have done a lot of research on how to create chat bots (the responding part), however I can't find a way to make them more advanced. For example, I keep seeing NLTK reflections, but I want to know if there are more advanced methods in NLTK (or other modules) that allow me to create a learning bot, a smart bot or even an AI, but I am struggling to find modules, tutorials or documentation that help with getting started and proceeding that way. Reflections don't always work well for things like responding in context, unless you have many lines of pre-written content, which is inefficient and may not always be accurate. Note: I don't want to be spoon-fed, I just want to be pointed in the right direction of things that I can do and look at.
a solution would be
e.g. user asks: "who is your favourite actor?"
bot replies with: "Brad Pitt"
(only though of Brad because of the ad astra advertisements xD)
Below is the code that I am trying to stay away from.
pairs = [
[
r"my name is (.*)",
["Hello %1, How are you today ?",]
],
[
r"what is your name ?",
["My name is Chatty and I'm a chatbot ?",]
],
[
r"how are you ?",
["I'm doing good
How about You ?",]
],
[
r"sorry (.*)",
["Its alright","Its OK, never mind",]
],
[
r"i'm (.*) doing good",
["Nice to hear that","Alright :)",]
]
| 1
| 1
| 0
| 0
| 0
| 0
|
In my studies of NLP, more specifically the spaCy library, I got confused about one thing:
what is the difference between from spacy.lang.en import English and spacy.load('en'), and how does each work? Can someone explain this to me, if possible with an example of the difference? Thanks in advance.
| 1
| 1
| 0
| 0
| 0
| 0
|
I want to compare two sentences. As an example,
sentence1="football is good,cricket is bad"
sentence2="cricket is good,football is bad"
Generally these sentences have no relationship; they mean different things. But when I compare them with Python NLTK tools, I get 100% similarity. How can I fix this issue? I need help.
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm writing a program to analyze the usage of color in text. I want to search for color words such as "apricot" or "orange". For example, an author might write "the apricot sundress billowed in the wind." However, I want to only count the apricots/oranges that actually describe color, not something like "I ate an apricot" or "I drank orange juice."
Is there any way to do this, perhaps using context() in NLTK?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am writing a program to detect collocations of bigrams (2 words that appear together more often than by chance, ex: hot dog). To do this properly, I have to remove all punctuation marks that would be stored as their own element but keep punctuations that are part of a word. For example, the bigram ['U.S. flag'] should keep the periods in U.S. but ['U.S. ,'] should have the comma removed. I've written a for loop that iterates through a list of punctuations and should remove the matching element, but that doesn't change anything. Additionally, I've used regex to remove most punctuations but if I remove periods then words with periods in them also get ruined. Any suggestions for an efficient way to remove these would be deeply appreciated!
Here's my code so far:
f = open('Collocations.txt').read()
punctuation = [',', '.', '!', '?', '"', ':', "'", ';', '@', '&', '$', '#', '*', '^', '%', '{', '}']
filteredf = re.sub(r'[,":@#?!&$%}{]', '', f)
f = f.split()
print(len(f))
for i, j in zip (punctuation, f):
if i == j:
ind = f.index(j)
f.remove(f[ind])
print(len(f))
# removes first element in the temp list to prepare to make bigrams
temp = list()
temp2 = list()
temp = filteredf.split()
temp2 = filteredf.split()
temp2.remove(temp2[0])
# forms a list of bigrams
bi = list()
for i, j in zip(temp, temp2):
x = i + " " + j
bi.append(x)
#print(len(bi))
unigrams = dict()
for i in temp:
unigrams[i] = unigrams.get(i, 0) + 1
#print(len(unigrams))
bigrams = dict()
for i in bi:
bigrams[i] = bigrams.get(i, 0) + 1
#print(len(bigrams))
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to use Spacy's Japanese tokenizer.
import spacy
Question= 'すぺいんへ いきました。'
nlp(Question.decode('utf8'))
I am getting the below error,
TypeError: Expected unicode, got spacy.tokens.token.Token
Any ideas on how to fix this?
Thanks!
| 1
| 1
| 0
| 0
| 0
| 0
|
I have an MCQ dataset which has 2 input variables, question and answer, and an output variable, distractor (a string of 3 independent substrings separated by commas).
The aim is to build an NLP model that generates 3 distractors for each question and answer, separated by commas, with each one placed between double quotes " ".
Could anyone please help me with achieving this?
Ex :
Question : We feel unhappy when
answer : we have a fight with our classmates
distractor : "we get good grades","we become popular","when we become rich"
| 1
| 1
| 0
| 1
| 0
| 0
|
I need to print only 'NN' and 'VB' words from an entered sentence.
import nltk
import re
import time
var = raw_input("Please enter something: ")
exampleArray = [var]
def processLanguage():
try:
for item in exampleArray:
tokenized = nltk.word_tokenize(item)
tagged = nltk.pos_tag(tokenized)
print tagged
time.sleep(555)
except Exception, e:
print str(e)
processLanguage()
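Once pos_tag has produced (word, tag) pairs, keeping only nouns and verbs is a matter of filtering on the tag prefix (a sketch; startswith also catches NNS, VBD, VBZ, etc. — drop it if you want the bare 'NN'/'VB' tags only):
import nltk

sentence = "The quick brown fox jumps over the lazy dog"
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

wanted = [word for word, tag in tagged if tag.startswith(('NN', 'VB'))]
print(wanted)   # nouns and verbs only, e.g. ['fox', 'jumps', 'dog']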
| 1
| 1
| 0
| 0
| 0
| 0
|
This is my code :
config = Config(mode='conv')
if config.mode == 'conv':
X, y = build_rand_feat()
y_flat = np.argmax(y, axis=1)
model=get_conv_model()
elif config.mode == 'time':
X, y = build_rand_feat()
y_flat = np.argmax(y,axis=1)
input_shape = (X.shape[1], X.shape[2])
model = get_recurrent_model()
Please help me to fix this.
| 1
| 1
| 0
| 0
| 0
| 0
|
I want to apply the SVM using the following approach, but apparently the "Bunch" type is not appropriate.
Usually, with Bunch (Dictionary-like object), the interesting attributes are: ‘data’, the data to learn and ‘target’, the classification labels. You can access the .data and the .target information accordingly. How can I make it work as I have the code below?
import pandas as pd
from sklearn import preprocessing
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.calibration import CalibratedClassifierCV
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
#Call the data below using scikit-learn, which stores them in a Bunch
newsgroups_train = fetch_20newsgroups(subset='train',remove=('headers', 'footers', 'quotes'), categories = cats)
newsgroups_test = fetch_20newsgroups(subset='test',remove=('headers', 'footers', 'quotes'), categories = cats)
vectorizer = TfidfVectorizer( stop_words = 'english') #new
vectors = vectorizer.fit_transform(newsgroups_train.data) #new
vectors_test = vectorizer.transform(newsgroups_test.data) #new
max_abs_scaler = preprocessing.MaxAbsScaler()
scaled_train_data = max_abs_scaler.fit_transform(vectors)#corrected
scaled_test_data = max_abs_scaler.transform(vectors_test)
clf=CalibratedClassifierCV(OneVsRestClassifier(SVC(C=1)))
clf.fit(scaled_train_data, train_labels)
predictions=clf.predict(scaled_test_data)
proba=clf.predict_proba(scaled_test_data)
In the clf.fit line, in the position of "train_labels", I put "vectorizer.vocabulary_.keys()" but it gives: ValueError: bad input shape (). What should I do to get the training labels and make it work?
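For reference, the Bunch returned by fetch_20newsgroups carries the class labels in .target (with their names in .target_names), aligned row-for-row with .data — so that, not the vectorizer vocabulary, is what fit expects. A minimal sketch reusing the objects defined above:
train_labels = newsgroups_train.target      # integer labels, one per training document
test_labels = newsgroups_test.target

clf.fit(scaled_train_data, train_labels)
predictions = clf.predict(scaled_test_data)

from sklearn.metrics import accuracy_score
print(accuracy_score(test_labels, predictions))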
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a data set. One of its columns - "Keyword" - contains categorical data. The machine learning algorithm that I am trying to use takes only numeric data. I want to convert "Keyword" column into numeric values - How can I do that? Using NLP? Bag of words?
I tried the following but I got ValueError: Expected 2D array, got 1D array instead.
from sklearn.feature_extraction.text import CountVectorizer
count_vector = CountVectorizer()
dataset['Keyword'] = count_vector.fit_transform(dataset['Keyword'])
from sklearn.model_selection import train_test_split
y=dataset['C']
x=dataset(['Keyword','A','B'])
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2,random_state=0)
from sklearn.linear_model import LinearRegression
regressor=LinearRegression()
regressor.fit(x_train,y_train)
| 1
| 1
| 0
| 1
| 0
| 0
|
I want to train a word2vec model on the english wikipedia using python with gensim. I closely followed https://groups.google.com/forum/#!topic/gensim/MJWrDw_IvXw for that.
It works for me but what I don't like about the resulting word2vec model is that named entities are split which makes the model unusable for my specific application. The model I need has to represent named entities as a single vector.
Thats why I planned to parse the wikipedia articles with spacy and merge entities like "north carolina" into "north_carolina", so that word2vec would represent them as a single vector. So far so good.
The spacy parsing has to be part of the preprocessing, which I originally did as recommended in the linked discussion using:
...
wiki = WikiCorpus(wiki_bz2_file, dictionary={})
for text in wiki.get_texts():
article = " ".join(text) + "
"
output.write(article)
...
This removes punctuation, stop words, numbers and capitalization and saves each article in a separate line in the resulting output file. The problem is that spacy's NER doesn't really work on this preprocessed text, since I guess it relies on punctuation and capitalization for NER (?).
Does anyone know if I can "disable" gensim's preprocessing so that it doesn't remove punctuation etc. but still parses the wikipedia articles to text directly from the compressed wikipedia dump? Or does someone know a better way to accomplish this? Thanks in advance!
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a list of word pairs in Icelandic that are spelled similarly but mean different things (for example leyti and leiti, kyrkja and kirkja). The list is just a single element list, not a list of tuples (so just [leyti, leiti, kyrkja, kirkja]). I'm using a big corpus to get each word's frequency, so I could end up with for example leyti = frequency 3000, leiti = frequency 500 etc. I want to keep these pairs while getting the frequency from the corpus. At the moment I'm iterating through the list of words and comparing each word to the frequency list I have from the big corpus, which results in a dictionary of f.ex. {leyti: 3000, leiti:500} etc. So basically I'm doing this:
def findfreq():
freqdic = findfreq() # a dictionary with all the words in the corpus and their frequencies
ywords = listofwords() # the list of words
yfreq = {} # resulting dictionary with the word from the wordlist and it's frequency as it is in the corpus
for i in ywords:
for key, value in freqdic.items():
if i == key:
yfreq[i] = value
return yfreq
But I don't want just a dictionary with all the words separately, I want something (tuple?) that represents the pair with both frequencies (so for example: (leyti:3000, leiti:500), (kyrkja:400, kirkja:600)). How can I do this?
| 1
| 1
| 0
| 0
| 0
| 0
|