Dataset schema (each row: a question text plus six 0/1 topic labels):

text — string (lengths 0 to 27.6k)
python — int64 (0 or 1)
DeepLearning or NLP — int64 (0 or 1)
Other — int64 (0 or 1)
Machine Learning — int64 (0 or 1)
Mathematics — int64 (0 or 1)
Trash — int64 (0 or 1)
I have code that is supposed to pre-process a list of text documents: given a list of text documents, it returns a list with each document pre-processed. But for some reason it does not remove punctuation.

```python
import string  # needed for string.punctuation in remove_punct below
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords")
nltk.download('punkt')
nltk.download('wordnet')

def preprocess(docs):
    """Given a list of documents, return each document as a string of tokens,
    stripping out punctuation."""
    clean_docs = [clean_text(i) for i in docs]
    tokenized_docs = [tokenize(i) for i in clean_docs]
    return tokenized_docs

def tokenize(text):
    """Tokenizes text -- returning the tokens as a string."""
    stop_words = stopwords.words("english")
    nltk_tokenizer = nltk.WordPunctTokenizer().tokenize
    tokens = nltk_tokenizer(text)
    result = " ".join([i for i in tokens if i not in stop_words])
    return result

def clean_text(text):
    """Cleans text by removing case and stripping out punctuation."""
    new_text = make_lowercase(text)
    new_text = remove_punct(new_text)
    return new_text

def make_lowercase(text):
    return text.lower()

def remove_punct(text):
    words = text.split()
    new_text = " ".join(word for word in words if word not in string.punctuation)
    return new_text

# Get a list of titles
s1 = "[UPDATE] I am tired"
s2 = "I am cold."
clean_docs = preprocess([s1, s2])
print(clean_docs)
```

This prints:

```
['[ update ] tired', 'cold .']
```

In other words, it does not strip out punctuation: "[", "]", and "." all appear in the final output.
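A minimal sketch of one possible fix (my own suggestion, not code from the question): the test `word not in string.punctuation` only drops tokens that are exactly one punctuation character, so words such as "[UPDATE]" or "cold." pass through untouched. Stripping the punctuation characters out of the string with str.translate handles those cases:

```python
import string

def remove_punct(text):
    # Map every punctuation character to None and delete them all in one pass.
    table = str.maketrans("", "", string.punctuation)
    return text.translate(table)

print(remove_punct("[update] i am tired"))  # -> update i am tired
print(remove_punct("i am cold."))           # -> i am cold
```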
1
1
0
0
0
0
I am using spaCy for an NLP project. When creating a doc with spaCy, you can find the noun chunks (also known as "noun phrases") in the text in the following way:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp(u"The companies building cars do not want to spend more money in improving diesel engines because the government will not subsidise such engines anymore.")
for chunk in doc.noun_chunks:
    print(chunk.text)
```

This gives a list of the noun phrases; in this case, for instance, the first noun phrase is "The companies". Suppose you have a text where noun chunks are referenced with a number, like:

```python
doc = nlp("the Window (23) is closed because the wall (34) of the beautiful building (45) is not covered by the insurance (45)")
```

Assume I have the code to identify the references, for instance by tagging them:

```python
myprocessedtext = "the Window <ref>(23)</ref> is closed because the wall <ref>(34)</ref> of the beautiful building <ref>(45)</ref> is not covered by the insurance <ref>(45)</ref>"
```

How could I get the noun chunks (noun phrases) immediately preceding the references? My idea: pass the 10 words preceding every reference to a spaCy doc object, extract the noun chunks, and take the last one. This is highly inefficient, since creating the doc objects is very time-consuming. Any other idea that avoids creating extra nlp objects? Thanks.
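A possible approach, offered as a sketch rather than something from the question: parse the text once, then pair each numeric reference with the noun chunk whose span ends closest before it, so no extra nlp objects are needed. The pairing itself is independent of spaCy; below it runs on hand-written (start, end, text) chunk spans such as those you could read off doc.noun_chunks via chunk.start_char / chunk.end_char:

```python
import re

def chunks_before_refs(text, noun_chunks):
    """Pair each numeric reference like (23) with the noun chunk that ends
    closest before it. `noun_chunks` is a list of (start, end, text)
    character spans."""
    results = []
    for m in re.finditer(r"\(\d+\)", text):
        best = None
        for start, end, chunk_text in noun_chunks:
            if end <= m.start() and (best is None or end > best[1]):
                best = (start, end, chunk_text)
        if best is not None:
            results.append((m.group(0), best[2]))
    return results

text = "the Window (23) is closed because the wall (34) is not covered"
chunks = [(0, 10, "the Window"), (34, 42, "the wall")]
print(chunks_before_refs(text, chunks))
# -> [('(23)', 'the Window'), ('(34)', 'the wall')]
```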
1
1
0
0
0
0
I would like to remove the stopwords that are in a list of lists while keeping the format the same (i.e. a list of lists). The following is the code I have already tried:

```python
from nltk.corpus import stopwords

sent1 = 'I have a sentence which is a list'
sent2 = 'I have a sentence which is another list'

stop_words = stopwords.words('english')
lst = [sent1, sent2]
sent_lower = [t.lower() for t in lst]

filtered_words = []
for i in sent_lower:
    i_split = i.split()
    lst = []
    for j in i_split:
        if j not in stop_words:
            lst.append(j)
    " ".join(lst)
    filtered_words.append(lst)
```

Current output of filtered_words:

```
[['sentence', 'list'], ['sentence', 'list'], ['sentence', 'another', 'list'], ['sentence', 'another', 'list'], ['sentence', 'another', 'list']]
```

Desired output of filtered_words:

```
[['sentence', 'list'], ['sentence', 'another', 'list']]
```

I am getting duplicate lists. What might I be doing wrong in the loop? Also, is there a better way of doing this rather than writing so many for loops?
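The whole loop can be collapsed into one nested list comprehension, sketched here with a stand-in stopword set (the duplicates shown in the question are typical of appending to a filtered_words list that was not re-initialised between runs; building the result in a single fresh expression avoids that):

```python
# Stand-in for nltk's English stopword list, so the sketch is self-contained.
stop_words = {"i", "have", "a", "which", "is"}

sents = ["I have a sentence which is a list",
         "I have a sentence which is another list"]

filtered_words = [
    [w for w in s.lower().split() if w not in stop_words]
    for s in sents
]
print(filtered_words)  # -> [['sentence', 'list'], ['sentence', 'another', 'list']]
```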
1
1
0
0
0
0
I'm trying to practice with LSTM and PyTorch. I took the IMDB movie review dataset to predict whether a review is positive or negative. I use 80% of the dataset for training, remove punctuation, and use GloVe (with 200 dims) as an embedding layer. Before training, I also exclude reviews that are too short (fewer than 50 symbols) or too long (more than 1000 symbols). For the LSTM layer I use hidden dimension 256, num_layers 2, one-directional, with 0.5 dropout; afterwards I have a fully connected layer. For training I use nn.BCELoss with the Adam optimizer (lr=0.001). Currently I get 85% validation accuracy with 98% training accuracy after 7 epochs. I tried the following steps to prevent overfitting and get higher accuracy: used weight_decay for the Adam optimizer, tried SGD (lr=0.1, 0.001) instead of Adam, tried to increase num_layers of the LSTM. In all of these cases the model didn't learn at all, giving 50% accuracy on both the training and validation sets.

```python
class CustomLSTM(nn.Module):
    def __init__(self, vocab_size, use_embed=False, embed=None,
                 embedding_size=200, hidden_size=256, num_lstm_layers=2,
                 bidirectional=False, dropout=0.5, output_dims=2):
        super().__init__()
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.hidden_size = hidden_size
        self.num_lstm_layers = num_lstm_layers
        self.bidirectional = bidirectional
        self.dropout = dropout

        self.embedding = nn.Embedding(vocab_size, embedding_size)
        if use_embed:
            self.embedding.weight.data.copy_(torch.from_numpy(embed))
            # self.embedding.requires_grad = False

        self.lstm = nn.LSTM(input_size=embedding_size,
                            hidden_size=hidden_size,
                            num_layers=num_lstm_layers,
                            batch_first=True,
                            dropout=dropout,
                            bidirectional=bidirectional)
        # print('output dims value ', output_dims)
        self.drop_fc = nn.Dropout(0.5)
        self.fc = nn.Linear(hidden_size, output_dims)
        self.sig = nn.Sigmoid()
```

I want to understand: why does the model not learn at all with those changes applied? How can I increase the accuracy?
1
1
0
1
0
0
I've written a chatbot in Python which connects to Discord and can fulfil some tasks. One of the tasks is to query a list of resources of a specific computer game and return the detailed location of the queried resource. Now I want to integrate this functionality into the chat as seamlessly as possible, so I thought I could use NLP techniques for it.

To give an example: User 1 wants to know where he/she can find the resource "wood", so he/she asks in the Discord chat: "Where can I find wood?" My program shall now be able to identify this question as a valid query for a resource location and respond with the location for the resource "wood". This might involve several steps:

Determine that a question is in fact being asked
Determine the name of the resource which was asked for
???

I am not new to programming, but I am new to NLP. I am also a beginner in deep learning and have already developed RNN models using tensorflow/keras. For this project I found nltk and spaCy, both of which are Python modules used for NLP. I've learned that text analysis consists of several distinct jobs, and not all of them might be of interest for my project, but it seems that both tokenization and POS tagging might be. Somehow, though, I am struggling to find a viable approach for the task. It already starts with how to identify whether a text message is actually a question. My research indicates this is not functionality that NLP libraries provide out of the box; pre-trained deep learning models are usually used to categorize sentences like that.

Ideas I've had so far:

1) Analyze every chat message sentence by sentence. Tokenize the sentence, use stemming, then POS tagging, then iterate over all tokens to find out if: the verbs "find" (Where can I find ...), "get" (Where can I get ...) or "is" (Where is ...) are contained; a noun is contained, and if so, whether this noun is a valid resource name (a better approach would probably be to find out whether the noun is actually the object related to the verb -- is this even possible?); the sentence is a question, by checking if the last token is a "?".

2) Use some kind of matching, like spaCy's rule-based matching. Build several patterns which can identify the desired question/question types, match the patterns on every chat message and, if matched, extract the resource name.

3) Use non-NLP techniques. If everything else turns out to be unviable or too complicated, I can still fall back to a hardcoded approach where I pre-define a couple of question types, string-search for their occurrence within chat messages, and try to extract the resource names manually using string operations. This will probably be the most error-prone and inflexible solution, but I'll keep it as a fallback.

Of course, I want to implement a solution that is as flexible as possible, so it can detect various forms and types of questions without hardcoding all possible questions beforehand. It should be as close to "the bot just understands the chat and answers the question" as possible. Could someone guide me towards a good solution? (I am not asking for complete code, but rather the techniques/steps/libraries I should use.)

Maybe as a sidenote: in a later version I want to extend the functionality so that other users can name the location of a resource in the Discord chat, and the bot shall add this location to its database if it's not already contained. So the chat conversation might look like:

User 1: Where can I find cryptonite?
User 2: It can be found in lex luthors lab
Bot: Shall I add "lex luthors lab" as location for resource "cryptonite"?
User 2: @bot: yes
Bot: Done.
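The fallback from idea 3 can be sketched in a few lines; the patterns and the resource list below are made-up placeholders, not something from the question:

```python
import re

RESOURCES = {"wood", "stone", "cryptonite"}  # hypothetical resource names

# Hand-written question patterns; the capture group grabs the candidate resource.
PATTERNS = [
    re.compile(r"where can i (?:find|get) (?:the |some )?(\w+)\s*\?", re.IGNORECASE),
    re.compile(r"where is (?:the )?(\w+)\s*\?", re.IGNORECASE),
]

def extract_resource_query(message):
    """Return the queried resource name, or None if the message is not a
    recognised resource question."""
    for pattern in PATTERNS:
        m = pattern.search(message)
        if m and m.group(1).lower() in RESOURCES:
            return m.group(1).lower()
    return None

print(extract_resource_query("Where can I find wood?"))  # -> wood
print(extract_resource_query("Hello everyone"))          # -> None
```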
1
1
0
0
0
0
I've written a program that takes Twitter data containing tweets and labels (0 for neutral sentiment and 1 for negative sentiment) and predicts which category a tweet belongs to. The program works well on the training and test set, but I'm having a problem applying the prediction function to a single string, and I'm not sure how to do that. I have tried cleaning the string the way I cleaned the dataset before calling the predict function, but the values returned are in the wrong shape.

```python
import numpy as np
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
import re

ps = PorterStemmer()

# Loading dataset
dataset = pd.read_csv('tweet.csv')

# List to hold cleaned tweets
clean_tweet = []

# Cleaning tweets
for i in range(len(dataset)):
    tweet = re.sub('[^a-zA-Z]', ' ', dataset['tweet'][i])
    tweet = re.sub('@[\w]*', ' ', dataset['tweet'][i])
    tweet = tweet.lower()
    tweet = tweet.split()
    tweet = [ps.stem(token) for token in tweet if not token in set(stopwords.words('english'))]
    tweet = ' '.join(tweet)
    clean_tweet.append(tweet)

from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(max_features=3000)
X = cv.fit_transform(clean_tweet)
X = X.toarray()
y = dataset.iloc[:, 1].values

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)

from sklearn.naive_bayes import GaussianNB
n_b = GaussianNB()
n_b.fit(X_train, y_train)
y_pred = n_b.predict(X_test)

some_tweet = "this is a mean tweet"  # How to apply the predict function to this string?
```
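For the single-string case, one plausible shape (an assumption, not the question's code) is to factor the cleaning into a helper and then reuse the already-fitted vectorizer with transform, not fit_transform, so the feature columns line up with what the classifier was trained on:

```python
import re

def clean_for_model(text, stem=lambda w: w, stop_words=frozenset()):
    """Apply the same cleaning used on the training tweets to one raw string."""
    tokens = re.sub('[^a-zA-Z]', ' ', text).lower().split()
    return ' '.join(stem(tok) for tok in tokens if tok not in stop_words)

# Reuse the fitted objects from the training code, e.g.:
#   features = cv.transform([clean_for_model(some_tweet, ps.stem, stop_set)]).toarray()
#   label = n_b.predict(features)[0]
print(clean_for_model("This is a MEAN tweet!!", stop_words={"this", "is", "a"}))
# -> mean tweet
```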
1
1
0
1
0
0
I am working on an NLP task that requires using a corpus of the language called Yoruba. Yoruba is a language that has diacritics in its alphabet. If I read any text/corpus into the Python environment, some of the upper diacritics get displaced/shifted, especially for the letters ẹ and ọ: for ẹ with diacritics at the top, they get displaced, giving ẹ́ or ẹ̀, and the same thing occurs for ọ (ọ́, ọ̀).

```python
def readCorpus(directory="news_sites.txt"):
    with open(directory, 'r', encoding="utf8", errors='replace') as doc:
        data = doc.readlines()
    return data
```

The expected result is having the diacritics rightly placed at the top (I am surprised Stack Overflow was able to fix the diacritics). Later, the diacritics that have been displaced are seen as punctuation and therefore removed (by my NLP processing function), thus affecting the whole task.
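Displaced marks like these are usually combining characters sitting after their base letter; one thing worth trying (my assumption, not something from the question) is normalising the text to NFC so that base letters and their diacritics are composed wherever a precomposed codepoint exists, and making sure the punctuation-stripping step skips combining marks:

```python
import unicodedata

def normalize_corpus_line(line):
    # NFC composes a base letter with a following combining diacritic where a
    # single precomposed character exists, e.g. 'e' + U+0301 -> 'é'.
    return unicodedata.normalize("NFC", line)

decomposed = "e\u0301"  # 'e' followed by a combining acute accent
print(normalize_corpus_line(decomposed) == "\u00e9")  # -> True
```

Note that for ẹ́ (dot below plus acute) no single precomposed codepoint exists, so the combining acute remains even after NFC; the punctuation filter then still needs to leave characters in the Mn (combining mark) Unicode category alone.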
1
1
0
0
0
0
I'm learning the simplest neural networks using Dense layers in Keras. I'm trying to implement face recognition on a relatively small dataset (~250 images in total, with 50 images per class). I've downloaded the images from Google Images and resized them to 100 * 100 PNG files. Then I read those files into a numpy array and also created a one-hot label array for training my model. Here is my code for processing the training data:

```python
X, Y = [], []
feature_map = {
    'Alia Bhatt': 0,
    'Dipika Padukon': 1,
    'Shahrukh khan': 2,
    'amitabh bachchan': 3,
    'ayushmann khurrana': 4
}

for each_dir in os.listdir('.'):
    if os.path.isdir(each_dir):
        for each_file in os.listdir(each_dir):
            X.append(cv2.imread(os.path.join(each_dir, each_file), -1).reshape(1, -1))
            Y.append(feature_map[os.path.basename(each_file).split('-')[0]])

X = np.squeeze(X)
X = X / 255.0  # normalize the training data
Y = np.array(Y)
Y = np.eye(5)[Y]
print(X.shape)
print(Y.shape)
```

This prints (244, 40000) and (244, 5). Here is my model:

```python
model = Sequential()
model.add(Dense(8000, input_dim=40000, activation='relu'))
model.add(Dense(1200, activation='relu'))
model.add(Dense(700, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(5, activation='softmax'))

# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model
model.fit(X, Y, epochs=25, batch_size=15)
```

When I train the model, it gets stuck at an accuracy of 0.2172, which is almost the same as random predictions (0.20). I've also tried training with grayscale images but am still not getting the expected accuracy. I've also tried different network architectures, changing the number of hidden layers and the number of neurons per hidden layer. What am I missing here? Is my dataset too small, or am I missing some other technical detail? For more details of the code, here is my notebook: https://colab.research.google.com/drive/1hSVirKYO5NFH3VWtXfr1h6y0sxHjI5Ey
1
1
0
1
0
0
I am following a tutorial for TensorFlow and I am having problems during the model prediction phase. The final bit of code is:

```python
import cv2
import tensorflow as tf

CATEGORIES = ["bishopB", "bishopW", "empty", "kingB", "kingW",
              "knightB", "knightW", "pawnB", "pawnW",
              "queenB", "queenW", "rookB", "rookW"]

def prepare(file):
    IMG_SIZE = 50
    img_array = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)

model = tf.keras.models.load_model("CNN.model")
image = "test.jpg"  # your image path
prediction = model.predict([image])
prediction = list(prediction[0])
print(CATEGORIES[prediction.index(max(prediction))])
```

This should allow me to get a prediction based upon a file input. However, when I run it, I get the following error:

```
  prediction = model.predict([image])
  File "/Users/stuff/Library/Python/2.7/lib/python/site-packages/tensorflow/python/keras/engine/training.py", line 1060, in predict
    x, check_steps=True, steps_name='steps', steps=steps)
  File "/Users/stuff/Library/Python/2.7/lib/python/site-packages/tensorflow/python/keras/engine/training.py", line 2651, in _standardize_user_data
    exception_prefix='input')
  File "/Users/stuff/Library/Python/2.7/lib/python/site-packages/tensorflow/python/keras/engine/training_utils.py", line 334, in standardize_input_data
    standardize_single_array(x, shape) for (x, shape) in zip(data, shapes)
  File "/Users/stuff/Library/Python/2.7/lib/python/site-packages/tensorflow/python/keras/engine/training_utils.py", line 265, in standardize_single_array
    if (x.shape is not None and len(x.shape) == 1 and
AttributeError: 'str' object has no attribute 'shape'
```

Can anybody please help me understand what I have done wrong here? I don't believe it is even getting to the point where it processes my test image.
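The traceback shows that the raw path string reached predict. A sketch of the likely intent (an assumption based on the question's own prepare helper): the file must first be turned into a numeric array of shape (batch, height, width, channels), and that array, not the string, goes to predict:

```python
import numpy as np

def prepare_array(img_array, img_size=50):
    """Stand-in for the question's prepare(): reshape an already-loaded image
    array to the (batch, height, width, channels) shape Keras expects."""
    return np.asarray(img_array, dtype=np.float32).reshape(-1, img_size, img_size, 1)

fake_img = np.zeros((50, 50))  # hypothetical grayscale image
batch = prepare_array(fake_img)
print(batch.shape)  # -> (1, 50, 50, 1)
# prediction = model.predict(batch)
# i.e. model.predict(prepare(image)) rather than model.predict([image])
```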
1
1
0
0
0
0
I am following a tutorial on TensorFlow image classification. My use case differs slightly from the tutorial: it uses chess pieces, whereas I am using traffic lights and want to detect whether a light is red, green or amber. I am finding that the results of my tests are poor, and I wonder if it has to do with the cv2.IMREAD_GRAYSCALE I see in the CreateData section of the tutorial. Colour obviously matters for my classifier, so I wonder if the tutorial's conversion to grayscale explains my lack of accurate results. I therefore changed all references of cv2.IMREAD_GRAYSCALE to cv2.IMREAD_COLOR, reran the CreateData routines, then tried to run the neural-network creation program, but it now fails with this error:

```
  File "CreateNeuralNetwork.py", line 54, in <module>
    history = model.fit(X, y, batch_size=32, epochs=40, validation_split=0.1)
  File "/Users/stuff/Library/Python/2.7/lib/python/site-packages/tensorflow/python/keras/engine/training.py", line 709, in fit
    shuffle=shuffle)
  File "/Users/stuff/Library/Python/2.7/lib/python/site-packages/tensorflow/python/keras/engine/training.py", line 2688, in _standardize_user_data
    training_utils.check_array_lengths(x, y, sample_weights)
  File "/Users/stuff/Library/Python/2.7/lib/python/site-packages/tensorflow/python/keras/engine/training_utils.py", line 483, in check_array_lengths
    'and ' + str(list(set_y)[0]) + ' target samples.')
ValueError: Input arrays should have the same number of samples as target arrays. Found 195 input samples and 65 target samples.
```

I am guessing this has changed the size/complexity of my network, and thus something is now wrong in the network creation. Can anybody help me track down where that would be? (I have not changed any part of it from the blog post I linked to above.) I bet changes are needed in this bit:

```python
# normalizing data (a pixel goes from 0 to 255)
X = X / 255.0

# Building the model
model = Sequential()

# 3 convolutional layers
model.add(Conv2D(32, (3, 3), input_shape=X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Dropout(0.25))
```

Given that a pixel in grayscale goes from 0 to 255, while a colour pixel carries more than that and is more likely an RGB vector, I am not sure where to go or what to change. I may be way off track; thoughts would be appreciated.

Additionally, when training the model with the line:

```python
history = model.fit(X, y, batch_size=32, epochs=40, validation_split=0.1)
```

it seems epochs is how many times to train the model. Is there an advantage to doing this 400 times over 40? Will these other parameters be of importance? How will I know if I have 'overtrained' the model? What is the tipping point?
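One plausible reading of the error, offered purely as an assumption: 195 = 65 × 3, which suggests the three colour channels ended up stacked as extra samples instead of as a channel axis when X was built. Rearranging the array so the channels form the last axis keeps the sample count aligned with y:

```python
import numpy as np

# hypothetical sizes: 65 colour images of 50x50, channels wrongly stacked as samples
wrong = np.zeros((65 * 3, 50, 50))

# group each image's three channel planes back together, then move channels last
fixed = wrong.reshape(-1, 3, 50, 50).transpose(0, 2, 3, 1)
print(fixed.shape)  # -> (65, 50, 50, 3)
```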
1
1
0
0
0
0
I know that NLP categorization is when we classify a whole text as (Health, Sports, Social, Business, etc.). For example:

(LONDON) -- Rafael Nadal offered a pointed criticism of the All-England Club's unique seeding rules on Saturday, two days before the start of Wimbledon. "I respect the Wimbledon rules," Nadal told reporters, "...If I believe that is fair or not, that's another story. I really personally believe [it] is not." Wimbledon uses a special formula to develop the seedings for the tournament, which sometimes departs from the standard ATP rankings. The formula gives extra weight to a player's ATP record on grass courts, elevating or penalizing players who play less often or have less success on grass. This year, the Wimbledon rankings bumped Nadal down to the tournament's third seed, with Roger Federer hopping above him. That would require Nadal to beat both players seeded above him to win the title. It also sets up the possibility of a matchup with Nick Kyrgios in the second round. Kyrgios has defeated Nadal at this tournament before. "The system is the way it is," Federer said at his own press conference. "At the end of the day, if you want to win the tournament, you got to go through all the players that are in front of you." Copyright © 2019, ABC Radio. All rights reserved.

This would be classified as a "Sports" text. But that is not what I want: I want to identify individual words or multi-word spans in the text. In the text above, for instance, I would want to identify the players — Rafael Nadal, Roger Federer and Nick Kyrgios. What is this method called, and are there any Python libraries specifically for it?
1
1
0
1
0
0
I am following this tutorial on image classification using TensorFlow, and I need a bit of further explanation on certain parts. The first question: am I right in saying that the first pickle, X, contains my image data, and the pickle y contains the class names for my data? How do the references in X tie up to the references in y? My main question concerns this statement from the article:

In line 37, modify the parameter of Dense() to the number of classes you have. This is the number of possible outputs of the neural network.

If I have 3 classes, should I change every Dense() to Dense(3)? That is, does it mean changing all references of:

```python
model.add(Dense(x))
```

model.add(Dense(x)) is written in 3 places in this code. Am I to change just the last one? What does each one do? In conclusion, for 3 classes, is the following code correct for the final layer?

```python
# The output layer with 3 neurons, for 3 classes
model.add(Dense(3))
model.add(Activation("softmax"))
```
1
1
0
0
0
0
I have a dataset consisting of faculty IDs and student feedback regarding each faculty member. There are multiple comments for each faculty member, so the comments for each one are stored as a list. I want to apply gensim summarization to the "comments" column of the dataset to generate a summary of faculty performance according to the student feedback. As a trial, I tried to summarize the feedback corresponding to the first faculty ID. There are 8 distinct comments (sentences) in that particular feedback, yet gensim throws an error: ValueError: input must have more than one sentence.

```
df_test.head()

   csf_id                                           comments
0       9   [' good subject knowledge.', ' he has good kn...
1      10   [' good knowledge of subject. ', ' good subjec...
2      11   [' good at clearing the concepts interactive w...
3      12   [' clears concepts very nicely interactive wit...
4      13   [' good teaching ability.', ' subject knowledg...
```

```python
from gensim.summarization import summarize

text = df_test["comments"][0]
print("Text")
print(text)
print("Summary")
print(summarize(text))
```

```
ValueError: input must have more than one sentence
```

What changes should I make so that the summarizer reads all the sentences and summarizes them?
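gensim's summarize expects one string containing several sentences, while each cell of the comments column holds a list of strings; joining the list into one string first is one plausible fix (a sketch, not code from the question):

```python
comments = [' good subject knowledge.', ' he has good knowledge.']  # hypothetical row
text = " ".join(c.strip() for c in comments)
print(text)  # -> good subject knowledge. he has good knowledge.
# summary = summarize(text)  # the input is now one multi-sentence string
```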
1
1
0
0
0
0
I guess I'm trying to navigate spaCy's parse tree in a more blunt way than is provided. For instance, if I have sentences like "He was a genius" or "The dog was green," I want to be able to save the objects to variables ("a genius" and "green"). token.children provides the IMMEDIATE syntactic dependents, so, for the first example, the children of "was" are "he" and "genius," and then "a" is a child of "genius." This isn't so helpful if I just want the entire constituent "a genius." I'm not sure how to reconstruct it from token.children, or whether there's a better way. I can figure out how to match "is" and "was" using token.text (part of what I'm trying to do), but I can't figure out how to return the whole constituent "a genius" using the information provided about children.

```python
import spacy

nlp = spacy.load('en_core_web_sm')
sent = nlp("He was a genius.")
for token in sent:
    print(token.text, token.tag_, token.dep_, [child for child in token.children])
```

This is the output:

```
He PRP nsubj []
was VBD ROOT [He, genius, .]
a DT det []
genius NN attr [a]
. . punct []
```
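spaCy exposes exactly this as Token.subtree, which yields a token and all of its descendants in document order, so the constituent can be rebuilt with something like " ".join(t.text for t in token.subtree). The traversal behind it is an ordinary tree walk; here is a self-contained sketch on a toy parse, without spaCy, where tokens are (position, text) pairs:

```python
def subtree_text(token, children):
    """Collect a token plus all its descendants, left to right, mimicking
    spaCy's Token.subtree. `children` maps a token to its child tokens."""
    nodes = [token]
    stack = [token]
    while stack:
        node = stack.pop()
        for child in children.get(node, []):
            nodes.append(child)
            stack.append(child)
    # sort by the position stored in each (position, text) pair
    return " ".join(text for _, text in sorted(nodes))

# toy parse of "He was a genius": "a" is the only child of "genius"
children = {(3, "genius"): [(2, "a")]}
print(subtree_text((3, "genius"), children))  # -> a genius
```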
1
1
0
0
0
0
I am trying to extract human names from text. Does anyone have a method they would recommend? This is what I tried (code is below): I am using nltk to find everything marked as a person and then generating a list of all the NNP parts of that person. I skip persons with only one NNP, which avoids grabbing a lone surname. I am getting decent results, but was wondering if there are better ways to go about solving this problem. Code:

```python
import nltk
from nameparser.parser import HumanName

def get_human_names(text):
    tokens = nltk.tokenize.word_tokenize(text)
    pos = nltk.pos_tag(tokens)
    sentt = nltk.ne_chunk(pos, binary=False)
    person_list = []
    person = []
    name = ""
    for subtree in sentt.subtrees(filter=lambda t: t.node == 'PERSON'):  # note: .node is .label() in newer NLTK versions
        for leaf in subtree.leaves():
            person.append(leaf[0])
        if len(person) > 1:  # avoid grabbing lone surnames
            for part in person:
                name += part + ' '
            if name[:-1] not in person_list:
                person_list.append(name[:-1])
            name = ''
        person = []
    return person_list

text = """
Some economists have responded positively to Bitcoin, including Francois R. Velde, senior economist of the Federal Reserve in Chicago who described it as "an elegant solution to the problem of creating a digital currency." In November 2013 Richard Branson announced that Virgin Galactic would accept Bitcoin as payment, saying that he had invested in Bitcoin and found it "fascinating how a whole new global currency has been created", encouraging others to also invest in Bitcoin. Other economists commenting on Bitcoin have been critical. Economist Paul Krugman has suggested that the structure of the currency incentivizes hoarding and that its value derives from the expectation that others will accept it as payment. Economist Larry Summers has expressed a "wait and see" attitude when it comes to Bitcoin. Nick Colas, a market strategist for ConvergEx Group, has remarked on the effect of increasing use of Bitcoin and its restricted supply, noting, "When incremental adoption meets relatively fixed supply, it should be no surprise that prices go up. And that's exactly what is happening to BTC prices."
"""

names = get_human_names(text)
print "LAST, FIRST"  # Python 2 syntax, as in the original
for name in names:
    last_first = HumanName(name).last + ', ' + HumanName(name).first
    print last_first
```

Output:

```
LAST, FIRST
Velde, Francois
Branson, Richard
Galactic, Virgin
Krugman, Paul
Summers, Larry
Colas, Nick
```

Apart from Virgin Galactic, this is all valid output. Of course, knowing that Virgin Galactic isn't a human name in the context of this article is the hard (maybe impossible) part.
1
1
0
0
0
0
My intention was to train a custom POS tagger and dependency parser in spaCy for the Swedish language. I followed the instructions on https://spacy.io/usage/training and trained the models on the Swedish-Talbanken treebank conllu files. These steps went well and I ended up with a custom model. Then I loaded the model and tried a little example:

```python
nlp = spacy.load(name=os.path.join(spacy_path, 'models/model-best'))
doc = nlp(u'Jag heter Alex Nilsson. Hon heter Lina')
# My name is Alex Nilsson. Her name is Lina

for token in doc:
    print(token.text, token.pos_, token.dep_)

# OUTPUT:
# Jag PRON nsubj
# heter VERB ROOT
# Alex PROPN obj
# Nilsson PROPN flat:name
# . PUNCT punct
# Hon PRON nsubj
# heter VERB parataxis
# Lina PROPN obj
```

Both the POS tagger and the dependency parser seem to work. What did not work are sentence segmentation and noun chunks:

```python
for sent in doc.sents:
    print(sent.text)

# OUTPUT:
# Jag heter Alex. Hon heter Lina

for chunk in doc.noun_chunks:
    print(chunk.text, chunk.root.text, chunk.root.dep_, chunk.root.head.text)

# OUTPUT:
```

So, no splitting of the sentences and no output at all for noun chunks. As far as I understand, spaCy uses the dependency parser for both functionalities, but as shown above the dependency parser seems to work just fine. Is there something more that is required for these two to work? Maybe I am missing something obvious? I am thankful for any help!
1
1
0
0
0
0
The function below (which I found in this blog post by Chris van den Berg) extracts all n-grams of 3 contiguous characters from a string:

```python
import re

def ngrams(string, n=3):
    string = re.sub(r'[,-./]|\sBD', r'', string)
    ngrams = zip(*[string[i:] for i in range(n)])
    return [''.join(ngram) for ngram in ngrams]
```

As an example, passing the string "Stack Overflow" to the function defined above returns the following list:

```python
print(ngrams('Stack Overflow', n=3))
# ['Sta', 'tac', 'ack', 'ck ', 'k O', ' Ov', 'Ove', 'ver', 'erf', 'rfl', 'flo', 'low']
```

My goal is to modify this function so that it includes both the n-grams of 3 contiguous characters and the words. That is, for the same example shown above, I would like the output to be the following:

```python
['Stack', 'Overflow', 'Sta', 'tac', 'ack', 'ck ', 'k O', ' Ov', 'Ove', 'ver', 'erf', 'rfl', 'flo', 'low']
```
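One way to get both, sketched under the assumption that the words should come first and the character trigrams after:

```python
import re

def ngrams_and_words(string, n=3):
    # same cleaning as the original function
    string = re.sub(r'[,-./]|\sBD', r'', string)
    words = string.split()
    grams = [''.join(g) for g in zip(*[string[i:] for i in range(n)])]
    return words + grams

print(ngrams_and_words('Stack Overflow', n=3))
# -> ['Stack', 'Overflow', 'Sta', 'tac', 'ack', 'ck ', 'k O', ' Ov', 'Ove', 'ver', 'erf', 'rfl', 'flo', 'low']
```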
1
1
0
0
0
0
I have a set of doctors' opinions about patients that may or may not have certain diseases. Let's say a doctor's opinion about patient A is "The patient does not show sign of ms" or "No focal or epileptiform features were noted", and for patient B it is "the patient show signs of ms" or "complex partial seizures". I want to categorize B as ill but not A. Is this possible using the NLTK library? I tried to extract the tags of the sentence using the following code, but I don't know where to go from here!

```python
text = 'No focal or epileptiform features were noted'
tokens = nltk.word_tokenize(text)
tagged = nltk.pos_tag(tokens)
print(tagged)
# [('No', 'DT'), ('focal', 'JJ'), ('or', 'CC'), ('epileptiform', 'JJ'),
#  ('features', 'NNS'), ('were', 'VBD'), ('noted', 'VBN')]
```
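Problems like this are usually framed as negation detection; clinical NLP work such as the NegEx algorithm formalises it. As a deliberately crude sketch (cue words chosen by me, nowhere near clinical grade), a sentence can be flagged as negated when a negation cue appears among its tokens:

```python
NEGATION_CUES = {"no", "not", "without", "denies"}

def is_negated(sentence):
    tokens = sentence.lower().replace(".", " ").split()
    return any(tok in NEGATION_CUES for tok in tokens)

print(is_negated("The patient does not show sign of ms"))          # -> True
print(is_negated("No focal or epileptiform features were noted"))  # -> True
print(is_negated("the patient show signs of ms"))                  # -> False
```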
1
1
0
1
0
0
I have to calculate the readability score of a text document. Is there a package or built-in function for this? Everything on the internet seems too complex. Can anyone help me with that, or with how to write my own function? I have done pre-processing of the text and calculated the tf-idf of the document, but I want to find the readability score or fog index of the document. I tried using code available on another platform, but it didn't work:

```python
def text_process(mess):
    nopunc = [char for char in mess if char not in string.punctuation]
    nopunc = ''.join(nopunc)
    text = [word for word in tokens if word not in stops]
    text = [wl.lemmatize(word) for word in mess]
    return [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]

from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd

vect = TfidfVectorizer()
tfidf_matrix = vect.fit_transform(df["comments"].head(10000))
df1 = pd.DataFrame(tfidf_matrix.toarray(), columns=vect.get_feature_names())
print(df1)
```

I don't know how to get the desired readability scores. I would appreciate it if someone would help me.
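Packages such as textstat wrap the standard readability formulas, but the Gunning fog index is also simple enough to compute directly: 0.4 * (words per sentence + 100 * complex_words / words), where a "complex" word has three or more syllables. A self-contained sketch with a crude vowel-group syllable counter (an approximation of my own, not a library implementation):

```python
import re

def syllables(word):
    # crude estimate: count groups of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

print(round(gunning_fog("The cat sat on the mat. It was a sunny day."), 2))  # -> 2.2
```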
1
1
0
1
0
0
So I have fine-tuned a ResNet50 model with the following architecture:

```python
model = models.Sequential()
model.add(resnet)
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(layers.Dense(2048, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(4096, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(736, activation='softmax'))  # Output layer
```

So now I have a saved model (.h5) which I want to use as input into another model, but I don't want the last layer. I would normally do it like this with a base ResNet50 model:

```python
def base_model():
    resnet = resnet50.ResNet50(weights="imagenet", include_top=False)
    x = resnet.output
    x = GlobalAveragePooling2D()(x)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    x = Lambda(lambda x_: K.l2_normalize(x, axis=1))(x)
    return Model(inputs=resnet.input, outputs=x)
```

but that does not work for the saved model, as it gives me an error. I am trying it like this right now, but it still does not work:

```python
def base_model():
    resnet = load_model("../Models/fine_tuned_model/fine_tuned_resnet50.h5")
    x = resnet.layers.pop()
    # resnet = resnet50.ResNet50(weights="imagenet", include_top=False)
    # x = resnet.output
    # x = GlobalAveragePooling2D()(x)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    x = Lambda(lambda x_: K.l2_normalize(x, axis=1))(x)
    return Model(inputs=resnet.input, outputs=x)

enhanced_resent = base_model()
```

This is the error that it gives me:

```
Layer dense_3 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.core.Dense'>. Full input: [<keras.layers.core.Dense object at 0x000001C61E68E2E8>]. All inputs to the layer should be tensors.
```

I don't know if I can do this or not.
1
1
0
0
0
0
I wanted to make texts readable for BERT-embeddings by inserting the [CLS] and [SEP] tokens. I tokenized my text so I have a list with every word and punctuation mark as element, however, I don't know how exactly I can add elements after every occurrence of '.' in my text. Does anyone know what I can do? Or do you know if there is something that prepares BERT-readable-texts?
1
1
0
0
0
0
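For the BERT-markers question above: since the text is already tokenized into a list, one dependency-free option is a single pass that splices the markers in after every sentence-final period. A minimal sketch, assuming '.' is the only sentence terminator:

```python
def add_bert_markers(tokens):
    """Wrap a token list with [CLS] ... [SEP], splitting on '.' tokens."""
    if not tokens:
        return []
    out = ['[CLS]']
    for tok in tokens:
        out.append(tok)
        if tok == '.':
            out.extend(['[SEP]', '[CLS]'])
    if out[-1] == '[CLS]':      # text ended with '.', drop the dangling [CLS]
        out.pop()
    if out[-1] != '[SEP]':      # text ended mid-sentence, close it anyway
        out.append('[SEP]')
    return out
```

In practice the tokenizers that ship with BERT implementations usually add these markers for you, so this is only needed when preparing raw token lists manually.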
I am currently performing data cleaning on this spam text message dataset. There are many ellipses in these text message, for example: mystr = 'Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...' As you can see, there are ellipses with 2 periods (..) or 3 periods (...) My initial solution was to write a function spacy_tokenizer that tokenizes my strings, removes stopwords as well as punctuations: import spacy nlp = spacy.load('en_core_web_sm') from nltk.corpus import stopwords stopWords = set(stopwords.words('english')) print(stopWords) import string punctuations = string.punctuation def spacy_tokenizer(sentence): # Create token object mytokens = nlp(sentence) # Case normalization and Lemmatization mytokens = [ word.lemma_.lower() if word.lemma_ != "-PRON-" else word.lower_ for word in mytokens ] # Remove stop words and punctuations mytokens = [ word.strip(".") for word in mytokens if word not in stopWords and word not in punctuations ] # return preprocessed list of tokens return mytokens However, this function doesn't get rid of the ellipses IN: print(spacy_tokenizer(mystr)) OUT: ['go', 'jurong', 'point', 'crazy', '', 'available', 'bugis', 'n', 'great', 'world', 'la', 'e', 'buffet', '', 'cine', 'get', 'amore', 'wat', ''] As you can see, there are tokens with len(token) = 0 that appear as '' My workaround is to add another list comprehension to spacy_tokenizer that looks something like this: [ word for word in mytokens if len(word) > 0] def spacy_tokenizer(sentence): # Create token object mytokens = nlp(sentence) # Case normalization and Lemmatization mytokens = [ word.lemma_.lower() if word.lemma_ != "-PRON-" else word.lower_ for word in mytokens ] # Remove stop words and punctuations mytokens = [ word.strip(".") for word in mytokens if word not in stopWords and word not in punctuations ] # remove empty strings mytokens = [ word for word in mytokens if len(word) > 0] return mytokens IN: 
print(spacy_tokenizer(mystr)) OUT: ['go', 'jurong', 'point', 'crazy', 'available', 'bugis', 'n', 'great', 'world', 'la', 'e', 'buffet', 'cine', 'get', 'amore', 'wat'] So the new function gives the expected result, but it's not the most elegant solution, I think. Does anyone have any alternative ideas?
1
1
0
0
0
0
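An alternative for the ellipsis question above is to normalize the text before it ever reaches the tokenizer, so no empty tokens are produced in the first place. A one-function sketch with the standard library (this simply replaces runs of two or more periods with a space, which changes sentence boundaries slightly, so it may or may not suit the downstream pipeline):

```python
import re

def squash_ellipses(text):
    # Collapse runs of two or more periods into a single space so the
    # tokenizer never produces '..' / '...' tokens that strip to ''.
    return re.sub(r'\.{2,}', ' ', text)

mystr = ('Go until jurong point, crazy.. Available only in bugis n great '
         'world la e buffet... Cine there got amore wat...')
cleaned = squash_ellipses(mystr)
```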
I have a dataset in a CSV format that looks like this: 1,dont like the natives 2,Keep it local always 2,Karibu kenya The label 1 indicates a hate speech while 2 indicates a positive. Here is my code: import numpy as np import csv import tensorflow as tf from tensorflow.keras.layers import ( Masking, LSTM, Dense, TimeDistributed, Activation) def tokenize(text): """ Change text string into number and make sure they resulting np.array is of the same size """ Tokenizer = tf.keras.preprocessing.text.Tokenizer t = Tokenizer() t.fit_on_texts(text) tokenized_text = t.texts_to_sequences(text) tokenized_text = [item for sublist in tokenized_text for item in sublist] return np.resize(np.array(tokenized_text), (1, 30)) x_train = [] y_train = [] # Reading data from CSV with open('data.csv') as csv_file: csv_reader = csv.reader(csv_file, delimiter=',') line_count = 0 for row in csv_reader: line_count = line_count+1 if line_count == 1: continue # Tokenize input data tokenized = tokenize(row[1]) x_train.append(tokenized) y_train.append(row[0]) x_train = np.array(x_train).astype('float32') y_train = np.array(y_train).astype('float32') x_test = x_train[:3] y_test = y_train[:3] input_shape = x_train[0].shape output_shape = y_train.shape batch_size = len(y_train) model = tf.keras.models.Sequential() model.add(Masking(mask_value=-1, input_shape=input_shape)) model.add(LSTM(batch_size, dropout=0.2)) model.add(Dense(input_dim=batch_size, units=output_shape[-1])) model.add(Activation('softmax')) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=100, batch_size=batch_size) model.evaluate(x_test, y_test) for text in ["Karibu kenya", ]: tokenized_text = tokenize(text) prediction = model.predict(tokenized_text, batch_size=1, verbose=1) # Results print("Text: {}: Prediction: {}".format(text, prediction)) The rest of the code seems to be running well but I'm not able to run the model.predict(tokenized_text, 
batch_size=1, verbose=1) I get the following error instead: Epoch 97/100 19/19 [==============================] - 0s 196us/sample - loss: 0.8753 - accuracy: 0.5789 Epoch 98/100 19/19 [==============================] - 0s 246us/sample - loss: 0.8525 - accuracy: 0.6842 Epoch 99/100 19/19 [==============================] - 0s 169us/sample - loss: 0.7961 - accuracy: 0.6842 Epoch 100/100 19/19 [==============================] - 0s 191us/sample - loss: 0.7745 - accuracy: 0.7368 3/3 [==============================] - 0s 115ms/sample - loss: 0.5518 - accuracy: 1.0000 Traceback (most recent call last): File "start.py", line 65, in <module> prediction = model.predict(tokenized_text, batch_size=1, verbose=1) File "/home/felix/Projects/keras/.env/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 821, in predict use_multiprocessing=use_multiprocessing) File "/home/felix/Projects/keras/.env/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 705, in predict x, check_steps=True, steps_name='steps', steps=steps) File "/home/felix/Projects/keras/.env/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 2428, in _standardize_user_data exception_prefix='input') File "/home/felix/Projects/keras/.env/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_utils.py", line 512, in standardize_input_data 'with shape ' + str(data_shape)) ValueError: Error when checking input: expected masking_input to have 3 dimensions, but got array with shape (1, 30) Not sure what I'm doing wrong. I have tried to change the data shape but it's still not working. Thanks in advance.
1
1
0
0
0
0
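The error in the question above ("expected masking_input to have 3 dimensions, but got array with shape (1, 30)") points at the input shape, not the model: Keras `Masking`/`LSTM` layers expect `(batch, timesteps, features)`, so each `(1, 30)` tokenized sample needs a trailing feature axis. A numpy-only sketch of the reshape (the model itself is omitted, and `np.arange` stands in for the tokenizer output):

```python
import numpy as np

tokenized = np.resize(np.arange(30), (1, 30))  # shape of what tokenize() returns
sample_3d = tokenized.reshape(1, 30, 1)        # (batch, timesteps, features)
```

The same reshape would have to be applied consistently to `x_train` and to the sample passed to `model.predict`, with `input_shape=(30, 1)` on the `Masking` layer.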
I am trying to generate a matrix of pairwise distances from a list of strings (newspaper articles). WMD distance is not implemented in scipy.spatial.distance.pdist, so I hook this implementation: https://github.com/src-d/wmd-relax to spaCy. However, I cannot figure out how to iterate over my list to generate the distance matrix.
1
1
0
0
0
0
Is it possible to train a Kmeans ML model using a multidimensional feature matrix? I'm using sklearn and KmeansClass for clustering, Word2Vec for extracting the bag of words, and TreeTagger for the text pre-processing from gensim.models import Word2Vec from sklearn.cluster import KMeans lemmatized_words = [["be", "information", "contract", "residential"], ["can", "send", "package", "recovery"] w2v_model = Word2Vec.load(wiki_path_model) bag_of_words = [w2v_model.wv(phrase) for phrase in lemmatized_words] # # # bag_of_words = [array([[-0.08796783, 0.08373307, 0.04610106, ..., 0.41964772, # -0.1733183 , 0.09438939], # [ 0.11526374, 0.09092105, -0.2086806 , ..., 0.5205145 , # -0.11455593, -0.05190944], # [-0.05140354, 0.09938619, 0.07485678, ..., 0.73840886, # -0.17298238, 0.09994634], # ..., # [-0.01144416, -0.17129216, -0.04012141, ..., 0.05281362, # -0.23109615, 0.02297313], # [-0.08355679, 0.24799444, 0.04348441, ..., 0.27940673, # -0.14400786, -0.09187686], # [ 0.11022831, 0.11035886, 0.19900796, ..., 0.12891224, # -0.09379898, 0.10538024]],dtype=float32) # array([[ 1.73330009e-01, 1.26429915e-01, -3.47578406e-01, ..., # 8.09064806e-02, -3.02738965e-01, -1.61911864e-02], # [ 2.47227158e-02, -6.48087710e-02, -1.97364464e-01, ..., # 1.35158226e-01, 1.72204189e-02, -1.14456110e-01], # [ 8.07424933e-02, 2.69261692e-02, -4.22120057e-02, ..., # 1.01349883e-01, -1.94084793e-01, -2.64464412e-04], # ..., # [ 1.36009008e-01, 1.50609210e-01, -2.59797573e-01, ..., # 1.84113771e-01, -6.85161874e-02, -1.04138054e-01], # [ 4.83367145e-02, 1.17820159e-01, -2.43335906e-02, ..., # 1.33836940e-01, -1.55749675e-02, -1.18981823e-01], # [-6.68482706e-02, 4.57039356e-01, -2.20365867e-01, ..., # 2.95841128e-01, -1.55933857e-01, 7.39804050e-03]], dtype=float32) # ] # # model = KMeans(algorithm='auto0', max_iter=300, n_clusters=2) model.fit(bag_of_words) I expect that the Kmeans is trained, so I can store the model and use for predictions, but I receive this error message: ValueError: 
setting an array element with a sequence.
1
1
0
0
0
0
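The `ValueError: setting an array element with a sequence` in the KMeans question above comes from passing a ragged input: each phrase maps to a `(n_words, dim)` matrix, and phrases have different word counts, so the list cannot be stacked into one array. KMeans needs one fixed-length vector per document; a common fix is to average the word vectors. A numpy sketch where random matrices stand in for the Word2Vec output:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the per-phrase word-vector matrices: (n_words, dim), ragged.
bag_of_words = [rng.random((4, 8)), rng.random((6, 8))]

# Average over the word axis so every document becomes one dim-length vector,
# then stack into the (n_documents, dim) matrix KMeans expects.
X = np.vstack([doc.mean(axis=0) for doc in bag_of_words])
```

With `X` in this shape, `KMeans(n_clusters=2).fit(X)` trains without the error (the `algorithm` argument can be left at its default).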
My code references the one pasted here on Google's website: https://cloud.google.com/storage/docs/uploading-objects I am attempting to make a Python program that records microphone mono audio, creates a WAV file out of it, and then uploads it to GCS where it is then analyzed. The part where I am stuck is the uploading-to-GCS part. I don't know what's supposed to replace the placeholder, as I don't even know how to find that file path. I do, however, know what the bucket name is. It's "gcspeechstorage" (I made that). Also, the block of code that uploads a file to the bucket is very vague to me, and I realize now that Google's boilerplate code is not working for me. I am getting a "google.api_core.exceptions.NotFound: 404 requested entity was not found" error. If there is any way to get around this so I can upload a 1+ minute clip and have it analyzed, that would be great. My NLTK works fine. I defined the gcs_uri to equal os.path.join('gs://<gcspeechstorage>/<file_path_inside_bucket>') but I know that is only partially complete. I do not know how to complete that 2nd argument. I'm not even sure if the code is in the right order, to be honest.
import pyaudio import wave import pprint import argparse import datetime import io import json import os import nltk from nltk.sentiment.vader import SentimentIntensityAnalyzer from google.cloud import storage import sys from oauth2client.service_account import ServiceAccountCredentials CHUNK = 1024 FORMAT = pyaudio.paInt16 CHANNELS = 1 RATE = 44100 RECORD_SECONDS = 10 WAVE_OUTPUT_FILENAME = "output.wav" p = pyaudio.PyAudio() stream = p.open(format=FORMAT, channels=CHANNELS, rate=RATE, input=True, frames_per_buffer=CHUNK) print("* recording") frames = [] for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)): data = stream.read(CHUNK) frames.append(data) print("* done recording") stream.stop_stream() stream.close() p.terminate() wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb') wf.setnchannels(CHANNELS) wf.setsampwidth(p.get_sample_size(FORMAT)) wf.setframerate(RATE) wf.writeframes(b''.join(frames)) wf.close() os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'C:/Users/Dave/Desktop/mizu/Project Mizu-7e2ecd8c5804.json' bucket_name = "C:/Users/Dave/Desktop/mizu/output.wav" source_file_name = "gcspeechstorage" destination_blob_name = "output.wav" gcs_uri = "gs://gcspeechstorage/output.wav" def create_bucket(bucket_name): """Creates a new bucket.""" storage_client = storage.Client() bucket = storage_client.create_bucket(bucket_name) print('Bucket {} created'.format(bucket.name)) def upload_blob(bucket_name, source_file_name, destination_blob_name): """Uploads a file to the bucket.""" storage_client = storage.Client() bucket = storage_client.get_bucket(bucket_name) blob = bucket.blob(destination_blob_name) blob.upload_from_filename(source_file_name) print('File {} uploaded to {}.'.format( source_file_name, destination_blob_name)) # [START speech_transcribe_async_gcs] def transcribe_gcs(gcs_uri): """Asynchronously transcribes the audio file specified by the gcs_uri.""" from google.cloud import speech from google.cloud.speech import enums from google.cloud.speech import types client 
= speech.SpeechClient() audio = types.RecognitionAudio(uri=gcs_uri) config = types.RecognitionConfig( encoding= 'LINEAR16', sample_rate_hertz=44100, language_code='en-US') operation = client.long_running_recognize(config, audio) print('Waiting for operation to complete...') response = operation.result(timeout=90) # Each result is for a consecutive portion of the audio. Iterate through # them to get the transcripts for the entire audio file. for result in response.results: # The first alternative is the most likely one for this portion. print(u'Transcript: {}'.format(result.alternatives[0].transcript)) transcribedSpeechFile = open('speechToAnalyze.txt', 'a+') # this is where a text file is made with the transcribed speech transcribedSpeechFile.write(format(result.alternatives[0].transcript)) transcribedSpeechFile.close() print('Confidence: {}'.format(result.alternatives[0].confidence)) # [END speech_transcribe_async_gcs] if __name__ == '__main__': transcribe_gcs(gcs_uri) audio_rec = open('speechToAnalyze.txt', 'r') sid = SentimentIntensityAnalyzer() for sentence in audio_rec: ss = sid.polarity_scores(sentence) for k in ss: print('{0}: {1}, '.format(k, ss[k]), end='') print() Expected results: uploads WAV file to GCS, then retrieves it to transcribe, then analyzes the sentiment. Actual results: records audio, then crashes giving me the aforementioned 404 error. 
Error: Traceback (most recent call last): File "C:\Users\Dave\AppData\Roaming\Python\Python37\site-packages\google\api_core\grpc_helpers.py", line 57, in error_remapped_callable return callable_(*args, **kwargs) File "C:\Users\Dave\AppData\Roaming\Python\Python37\site-packages\grpc\_channel.py", line 565, in __call__ return _end_unary_response_blocking(state, call, False, None) File "C:\Users\Dave\AppData\Roaming\Python\Python37\site-packages\grpc\_channel.py", line 467, in _end_unary_response_blocking raise _Rendezvous(state, None, None, deadline) grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: status = StatusCode.NOT_FOUND details = "Requested entity was not found." debug_error_string = "{"created":"@1562714798.427000000","description":"Error received from peer ipv6:[2607:f8b0:4000:804::200a]:443","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"Requested entity was not found.","grpc_status":5}" > The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:/Users/Dave/Desktop/mizu/FrankensteinedFile.py", line 100, in <module> transcribe_gcs('C:/Users/Dave/Desktop/mizu/output.wav') File "C:/Users/Dave/Desktop/mizu/FrankensteinedFile.py", line 79, in transcribe_gcs operation = client.long_running_recognize(config, audio) File "C:\Users\Dave\AppData\Local\Programs\Python\Python37\lib\site-packages\google\cloud\speech_v1\gapic\speech_client.py", line 326, in long_running_recognize request, retry=retry, timeout=timeout, metadata=metadata File "C:\Users\Dave\AppData\Roaming\Python\Python37\site-packages\google\api_core\gapic_v1\method.py", line 143, in __call__ return wrapped_func(*args, **kwargs) File "C:\Users\Dave\AppData\Roaming\Python\Python37\site-packages\google\api_core\retry.py", line 273, in retry_wrapped_func on_error=on_error, File "C:\Users\Dave\AppData\Roaming\Python\Python37\site-packages\google\api_core\retry.py", line 182, in retry_target return target() File 
"C:\Users\Dave\AppData\Roaming\Python\Python37\site-packages\google\api_core\timeout.py", line 214, in func_with_timeout return func(*args, **kwargs) File "C:\Users\Dave\AppData\Roaming\Python\Python37\site-packages\google\api_core\grpc_helpers.py", line 59, in error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) File "<string>", line 3, in raise_from google.api_core.exceptions.NotFound: 404 Requested entity was not found
1
1
0
0
0
0
I'm using SpaCy to find sentences that contain 'is' or 'was' that have pronouns as their subjects and return the object of the sentence. My code works, but I feel like there must be a much better way to do this. import spacy nlp = spacy.load('en_core_web_sm') ex_phrase = nlp("He was a genius. I really liked working with him. He is a dog owner. She is very kind to animals.") #create an empty list to hold any instance of this particular construction list_of_responses = [] #split into sentences for sent in ex_phrase.sents: for token in sent: #check to see if the word 'was' or 'is' is in each sentence, if so, make a list of the verb's constituents if token.text == 'was' or token.text == 'is': dependency = [child for child in token.children] #if the first constituent is a pronoun, make sent_object equal to the item at index 1 in the list of constituents if dependency[0].pos_ == 'PRON': sent_object = dependency[1] #create a string of the entire object of the verb. For instance, if sent_object = 'genius', this would create a string 'a genius' for token in sent: if token == sent_object: whole_constituent = [t.text for t in token.subtree] whole_constituent = " ".join(whole_constituent) #check to see what the pronoun was, and depending on if it was 'he' or 'she', construct a coherent followup sentence if dependency[0].text.lower() == 'he': returning_phrase = f"Why do you think him being {whole_constituent} helped the two of you get along?" elif dependency[0].text.lower() == 'she': returning_phrase = f"Why do you think her being {whole_constituent} helped the two of you get along?" #add each followup sentence to the list. For some reason it creates a lot of duplicates, so I have to use set list_of_responses.append(returning_phrase) list_of_responses = list(set(list_of_responses))
1
1
0
0
0
0
I'm using sklearn.feature_extraction.text.TfidfVectorizer. I'm processing text. It seems standard to remove stop words. However, it seems to me that if I already have a ceiling on document frequency, meaning I will not include tokens that are in a large percent of the document (eg max_df=0.8), dropping stop words doesn't seem necessary. Theoretically, stop words are words that appear often and should be excluded. This way, we don't have to debate on what to include in our list of stop words, right? It's my understanding that there is disagreement over what words are used often enough that they should be considered stop words, right? For example, scikit-learn includes "whereby" in its built-in list of English stop words.
1
1
0
0
0
0
I am loading a CSV into a pandas data frame. One of the columns in the dataframe is "reviews" which contain strings of text. I need to identify all the adjectives in this column in all the rows of the dataframe and then create a new column "adjectives" that contains a list of all the adjectives from that review. I've tried using TextBlobs and was able to tag the parts of speech for each case using the code posted. import pandas as pd from textblob import TextBlob df=pd.read_csv('./data.csv') def pos_tag(text): try: return TextBlob(text).tags except: return None df['pos'] = df['reviews'].apply(pos_tag) df.to_csv('dataadj.csv', index=False)
1
1
0
0
0
0
Finding the term frequency for documents in a list using Python: l=['cat sat besides dog'] I have tried finding the term frequency for each word in the corpus. term freq = (no. of times a word occurred in a document / total number of words in the document). I tried doing it for one document, but I'm getting an error when there's more than one document in the list. def tf(corpus): dic={} for document in corpus: for word in document.split(): if word in dic: dic[word]+=1 else: dic[word]=1 for word,freq in dic.items(): print(word,freq) dic[word]=freq/len(document.split()) return dic tf(d) I want to pass this list and find the tf for the words in each document, but I'm getting the wrong tf values. l=['cat sat besides dog','the dog sat on bed']
1
1
0
1
0
0
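The wrong values in the term-frequency question above have two causes: one `dic` is shared across all documents (so counts leak between them), and the division uses `len(document.split())` of whichever document the loop saw last. A corrected sketch that keeps one count dict per document:

```python
def tf(corpus):
    """Term frequency per document; counts never leak across documents."""
    result = []
    for document in corpus:
        words = document.split()
        counts = {}
        for word in words:
            counts[word] = counts.get(word, 0) + 1
        # Divide by THIS document's length, not the last one iterated over.
        result.append({w: c / len(words) for w, c in counts.items()})
    return result
```

Returning a list of dicts (one per document) also avoids the ambiguity of a single merged dict when the same word appears in several documents.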
I am building a very simple DNN binary model which I define as: def __build_model(self, vocabulary_size): model = Sequential() model.add(Embedding(vocabulary_size, 12, input_length=vocabulary_size)) model.add(Flatten()) model.add(Dense(16, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc']) return model with training like: def __train_model(self, model, model_data, training_data, labels): hist = model.fit(training_data, labels, epochs=20, verbose=True, validation_split=0.2) model.save('models/' + model_data['Key'] + '.h5') return model The idea is to feed tf-idf vectorized text after training and predict whether it belongs to class 1 or 0. Sadly, when I run predict against it, I get an array of predictions instead of the expected single probability of the article belonging to class 1. The array values seem very uniform. I assume this comes from some mistake in the model. I try to get a prediction like so: self._tokenizer.fit_on_texts(asset_article_data.content) predicted_post_vector = self._tokenizer.texts_to_matrix(post, mode='tfidf') return model.predict(predicted_post_vector) > 0.60 // here returns an array instead of true/false The training data is vectorized text itself. What might be off?
1
1
0
0
0
0
When I put the following command in anaconda prompt conda install -c anaconda gensim Python stops working and shows the following error message: How do I deal with this problem?
1
1
0
1
0
0
I try to lemmatize a text using spaCy 2.0.12 with the French model fr_core_news_sm. Morevoer, I want to replace people names by an arbitrary sequence of characters, detecting such names using token.ent_type_ == 'PER'. Example outcome would be "Pierre aime les chiens" -> "~PER~ aimer chien". The problem is I can't find a way to do both. I only have these two partial options: I can feed the pipeline with the original text: doc = nlp(text). Then, the NER will recognize most people names but the lemmas of words starting with a capital won't be correct. For example, the lemmas of the simple question "Pouvons-nous faire ça?" would be ['Pouvons', '-', 'se', 'faire', 'ça', '?'], where "Pouvons" is still an inflected form. I can feed the pipeline with the lower case text: doc = nlp(text.lower()). Then my previous example would correctly display ['pouvoir', '-', 'se', 'faire', 'ça', '?'], but most people names wouldn't be recognized as entities by the NER, as I guess a starting capital is a useful indicator for finding entities. My idea would be to perform the standard pipeline (tagger, parser, NER), then lowercase, and then lemmatize only at the end. However, lemmatization doesn't seem to have its own pipeline component and the documentation doesn't explain how and where it is performed. This answer seem to imply that lemmatization is performed independent of any pipeline component and possibly at different stages of it. So my question is: how to choose when to perform the lemmatization and which input to give to it?
1
1
0
0
0
0
I'm writing an AI for the game 2048 using Python. It's going a lot slower than I expected. I set the depth limit to just 5 and it still took several seconds to get an answer. At first I thought my implementations of all the functions were crap, but I figured out the real reason why. There are way more leaves on the search tree than there even possibly should be. Here is a typical result (I counted the leaves, branches, and number of expansions): 111640 leaves, 543296 branches, 120936 expansions Branching factor: 4.49242574585 Expected max leaves = 4.49242574585^5 = 1829.80385192 leaves and another, for good measure: 99072 leaves, 488876 branches, 107292 expansions Branching factor: 4.55650001864 Expected max leaves = 4.55650001864^5 = 1964.06963743 leaves As you can see, there are way more leaves on the search tree than how many there would be if I used naive minimax. What is going on here? My algorithm is posted below: # Generate constants import sys posInfinity = sys.float_info.max negInfinity = -sys.float_info.max # Returns the direction of the best move given current state and depth limit def bestMove(grid, depthLimit): global limit limit = depthLimit moveValues = {} # Match each move to its minimax value for move in Utils2048.validMoves(grid): gridCopy = [row[:] for row in grid] Utils2048.slide(gridCopy, move) moveValues[move] = minValue(grid, negInfinity, posInfinity, 1) # Return move that have maximum value return max(moveValues, key = moveValues.get) # Returns the maximum utility when the player moves def maxValue(grid, a, b, depth): successors = Utils2048.maxSuccessors(grid) if len(successors) == 0 or limit < depth: return Evaluator.evaluate(grid) value = negInfinity for successor in successors: value = max(value, minValue(successor, a, b, depth + 1)) if value >= b: return value a = max(a, value) return value # Returns the minimum utility when the computer moves def minValue(grid, a, b, depth): successors = Utils2048.minSuccessors(grid) if len(successors) 
== 0 or limit < depth: return Evaluator.evaluate(grid) value = posInfinity for successor in successors: value = min(value, maxValue(successor, a, b, depth + 1)) if value <= a: return value b = min(b, value) return value Someone please help me out. I looked over this code several times and I can't pin down what's wrong.
1
1
0
0
0
0
When I try to do a simple query using Wolfram Alpha, I get these errors. This is my code: import wolframalpha input = raw_input("Question: ") app_id = "**************" client = wolframalpha.Client(app_id) res = client.query(input) answer = next(res.results).text print answer The error is: Can you help me figure this one out?
1
1
0
0
0
0
I have five decision variables, each with a specific range. I need to find a combination of these variables that maximizes one of my objectives while minimizing the other at the same time. I have prepared a datasheet of randomly generated variables with the respective values of the 2 objective functions. Please suggest how to approach the solution using neural networks. My objective function involves thermodynamic calculations. If interested, you can have a look at the objective functions here:
1
1
0
0
0
0
I want to extract all the possible meaningful phrases from a sentence. For example: "Food was fantastic in the local restaurant and the restaurant was perfectly romantic." I want: Food was fantastic Food was fantastic in the local restaurant the restaurant was perfectly romantic etc. I don't mind if some additional phrases come up, as I am planning to use VADER sentiment analysis to remove neutral phrases. Another approach that would work for me is if there is a way to extract phrases around a keyword; then I can use python rake to get the keywords. This is a project to extract all possible positive and negative phrases from UGC reviews that we collect. Our initial approach was to use regex patterns to extract phrases and then pass them through VADER to get sentiments, but this was omitting a lot of phrases. Now we are trying to shortlist sentences with a sentiment and then extract phrases from them.
1
1
0
0
0
0
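For the "phrases around a keyword" fallback mentioned in the question above, a dependency-free sketch is to take a fixed window of words on either side of each keyword hit. This is deliberately crude (punctuation handling is minimal and it ignores syntax entirely), but it pairs naturally with a keyword extractor such as RAKE:

```python
def phrases_around(text, keyword, window=3):
    """Return `window` words on each side of every occurrence of `keyword`."""
    words = text.split()
    hits = []
    for i, w in enumerate(words):
        if w.lower().strip('.,!?') == keyword.lower():
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            hits.append(' '.join(words[lo:hi]))
    return hits
```

Each returned window can then be scored with VADER and neutral ones discarded, as the question already plans to do.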
Let's consider a couple of sentences like: Jason lives in California I live in California Jason and Robert are friends and they both live in California All of the above sentences are about California. Using nltk, how can I extract "California" from the above sentences? I am kind of new to NLP; any help will be highly appreciated.
1
1
0
0
0
0
I am trying to import pyLDAvis but it gives the error ModuleNotFoundError: No module named '_contextvars' although I installed both pyLDAvis and contextvars. The error is as follows Traceback (most recent call last): File "C:/Users/ebru/Documents/Arda Docs/Mydocs/ITLS/Research/Tüpraş/Python Codes/Tupras_NLPv04.py", line 249, in <module> import pyLDAvis File "C:\Users\ebru\PycharmProjects\Tuprasv01\venv\lib\site-packages\pyLDAvis\__init__.py", line 44, in <module> from ._display import * File "C:\Users\ebru\PycharmProjects\Tuprasv01\venv\lib\site-packages\pyLDAvis\_display.py", line 7, in <module> import jinja2 File "C:\Users\ebru\PycharmProjects\Tuprasv01\venv\lib\site-packages\jinja2\__init__.py", line 82, in <module> _patch_async() File "C:\Users\ebru\PycharmProjects\Tuprasv01\venv\lib\site-packages\jinja2\__init__.py", line 78, in _patch_async from jinja2.asyncsupport import patch_all File "C:\Users\ebru\PycharmProjects\Tuprasv01\venv\lib\site-packages\jinja2\asyncsupport.py", line 13, in <module> import asyncio File "C:\Users\ebru\AppData\Local\Programs\Python\Python37-32\lib\asyncio\__init__.py", line 8, in <module> from .base_events import * File "C:\Users\ebru\AppData\Local\Programs\Python\Python37-32\lib\asyncio\base_events.py", line 39, in <module> from . import events File "C:\Users\ebru\AppData\Local\Programs\Python\Python37-32\lib\asyncio\events.py", line 14, in <module> import contextvars File "C:\Users\ebru\AppData\Local\Programs\Python\Python37-32\lib\contextvars.py", line 1, in <module> from _contextvars import Context, ContextVar, Token, copy_context ModuleNotFoundError: No module named '_contextvars' I tried to delete the underscore in contextvars.py but it did not work. Plotting tools import pyLDAvis import pyLDAvis.sklearn
1
1
0
1
0
0
I want to scan text for the presence of words from a list of words. This would be straightforward if the text were unformatted, but it is markdown-formatted. At the moment, I'm accomplishing this with regex: import re text = 'A long text string with **markdown** formatting.' words = ['markdown', 'markup', 'marksideways'] found_words = [] for word in words: word_pattern = re.compile(r'(^|[ \*_])' + word + r'($|[ \*_.!?])', (re.I | re.M)) match = word_pattern.search(text) if match: found_words.append(word) I'm working with a very long list of words (a sort of denylist) and very large candidate texts, so speed is important to me. Is this a relatively efficient and speedy way to do this? Is there a better approach?
1
1
0
0
0
0
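For the markdown-scanning question above: looping over the word list and running a separate `search` per word scans the text once per word. A single compiled pattern with alternation scans the text once in total, which is usually much faster for a long denylist. A sketch using word-boundary lookarounds, which (like the original character class) treat `*` and `_` as delimiters rather than word characters:

```python
import re

def find_words(text, words):
    """Return the entries of `words` that occur in `text`, in list order."""
    pattern = re.compile(
        r'(?<!\w)(' + '|'.join(map(re.escape, words)) + r')(?!\w)',
        re.IGNORECASE)
    found = {m.group(1).lower() for m in pattern.finditer(text)}
    return [w for w in words if w.lower() in found]
```

For very large denylists, libraries implementing Aho-Corasick matching go further still, but the single-pattern approach above needs nothing beyond the standard library.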
I am trying to take a person's ailment, and return what they should do (from a predetermined set of "solutions"). For example, person's ailment My head is not bleeding predetermined set of "solutions" [take medicine, go to a doctor, call the doctor] I know I need to first remove common words from the sentence (such as 'my' and 'is') but also preserve "common" words such as 'not,' which are crucial to the solution and important to the context. Next, I'm pretty sure I'll need to train a set of processed inputs and match them to outputs to train a model which will attempt to identify the "solution" for the given string. Are there any other libraries I should be using (other than nltk, and scikit-learn)?
1
1
0
0
0
0
I am using TextRazor and want to figure out the sentence from which the keywords are identified, which I am not able to do. The documentation doesn't contain much information on this, and I haven't found anything anywhere on the internet. How can I extract the sentence related to the identified keyword? import textrazor key = "key" textrazor.api_key = key client = textrazor.TextRazor(extractors=["word","entities", "topics","sentence","words"]) for entity,sentence in zip(response.entities(),response.sentences()): print(sentence.words) The print statement does output the words of the sentence, but in TextRazor class format, which is not interpretable by Python. The output is as follows: [TextRazor Word:"b'If'" at position 196, TextRazor Word:"b'aggression'" at position 197, TextRazor Word:"b'helps'" at position 198, TextRazor Word:"b'in'" at position 199, TextRazor Word:"b'the'" at position 200, TextRazor Word:"b'survival'" at position 201, TextRazor Word:"b'of'" at position 202, TextRazor Word:"b'our'" at position 203, TextRazor Word:"b'genes'" at position 204, TextRazor Word:"b','" at position 205, TextRazor Word:"b'then'" at position 206, TextRazor Word:"b'the'" at position 207, TextRazor Word:"b'process'" at position 208, TextRazor Word:"b'of'" at position 209, TextRazor Word:"b'natural'" at position 210, TextRazor Word:"b'selection'" at position 211, TextRazor Word:"b'may'" at position 212, TextRazor Word:"b'well'" at position 213, TextRazor Word:"b'have'" at position 214, TextRazor Word:"b'caused'" at position 215, TextRazor Word:"b'humans'" at position 216, TextRazor Word:"b','" at position 217, TextRazor Word:"b'as'" at position 218, TextRazor Word:"b'it'" at position 219, TextRazor Word:"b'would'" at position 220, TextRazor Word:"b'any'" at position 221, TextRazor Word:"b'other'" at position 222, TextRazor Word:"b'animal'" at position 223, TextRazor Word:"b','" at position 224, TextRazor Word:"b'to'" at position 225, TextRazor Word:"b'be'" at position 226, TextRazor Word:"b'aggressive'"
at position 227, TextRazor Word:"b'-LRB-'" at position 228, TextRazor Word:"b'Buss'" at position 229, TextRazor Word:"b'&'" at position 230, TextRazor Word:"b'Duntley'" at position 231, TextRazor Word:"b','" at position 232, TextRazor Word:"b'2006'" at position 233, TextRazor Word:"b'-RRB-'" at position 234, TextRazor Word:"b'.'" at position 235]
1
1
0
0
0
0
Stanford CoreNLP provides coreference resolution as mentioned here, also this thread, this, provides some insights about its implementation in Java. However, I am using python and NLTK and I am not sure how can I use Coreference resolution functionality of CoreNLP in my python code. I have been able to set up StanfordParser in NLTK, this is my code so far. from nltk.parse.stanford import StanfordDependencyParser stanford_parser_dir = 'stanford-parser/' eng_model_path = stanford_parser_dir + "stanford-parser-models/edu/stanford/nlp/models/lexparser/englishRNN.ser.gz" my_path_to_models_jar = stanford_parser_dir + "stanford-parser-3.5.2-models.jar" my_path_to_jar = stanford_parser_dir + "stanford-parser.jar" How can I use coreference resolution of CoreNLP in python?
1
1
0
0
0
0
I have gotten this code online that one hot encodes an array of label encoded values. I particularly don't understand the last line. Please help. I initially thought that wherever y is 1, it replaces the value of that index with 1, but how? def read_dataset(): df = pd.read_csv("sonar.all-data.csv") x = df[df.columns[0:60]].values y = df[df.columns[60]] encoder = LabelEncoder() encoder.fit(y) y = oneHotEncode(y) return(x, y) def oneHotEncode(labels): n_labels = len(labels) n_unique_labels = len(np.unique(labels)) oneHE = np.zeros((n_labels, n_unique_labels)) oneHE[np.arange(n_labels), labels] = 1 return oneHE I am expecting to understand how this code works, but I don't understand that line with np.arange
1
1
0
1
0
0
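The confusing line is NumPy's integer-array ("fancy") indexing: the two index arrays are zipped into (row, column) pairs, so row i gets a 1 in column labels[i]. A minimal sketch with made-up toy labels:

```python
import numpy as np

labels = np.array([0, 2, 1, 0])          # label-encoded classes
n_labels = len(labels)                   # 4 rows
n_unique = len(np.unique(labels))        # 3 distinct classes -> 3 columns
oneHE = np.zeros((n_labels, n_unique))

# np.arange(n_labels) is [0, 1, 2, 3]; paired element-wise with labels,
# it addresses the cells (0,0), (1,2), (2,1), (3,0) and sets each to 1.
oneHE[np.arange(n_labels), labels] = 1
```

Each row ends up with exactly one 1, in the column given by that row's label.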
I'm trying to extract some numbers out of this sentence, but I want to verify that the right number is matched up to the right text. nlp = spacy.load('en_core_web_sm') s2 = 'Revenue from the advertising and subscription business for the first quarter of 2019 was RMB897.0 million (US$133.7 million), representing a 13.9% increase from RMB787.5 million (US$117.3 million) in the corresponding period in 2018.' doc = nlp(s2) for w in doc.ents: print(w.text, w.label_, w.root) for i in w.subtree: print(" ", i, i.head) for a in i.ancestors: print(" ", a, a.head) I want to relate RMB897.0 million to advertising and subscription but am not sure how to do it. I also tried noun chunking. for chunk in doc.noun_chunks: print(chunk.text, chunk.root.text, chunk.root.dep_, chunk.root.head.text) for c in chunk.subtree: print(" ", c, c.head)
1
1
0
0
0
0
I have a dataset as follows: text size bold label xxxx 5 1 0.0 yyyy 15 0 1.0 . . . . . . . . where label is the target variable; the text column holds strings, bold and size hold ints, and label holds floats. Now I have converted the text column to an array using the tf-idf vectorizer. data['tf_idf_q1'] = tfidf_vect.fit_transform(data["text"]) Now for training and testing I'm using 3 and 1 column respectively: X = data[['tf_idf_q1', 'size', 'bold']].as_matrix() y = data['label'].as_matrix() Now when I try to fit the data to an SVM model: clf = svm.LinearSVC().fit(X, y) It shows me the error: ValueError: setting an array element with a sequence. I tried to convert my X and y to dtype=float but it's not working. I'm new to NLP, please help me out.
1
1
0
1
0
0
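A likely cause of the ValueError above: fit_transform returns one sparse matrix, so assigning it to a DataFrame column stores a whole object per cell, and converting that to a matrix yields an array of objects. A common fix (a sketch with made-up toy data) is to keep the TF-IDF output as a matrix and glue the numeric columns onto it with scipy.sparse.hstack:

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer

texts = ["some example text", "another example"]   # stand-in for data["text"]
size = np.array([[5], [15]])                       # numeric columns, shape (n, 1)
bold = np.array([[1], [0]])

tfidf = TfidfVectorizer()
text_features = tfidf.fit_transform(texts)         # sparse (2, n_terms)

# One combined sparse feature matrix: text features plus numeric columns
X = hstack([text_features, csr_matrix(size), csr_matrix(bold)]).tocsr()
```

X can then be passed straight to svm.LinearSVC().fit(X, y) without any object-dtype columns getting in the way.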
So this is the problem: I have a list of names for various merchandise. The list (Python 2.7) generally looks like: ''' ['10 Apple phones','20W LED light bulb','Insignia™ - 450 Sq. Ft. Portable Air Conditioner','Jack Black Double-Duty Face Moisturizer SPF 20','apple'] ''' All the items are strings. Items in the list are completely random and have no obvious connection to each other. Now what I want to extract from each string is the item itself, without the descriptions. For example, "10 Apple phones" becomes "phones"; "Insignia™ - 450 Sq. Ft. Portable Air Conditioner" becomes "Air Conditioner" and "apple" from the list is just "apple" (because that's exactly what it is). The list after proper extraction looks like this (ideally): ''' ['phones','light bulb','Air Conditioner','Face Moisturizer','apple'] ''' My first approach was to find all the items that are similar and put them in one group (there are about 500k words in the dataframe). I then extracted the similar parts of the words in one group. For example, "iphone XS Max", "3 iPhone 4", "two iPhone 7s" and "iPhone 3g" would be put in one group, and the algorithm would extract the similar part, which is "iPhone" in this case. This algorithm kind of worked in about 60% of the cases (I think it might get better if I optimize the algorithm a little bit more). But I'm looking for a different approach that will increase the accuracy. Any help will be greatly appreciated. Thanks guys!
1
1
0
1
0
0
I'm using NLTK to find a word in a text. I need to save the result of the concordance function into a list. The question was already asked here but I cannot see the changes. I tried to find the type of the returned value of the function with: type(text.concordance('myword')) The result was: <class 'NoneType'>
1
1
0
0
0
0
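Text.concordance() only prints to stdout and returns None, which matches the NoneType above. In NLTK 3.4 and later there is Text.concordance_list(), which returns the matches as a list instead; a sketch with a toy token list:

```python
import nltk

tokens = "the cat sat on the mat while the cat slept".split()
text = nltk.Text(tokens)

# concordance() prints and returns None;
# concordance_list() returns ConcordanceLine objects that can be stored.
hits = text.concordance_list("cat")
lines = [h.line for h in hits]   # the printable concordance strings
```

Each ConcordanceLine also exposes the left and right context and the token offset, so the results can be processed further rather than just printed.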
I am trying to remove all lines in a string that start with some characters. I tried the block of code below but it was not working. My code should only print "Any thanks" as the result, but it outputs the entire original text. text = """Any thanks first_************ last_************ has """ from io import StringIO s = StringIO(text) for line in (s): if not line.startswith(' first_') \ or not line.startswith(' last_'): print(line)
1
1
0
0
0
0
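The condition above is always true: a line can never start with both prefixes at once, so `not A or not B` filters nothing. By De Morgan's law the filter needs `and`, or equivalently a single startswith call with a tuple of prefixes; a minimal sketch:

```python
from io import StringIO

text = """Any thanks
 first_************
 last_************
has
"""

kept = []
for line in StringIO(text):
    # str.startswith accepts a tuple: True if the line starts with ANY prefix
    if not line.startswith((' first_', ' last_')):
        kept.append(line.rstrip("\n"))
```

Lines matching neither prefix are kept; note that "has" survives too, since it starts with neither prefix.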
I'm trying to get the jaccard distance between two strings of keywords extracted from books. For some reason, the nltk.jaccard_distance() function almost always outputs 1.0 Here is how I preprocess the keywords: def preprocess(text): # make sure to use the right encoding text = text.encode("utf-8") # remove digits and punctuation text = re.sub('[^A-Za-z]+', ' ', text) # remove duplicate words # note that these aren't sentences, they are strings of keywords text = set(text.split()) text = ' '.join(text) # tokenize text = nltk.word_tokenize(text) # create sets of n-grams text = set(nltk.ngrams(text, n=3)) return text Here is where I do the comparison: def getJaccardSimilarity(keyword_list_1, keyword_list_2): keywordstokens_2 = preprocess(keyword_list_2) keywordstokens_1 = preprocess(keyword_list_1) if len(keywordstokens_1) > 0 and len(keywordstokens_2) > 0: return nltk.jaccard_distance(keywordstokens_1, keywordstokens_2) else: return 0 When I look at the results, the similarity is almost always 1.0, which I thought meant that the n-grams between the two books are identical. Here is some sample data I've just printed out: KEYWORDS_1: set([('laser', 'structur', 'high'), ('high', 'electron', 'halo'), ('atom', 'nuclei', 'helium'), ('nuclei', 'helium', 'neutron'), ('halo', 'atom', 'nuclei'), ('precis', 'laser', 'structur'), ('structur', 'high', 'electron'), ('electron', 'halo', 'atom')]) KEYWORDS_2: set([('quantum', 'line', 'experi'), ('bench', 'magnet', 'survey'), ('trap', 'tabl', 'quantum'), ('tabl', 'quantum', 'line'), ('use', 'optic', 'trace'), ('line', 'experi', 'cold'), ('trace', 'straight', 'becaus'), ('survey', 'trap', 'tabl'), ('magnet', 'survey', 'trap'), ('straight', 'becaus', 'bench'), ('experi', 'cold', 'requir'), ('optic', 'trace', 'straight'), ('becaus', 'bench', 'magnet')]) SIMILARITY: 1.0 I'm not really sure what I'm missing. Any help is appreciated.
1
1
0
0
0
0
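One thing worth double-checking here: nltk.jaccard_distance returns a distance, not a similarity, so 1.0 means the two n-gram sets share nothing and 0.0 means they are identical. With stemmed-keyword trigrams from different books, empty intersections (hence 1.0) are expected. A pure-Python sketch of the formula:

```python
# Jaccard distance = 1 - |A intersection B| / |A union B|
def jaccard_distance(a, b):
    return 1 - len(a & b) / len(a | b)

identical = jaccard_distance({"x", "y"}, {"x", "y"})   # 0.0: same sets
disjoint = jaccard_distance({"x"}, {"y"})              # 1.0: nothing shared
```

So the sample output is consistent: the two trigram sets have no tuples in common. Comparing unigram keyword sets, or reporting 1 minus the distance as the similarity, may be closer to what was intended.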
Earlier token 'Modi' is recognised as an Org by spacy to I retrain it with the following code: import spacy import random nlp = spacy.load('en') nlp.entity.add_label('CELEBRITY') TRAIN_DATA = [ (u"Modi", {"entities": [(0, 4, "PERSON")]}), (u"India", {"entities": [(0, 5, "GPE")]})] optimizer = nlp.begin_training() for i in range(20): random.shuffle(TRAIN_DATA) for text, annotations in TRAIN_DATA: nlp.update([text], [annotations],drop=0.3, sgd=optimizer) text = "But Modi is starting India. The company made a late push into hardware, and Apple’s Siri and Google available on iPhones, and Amazon’s Alexa software, which runs on its Echo and Dot devices, have clear leads in consumer adoption." doc = nlp(text) for ent in doc.ents: print(ent.text,ent.label_) And I got the following answer: Modi PERSON India GPE Apple’s Siri ORG Google ORG iPhones ORG Amazon GPE Echo PERSON Dot PERSON It changes the Modi to the person at the same time it doing incorrect NER as compare to the previous mode. In the previous model, Amazon was recognized as ORG but now change to GPE. Now I add the extra-label CELEBRITY and categorize Modi to CELEBRITY with this following code import spacy import random nlp = spacy.load('en') nlp.entity.add_label('CELEBRITY') TRAIN_DATA = [ (u"Modi", {"entities": [(0, 4, "CELEBRITY")]})] optimizer = nlp.begin_training() for i in range(20): random.shuffle(TRAIN_DATA) for text, annotations in TRAIN_DATA: nlp.update([text], [annotations],drop=0.3, sgd=optimizer) text = "But Modi is starting India. The company made a late push into hardware, and Apple’s Siri and Google available on iPhones, and Amazon’s Alexa software, which runs on its Echo and Dot devices, have clear leads in consumer adoption." doc = nlp(text) for ent in doc.ents: print(ent.text,ent.label_) But looks like it crashes my model and getting the following result: But CELEBRITY Modi CELEBRITY is CELEBRITY starting CELEBRITY India GPE . 
CELEBRITY The CELEBRITY company CELEBRITY made CELEBRITY a CELEBRITY late CELEBRITY push CELEBRITY into CELEBRITY hardware CELEBRITY , CELEBRITY and CELEBRITY Apple CELEBRITY Please let me know the reason behind this behaviour, and also how I can make only the entity I label change while all the others stay as spaCy originally predicted them.
1
1
0
1
0
0
Please can someone help me interpret what this code is doing? df['date'] = df['text'].apply(lambda x:re.findall(r'\d{1,2}\/\d{1,2}\/\d{2,4}|\d{1,2}\-\d{1,2}\-\d{2,4}|[A-Z][a-z]+\-\d{1,2}\-\d{4}|[A-Z][a-z]+[,.]? \d{2}[a-z]*,? \d{4}|\d{1,2} [A-Z][a-z,.]+ \d{4}|[A-Z][a-z]{2}[,.]? \d{4}|'+pattern+r'|\d{1,2}\/\d{4}|\d{4}',x)) df['date'][271] = [df['date'][271][1]] df['date'] = df['date'].apply(lambda x: x[0]) df['date'][461] = re.findall(r'\d{4}',df['date'][461])[0] df['date'][465] = re.findall(r'\d{4}',df['date'][465])[0]
1
1
0
0
0
0
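In short: the lambda runs re.findall with a large alternation of date formats over each row's text, returning a list of every match; the later lines then keep a single match per row (x[0]), with rows 271, 461 and 465 patched by hand where the first match was not the wanted one or only a year was needed. A reduced sketch of the mechanism (the sample string is invented):

```python
import re

# Two of the alternation's branches: mm/dd/yyyy-style dates, or a bare year
date_re = r'\d{1,2}/\d{1,2}/\d{2,4}|\d{4}'

row = "Discharged 04/20/2009, follow-up in 2010"
matches = re.findall(date_re, row)   # every match, left to right
first = matches[0]                   # what `lambda x: x[0]` keeps per row
```

The hand-patched rows in the original snippet are exactly the rows where this "take the first match" rule picks the wrong element of the list.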
I've been learning NLP text classification via the book "Text Analytics with Python". It requires several modules to be installed in a virtual environment. I use Anaconda env. I created a blank env with Python 3.7 and installed required pandas, numpy, nltk, gensim, sklearn... then, I have to install Pattern. The first problem is that I can't install Pattern via conda because of a conflict between Pattern and mkl_random. (nlp) D:\Python\Text_classification>conda install -c mickc pattern Solving environment: failed UnsatisfiableError: The following specifications were found to be in conflict: - mkl_random - pattern Use "conda info <package>" to see the dependencies for each package. It's impossible to remove mkl_random because there are related packages: gensim, numpy, scikit-learn etc. I don't know what to do; I didn't find any suitable conda installation for Pattern that works in my case. Then, I installed Pattern using pip. Installation was successful. Is it okay to have packages from conda and from pip at the same time? The second problem, I think, is connected with the first one.
I downloaded the book's example codes from https://github.com/dipanjanS/text-analytics-with-python/tree/master/Old-First-Edition/source_code/Ch04_Text_Classification, added brackets to Python 2.x 'print' functions and ran classification.py The program raised an exception: Traceback (most recent call last): File "C:\Users\PC\Anaconda3\envs\nlp\lib\site-packages\pattern\text\__init__.py", line 609, in _read raise StopIteration StopIteration The above exception was the direct cause of the following exception: Traceback (most recent call last): File "classification.py", line 50, in <module> norm_train_corpus = normalize_corpus(train_corpus) File "D:\Python\Text_classification\normalization.py", line 96, in normalize_corpus text = lemmatize_text(text) File "D:\Python\Text_classification\normalization.py", line 67, in lemmatize_text pos_tagged_text = pos_tag_text(text) File "D:\Python\Text_classification\normalization.py", line 58, in pos_tag_text tagged_text = tag(text) File "C:\Users\PC\Anaconda3\envs\nlp\lib\site-packages\pattern\text\en\__init__.py", line 188, in tag for sentence in parse(s, tokenize, True, False, False, False, encoding, **kwargs).split(): File "C:\Users\PC\Anaconda3\envs\nlp\lib\site-packages\pattern\text\en\__init__.py", line 169, in parse return parser.parse(s, *args, **kwargs) File "C:\Users\PC\Anaconda3\envs\nlp\lib\site-packages\pattern\text\__init__.py", line 1172, in parse s[i] = self.find_tags(s[i], **kwargs) File "C:\Users\PC\Anaconda3\envs\nlp\lib\site-packages\pattern\text\en\__init__.py", line 114, in find_tags return _Parser.find_tags(self, tokens, **kwargs) File "C:\Users\PC\Anaconda3\envs\nlp\lib\site-packages\pattern\text\__init__.py", line 1113, in find_tags lexicon = kwargs.get("lexicon", self.lexicon or {}), File "C:\Users\PC\Anaconda3\envs\nlp\lib\site-packages\pattern\text\__init__.py", line 376, in __len__ return self._lazy("__len__") File "C:\Users\PC\Anaconda3\envs\nlp\lib\site-packages\pattern\text\__init__.py", line 368, in _lazy
self.load() File "C:\Users\PC\Anaconda3\envs\nlp\lib\site-packages\pattern\text\__init__.py", line 625, in load dict.update(self, (x.split(" ")[:2] for x in _read(self._path) if len(x.split(" ")) > 1)) File "C:\Users\PC\Anaconda3\envs\nlp\lib\site-packages\pattern\text\__init__.py", line 625, in <genexpr> dict.update(self, (x.split(" ")[:2] for x in _read(self._path) if len(x.split(" ")) > 1)) RuntimeError: generator raised StopIteration I don't understand what is happening. Is the exception raised because of my installation with pip, or is the problem wrong or deprecated code in the book... and is it possible to install Pattern in conda with all other necessary packages? Thank you in advance!
1
1
0
0
0
0
I am using Python. I have this code that analyses text documents: tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=10000) # split dataset into training and validation set xtrain, xval, ytrain, yval = train_test_split(movies_new['clean_plot'], y, test_size=0.2, random_state=9) # create TF-IDF features xtrain_tfidf = tfidf_vectorizer.fit_transform(xtrain) xval_tfidf = tfidf_vectorizer.transform(xval) I know that TF-IDF assigns a value to each word. Is there a way to see the values inside xtrain_tfidf?
1
1
0
0
0
0
I want to use the fuzzywuzzy package on the following table x Reference amount 121 TOR1234 500 121 T0R1234 500 121 W7QWER 500 121 W1QWER 500 141 TRYCATC 700 141 TRYCATC 700 151 I678MKV 300 151 1678MKV 300 I want to group the table where the columns 'x' and 'amount' match. For each reference in the group: i. Compare (fuzzywuzzy) it with the other references in that group. a. where the match is 100%, delete them b. where the match is 90-99.99%, keep them c. delete anything below a 90% match for that particular row The expected output: x y amount 151 I678MKV 300 151 1678MKV 300 121 TOR1234 500 121 T0R1234 500 121 W7QWER 500 121 W1QWER 500 This is to detect fraud entries. As in the table, '1' is replaced by 'I' and '0' is replaced by 'O'. If you have any alternative solution, please suggest it.
1
1
0
0
0
0
I am working on a project in Python using TensorFlow, but I am a complete beginner in TensorFlow and OpenCV. Yesterday I tried to train custom objects, but while training I always get the same status message: "I0725 10:26:31.453798 5176 supervisor.py:1117] Saving checkpoint to path training/model.ckpt". I don't know what exactly is happening. Is this an error or not? I have already waited around 10 hours, but I am still getting this same status.
1
1
0
0
0
0
I have input sequences with the following shape: shape(1434, 185, 37) There are 1434 sequences in total, each with a length of 185 characters, and the total number of unique characters is 37. So, in a way, we have the vocab size as follows: vocab_size=37 Now when I define my Keras input to an embedding layer as follows, user_input = keras.layers.Input(shape=((185,37)), name='Input_1') user_vec = keras.layers.Flatten()(keras.layers.Embedding(vocab_size, 50, input_length=185, name='Input_1_embed')(user_input)) I get the following error. Error: ValueError: "input_length" is 185, but received input has shape (None, 185, 37) Now when I do the following, I don't get any error, but I am not sure whether it is correct or not. user_input = keras.layers.Input(shape=((185, )), name='Input_1') user_vec = keras.layers.Flatten()(keras.layers.Embedding(vocab_size, 50, input_length=185, name='Input_1_embed')(user_input))
1
1
0
0
0
0
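The second version is the conventional one: an Embedding layer expects integer indices of shape (batch, 185) and produces the (batch, 185, 50) vectors itself, so feeding it (185, 37) one-hot rows double-encodes the characters. If the data is already one-hot, it can be collapsed back to indices with argmax; a sketch in NumPy with a toy stand-in for the (1434, 185, 37) array:

```python
import numpy as np

vocab_size, seq_len = 37, 185
# Toy stand-in for the one-hot data: 4 sequences of integer character ids
idx = np.random.randint(0, vocab_size, size=(4, seq_len))
one_hot = np.eye(vocab_size)[idx]        # shape (4, 185, 37)

# Collapse one-hot back to the integer form Embedding expects:
recovered = one_hot.argmax(axis=-1)      # shape (4, 185)
```

The recovered (batch, 185) integer array is exactly what the error-free second Keras snippet should be fed.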
So I basically have a huge dataset to work with; it's made up of almost 1,200,000 rows, and my target class count is about 20,000 labels. I am performing text classification on my data, so I first cleaned it, and then performed tf-idf vectorization on it. The problem lies whenever I try to pick a model and fit the data: it gives me a Memory Error My current PC is Core i7 with 16GB of RAM vectorizer = feature_extraction.text.TfidfVectorizer(ngram_range=(1, 1), analyzer='word', stop_words= fr_stopwords) datavec = vectorizer.fit_transform(data.values.astype('U')) X_train, X_test, y_train, y_test = train_test_split(datavec,target,test_size=0.2,random_state=0) print(type(X_train)) print(X_train.shape) Output: class 'scipy.sparse.csr.csr_matrix' (963993, 125441) clf.fit(X_train, y_train) This is where the Memory Error is happening I have tried: 1 - to take a sample of the data, but the error is persisting. 2 - to fit many different models, but only the KNN model was working (but with a low accuracy score) 3 - to convert datavec to an array, but this process is also causing a Memory Error 4 - to use multi processing on different models 5 - I have been through every similar question on SO, but either an answer was unclear, or did not relate to my problem exactly This is a part of my code: vectorizer = feature_extraction.text.TfidfVectorizer(ngram_range=(1, 1), analyzer='word', stop_words= fr_stopwords) df = pd.read_csv("C:\\Users\\user\\Desktop\\CLEAN_ALL_DATA.csv", encoding='latin-1') classes = np.unique(df['BENEFITITEMCODEID'].str[1:]) vec = vectorizer.fit(df['NEWSERVICEITEMNAME'].values.astype('U')) del df clf = [KNeighborsClassifier(n_neighbors=5), MultinomialNB(), LogisticRegression(solver='lbfgs', multi_class='multinomial'), SGDClassifier(loss="log", n_jobs=-1), DecisionTreeClassifier(max_depth=5), RandomForestClassifier(n_jobs=-1), LinearDiscriminantAnalysis(), LinearSVC(multi_class='crammer_singer'), NearestCentroid(), ] data = pd.Series([]) for chunk in
pd.read_csv(datafile, chunksize=100000): data = chunk['NEWSERVICEITEMNAME'] target = chunk['BENEFITITEMCODEID'].str[1:] datavec = vectorizer.transform(data.values.astype('U')) clf[3].partial_fit(datavec, target,classes = classes) print("**CHUNK DONE**") s = "this is a testing sentence" svec = vectorizer.transform([s]) clf[3].predict(svec) --> memory error clf[3].predict(svec).todense() --> taking a lot of time to finish clf[3].predict(svec).toarrray() --> taking a lot of time to finish as well Anything else I could try?
1
1
0
1
0
0
I am implementing a simple multitask model in Keras. I used the code given in the documentation under the heading of shared layers. I know that in multitask learning, we share some of the initial layers in our model and the final layers are made individual to the specific tasks as per the link. I have following two cases in keras API where in the first, I am using keras.layers.concatenate while in the other, I am not using any keras.layers.concatenate. I am posting the codes as well as the models for each case as follows. Case-1 code import keras from keras.layers import Input, LSTM, Dense from keras.models import Model from keras.models import Sequential from keras.layers import Dense from keras.utils.vis_utils import plot_model tweet_a = Input(shape=(280, 256)) tweet_b = Input(shape=(280, 256)) # This layer can take as input a matrix # and will return a vector of size 64 shared_lstm = LSTM(64) # When we reuse the same layer instance # multiple times, the weights of the layer # are also being reused # (it is effectively *the same* layer) encoded_a = shared_lstm(tweet_a) encoded_b = shared_lstm(tweet_b) # We can then concatenate the two vectors: merged_vector = keras.layers.concatenate([encoded_a, encoded_b], axis=-1) # And add a logistic regression on top predictions1 = Dense(1, activation='sigmoid')(merged_vector) predictions2 = Dense(1, activation='sigmoid')(merged_vector) # We define a trainable model linking the # tweet inputs to the predictions model = Model(inputs=[tweet_a, tweet_b], outputs=[predictions1, predictions2]) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) Case-1 Model Case-2 code import keras from keras.layers import Input, LSTM, Dense from keras.models import Model from keras.models import Sequential from keras.layers import Dense from keras.utils.vis_utils import plot_model tweet_a = Input(shape=(280, 256)) tweet_b = Input(shape=(280, 256)) # This layer can take as input a matrix # and will return a vector 
of size 64 shared_lstm = LSTM(64) # When we reuse the same layer instance # multiple times, the weights of the layer # are also being reused # (it is effectively *the same* layer) encoded_a = shared_lstm(tweet_a) encoded_b = shared_lstm(tweet_b) # And add a logistic regression on top predictions1 = Dense(1, activation='sigmoid')(encoded_a ) predictions2 = Dense(1, activation='sigmoid')(encoded_b) # We define a trainable model linking the # tweet inputs to the predictions model = Model(inputs=[tweet_a, tweet_b], outputs=[predictions1, predictions2]) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) Case-2 Model In both cases, only the LSTM layer is shared. In case-1, we have keras.layers.concatenate but in case-2, we don't have any keras.layers.concatenate. My question is, which one is multitask learning, case-1 or case-2? Moreover, what is the function of keras.layers.concatenate in case-1?
1
1
0
0
0
0
I have a string and a list of words, and I want to check whether those words are present in the given text string. I am using the logic below; is there any other way to optimize it? import re text=""" Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-level built in data structures, combined with dynamic typing and dynamic binding, make it very attractive for Rapid Application Development""" tokens_text=re.split(" ",text) list_words=["programming","Application"] if (len(set(list_words).intersection(set(tokens_text)))==len(list_words)): print("Match_Found")
1
1
0
0
0
0
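A slightly tidier version of the same check: set.issubset expresses "every listed word appears among the tokens" directly, and plain str.split() also handles tabs and newlines that re.split(" ", ...) would miss. A sketch:

```python
text = """Python is an interpreted, object-oriented, high-level programming
language with dynamic semantics. Its high-level built in data structures,
combined with dynamic typing and dynamic binding, make it very attractive
for Rapid Application Development"""

tokens = set(text.split())            # split on any whitespace
list_words = {"programming", "Application"}

# True exactly when every required word is present among the tokens
found = list_words.issubset(tokens)
```

One caveat either way: tokens keep trailing punctuation (e.g. "semantics."), so required words that can end a sentence may need the tokens stripped of punctuation first.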
I'm pretty familiar with Python at this point, but new to NLP. I've printed the result out and it seems to be doing what I want, but how can I verify that? from nltk.corpus import stopwords stop_words = stopwords.words("english") function_words = [] for word in tokens: if word.lower() not in stop_words: function_words.append(word) 'tokens' is an array I've defined earlier in my code.
1
1
0
0
0
0
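One way to gain confidence is to assert the two defining properties of the filter: nothing kept is a stopword, and everything dropped was one. A self-contained sketch with a tiny stand-in stopword set (the real code would use stopwords.words("english")):

```python
stop_words = {"the", "is", "on"}      # stand-in for stopwords.words("english")
tokens = ["The", "cat", "is", "on", "the", "mat"]

kept = [w for w in tokens if w.lower() not in stop_words]

# Property 1: no kept token is a stopword (case-insensitively)
assert all(w.lower() not in stop_words for w in kept)
# Property 2: every dropped token was a stopword
dropped = [w for w in tokens if w not in kept]
assert all(w.lower() in stop_words for w in dropped)
```

(A minor naming nit: stopword removal keeps content words, so function_words is a slightly misleading name for the result.)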
I wanted to check the connection between 2 words in text analytics in Python. I am currently using the NLTK package in Python. For example: Text = "There are thousands of types of specific networks proposed by researchers as modifications or tweaks to existing models" Here, if I input networks and researchers, then I should get the output "proposed by" or "networks proposed by researchers".
1
1
0
0
0
0
I have a problem with Tensorflow. I retrain inception model according to this tutorial [https://www.tensorflow.org/hub/tutorials/image_retraining][1] and i want to live classify images from camera. The problem is with changing image to tensor. I modyfi a function from this tutorial to load images not from file but directly from camera. With every iteration of my code method session.run() takes longer and longer and i don't know why. Here is my code: def read_tensor_from_camera(image, input_height=299, input_width=299, input_mean=0, input_std=255): float_caster = tf.cast(image, tf.float32) dims_expander = tf.expand_dims(float_caster, 0) resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width]) normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std]) start = time.time() sess = tf.compat.v1.Session() result = sess.run(normalized) stop = time.time() print(stop - start) return result cap = cv2.VideoCapture(0) while (True): ret, frame = cap.read() image = cv2.resize(frame, (input_height, input_width)) t = read_tensor_from_camera(image) cv2.imshow('frame', image) if cv2.waitKey(1) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() 0.024958372116088867 0.021515846252441406 0.024405956268310547 0.024140119552612305 0.02186441421508789 0.023257970809936523 0.02323007583618164 0.024866819381713867 0.030565977096557617 0.025953292846679688 0.025441408157348633 0.026473522186279297 0.023244380950927734 0.025677204132080078 0.024083375930786133 0.024756908416748047 0.024300098419189453 0.023919343948364258 0.026715993881225586 0.02456498146057129 0.027322769165039062 0.02640247344970703 0.02555561065673828 0.0270078182220459 0.0286102294921875 0.02633523941040039 0.02658367156982422 0.02969074249267578 0.026103973388671875 0.02613973617553711 0.02724480628967285 0.026676654815673828 0.02712845802307129 0.02947235107421875 0.030956745147705078 0.03170061111450195 0.027563095092773438 0.03021693229675293 0.028293848037719727 
0.03078293800354004 0.02852654457092285 0.03080129623413086 0.032123565673828125 0.03287243843078613
1
1
0
1
0
0
I am trying to install spaCy inside a virtualenv on Windows. I am running Python 3.7.0 ([MSC v.1914 32 bit (Intel)]) and have pip 19.2.1. Whenever I try to run 'pip install -U spacy' or 'pip3 install -U spacy', I get the following: cwd: C:\Users\enest\AppData\Local\Temp\pip-install-rttb8k9j\blis\ Complete output (25 lines): BLIS_COMPILER? None running install running build running build_py creating build creating build\lib.win32-3.7 creating build\lib.win32-3.7\blis copying blis\about.py -> build\lib.win32-3.7\blis copying blis\benchmark.py -> build\lib.win32-3.7\blis copying blis\__init__.py -> build\lib.win32-3.7\blis creating build\lib.win32-3.7\blis\tests copying blis\tests\common.py -> build\lib.win32-3.7\blis\tests copying blis\tests\test_dotv.py -> build\lib.win32-3.7\blis\tests copying blis\tests\test_gemm.py -> build\lib.win32-3.7\blis\tests copying blis\tests\__init__.py -> build\lib.win32-3.7\blis\tests copying blis\cy.pyx -> build\lib.win32-3.7\blis copying blis\py.pyx -> build\lib.win32-3.7\blis copying blis\cy.pxd -> build\lib.win32-3.7\blis copying blis\__init__.pxd -> build\lib.win32-3.7\blis running build_ext error: [WinError 2] The system cannot find the file specified msvc py_compiler msvc {'LS_COLORS':
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'HOSTTYPE': 'x86_64', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'LANG': 'C.UTF-8', 'OLDPWD': '/home/matt/repos/flame-blis', 'VIRTUAL_ENV': '/home/matt/repos/cython-blis/env3.6', 'USER': 'matt', 'PWD': '/home/matt/repos/cython-blis', 'HOME': '/home/matt', 'NAME': 'LAPTOP-OMKOB3VM', 'XDG_DATA_DIRS': '/usr/local/share:/usr/share:/var/lib/snapd/desktop', 'SHELL': '/bin/bash', 'TERM': 'xterm-256color', 'SHLVL': '1', 'LOGNAME': 'matt', 'PATH': 
'/home/matt/repos/cython-blis/env3.6/bin:/tmp/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Users/matt/Documents/cmder/vendor/conemu-maximus5/ConEmu/Scripts:/mnt/c/Users/matt/Documents/cmder/vendor/conemu-maximus5:/mnt/c/Users/matt/Documents/cmder/vendor/conemu-maximus5/ConEmu:/mnt/c/Python37/Scripts:/mnt/c/Python37:/mnt/c/Program Files (x86)/Intel/Intel(R) Management Engine Components/iCLS:/mnt/c/Program Files/Intel/Intel(R) Management Engine Components/iCLS:/mnt/c/Windows/System32:/mnt/c/Windows:/mnt/c/Windows/System32/wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0:/mnt/c/Program Files (x86)/Intel/Intel(R) Management Engine Components/DAL:/mnt/c/Program Files/Intel/Intel(R) Management Engine Components/DAL:/mnt/c/Program Files (x86)/Intel/Intel(R) Management Engine Components/IPT:/mnt/c/Program Files/Intel/Intel(R) Management Engine Components/IPT:/mnt/c/Program Files/Intel/WiFi/bin:/mnt/c/Program Files/Common Files/Intel/WirelessCommon:/mnt/c/Program Files (x86)/NVIDIA Corporation/PhysX/Common:/mnt/c/ProgramData/chocolatey/bin:/mnt/c/Program Files/Git/cmd:/mnt/c/Program Files/LLVM/bin:/mnt/c/Windows/System32:/mnt/c/Windows:/mnt/c/Windows/System32/wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0:/mnt/c/Windows/System32/OpenSSH:/mnt/c/Program Files/nodejs:/mnt/c/Users/matt/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/matt/AppData/Local/Programs/Microsoft VS Code/bin:/mnt/c/Users/matt/AppData/Roaming/npm:/snap/bin:/mnt/c/Program Files/Oracle/VirtualBox', 'PS1': '(env3.6) \\[\\e]0;\\u@\\h: \\w\\a\\]${debian_chroot:+($debian_chroot)}\\[\\033[01;32m\\]\\u@\\h\\[\\033[00m\\]:\\[\\033[01;34m\\]\\w\\[\\033[00m\\]\\$ ', 'VAGRANT_HOME': '/home/matt/.vagrant.d/', 'LESSOPEN': '| /usr/bin/lesspipe %s', '_': '/home/matt/repos/cython-blis/env3.6/bin/python'} clang -c C:\Users\enest\AppData\Local\Temp\pip-install-rttb8k9j\blis\blis\_src\config\bulldozer\bli_cntx_init_bulldozer.c -o 
C:\Users\enest\AppData\Local\Temp\tmp8rx7kona\bli_cntx_init_bulldozer.o -O2 -funroll-all-loops -std=c99 -D_POSIX_C_SOURCE=200112L -DBLIS_VERSION_STRING="0.5.0-6" -DBLIS_IS_BUILDING_LIBRARY -Iinclude\windows-x86_64 -I.\frame\3\ -I.\frame\ind\ukernels\ -I.\frame\1m\ -I.\frame\1f\ -I.\frame\1\ -I.\frame\include -IC:\Users\enest\AppData\Local\Temp\pip-install-rttb8k9j\blis\blis\_src\include\windows-x86_64 ---------------------------------------- ERROR: Command errored out with exit status 1: 'c:\users\enest\desktop\spacy-chat-bot\env\scripts\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\enest\\AppData\\Local\\Temp\\pip-install-rttb8k9j\\blis\\setup.py'"'"'; __file__='"'"'C:\\Users\\enest\\AppData\\Local\\Temp\\pip-install-rttb8k9j\\blis\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r '"'"', '"'"' '"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\enest\AppData\Local\Temp\pip-record-0u7c_9bw\install-record.txt' --single-version-externally-managed --prefix 'C:\Users\enest\AppData\Local\Temp\pip-build-env-s5yq983m\overlay' --compile --install-headers 'c:\users\enest\desktop\spacy-chat-bot\env\include\site\python3.7\blis' Check the logs for full command output. ---------------------------------------- ERROR: Command errored out with exit status 1: 'c:\users\enest\desktop\spacy-chat-bot\env\scripts\python.exe' 'c:\users\enest\desktop\spacy-chat-bot\env\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\enest\AppData\Local\Temp\pip-build-env-s5yq983m\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'wheel>0.32.0.<0.33.0' Cython 'cymem>=2.0.2,<2.1.0' 'preshed>=2.0.1,<2.1.0' 'murmurhash>=0.28.0,<1.1.0' 'thinc>=7.0.8,<7.1.0' Check the logs for full command output.
1
1
0
0
0
0
I want to build negation detection for Malay text, it is to tackle a problem like 'not beautiful' detected as a positive word. So here is some code that I modified, but the result is not what I wanted it to be. The result is text= "is not good, danish died," se=negate(self=None,text=text) print(se) ['is', 'not', 'not_good', 'not_danish', 'not_died'] I wanted it to be ['is', 'not', 'not_good', 'danish', 'died'] - only the word after "not" should be changed to the "not_" form. This is the function that I use; any advice on what to change or add in order to get the result I wanted? def negate(self,text): negation = False result = [] words = text.split() for word in words: # stripped = word.strip(delchars) stripped = word.strip(delims).lower() negated = "not_" + stripped if negation else stripped result.append(negated) if any(neg in word for neg in ["not", "n't", "no"]): negation = not negation return result
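One way to get the desired output is to reset the negation flag as soon as it has been applied, so only the single word following a negation trigger gets the "not_" prefix. This is a sketch rather than the original helper: the undefined delims is replaced here with a plain punctuation strip.

```python
def negate(text):
    # Prefix only the single word following a negation trigger with "not_".
    negation = False
    result = []
    for word in text.split():
        stripped = word.strip(".,!?").lower()
        if negation:
            result.append("not_" + stripped)
            negation = False  # reset so only one word is negated
        else:
            result.append(stripped)
        if stripped in ("not", "no") or stripped.endswith("n't"):
            negation = True
    return result

print(negate("is not good, danish died,"))
# → ['is', 'not', 'not_good', 'danish', 'died']
```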
1
1
0
0
0
0
I have a ndarray with words and their corresponding vector (with the size of 100 per word). For example: Computer 0.11 0.41 ... 0.56 Ball 0.31 0.87 ... 0.32 And so on. I want to create a word2vec model from it: model = load_from_ndarray(arr) How can it be done? I saw KeyedVectors but it only takes file and not array
1
1
0
0
0
0
I have a quite long text document describing behaviours of different animals. I want to extract text about a specific animal and haven't figured out how this can be done. So for example, if the document describes 15 different animals, I want my algorithm to output all information from the input file that relates to lions. Lions are described and discussed in several different places in the document - how do I do "selective extraction" for text that is only related to lions, does anyone know? EDIT - inputs and outputs Inputs: (1) Text file (e.g. "document.txt") (2) Key word(s) (e.g. "lion") Output (example): "Lions are large felines that are traditionally depicted as the 'king of the jungle.' These big cats once roamed Africa, Asia and Europe. [...] Males are generally larger than females and have a distinctive mane of hair around their heads [...] Asiatic lions eat large animals as well, such as goats, nilgai, chital, sambhar and buffaloes. [...] Females have a gestation period of around four months. She will give birth to her young away from others and hide the cubs for the first six weeks of their lives."
1
1
0
0
0
0
I'm working on an NLP application, where I have a corpus of text files. I would like to create word vectors using the Gensim word2vec algorithm. I did a 90% training and 10% testing split. I trained the model on the appropriate set, but I would like to assess the accuracy of the model on the testing set. I have surfed the internet for any documentation on accuracy assessment, but I could not find any methods that allowed me to do so. Does anyone know of a function that does accuracy analysis? The way I processed my test data was that I extracted all the sentences from the text files in the test folder, and I turned it into a giant list of sentences. After that, I used a function that I thought was the right one (turns out it wasn't as it gave me this error: TypeError: don't know how to handle uri). Here is how I went about doing this: test_filenames = glob.glob('./testing/*.txt') print("Found corpus of %s safety/incident reports:" %len(test_filenames)) test_corpus_raw = u"" for text_file in test_filenames: txt_file = open(text_file, 'r') test_corpus_raw += unicode(txt_file.readlines()) print("Test Corpus is now {0} characters long".format(len(test_corpus_raw))) test_raw_sentences = tokenizer.tokenize(test_corpus_raw) def sentence_to_wordlist(raw): clean = re.sub("[^a-zA-Z]"," ", raw) words = clean.split() return words test_sentences = [] for raw_sentence in test_raw_sentences: if len(raw_sentence) > 0: test_sentences.append(sentence_to_wordlist(raw_sentence)) test_token_count = sum([len(sentence) for sentence in test_sentences]) print("The test corpus contains {0:,} tokens".format(test_token_count)) ####### THIS LAST LINE PRODUCES AN ERROR: TypeError: don't know how to handle uri texts2vec.wv.accuracy(test_sentences, case_insensitive=True) I have no idea how to fix this last part. Please help. Thanks in advance!
1
1
0
0
0
0
To get to grips with PyTorch (and deep learning in general) I started by working through some basic classification examples. One such example was classifying a non-linear dataset created using sklearn (full code available as notebook here) n_pts = 500 X, y = datasets.make_circles(n_samples=n_pts, random_state=123, noise=0.1, factor=0.2) x_data = torch.FloatTensor(X) y_data = torch.FloatTensor(y.reshape(500, 1)) This is then accurately classified using a pretty basic neural net class Model(nn.Module): def __init__(self, input_size, H1, output_size): super().__init__() self.linear = nn.Linear(input_size, H1) self.linear2 = nn.Linear(H1, output_size) def forward(self, x): x = torch.sigmoid(self.linear(x)) x = torch.sigmoid(self.linear2(x)) return x def predict(self, x): pred = self.forward(x) if pred >= 0.5: return 1 else: return 0 As I have an interest in health data I then decided to try and use the same network structure to classify a basic real-world dataset. I took heart rate data for one patient from here, and altered it so all values > 91 would be labelled as anomalies (e.g. a 1 and everything <= 91 labelled a 0). This is completely arbitrary, but I just wanted to see how the classification would work. The complete notebook for this example is here. What is not intuitive to me is why the first example reaches a loss of 0.0016 after 1,000 epochs, whereas the second example only reaches a loss of 0.4296 after 10,000 epochs. Perhaps I am being naive in thinking that the heart rate example would be much easier to classify. Any insights to help me understand why this is not what I am seeing would be great!
1
1
0
1
0
0
I'm currently trying to tokenize some language data using Python and was curious if there was an efficient or built-in method for splitting strings of sentences into separate words and also separate punctuation characters. For example: 'Hello, my name is John. What's your name?' If I used split() on this sentence then I would get ['Hello,', 'my', 'name', 'is', 'John.', "What's", 'your', 'name?'] What I want to get is: ['Hello', ',', 'my', 'name', 'is', 'John', '.', "What's", 'your', 'name', '?'] I've tried to use methods such as searching the string, finding punctuation, storing their indices, removing them from the string and then splitting the string, and inserting the punctuation accordingly but this method seems too inefficient especially when dealing with large corpora. Does anybody know if there's a more efficient way to do this? Thank you.
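For what it's worth, a standard-library sketch using re.findall can do this in one pass: it matches either a word (with an optional internal apostrophe group, so "What's" stays whole) or any single non-space punctuation character.

```python
import re

def tokenize(text):
    # \w+(?:'\w+)? keeps internal apostrophes attached ("What's");
    # [^\w\s] matches any single punctuation character on its own.
    return re.findall(r"\w+(?:'\w+)?|[^\w\s]", text)

print(tokenize("Hello, my name is John. What's your name?"))
# → ['Hello', ',', 'my', 'name', 'is', 'John', '.', "What's", 'your', 'name', '?']
```

Because the regex engine scans the string once, this scales linearly with corpus size, unlike the index-tracking approach described above.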
1
1
0
0
0
0
The problem is to find a time-efficient function that receives as inputs a sentence of words and a list of sequences of varying amounts of words (also known as ngrams) and returns for every sequence a list of indexes indicating where they occur in the sentence, and do it as efficiently as possible for large amounts of sequences. What I ultimately want is to replace the occurrences of ngrams in the sentence with a concatenation of the words in the sequence joined by "_". For example if my sequences are ["hello", "world"] and ["my", "problem"], and the sentence is "hello world this is my problem can you solve it please?" the function should return "hello_world this is my_problem can you solve it please?" What I did is group the sequences by the number of words each has and save that in a dictionary where the key is the amount and the value is a list of the sequences of that length. The variable ngrams is this dictionary: def replaceNgrams(line, ngrams): words = line.split() #Iterates backwards in the length of the sequences for n in list(ngrams.keys())[::-1]: #O(L*T) newWords = [] if len(words) >= n: terms = ngrams[n] i = 0 while i < len(words)+1-n: #O(L*Tn) #Gets a sequences of words from the sentences of the same length of the ngrams currently checking nwords = words[i:i+n].copy() #Checks if that sequence is in my list of sequences if nwords in terms: #O(Tn) newWords.append("_".join(nwords)) i+=n else: newWords.append(words[i]) i+=1 newWords += words[i:].copy() words = newWords.copy() return " ".join(words) This works as desired but I have too many sequences and too many lines to apply this to and this is way too slow for me (it would take a month to finish).
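As a sketch of a faster approach (not the original code): storing the sequences of each length in a set of tuples makes each membership check O(1) instead of a linear scan over a list, and a single left-to-right pass that tries the longest length first produces the replaced sentence directly.

```python
def replace_ngrams(line, ngrams):
    # ngrams: dict mapping length -> set of word tuples, e.g. {2: {("hello", "world")}}
    lengths = sorted(ngrams, reverse=True)   # try longest matches first
    words = line.split()
    out, i = [], 0
    while i < len(words):
        for n in lengths:
            cand = tuple(words[i:i + n])
            if len(cand) == n and cand in ngrams[n]:   # O(1) set lookup
                out.append("_".join(cand))
                i += n
                break
        else:
            out.append(words[i])
            i += 1
    return " ".join(out)

seqs = {2: {("hello", "world"), ("my", "problem")}}
print(replace_ngrams("hello world this is my problem can you solve it please?", seqs))
# → hello_world this is my_problem can you solve it please?
```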
1
1
0
0
0
0
I am training a custom NER Model using Spacy on a sample of 5000 text entries with 6 entities. While evaluating the trained model on an unseen sample (500 text entries), the F Score that I get for the overall model (93.8) has a large difference from the F Score for any individual entity. Can someone help me understand how the overall F Score is calculated and why there is so much difference between the overall F Score and the individual entity scores? I built my own custom named entity recognition (NER) model using Spacy. The size of my training data set was 5000 with 6 entities. Further, I tested my model on 500 samples and evaluated the model using the Scorer and GoldParse. Here is my code for evaluating performance on my test data - def evaluate(ner_model, examples): scorer = Scorer() for input_, annot in examples: doc_gold_text = ner_model.make_doc(input_) gold = GoldParse(doc_gold_text, entities=annot.get('entities')) pred_value = ner_model(input_) scorer.score(pred_value, gold) return scorer.scores Here is the result that I get - {'uas': 0.0, 'las': 0.0, 'ents_p': 93.62838106164233, 'ents_r': 93.95728476332452, 'ents_f': 93.79254457050243, 'ents_per_type': { 'ENTITY1': {'p': 6.467595956926736, 'r': 54.51002227171492, 'f': 11.563219748420247}, 'ENTITY2': {'p': 6.272470243289469, 'r': 49.219391947411665, 'f': 11.126934984520123}, 'ENTITY3': {'p': 18.741109530583213, 'r': 85.02742820264602, 'f': 30.712745497989392}, 'ENTITY4': {'p': 13.413228854574788, 'r': 70.58823529411765, 'f': 22.54284884283916}, 'ENTITY5': {'p': 19.481765834932823, 'r': 82.85714285714286, 'f': 31.546231546231546}, 'ENTITY6': {'p': 24.822695035460992, 'r': 64.02439024390245, 'f': 35.77512776831346}}, 'tags_acc': 0.0, 'token_acc': 100.0} Here you can see a large difference between ents_f and f for any other entity type. What is the relationship of the overall F Score of the model with the individual entity scores?
1
1
0
1
0
0
I am a newbie to NLP, I have a text with labels 0 and 1. How do I separate the labels and create a new column? Please help me. Here is my text with labels: Everything from acting to cinematography was solid. 1 Definitely worth checking out. 1 I purchased this and within 2 days it was no longer working!!!!!!!!! 0
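Assuming the label is always the last whitespace-separated token on each line, a minimal standard-library sketch is to rsplit once from the right; the resulting (text, label) pairs can then be loaded into two columns of a DataFrame.

```python
raw = [
    "Everything from acting to cinematography was solid. 1",
    "Definitely worth checking out. 1",
    "I purchased this and within 2 days it was no longer working!!!!!!!!! 0",
]

def split_label(line):
    # rsplit(maxsplit=1) splits only at the last run of whitespace,
    # so punctuation and digits inside the sentence are untouched.
    text, label = line.rsplit(maxsplit=1)
    return text, int(label)

rows = [split_label(line) for line in raw]
print(rows[0])
# → ('Everything from acting to cinematography was solid.', 1)
```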
1
1
0
0
0
0
Is it possible to delete or insert a step in a sklearn.pipeline.Pipeline object? I am trying to do a grid search with or without one step in the Pipeline object. And wondering whether I can insert or delete a step in the pipeline. I saw in the Pipeline source code, there is a self.steps object holding all the steps. We can get the steps by named_steps(). Before modifying it, I want to make sure, I do not cause unexpected effects. Here is an example code: from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.decomposition import PCA estimators = [('reduce_dim', PCA()), ('svm', SVC())] clf = Pipeline(estimators) clf Is it possible that we do something like steps = clf.named_steps(), then insert or delete in this list? Does this cause undesired effects on the clf object?
1
1
0
1
0
0
When running my script, I got the following feedbacks, 1 Traceback (most recent call last): File "testcore.py", line 20, in print(f'\tScore: {score[0]}, Value: {score[1]}') NameError: name 'score' is not defined The exact same script can run perfectly on another computer, I just don't get what is wrong. Here is my code: from pycorenlp import StanfordCoreNLP nlp = StanfordCoreNLP('http://localhost:9000') fhand=open('airbnbuk.txt', encoding='utf-8') count=0 for sentence in fhand: print(sentence) count=count+1 print(count) result = nlp.annotate(sentence, properties={ 'annotators': 'sentiment', 'outputFormat': 'json', 'timeout': '5000' }) for s in result['sentences']: score = (s['sentimentValue'], s['sentiment']) print(f'\tScore: {score[0]}, Value: {score[1]}') nlp.close()
1
1
0
0
0
0
I want to know about the parentheses after the object's name. I am learning AI and building an AI model. In the tutorial's code the author has written a line containing parentheses right after the object's name: self.model(...) where self.model is an object of the Network class. How can an object be called with parentheses as if it were a function, when it is an object and not a function? class Network(nn.Module): def __init__(self, input_size, nb_action): super(Network, self).__init__() self.input_size = input_size self.nb_action = nb_action self.fc1 = nn.Linear(input_size, 30) self.fc2 = nn.Linear(30, nb_action) def forward(self, state): x = F.relu(self.fc1(state)) q_values = self.fc2(x) return q_values class Dqn(): def __init__(self, input_size, nb_action, gamma): self.gamma = gamma self.reward_window = [] self.model = Network(input_size, nb_action) self.memory = ReplayMemory(100000) self.optimizer = optim.Adam(self.model.parameters(), lr = 0.001) self.last_state = torch.Tensor(input_size).unsqueeze(0) self.last_action = 0 self.last_reward = 0 def select_action(self, state): probs = F.softmax(self.model(Variable(state, volatile = True))*100) # <-- The problem is here where the self.model object is CALLED with Parenthesis. action = probs.multinomial(10) return action.data[0,0]
1
1
0
0
0
0
I have around 20k documents with 60 - 150 words. Out of these 20K documents, there are 400 documents for which the similar documents are known. These 400 documents serve as my test data. At present I am removing those 400 documents and using the remaining 19600 documents for training the doc2vec. Then I extract the vectors of the train and test data. Now for each test data document, I find its cosine distance with all the 19600 train documents and select the top 5 with the least cosine distance. If the similar document marked is present in these top 5 then I take it to be accurate. Accuracy% = No. of Accurate records / Total number of Records. The other way I find similar documents is by using the doc2Vec most similar method. Then I calculate accuracy using the above formula. The above two accuracies don't match. With each epoch one increases while the other decreases. I am using the code given here for training the Doc2Vec: https://medium.com/scaleabout/a-gentle-introduction-to-doc2vec-db3e8c0cce5e. I would like to know how to tune the hyperparameters so that I can get the maximum accuracy by using the above-mentioned formula. Should I use cosine distance to find the most similar documents or shall I use gensim's most similar function?
1
1
0
0
0
0
I'm new to NLP and gensim, currently trying to solve some NLP problems with gensim word2vec module. In my current understanding of word2vec, the result vectors/matrix should have all entries between -1 and 1. However, trying a simple one results in a vector which has entries greater than 1. I'm not sure which part is wrong, could anyone give some suggestions, please? I've used gensim utils.simple_preprocess to generate a list of list of token. The list looks like: [['buffer', 'overflow', 'in', 'client', 'mysql', 'cc', 'in', 'oracle', 'mysql', 'and', 'mariadb', 'before', 'allows', 'remote', 'database', 'servers', 'to', 'cause', 'denial', 'of', 'service', 'crash', 'and', 'possibly', 'execute', 'arbitrary', 'code', 'via', 'long', 'server', 'version', 'string'], ['the', 'xslt', 'component', 'in', 'apache', 'camel', 'before', 'and', 'before', 'allows', 'remote', 'attackers', 'to', 'read', 'arbitrary', 'files', 'and', 'possibly', 'have', 'other', 'unspecified', 'impact', 'via', 'an', 'xml', 'document', 'containing', 'an', 'external', 'entity', 'declaration', 'in', 'conjunction', 'with', 'an', 'entity', 'reference', 'related', 'to', 'an', 'xml', 'external', 'entity', 'xxe', 'issue']] I believe this is the correct input format for gensim word2vec. 
word2vec = models.word2vec.Word2Vec(sentences, size=50, window=5, min_count=1, workers=3, sg=1) vector = word2vec['overflow'] print(vector) I expect the output to be a vector containing probabilities (i.e., all between -1 and 1), but it actually turned out to be the following: [ 0.12800379 -0.7405527 -0.85575 0.25480416 -0.2535793 0.142656 -0.6361196 -0.13117172 1.1251501 0.5350017 0.05962601 -0.58876884 0.02858278 0.46106443 -0.22623934 1.6473309 0.5096218 -0.06609935 -0.70007527 1.0663376 -0.5668168 0.96070313 -1.180383 -0.58649933 -0.09380565 -0.22683378 0.71361005 0.01779896 0.19778453 0.74370056 -0.62354785 0.11807996 -0.54997736 0.10106519 0.23364201 -0.11299669 -0.28960565 -0.54400533 0.10737313 0.3354464 -0.5992898 0.57183135 -0.67273194 0.6867607 0.2173506 0.15364875 0.7696457 -0.24330224 0.46414775 0.98163396] You can see there are 1.6473309 and -1.180383 in the above vector.
1
1
0
0
0
0
I have a dictionary of foods: foods={ "chicken masala" : "curry", "chicken burger" : "burger", "beef burger" : "burger", "chicken soup" : "appetizer", "vegetable" : "curry" } Now I have a list of strings: queries = ["best burger", "something else"] I have to find out if there is any string in queries that has an entry in our food dictionary. Like in the above example it should return True for best burger. Currently, I am calculating cosine similarity between each string in the list for all the entries in foods.keys(). It works but it's very time inefficient. The food dictionary has almost 1000 entries. Is there any efficient way to do so? Edit: Here the best burger should be returned because there is burger in it and burger is also present in chicken burger in foods.keys(). I am basically trying to find out if there is any query which is a food type. This is how I am calculating : import re, math from collections import Counter WORD = re.compile(r'\w+') def get_cosine(text1, text2): vec1 = text_to_vector(text1.lower()) vec2 = text_to_vector(text2.lower()) intersection = set(vec1.keys()) & set(vec2.keys()) numerator = sum([vec1[x] * vec2[x] for x in intersection]) sum1 = sum([vec1[x]**2 for x in vec1.keys()]) sum2 = sum([vec2[x]**2 for x in vec2.keys()]) denominator = math.sqrt(sum1) * math.sqrt(sum2) if not denominator: return 0.0 else: return (float(numerator) / denominator) * 100 foods={ "chicken masala" : "curry", "chicken burger" : "burger", "beef burger" : "burger", "chicken soup" : "appetizer", "vegetable" : "curry" } queries = ["best burger", "something else"] flag = False food = [] for phrase in queries: for k in foods.keys(): cosine = get_cosine(phrase, k) if int(cosine) > 40: flag = True food.append(phrase) break print('Foods:', food) OUTPUT: Foods: ['best burger'] Solution: Though @Black Thunder's solution works for the example I have provided, it doesn't work for queries like best burgers, but this solution works in that case. 
Which is a major concern for me. Thanks @Andrej Kesely. This was the reason I went for the cosine similarity in my solution. But I think SequenceMatcher works better here.
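For reference, a minimal difflib.SequenceMatcher sketch of this idea (the threshold and the word-level matching are my assumptions, not part of the original question):

```python
from difflib import SequenceMatcher

foods = {"chicken masala": "curry", "chicken burger": "burger",
         "beef burger": "burger", "chicken soup": "appetizer",
         "vegetable": "curry"}
queries = ["best burgers", "something else"]

def is_food(phrase, threshold=0.6):
    # A phrase counts as food if any single word in it closely matches
    # any word appearing in the food dictionary's keys.
    food_words = {w for key in foods for w in key.split()}
    return any(SequenceMatcher(None, qw, fw).ratio() > threshold
               for qw in phrase.lower().split() for fw in food_words)

print([q for q in queries if is_food(q)])
# → ['best burgers']
```

Unlike exact word lookup, ratio("burgers", "burger") is about 0.92, so plural forms still match.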
1
1
0
0
0
0
I am writing a mini-chatbot and I can't seem to figure out a way to determine whether a user has answered "yes" or "no". For example, if the user has typed "okay", I'd like to know that they have essentially answered "yes". Or, if they've written "nope", I'd like to know that they've essentially answered "no". Using nltk's wordnet hasn't been much of a help. Here is what I tried: import nltk from nltk.corpus import wordnet as wn for syn in wn.synsets('yes'): print(syn.name(), syn.lemma_names()) I was hoping to get back something like yes.n.01 ['yes', 'okay', 'sure', 'yup', 'yeah'], but instead all I get is yes.n.01 ['yes']. I'm looking for a solution in Python, though it doesn't necessarily need to be through the nltk package.
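Since WordNet's synsets don't cover informal affirmations, a common workaround is a small hand-curated lookup; this sketch (the word lists are my own and not exhaustive) normalises the reply before checking:

```python
YES = {"yes", "y", "yeah", "yep", "yup", "sure", "ok", "okay", "definitely"}
NO = {"no", "n", "nope", "nah", "never", "negative"}

def classify(answer):
    # Lowercase and strip surrounding punctuation before lookup;
    # returns "yes", "no", or None when the reply is unrecognised.
    word = answer.strip().lower().strip(".,!?")
    if word in YES:
        return "yes"
    if word in NO:
        return "no"
    return None

print(classify("Okay!"))
# → yes
```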
1
1
0
0
0
0
(Reputation too low to post images, sorry) Essentially, for rows whose work_height, work_width, work_depth dimensions are missing but there's a description of those dimensions in the work_dimensions column, I want to parse the said description into the work_height, work_width, work_depth columns. There are a few types of structures available based on my exploration: __ unit x __ unit x __ unit. This one should be easy. __ unit x __ unit newline __ unit x __ unit, I believe these are two different image dimension settings possible for the same image. I want to create a new image item (row) with the second setting (or third or whatever). The written out mixed fractions, e.g. 16 7/8 in (42.8 cm). How is this supposed to be parsed? This is one of the hard ones. Since the unit column work_measurement_unit is generally mm, that's the unit to parse I presume (and even then I have to convert from cm to mm). Measurement Description, followed by the mixed fraction and other unit in parentheses above, i.e. Diameter: 19 3/7 in (72.5 cm). If I can learn these cases I will probably have no issue with any other case that could come up, so any help would be appreciated. I've only been able to come up with a solution for the first type of structure listed above. To access the df above I used: mask = (df['work_dimensions'] != '-1') & (df['work_dimensions'].notnull()) & ((df[['work_height','work_width','work_depth']] == -1.0).sum(axis=1) == 3) df[['work_dimensions','work_height','work_width','work_depth','work_measurement_unit']][mask] I don't see anything wrong with the results yet but I could be missing stuff.
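For the mixed-fraction cases, a rough standard-library sketch (the regexes and the preference for the parenthesised metric value are my assumptions) that falls back to converting a mixed fraction of inches to millimetres:

```python
import re
from fractions import Fraction

def parse_dimension(text):
    # Prefer the metric value in parentheses, e.g. "(42.8 cm)" -> 428.0 mm.
    m = re.search(r"\(([\d.]+)\s*(mm|cm)\)", text)
    if m:
        value, unit = float(m.group(1)), m.group(2)
        return value * 10 if unit == "cm" else value
    # Otherwise parse a (possibly mixed) fraction in inches, e.g. "16 7/8 in".
    m = re.search(r"(\d+)(?:\s+(\d+)/(\d+))?\s*in", text)
    if m:
        inches = int(m.group(1)) + (Fraction(int(m.group(2)), int(m.group(3)))
                                    if m.group(2) else 0)
        return float(inches) * 25.4   # inches -> mm
    return None

print(parse_dimension("Diameter: 19 3/7 in (72.5 cm)"))
# → 725.0
```

This could be applied per row with something like df['work_dimensions'].map(parse_dimension), though real data will likely need more cases.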
1
1
0
0
0
0
import sys, os parent = os.path.dirname(os.path.realpath("cobaie")) sys.path.append(parent + '/../../mitielib') from mitie import * The training process for a binary relation detector requires a MITIE NER object as input. So we load the saved NER model first. ner = named_entity_extractor("../../MITIE-models/english/ner_model.dat")
1
1
0
0
0
0
With Matcher() rules is there a way to tag/set a "label" on the token directly in the rule f.e. : [{ 'DEP' : 'ROOT', 'SET_LABEL' : 'ACTION' }], ......... many more .... and then in python code : if token.label == 'ACTION' : ........ using on_match is not useful if you have many more patterns, each one using a different LABEL, because there is no feedback about which MATCH occurred !? The ideal thing would be a sort of post-spaCy "parser" that acts on the tagging info that spaCy provides.
1
1
0
0
0
0
I am dealing with text from audio transcripts, and there are some unknown words. There are markers for each unknown word (e.g. "He unknown to the store"). I'm looking for the best way to represent the "unknown" word so as to mess up spacy's sentence dependency parsing the least. What is the best replacement for to increase odds that spacy's sentence dependency parser works the best across the widest range of sentences? Is a space/' ' or a '___' or a '...' or does it not matter? There is no structure to when/where the \ occur. thanks!
1
1
0
0
0
0
I have some code below that generates bigrams for my data frame column. import nltk import collections counts = collections.Counter() for sent in df["message"]: words = nltk.word_tokenize(sent) counts.update(nltk.bigrams(words)) counts = {k: v for k, v in counts.items() if v > 25} This works great for generating my most common bigrams in the 'message' column of my dataframe, BUT, I want to get bigrams that contain one verb and one noun per pair of bigrams only. Any help doing this with spaCy or nltk would be appreciated!
1
1
0
0
0
0
I am trying to apply GridSearchCV on the LatentDirichletAllocation using the sklearn library. The current pipeline looks like this: vectorizer = CountVectorizer(analyzer='word', min_df=10, stop_words='english', lowercase=True, token_pattern='[a-zA-Z0-9]{3,}' ) data_vectorized = vectorizer.fit_transform(doc_clean) #where doc_clean is processed text. lda_model = LatentDirichletAllocation(n_components =number_of_topics, max_iter=10, learning_method='online', random_state=100, batch_size=128, evaluate_every = -1, n_jobs = -1, ) search_params = {'n_components': [10, 15, 20, 25, 30], 'learning_decay': [.5, .7, .9]} model = GridSearchCV(lda_model, param_grid=search_params) model.fit(data_vectorized) Currently GridSearchCV uses the approximate log-likelihood as the score to determine which is the best model. What I would like to do is to change my scoring method to be based on the approximate perplexity of the model instead. According to sklearn's documentation of GridSearchCV, there is a scoring argument that I can use. However, I do not know how to apply perplexity as a scoring method, and I cannot find any examples online of people applying it. Is this possible?
1
1
0
0
0
0
how can I untokenize the output of this code? class Core: def __init__(self, user_input): pos = pop(user_input) subject = "" for token in pos: if token.dep == nsubj: subject = untokenize.untokenize(token) subject = S(subject) I tried: https://pypi.org/project/untokenize/ MosesDetokenizer .join() But I have this error for my last code (from this post): TypeError: 'spacy.tokens.token.Token' object is not iterable This error for .join(): AttributeError: 'spacy.tokens.token.Token' object has no attribute 'join' And for MosesDetokenizer: text = u" {} ".format(" ".join(tokens)) TypeError: can only join an iterable
1
1
0
0
0
0
So I was trying to tag a bunch of words in a list (POS tagging to be exact) like so: pos = [nltk.pos_tag(i,tagset='universal') for i in lw] where lw is a list of words (it's really long or I would have posted it but it's like [['hello'],['world']] (aka a list of lists with each list containing one word) but when I try and run it I get: Traceback (most recent call last): File "<pyshell#183>", line 1, in <module> pos = [nltk.pos_tag(i,tagset='universal') for i in lw] File "<pyshell#183>", line 1, in <listcomp> pos = [nltk.pos_tag(i,tagset='universal') for i in lw] File "C:\Users\my system\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\tag\__init__.py", line 134, in pos_tag return _pos_tag(tokens, tagset, tagger) File "C:\Users\my system\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\tag\__init__.py", line 102, in _pos_tag tagged_tokens = tagger.tag(tokens) File "C:\Users\my system\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\tag\perceptron.py", line 152, in tag context = self.START + [self.normalize(w) for w in tokens] + self.END File "C:\Users\my system\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\tag\perceptron.py", line 152, in <listcomp> context = self.START + [self.normalize(w) for w in tokens] + self.END File "C:\Users\my system\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\tag\perceptron.py", line 240, in normalize elif word[0].isdigit(): IndexError: string index out of range Can someone tell me why and how I get this error and how to fix it? Many thanks.
1
1
0
0
0
0
I am implementing multitask regression model using code from the Keras API under the shared layers section. There are two data sets, Let's call them data_1 and data_2 as follows. data_1 : shape(1434, 185, 37) data_2 : shape(283, 185, 37) data_1 is consists of 1434 samples, each sample is 185 characters long and 37 shows total number of unique characters is 37 or in another words the vocab_size. Comparatively data_2 consists of 283 characters. I convert the data_1 and data_2 into two dimensional numpy array as follows before giving it to the Embedding layer. data_1=np.argmax(data_1, axis=2) data_2=np.argmax(data_2, axis=2) That makes the shape of the data as follows. print(np.shape(data_1)) (1434, 185) print(np.shape(data_2)) (283, 185) Each number in the matrix represents index integer. The multitask model is as under. user_input = keras.layers.Input(shape=((185, )), name='Input_1') products_input = keras.layers.Input(shape=((185, )), name='Input_2') shared_embed=(keras.layers.Embedding(vocab_size, 50, input_length=185)) user_vec_1 = shared_embed(user_input ) user_vec_2 = shared_embed(products_input ) input_vecs = keras.layers.concatenate([user_vec_1, user_vec_2], name='concat') input_vecs_1=keras.layers.Flatten()(input_vecs) input_vecs_2=keras.layers.Flatten()(input_vecs) # Task 1 FC layers nn = keras.layers.Dense(90, activation='relu',name='layer_1')(input_vecs_1) result_a = keras.layers.Dense(1, activation='linear', name='output_1')(nn) # Task 2 FC layers nn1 = keras.layers.Dense(90, activation='relu', name='layer_2')(input_vecs_2) result_b = keras.layers.Dense(1, activation='linear',name='output_2')(nn1) model = Model(inputs=[user_input , products_input], outputs=[result_a, result_b]) model.compile(optimizer='rmsprop', loss='mse', metrics=['accuracy']) The model is visualized as follows. Then I fit the model as follows. model.fit([data_1, data_2], [Y_1,Y_2], epochs=10) Error: ValueError: All input arrays (x) should have the same number of samples. 
Got array shapes: [(1434, 185), (283, 185)] Is there any way in Keras where I can use two different sample size inputs or to some trick to avoid this error to achieve my goal of multitasking regression. Here is the minimum working code for testing. data_1=np.array([[25, 5, 11, 24, 6], [25, 5, 11, 24, 6], [25, 0, 11, 24, 6], [25, 11, 28, 11, 24], [25, 11, 6, 11, 11]]) data_2=np.array([[25, 11, 31, 6, 11], [25, 11, 28, 11, 31], [25, 11, 11, 11, 31]]) Y_1=np.array([[2.33], [2.59], [2.59], [2.54], [4.06]]) Y_2=np.array([[2.9], [2.54], [4.06]]) user_input = keras.layers.Input(shape=((5, )), name='Input_1') products_input = keras.layers.Input(shape=((5, )), name='Input_2') shared_embed=(keras.layers.Embedding(37, 3, input_length=5)) user_vec_1 = shared_embed(user_input ) user_vec_2 = shared_embed(products_input ) input_vecs = keras.layers.concatenate([user_vec_1, user_vec_2], name='concat') input_vecs_1=keras.layers.Flatten()(input_vecs) input_vecs_2=keras.layers.Flatten()(input_vecs) nn = keras.layers.Dense(90, activation='relu',name='layer_1')(input_vecs_1) result_a = keras.layers.Dense(1, activation='linear', name='output_1')(nn) # Task 2 FC layers nn1 = keras.layers.Dense(90, activation='relu', name='layer_2')(input_vecs_2) result_b = keras.layers.Dense(1, activation='linear',name='output_2')(nn1) model = Model(inputs=[user_input , products_input], outputs=[result_a, result_b]) model.compile(optimizer='rmsprop', loss='mse', metrics=['accuracy']) model.fit([data_1, data_2], [Y_1,Y_2], epochs=10)
1
1
0
1
0
0
I am using Sk Learn CountVectorizer on strings but CountVectorizer discards all the emojis in the text. For instance, Welcome should give us: ["\xf0\x9f\x91\x8b", "welcome"] However, when running: vect = CountVectorizer() vect.fit_transform([' Welcome']) I only get: ["welcome"] This has to do with the token_pattern which does not count the encoded emoji as a word, but is there a custom token_pattern to deal with emojis?
1
1
0
0
0
0
I have text where some sentences start with lowercase. I need to find them and replace them with the correct sentence case. Some punctuation is incorrect, i.e. a sentence starting after a full stop without a space. i.e. .this sentence and this.also this. and this.This one is not. replace with -> .This sentence And this.Also this. And this.This one is not. A Sublime Text 3 solution, regex, or Python NLTK solution is suitable. I tried this solution, but it is slow and does not find sentences without a space after the full stop. import nltk.data from nltk.tokenize import sent_tokenize text = """kjdshkjhf. this sentence and this.also this. and this. This one is not.""" aa=sent_tokenize(text) for a in aa: if (a[0].islower()): print a print "****"
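A pure-regex sketch of both fixes (assuming a space should also be inserted after a full stop that is directly followed by text):

```python
import re

def fix_case(text):
    # Insert a missing space after a full stop when a character follows directly.
    text = re.sub(r"\.(?=\S)", ". ", text)
    # Capitalise the first letter of the text and of each sentence after
    # sentence-ending punctuation followed by whitespace.
    return re.sub(r"(^|[.!?]\s+)([a-z])",
                  lambda m: m.group(1) + m.group(2).upper(), text)

print(fix_case("kjdshkjhf. this sentence and this.also this. and this. This one is not."))
# → Kjdshkjhf. This sentence and this. Also this. And this. This one is not.
```

This avoids sentence tokenisation entirely, so it is much faster than the sent_tokenize loop above on large files.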
1
1
0
0
0
0
I'm trying to tokenize the A.txt file and save the result into the B.txt file. The text I'm trying to process is Persian and I want to save it word by word in Persian. This is my code. This is main.py: import LevelOne import save_file import nltk original_data = " ".join(open("A.txt")) print('Processing') save_file.saving(LevelOne.spliter(original_data)) print('Done') This is LevelOne: import re import persian import stop_word def spliter(text): data = re.split(r'\W+',text) tokenized = [word for word in data if word not in stop_word.stop_words] return tokenized And this is the saving part: # -*- coding: utf-8 -*- def saving(infile): outfile = open('B.txt', 'w') replacements = {'پ':'\u067e', 'چ':'\u0686','ج':'\u062c', 'ح':'\u062d','خ':'\u062e', 'ه':'\u0647','ع':'\u0639', 'غ':'\u063a','ف':'\u0641', 'ق':'\u0642','ث':'\u062b', 'ص':'\u0635','ض':'\u0636', 'گ':'\u06af','ک':'\u06a9', 'م':'\u0645','ن':'\u0646', 'ت':'\u062a','ا':'\u0627', 'ل':'\u0644','ب':'\u0628', 'ي':'\u06cc','س':'\u0633', 'ش':'\u0634','و':'\u0648', 'ئ':'\u0626','د':'\u062f', 'ذ':'\u0630','ر':'\u0631', 'ز':'\u0632','ط':'\u0637', 'ظ':'\u0638','ژ':'\u0698', 'آ':'\u0622','ی':'\u064a', '؟':'\u061f'} data = " ".join(infile) print(data) for line in data: for src, target in replacements.items() : line = line.replace(src, target) outfile.write(line) outfile.close() but when I open the B.txt file, I see this: Ú Ù Ù¾Ø³Ø Ø³Ù Ø Ù Ø ÙˆØ ÛŒ Ú Ù Ø Ø Ø ØŸ The original file looks like this: گل پسر سلام خوبی چه خبر؟
1
1
0
0
0
0
C:\Users\CVL-Acoustics\Documents\bangla-sentence-correction-master>python train.py Sit back and relax, it will take some time to train the model... Vocabulary size 250000 WARNING:tensorflow:From C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn.py:417: calling reverse_sequence (from tensorflow.python.ops.array_ops) with seq_dim is deprecated and will be removed in a future version. Instructions for updating: seq_dim is deprecated, use seq_axis instead WARNING:tensorflow:From C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py:432: calling reverse_sequence (from tensorflow.python.ops.array_ops) with batch_dim is deprecated and will be removed in a future version. Instructions for updating: batch_dim is deprecated, use batch_axis instead WARNING:tensorflow:From train.py:228: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version. Instructions for updating: Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default. See @{tf.nn.softmax_cross_entropy_with_logits_v2}. 
epoch 1 training Traceback (most recent call last): File "C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1322, in _do_call return fn(*args) File "C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1307, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1409, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[6656,250000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape, Variable_1/read)]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. [[Node: rnn/while/cond/Add/_87 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_421_rnn/while/cond/Add", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](^_clooprnn/while/cond/ArgMax/dimension/_1)]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "train.py", line 321, in _, l = sess.run([train_op, loss], fd) File "C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 900, in run run_metadata_ptr) File "C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1135, in _run feed_dict_tensor, options, run_metadata) File "C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1316, in _do_run run_metadata) File "C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1335, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[6656,250000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape, Variable_1/read)]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. [[Node: rnn/while/cond/Add/_87 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_421_rnn/while/cond/Add", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](^_clooprnn/while/cond/ArgMax/dimension/_1)]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. 
Caused by op 'MatMul', defined at: File "train.py", line 218, in decoder_logits_flat = tf.add(tf.matmul(decoder_outputs_flat, W), b) File "C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2014, in matmul a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name) File "C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 4278, in mat_mul name=name) File "C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3414, in create_op op_def=op_def) File "C:\Users\CVL-Acoustics\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1740, in init self._traceback = self._graph._extract_stack() # pylint: disable=protected-access ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[6656,250000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape, Variable_1/read)]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. [[Node: rnn/while/cond/Add/_87 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_421_rnn/while/cond/Add", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](^_clooprnn/while/cond/ArgMax/dimension/_1)]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
1
1
0
0
0
0
I have a TextLineDataset that reads lines from a text file. The dataset reads the file and returns it in a sliding-window manner, so for example if my text file contains:

I am going to school
School is far from home

my dataset returns:

I am going
am going to
going to school
...

(assuming I want 3 words at a time, sliding by one word at each step). I am happy with that. But now, for each window returned by the dataset, I want to extract the first 2 words as my features and the last word as my label. Of course, I want this to be part of the computation graph (like my dataset), not done at run time. Here is my code:

```python
import tensorflow as tf

sentences = tf.data.TextLineDataset("data/train.src")
words = sentences.map(lambda string: tf.string_split([string]).values)
flat_words = words.flat_map(tf.data.Dataset.from_tensor_slices)
flat_words = flat_words.window(3, 1, 1, True).flat_map(lambda x: x.batch(3)).batch(4)

iterator = flat_words.make_initializable_iterator()
next_element = iterator.get_next()

sess = tf.Session()
sess.run(tf.tables_initializer())
sess.run(iterator.initializer)
print(sess.run(next_element))
```

Thanks in advance.
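Outside the graph, the intended split can be sketched in plain Python; in the tf.data pipeline the equivalent would be a `map` step such as `flat_words.map(lambda w: (w[:-1], w[-1]))` (an assumption on my part, not code from the question):

```python
def windows(tokens, size=3, shift=1):
    # Sliding windows of `size` tokens, advancing `shift` tokens at a time.
    return [tokens[i:i + size] for i in range(0, len(tokens) - size + 1, shift)]

def split_features_label(window):
    # First n-1 tokens are the features, the last token is the label.
    return window[:-1], window[-1]

tokens = "I am going to school".split()
for w in windows(tokens):
    print(split_features_label(w))
# (['I', 'am'], 'going')
# (['am', 'going'], 'to')
# (['going', 'to'], 'school')
```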
1
1
0
0
0
0
I want to learn NLP with Python, and I have some questions:

1. Which IDE is better?
2. What is a Jupyter notebook, and why does every tutorial use it? (Should I use it as my IDE for Python?)
3. Which package is best for the Persian language?
1
1
0
0
0
0
I am trying to solve a text classification problem using SVC in sklearn. I also wanted to check which vectorizer works best for my data: bag of words (CountVectorizer()) or TF-IDF (TfidfVectorizer()). What I've been doing so far is using the two vectorizers separately, one after the other, then comparing their results.

```python
# Bag of Words (BoW)
from sklearn.feature_extraction.text import CountVectorizer
count_vectorizer = CountVectorizer()
features_train_cv = count_vectorizer.fit_transform(features_train)

# TF-IDF
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vec = TfidfVectorizer()
features_train_tfidf = tfidf_vec.fit_transform(features_train)

# Instantiate SVC
classifier_linear = SVC(random_state=1, class_weight='balanced', kernel="linear", C=1000)

# Fit SVC with BoW features
classifier_linear.fit(features_train_cv, target_train)
features_test_cv = count_vectorizer.transform(features_test)
target_test_pred_cv = classifier_linear.predict(features_test_cv)

# Confusion matrix: SVC with BoW features
from sklearn.metrics import confusion_matrix
print(confusion_matrix(target_test, target_test_pred_cv))
# [[ 689  517]
#  [ 697 4890]]

# Fit SVC with TF-IDF features
classifier_linear.fit(features_train_tfidf, target_train)
features_test_tfidf = tfidf_vec.transform(features_test)
target_test_pred_tfidf = classifier_linear.predict(features_test_tfidf)

# Confusion matrix: SVC with TF-IDF features
# [[ 701  505]
#  [ 673 4914]]
```

I thought that using a Pipeline would make my code look more organized. But I noticed that the Pipeline code suggested in the sklearn tutorial on the module's official page includes both CountVectorizer() (bag of words) and TfidfTransformer():

```python
# from the sklearn official tutorial
from sklearn.pipeline import Pipeline
text_clf = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', MultinomialNB()),
])
```

My impression was that you only need to choose one vectorizer for your features. Wouldn't that mean the data gets vectorized twice, once with simple term frequencies and then again with TF-IDF? How would this code work?
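For reference (not part of the question): TfidfTransformer is not a second vectorizer; it reweights the count matrix that CountVectorizer produces, and with default settings TfidfVectorizer is exactly that two-step composition. A small sketch to check the equivalence:

```python
import numpy as np
from sklearn.feature_extraction.text import (
    CountVectorizer, TfidfTransformer, TfidfVectorizer,
)

docs = ["the cat sat", "the dog sat", "the cat ran"]

# One step: TfidfVectorizer tokenizes, counts, and reweights.
one_step = TfidfVectorizer().fit_transform(docs)

# Two steps: raw counts first, then TF-IDF reweighting of those counts.
counts = CountVectorizer().fit_transform(docs)
two_step = TfidfTransformer().fit_transform(counts)

print(np.allclose(one_step.toarray(), two_step.toarray()))  # True
```

So the tutorial pipeline does not vectorize twice; it is just the decomposed form of a single TF-IDF vectorization.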
1
1
0
0
0
0
I have a CSV data file containing a column 'notes' with satisfaction answers in Hebrew. I would like to use sentiment analysis to assign a score to each word or bigram in the data and get positive/negative probabilities using logistic regression. My code so far:

```python
# PYTHONIOENCODING="UTF-8" is set in the environment
import nltk
import numpy as np
import pandas as pd

df = pd.read_csv('keep.csv', encoding='utf-8', usecols=['notes'])
txt = df.notes.str.lower().str.replace(r'\|', ' ').str.cat(sep=' ')

words = nltk.tokenize.word_tokenize(txt)
tokens = [word.lower() for word in words if word.isalpha()]
bigrm = list(nltk.bigrams(tokens))

word_index = {}
current_index = 0
for token in tokens:
    if token not in word_index:
        word_index[token] = current_index
        current_index += 1

def tokens_to_vector(tokens, label):
    x = np.zeros(len(word_index) + 1)
    for t in tokens:
        i = word_index[t]
        x[i] += 1
    x = x / x.sum()
    x[-1] = label
    return x

N = len(word_index)
data = np.zeros((N, len(word_index) + 1))
i = 0
for token in tokens:
    xy = tokens_to_vector(tokens, 1)
    data[i, :] = xy
    i += 1
```

This loop isn't working. How can I generate the data and then get positive/negative probabilities for each bigram?
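One likely culprit (my reading, not stated in the question) is that the loop builds one row per token while passing the full token list every time, so every row is identical and the row count does not match the number of documents. A minimal sketch of a per-document normalized count matrix:

```python
import numpy as np

def build_matrix(docs_tokens, word_index):
    # One row per document; each row holds normalized token counts.
    X = np.zeros((len(docs_tokens), len(word_index)))
    for row, tokens in enumerate(docs_tokens):
        for t in tokens:
            X[row, word_index[t]] += 1
        total = X[row].sum()
        if total:
            X[row] /= total
    return X

vocab = {"good": 0, "bad": 1}
X = build_matrix([["good", "good"], ["good", "bad"]], vocab)
print(X)
# [[1.  0. ]
#  [0.5 0.5]]
```

The same idea extends to bigrams by building `word_index` over `nltk.bigrams(tokens)` instead of individual tokens; a matrix like this can then be fed to `LogisticRegression`, whose `predict_proba` gives the positive/negative probabilities.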
1
1
0
0
0
0
In NLTK we can convert a parenthesized tree string into an actual Tree object. However, when a token itself contains parentheses, the parsing is not what you would expect, since NLTK parses those parentheses as a new node. As an example, take the sentence

They like(d) it a lot

This could be parsed as

(S (NP (PRP They)) (VP like(d) (NP (PRP it)) (NP (DT a) (NN lot))) (. .))

But if you parse this with NLTK into a tree and print it, it is clear that the (d) is parsed as a new node, which is no surprise:

```python
from nltk import Tree

s = '(S (NP (PRP They)) (VP like(d) (NP (PRP it)) (NP (DT a) (NN lot))) (. .))'
tree = Tree.fromstring(s)
print(tree)
```

The result is

(S (NP (PRP They)) (VP like (d ) (NP (PRP it)) (NP (DT a) (NN lot))) (. .))

So (d ) is a node inside the VP rather than part of the token like. Is there a way to escape parentheses in the tree parser?
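I am not aware of an escape flag on Tree.fromstring itself, but the Penn Treebank convention writes literal brackets inside tokens as -LRB- / -RRB-, so pre-escaping the tokens before building the tree string is one workaround sketch:

```python
def escape_parens(token):
    # Penn Treebank convention: literal brackets become -LRB- / -RRB-
    # so the bracketed-tree parser no longer sees them as structure.
    return token.replace("(", "-LRB-").replace(")", "-RRB-")

def unescape_parens(token):
    return token.replace("-LRB-", "(").replace("-RRB-", ")")

print(escape_parens("like(d)"))            # like-LRB-d-RRB-
print(unescape_parens("like-LRB-d-RRB-"))  # like(d)
```

With the token written as `like-LRB-d-RRB-` in the bracketed string, it stays a single leaf of the VP, and the original surface form can be restored with `unescape_parens` when reading the leaves back out.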
1
1
0
0
0
0
I have around 7,000 sentences for which I have done refined named-entity recognition (i.e., for specific entities) using spaCy. Now I want to do relationship extraction (basically causal inference), and I do not know how to use the NER output to provide a training set. As far as I have read, there are different approaches to performing relationship extraction:

1. Handwritten patterns
2. Supervised machine learning
3. Semi-supervised machine learning

Since I want to use supervised machine learning, I need training data. It would be nice if anyone could give me some direction; many thanks. Here is a screenshot of my data frame; the entities are provided by a customised spaCy model. I also have access to the syntactic dependencies and part-of-speech tags of each sentence, as given by spaCy.
1
1
0
0
0
0
I am working on Google Colab using Python, and I have 12 GB of RAM. I am trying to use a word2vec model pre-trained by Google to represent sentences as vectors. The vectors should all have the same length even when the sentences do not have the same number of words, so I use padding (the maximum sentence length here is my variable max). The problem is that every time I try to build a matrix containing all of my vectors, I quickly run out of RAM (around the 20k-th of 128k vectors). This is my code:

```python
final_x_train = []
l = np.zeros((max, 300))  # the length of a Google pre-trained vector is 300

def buildWordVector(new_X, sent, model, l):
    for x in range(len(sent)):
        try:
            l[x] = list(model[sent[x]])
            gc.collect()  # doesn't do anything except slow down the run
        except KeyError:
            continue
    new_X.append([list(x) for x in l])

for i in new_X_train:
    buildWordVector(final_x_train, i, model, l)
    gc.collect()  # doesn't do anything except slow down the run
```

All the variables that I have:

df: 16.8MiB
new_X_train: 1019.1KiB
X_train: 975.5KiB
y_train: 975.5KiB
new_X_test: 247.7KiB
X_test: 243.9KiB
y_test: 243.9KiB
l: 124.3KiB
final_x_train: 76.0KiB
stop_words: 8.2KiB

But I am at 12 GB/12 GB of RAM, and the session has expired. As you can see, the garbage collector is not doing anything, apparently because it cannot see the variables, but I really need a solution to this problem. Can anyone help me, please?
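A hedged sketch of a lower-memory approach (assumptions: `lookup` stands in for the gensim model's token-to-vector mapping; nothing here is from the question): preallocate a single float32 array instead of growing nested Python lists, since lists of Python floats carry per-object overhead that multiplies the footprint many times over:

```python
import numpy as np

def pad_embed(sentences, lookup, max_len, dim=300):
    # One preallocated float32 tensor: len(sentences) x max_len x dim.
    # Unknown tokens and padding positions stay zero.
    out = np.zeros((len(sentences), max_len, dim), dtype=np.float32)
    for i, sent in enumerate(sentences):
        for j, tok in enumerate(sent[:max_len]):
            vec = lookup.get(tok)
            if vec is not None:
                out[i, j] = vec
    return out

# Tiny demo with a toy 4-dimensional "model".
lookup = {"hi": np.ones(4, dtype=np.float32)}
X = pad_embed([["hi", "unknown"]], lookup, max_len=3, dim=4)
print(X.shape)  # (1, 3, 4)
```

float32 halves the footprint relative to float64, and because the rows live inside one NumPy buffer there are no intermediate Python lists left for the garbage collector to chase while `final_x_train` still references them.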
1
1
0
1
0
0