Dataset columns:
text: string (length 0 to 27.6k)
python: int64 (0 or 1)
DeepLearning or NLP: int64 (0 or 1)
Other: int64 (0 or 1)
Machine Learning: int64 (0 or 1)
Mathematics: int64 (0 or 1)
Trash: int64 (0 or 1)
I am using gensim's doc2vec implementation, and I have a few thousand documents tagged with four labels: yield TaggedDocument(text_tokens, [labels]) I'm training a Doc2Vec model on a list of these TaggedDocuments. However, I'm not sure how to infer the tag for a document that was not seen during training. I see that there is an infer_vector method which returns the embedding vector, but how can I get the most likely label from that? One idea would be to obtain the vector for every label that I have and then calculate the cosine similarity between these vectors and the vector for the new document I want to classify. Is this the way to go? If so, how can I get the vectors for each of my four labels?
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
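The cosine-similarity idea from the question can be sketched directly: when documents are trained with shared label tags, gensim learns one doctag vector per label (retrievable as model.docvecs[label]). Below, plain numpy stand-ins replace model.docvecs[...] and model.infer_vector(...), which are assumptions about the questioner's setup; the vectors are toy data.

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two dense vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Stand-ins: label_vecs would be {label: model.docvecs[label] for label in labels},
# and new_vec would be model.infer_vector(new_doc_tokens).
label_vecs = {"sports": np.array([1.0, 0.2]), "politics": np.array([0.1, 1.0])}
new_vec = np.array([0.9, 0.3])

# pick the label whose trained doctag vector is closest to the inferred vector
best_label = max(label_vecs, key=lambda lab: cosine(label_vecs[lab], new_vec))
```

gensim's docvecs.most_similar([inferred_vector]) performs the same nearest-tag lookup in one call, returning the closest trained tags directly.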
I am using gensim LdaMulticore to extract topics. It works perfectly fine from a Jupyter/IPython notebook, but when I run it from the command prompt, the loop runs indefinitely: once execution arrives at the LdaMulticore function, it starts again from the beginning. Please help me, as I am a novice. if __name__ == '__main__': model = models.LdaMulticore(corpus=corpus_train, id2word=dictionary, num_topics=20, chunksize=4000, passes=30, alpha=0.5, eta=0.05, decay=0.5, eval_every=10, workers=3, minimum_probability=0) **RESULTS:** Moving to Topics Extraction Script--------------------------------- 2017-08-18 18:59:36,448 : INFO : using serial LDA version on this node 2017-08-18 18:59:37,183 : INFO : running online LDA training, 20 topics, 1 passes over the supplied corpus of 400 documents, updating every 12000 documents, evaluating every ~400 documents, iterating 50x with a convergence threshold of 0.001000 2017-08-18 18:59:37,183 : WARNING : too few updates, training might not converge; consider increasing the number of passes or iterations to improve accuracy 2017-08-18 18:59:37,183 : INFO : training LDA model using 3 processes 2017-08-18 18:59:37,214 : INFO : PROGRESS: pass 0, dispatched chunk #0 = documents up to #400/400, outstanding queue size 1 Importing required Packages
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
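The symptom described (the script restarting from the top when LdaMulticore spawns workers, printing "Importing required Packages" repeatedly) is the classic Windows multiprocessing pitfall: worker processes re-import the main module, so any unguarded top-level work runs again in every worker. A minimal, library-free sketch of the required structure (the function names are illustrative, not from the question's code):

```python
import multiprocessing as mp

def square(x):
    return x * x

def main():
    # Build the dictionary/corpus and call LdaMulticore HERE, not at module
    # level: on Windows every worker re-imports this file, and unguarded
    # top-level code (loading data, logging setup, training) would execute
    # again in each worker process, which looks like an endless restart loop.
    with mp.Pool(2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == "__main__":
    print(main())
```

Applied to the question's script, this means moving everything above the `if __name__ == '__main__':` guard (corpus construction, the "Importing required Packages" print, etc.) inside a function called from the guard.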
I am new to Python programming. I have a pandas data frame with two string columns. The data frame looks like below: Case Action Create Create New Account Create New Account Create New Account Create New Account Create Old Account Delete Delete New Account Delete New Account Delete Old Account Delete Old Account Delete Old Account Here we can see that for Create, 4 out of 5 actions are Create New Account, i.e. 4/5 (= 80%). Similarly, for the Delete case, the most common action is Delete Old Account. So my requirement is: the next time any case like Create comes in, I should get 'Create New Account' as output along with its frequency score. Expected output: Case Action Score Create Create New Account 80 Delete Delete Old Account 60
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
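One possible approach with pandas groupby and value_counts; the column names and data come from the question, while the summarize helper is my own naming:

```python
import pandas as pd

df = pd.DataFrame({
    "Case":   ["Create"] * 5 + ["Delete"] * 5,
    "Action": ["Create New Account"] * 4 + ["Create Old Account"]
            + ["Delete New Account"] * 2 + ["Delete Old Account"] * 3,
})

def summarize(actions):
    # relative frequencies of each Action within one Case, most common first
    freq = actions.value_counts(normalize=True)
    return pd.Series({"Action": freq.index[0], "Score": freq.iloc[0] * 100})

# one row per Case: its most frequent Action and that action's share in percent
out = df.groupby("Case")["Action"].apply(summarize).unstack().reset_index()
```

groupby-apply returning a Series produces a MultiIndexed result; unstack() turns the inner level into the Action and Score columns of the expected output.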
I'm new to Python and I'm trying to teach myself language processing. NLTK in python has a function called FreqDist that gives the frequency of words in a text, but for some reason it's not working properly. This is what the tutorial has me write: fdist1 = FreqDist(text1) vocabulary1 = fdist1.keys() vocabulary1[:50] So basically it's supposed to give me a list of the 50 most frequent words in the text. When I run the code, though, the result is the 50 least frequent words in order of least frequent to most frequent, as opposed to the other way around. The output I am getting is as follows: [u'succour', u'four', u'woods', u'hanging', u'woody', u'conjure', u'looking', u'eligible', u'scold', u'unsuitableness', u'meadows', u'stipulate', u'leisurely', u'bringing', u'disturb', u'internally', u'hostess', u'mohrs', u'persisted', u'Does', u'succession', u'tired', u'cordially', u'pulse', u'elegant', u'second', u'sooth', u'shrugging', u'abundantly', u'errors', u'forgetting', u'contributed', u'fingers', u'increasing', u'exclamations', u'hero', u'leaning', u'Truth', u'here', u'china', u'hers', u'natured', u'substance', u'unwillingness...] I'm copying the tutorial exactly, but I must be doing something wrong. Here is the link to the tutorial: http://www.nltk.org/book/ch01.html#sec-computing-with-language-texts-and-words The example is right under the heading "Figure 1.3: Counting Words Appearing in a Text (a frequency distribution)" Does anyone know how I might fix this?
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
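In NLTK 3 the supported way to get the top-n words is FreqDist.most_common(n); keys() is no longer guaranteed to be frequency-ordered and is not sliceable in Python 3. Since FreqDist subclasses collections.Counter, the behavior can be sketched without NLTK installed:

```python
from collections import Counter

# FreqDist(tokens) behaves like Counter(tokens) for this purpose
tokens = ["the", "whale", "the", "sea", "the", "whale", "ship"]
fdist = Counter(tokens)

# (word, count) pairs, highest frequency first
top_two = fdist.most_common(2)
```

With the book's text1, `fdist1.most_common(50)` is the replacement for `vocabulary1[:50]`.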
I'm using langdetect to determine the language of a set of strings which I know are either in English or French. Sometimes, langdetect tells me the language is Romanian for a string I know is in French. How can I make langdetect choose between English or French only, and not all other languages? Thanks!
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
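langdetect has no built-in whitelist option, but its detect_langs function returns every candidate language with a probability, so the restriction can be applied afterwards. A sketch of that filtering step as a pure function, where the (lang, prob) pairs stand in for langdetect's output (built as `[(c.lang, c.prob) for c in detect_langs(text)]`):

```python
def pick_language(candidates, allowed=("en", "fr")):
    """candidates: list of (language_code, probability) pairs; choose the most
    probable language among `allowed`, ignoring everything else."""
    scores = {lang: prob for lang, prob in candidates if lang in allowed}
    if not scores:
        return allowed[0]  # neither language was proposed; fall back
    return max(scores, key=scores.get)

# the "Romanian" misdetection case: fr wins once the choice is restricted
result = pick_language([("ro", 0.57), ("fr", 0.43)])
```

An alternative is pruning langdetect's profiles directory down to the en and fr profiles, but post-filtering as above needs no changes to the installed package.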
I'm using CountVectorizer to create a sparse matrix representation of a co-occurrence matrix. I have a list of sentences, and I have another list (vector) of "weights" - the number of times I'd like each sentence's tokens to be counted. It's possible to create a list with each sentence repeated many times according to its weight, but this is terribly inefficient and un-pythonic - some of my weights are in the millions and up. How can I efficiently tell CountVectorizer to use the weight vector I have?
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
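Because repeating a sentence w times simply multiplies its row of counts by w, one row per unique sentence can be kept and the rows scaled with a sparse diagonal matrix - no duplication needed. A sketch on toy sentences and weights:

```python
import numpy as np
from scipy import sparse
from sklearn.feature_extraction.text import CountVectorizer

sentences = ["big cat", "big dog dog"]
weights = np.array([3, 2])  # pretend sentence 0 occurs 3x, sentence 1 occurs 2x

X = CountVectorizer().fit_transform(sentences)   # one row per unique sentence
Xw = sparse.diags(weights) @ X                   # row i scaled by weights[i]

# a weighted term co-occurrence matrix, equal to what you'd get by
# materializing millions of repeated rows: X.T @ diag(w) @ X
cooc = (X.T @ sparse.diags(weights) @ X).toarray()
```

Everything stays sparse until the final toarray(), so million-scale weights cost nothing extra.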
In spaCy you can set extensions for documents like this: Doc.set_extension('chapter_id', default='') doc = nlp('This is my text') doc._.chapter_id = 'This is my ID' However, I have thousands of text files that should be handled by NLP, and spaCy suggests using pipe for this: docs = nlp.pipe(array_of_texts) How can I apply my extension values during pipe?
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
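nlp.pipe accepts (text, context) tuples when called with as_tuples=True, which lets per-document values ride along and be attached as the docs stream out. A minimal sketch with a blank pipeline standing in for the real loaded model:

```python
import spacy
from spacy.tokens import Doc

# force=True keeps re-runs from raising if the extension is already registered
Doc.set_extension("chapter_id", default="", force=True)

nlp = spacy.blank("en")  # stands in for your loaded model

array_of_texts = [("This is my text", "chapter-1"),
                  ("Another text", "chapter-2")]

docs = []
for doc, chapter_id in nlp.pipe(array_of_texts, as_tuples=True):
    doc._.chapter_id = chapter_id  # attach the context value to each doc
    docs.append(doc)
```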
I have developed a Rasa intent classification model which shows the correct intents and entities from the training data, but in addition it also shows an intent ranking with all the other intents, and I don't want that to be shown. Can anyone help me remove it from my output? Thanks for the help. Code for the model: from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals # from rasa_nlu.converters import load_data from rasa_nlu.training_data import load_data from rasa_nlu.config import RasaNLUModelConfig #from rasa_nlu.config import RasaNLUConfig from rasa_nlu.model import Trainer, Metadata, Interpreter from rasa_nlu import config def train(data, config_file, model_dir): training_data = load_data(data) configuration = config.load(config_file) trainer = Trainer(configuration) trainer.train(training_data) model_directory = trainer.persist(model_dir, fixed_model_name = 'chat') def run(): interpreter = Interpreter.load('./models/nlu/default/chat') print(interpreter.parse('buy a pendrive from amazon')) #print(interpreter.parse(u'What is the reivew for the movie Die Hard?')) if __name__ == '__main__': #train('./data/testData.json', './config/config.yml', './models/nlu') run() To train the model, uncomment the train(...) call and comment out run(); to run it, do the reverse. In the output after running, I just want to remove the intent ranking.
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=1, Mathematics=0, Trash=0
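interpreter.parse returns a plain Python dict, so the simplest route is to drop the "intent_ranking" key before printing. A sketch on a dict shaped like the parse output (the field values are illustrative, not real rasa_nlu output):

```python
def without_ranking(parse_result):
    # parse() returns a plain dict; remove the "intent_ranking" entry
    cleaned = dict(parse_result)        # shallow copy, original left intact
    cleaned.pop("intent_ranking", None)
    return cleaned

# toy dict shaped like rasa_nlu's parse output
result = without_ranking({
    "intent": {"name": "buy", "confidence": 0.93},
    "entities": [{"entity": "item", "value": "pendrive"}],
    "intent_ranking": [{"name": "buy", "confidence": 0.93},
                       {"name": "greet", "confidence": 0.04}],
})
```

In the question's run() this would become `print(without_ranking(interpreter.parse('buy a pendrive from amazon')))`.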
I followed the weather Rasa chatbot tutorial by Justina Petraityte; you can find the GitHub repository here. Yet my chatbot never recognizes the intent I try to provide, which should be the location, and I don't know how to handle this case, since it creates an error when the weather API is called with an empty location. For instance, I tried to ask for the weather in Italy but, as you can see here, it doesn't recognize Italy as an intent even though it was in data.json. For instance: [image showing an example where it doesn't recognize the intent] So, what should be done when the intent isn't recognized? Should we still save it to stories.md? Content of domain file: action_factory: null action_names: - utter_greet - utter_goodbye - utter_ask_location - action_weather actions: - utter_greet - utter_goodbye - utter_ask_location - actions.ActionWeather config: store_entities_as_slots: true entities: - location intents: - greet - goodbye - inform slots: location: initial_value: null type: rasa_core.slots.TextSlot templates: utter_ask_location: - text: In what location? utter_goodbye: - text: Talk to you later. - text: Bye bye :( utter_greet: - text: Hello! How can I help? topics: [] Rasa Core version: (MoodbotEnv) mike@mike-thinks:~/Programing/Rasa_tutorial/moodbot4$ pip list : ... rasa-core (0.9.0a3) rasa-nlu (0.12.3) Python version: (MoodbotEnv) mike@mike-thinks:~/Programing/Rasa_tutorial/moodbot4$ python -V Python 3.5.2 Operating system: Linux 16.04
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
I want to use conditional GANs to generate images for one domain (denoted domain A) given input images from a second domain (denoted domain B) together with class information. Both domains are linked by the same label information (every image of domain A is linked to an image of domain B and a specific label). My generator so far in Keras is the following: def generator_model_v2(): global BATCH_SIZE inputs = Input((IN_CH, img_cols, img_rows)) e1 = BatchNormalization(mode=0)(inputs) e2 = Flatten()(e1) e3 = BatchNormalization(mode=0)(e2) e4 = Dense(1024, activation="relu")(e3) e5 = BatchNormalization(mode=0)(e4) e6 = Dense(512, activation="relu")(e5) e7 = BatchNormalization(mode=0)(e6) e8 = Dense(512, activation="relu")(e7) e9 = BatchNormalization(mode=0)(e8) e10 = Dense(IN_CH * img_cols *img_rows, activation="relu")(e9) e11 = Reshape((3, 28, 28))(e10) e12 = BatchNormalization(mode=0)(e11) e13 = Activation('tanh')(e12) model = Model(input=inputs, output=e13) return model So far my generator takes as input the images from domain A (and aims to output images from domain B). I want to somehow also input the class information for the input from domain A, so as to produce images of the same class for domain B. How can I add the label information after the flattening - so that instead of an input of size 1x1024 I have, for example, 1x1025? Can I use a second Input for the class information in the generator? And if yes, how do I then call the generator from the GAN training procedure?
The training procedure: discriminator_and_classifier_on_generator = generator_containing_discriminator_and_classifier( generator, discriminator, classifier) generator.compile(loss=generator_l1_loss, optimizer=g_optim) discriminator_and_classifier_on_generator.compile( loss=[generator_l1_loss, discriminator_on_generator_loss, "categorical_crossentropy"], optimizer="rmsprop") discriminator.compile(loss=discriminator_loss, optimizer=d_optim) # rmsprop classifier.compile(loss="categorical_crossentropy", optimizer=c_optim) for epoch in range(30): for index in range(int(X_train.shape[0] / BATCH_SIZE)): image_batch = Y_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE] label_batch = LABEL_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE] # replace with your data here generated_images = generator.predict(X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE]) real_pairs = np.concatenate((X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], image_batch),axis=1) fake_pairs = np.concatenate((X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], generated_images), axis=1) X = np.concatenate((real_pairs, fake_pairs)) y = np.concatenate((np.ones((100, 1, 64, 64)), np.zeros((100, 1, 64, 64)))) d_loss = discriminator.train_on_batch(X, y) discriminator.trainable = False c_loss = classifier.train_on_batch(image_batch, label_batch) classifier.trainable = False g_loss = discriminator_and_classifier_on_generator.train_on_batch( X_train[index * BATCH_SIZE:(index + 1) * BATCH_SIZE, :, :, :], [image_batch, np.ones((100, 1, 64, 64)), label_batch]) discriminator.trainable = True classifier.trainable = True The code is implementation of conditional dcgans (with the addition of a classifier over the discriminator). 
And the network's functions are: def generator_containing_discriminator_and_classifier(generator, discriminator, classifier): inputs = Input((IN_CH, img_cols, img_rows)) x_generator = generator(inputs) merged = merge([inputs, x_generator], mode='concat', concat_axis=1) discriminator.trainable = False x_discriminator = discriminator(merged) classifier.trainable = False x_classifier = classifier(x_generator) model = Model(input=inputs, output=[x_generator, x_discriminator, x_classifier]) return model def generator_containing_discriminator(generator, discriminator): inputs = Input((IN_CH, img_cols, img_rows)) x_generator = generator(inputs) merged = merge([inputs, x_generator], mode='concat',concat_axis=1) discriminator.trainable = False x_discriminator = discriminator(merged) model = Model(input=inputs, output=[x_generator,x_discriminator]) return model
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=1, Mathematics=0, Trash=0
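To answer the "second Input" idea concretely: yes, the functional API supports multiple inputs, and the one-hot label vector can be concatenated right after the flatten. A hedged sketch in current (tf.)Keras syntax rather than the legacy merge/mode API used in the question; layer sizes and N_CLASSES are assumptions, and batch-norm layers are omitted for brevity:

```python
from tensorflow.keras.layers import Concatenate, Dense, Flatten, Input, Reshape
from tensorflow.keras.models import Model

IN_CH, img_cols, img_rows, N_CLASSES = 3, 28, 28, 10  # assumed dimensions

def generator_with_labels():
    img_in = Input(shape=(IN_CH, img_cols, img_rows))
    label_in = Input(shape=(N_CLASSES,))        # one-hot class vector
    x = Flatten()(img_in)
    x = Concatenate()([x, label_in])            # image features + label, as asked
    x = Dense(1024, activation="relu")(x)
    x = Dense(512, activation="relu")(x)
    x = Dense(IN_CH * img_cols * img_rows, activation="tanh")(x)
    out = Reshape((IN_CH, img_cols, img_rows))(x)
    return Model([img_in, label_in], out)

generator = generator_with_labels()
# the training loop then feeds both tensors, e.g.
# generated_images = generator.predict([image_batch, one_hot_label_batch])
```

The combined GAN model would likewise take [img_in, label_in] as its inputs and pass both through to this generator before the frozen discriminator/classifier.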
I want to use CoreNLP in production, so it should be scalable enough (5000 requests between 9am and 5pm). I am using the Python wrapper pycorenlp and the Flask framework as an API endpoint. This Flask API endpoint is hosted on Elastic Beanstalk (AWS). Reason: http://flask.pocoo.org/docs/dev/deploying/ I know it's possible to run the Stanford CoreNLP server multithreaded. But is this enough? Should I be running multiple CoreNLP servers? What are the best practices to make this combination scalable enough? I am assuming that the CoreNLP server should be running on the same server where the Flask endpoint is hosted.
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
I am extracting text from a .pdf file using the PyPDF2 package. I am getting output, but not in its desired form, and I am unable to find where the problem is. The code snippet is as follows: import PyPDF2 def Read(startPage, endPage): global text text = [] cleanText = " " pdfFileObj = open('F:\\Pen Drive 8 GB\\PDF\\Handbooks\\book1.pdf', 'rb') pdfReader = PyPDF2.PdfFileReader(pdfFileObj) num_pages = pdfReader.numPages print(num_pages) while (startPage <= endPage): pageObj = pdfReader.getPage(startPage) text += pageObj.extractText() startPage += 1 pdfFileObj.close() for myWord in text: if myWord != ' ': cleanText += myWord text = cleanText.strip().split() print(text) Read(3, 3) The output which I am getting at present is attached for reference. Any help is highly appreciated.
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
I have a text corpus with item descriptions in English, Russian and Polish. The corpus has 68K observations; some are written in English, some in Russian, and some in Polish. Could you tell me how to properly and cost-efficiently implement word stemming in this case? I cannot use an English stemmer on Russian words and vice versa. Unfortunately, I could not find a good language identifier. E.g. langdetect works too slowly and often incorrectly; for example, when I try to identify the language of the English word 'today': detect("today") "so" # i.e. Somali So far my code implementation looks bad - I just run one stemmer on the output of another: import nltk # polish stemmer from pymorfologik import Morfologik clean_items = [] # create stemmers snowball_en = nltk.SnowballStemmer("english") snowball_ru = nltk.SnowballStemmer("russian") stemmer_pl = Morfologik() # loop over each item; create an index i that goes from 0 to the length # of the item list for i in range(0, num_items): # Call our function for each one, and add the result to the list of # clean items cleaned = items.iloc[i] # to word stem clean_items.append(snowball_ru.stem(stemmer_pl(snowball_en.stem(cleaned))))
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
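Whatever identifier ends up being used, the structural fix is to route each document to exactly one stemmer instead of chaining all three. A library-free sketch of the routing, where detect and the stemmer callables are stand-ins for a real language identifier and e.g. nltk.SnowballStemmer(...).stem:

```python
def stem_document(text, detect, stemmers, default="english"):
    """detect(text) -> language name; stemmers maps language -> callable.
    Both stand in for a real identifier and real stemmers."""
    stem = stemmers.get(detect(text), stemmers[default])
    return [stem(token) for token in text.split()]

# toy detector and "stemmers", just to exercise the routing logic
detect = lambda text: "russian" if any("а" <= ch <= "я" for ch in text) else "english"
stemmers = {"english": lambda w: w.rstrip("s"),
            "russian": lambda w: w.rstrip("ы")}

out_en = stem_document("cats dogs", detect, stemmers)
out_ru = stem_document("столы", detect, stemmers)
```

Detecting once per description (68K calls) is far cheaper than stemming every token through three pipelines, and per-document detection is also more reliable than per-word detection (single words like "today" give an identifier almost nothing to work with).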
So bit of a long shot here, and I apologize for the lack of information. However, I'm struggling to even know where to look now. So I'm trying to split good and bad comments from a made-up survey of employees at a random company. All I have is a dataframe consisting of the comment an employee has made along with their managers ID code. The idea is to try and see how many good and/or bad comments are associated with a manager via their ID. import pandas as pd trial_text=pd.read_csv("trial.csv") trial_text.head() ManagerCode Comment 0 AB123 Great place to work 1 AB123 Need more training 2 AB123 Hate working here 3 AB124 Always late home 4 AB124 Manager never listens I've used NLTK quite a lot for data sets that include a lot more information so anything NLTK based won't be a problem. Like I say, with what I have, "Google" has far too much information that I don't know where to begin (or that is useful)! If there's anyone that might just have a suggestion that could put me on track that would be great! Thanks
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
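One way to get moving: score each comment, then count labels per manager with a groupby. The tiny word sets below are placeholders for a real scorer such as NLTK VADER's SentimentIntensityAnalyzer; the column names and comments come from the question:

```python
import pandas as pd

trial_text = pd.DataFrame({
    "ManagerCode": ["AB123", "AB123", "AB123", "AB124", "AB124"],
    "Comment": ["Great place to work", "Need more training", "Hate working here",
                "Always late home", "Manager never listens"],
})

# toy lexicon; swap in e.g. VADER's compound score in real use
POSITIVE = {"great", "good", "love"}
NEGATIVE = {"hate", "late", "never", "need"}

def label(comment):
    words = set(comment.lower().split())
    return "good" if len(words & POSITIVE) > len(words & NEGATIVE) else "bad"

trial_text["label"] = trial_text["Comment"].map(label)

# rows: manager, columns: label, cells: comment counts
per_manager = (trial_text.groupby(["ManagerCode", "label"])
                         .size().unstack(fill_value=0))
```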
I have a semi-structured dataset where each row pertains to a single user: id, skills 0,"java, python, sql" 1,"java, python, spark, html" 2, "business management, communication" It is semi-structured because the skills can only be selected from a list of 580 unique values. My goal is to cluster users, or find similar users, based on similar skillsets. I have tried using a Word2Vec model, which gives me very good results for identifying similar skillsets - for example, model.most_similar(["Data Science"]) gives me [('Data Mining', 0.9249375462532043), ('Data Visualization', 0.9111810922622681), ('Big Data', 0.8253220319747925),... So this is a very good model for identifying individual skills, but not groups of skills. How do I make use of the vectors provided by the Word2Vec model to successfully cluster groups of similar users?
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
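A common recipe: represent each user as the average of their skill vectors, then cluster those user vectors. The hand-made 2-D vectors below stand in for w2v_model.wv[skill]:

```python
import numpy as np
from sklearn.cluster import KMeans

# stand-in for w2v_model.wv: skill -> embedding
skill_vec = {
    "java": np.array([1.0, 0.0]), "python": np.array([0.9, 0.1]),
    "sql": np.array([0.8, 0.2]),
    "business management": np.array([0.0, 1.0]),
    "communication": np.array([0.1, 0.9]),
}

users = [["java", "python", "sql"],
         ["java", "python"],
         ["business management", "communication"]]

# one vector per user: the mean of that user's skill vectors
user_vecs = np.vstack([np.mean([skill_vec[s] for s in skills], axis=0)
                       for skills in users])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(user_vecs)
```

Mean pooling is the simplest aggregation; TF-IDF-weighted averages or Doc2Vec (treating each user's skill list as a document) are natural refinements if plain means blur distinct profiles.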
I have a dataframe of sentences that looks like this: text 0 this is great! 1 how dare you?! I can successfully use TextBlob.words (https://textblob.readthedocs.io/en/dev/quickstart.html#tokenization) to break each sentence into its individual words. An example would be a = TextBlob('moon is big') print(a) WordList(['moon','is','big']) WordList creates a list-type blob.WordList that saves each word. I can break the sentences in the dataframe into individual words and save them in a variable using this code: for i in df.text: d = TextBlob(i) words_list = d.words To get the sentiment of every word, I need to reapply TextBlob to every word. I can do this with the code below and append the polarity scores to a list. lst = [] for i in df.text: d = TextBlob(i) words_list = d.words for i in words_list: f = TextBlob(i) print(f.sentiment) lst.append(f.sentiment.polarity) At this point, I don't know which polarity score belongs to which sentence, and my goal is to average the polarity scores of the words in each row of the dataframe and generate a new column, score. Is there any way I can pass an index per blob.WordList so I can match the average back to the dataframe? Code so far: from textblob import TextBlob import pandas as pd import statistics as s df = pd.DataFrame({'text':['this is great!','how dare you?!']}) lst = [] for i in df.text: d = TextBlob(i) words_list = d.words for i in words_list: f = TextBlob(i) print(f.sentiment) lst.append(f.sentiment.polarity) for i in lst: z = s.mean(lst) df['score'] = z New df should look like this: text score 0 this is great! 0.2 1 how dare you?! 0.3 NOT text score 0 this is great! 0.133333 1 how dare you?! 0.133333 Thank you in advance.
edit: @kevin here is your code with the proper df names from textblob import TextBlob import pandas as pd import statistics as s df = pd.DataFrame({'text':['this is great!','how dare you?!']}) df['score'] = 0 for j in range(len(df.text)): lst=[] i = df.text[j] d = TextBlob(i) words_list=d.words for i in words_list: f = TextBlob(i) print(f.sentiment) lst.append(f.sentiment.polarity) z = s.mean(lst) df['score'][j] = z
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
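The indexing problem disappears if the per-word scores are computed row by row inside a single function applied to the text column, so each row's mean never leaves its row. A sketch where a toy lookup stands in for `lambda w: TextBlob(w).sentiment.polarity`:

```python
import pandas as pd
from statistics import mean

# stand-in for: lambda w: TextBlob(w).sentiment.polarity
toy_polarity = {"great": 0.8, "dare": -0.4}.get

def row_score(text, word_polarity):
    # score the words of ONE sentence and average them
    words = text.replace("!", "").replace("?", "").split()
    return mean(word_polarity(w, 0.0) for w in words)

df = pd.DataFrame({"text": ["this is great!", "how dare you?!"]})
df["score"] = df["text"].apply(lambda t: row_score(t, toy_polarity))
```

Because apply preserves the index, no manual bookkeeping of which scores belong to which sentence is needed, unlike the flat `lst` in the question.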
I have this dataframe (text_df): There are 10 different authors with 13834 rows of text. I then created a bag of words and used a TfidfVectorizer like so: from sklearn.feature_extraction.text import TfidfVectorizer tfidf_v = TfidfVectorizer(max_df=0.5, max_features=13000, min_df=5, stop_words='english', use_idf=True, norm=u'l2', smooth_idf=True ) X = tfidf_v.fit_transform(corpus).toarray() # corpus --> bagofwords y = text_df.iloc[:,1].values Shape of X is (13834,2701) I decided to use 7 clusters for KMeans: from sklearn.cluster import KMeans km = KMeans(n_clusters=7,random_state=42) I'd like to extract the authors of the texts in each cluster to see if the authors are consistently grouped into the same cluster. Not sure about the best way to go about this. Thanks! Update: Trying to visualize the author count per cluster using nested dictionary like so: author_cluster = {} for i in range(len(y_kmeans)): # check 20 random predictions j = np.random.randint(0, 13833, 1)[0] if y_kmeans[j] not in author_cluster: author_cluster[y_kmeans[j]] = {} if y[j] not in author_cluster[y_kmeans[j]]: author_cluster[y_kmeans[j]][y[j]] = 1 else: author_cluster[y_kmeans[j]][y[j]] += 1 Output: There should be a larger count per cluster and probably more than one author per cluster. I'd like to use all of the predictions to get a more accurate count instead of using a subset. But open to alternative solutions.
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
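Rather than sampling random indices, the full author-by-cluster breakdown falls out of one pandas crosstab over all predictions. The arrays below are toy stand-ins for the question's y (author of each document) and y_kmeans (km.fit_predict(X)):

```python
import numpy as np
import pandas as pd

# stand-ins: y = author per document, y_kmeans = predicted cluster per document
y = np.array(["austen", "austen", "austen", "melville", "melville"])
y_kmeans = np.array([0, 0, 1, 1, 1])

# contingency table over ALL documents: rows = authors, columns = clusters
table = pd.crosstab(pd.Series(y, name="author"),
                    pd.Series(y_kmeans, name="cluster"))
```

Reading across a row shows how an author's documents spread over clusters; a strongly diagonal-ish table means the clustering tracks authorship.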
I'd like to know exactly what is being done to the text by the specified pattern in this tokenizer: from nltk.tokenize import RegexpTokenizer tokenizer = RegexpTokenizer(r"[a-zA-Z]\w+\'?\w*") text_token = text.apply(tokenizer.tokenize) Here "text" is a pandas series, each row being a sentence. I specifically want to understand the [a-zA-Z]\w+\'?\w* part. Details (an explanation of each component) would be appreciated.
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
I have a multidimensional vector designed for an NLP Classifier. Here's the dataframe (text_df): I used a TfidfVectorizer to create the vector: from sklearn.feature_extraction.text import TfidfVectorizer tfidf_v = TfidfVectorizer(max_df=0.5, max_features=13000, min_df=5, stop_words='english', use_idf=True, norm=u'l2', smooth_idf=True ) X = tfidf_v.fit_transform(corpus).toarray() y = text_df.iloc[:,1].values Shape of X is (13834, 2701). I used 7 clusters for KMeans: from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=7,random_state=42) I tried using PCA, but I'm not sure if the graph looks right. from sklearn.decomposition import PCA X_pca = PCA(2).fit_transform(X) plt.scatter(X_pca[:,0],X_pca[:,1],c=y_kmeans) plt.title("Clusters") plt.legend() plt.show() Is this normal for NLP based clusters? I was hoping for more distinctive clusters. Is there a way to clean up this cluster graph? (i.e. clearer groupings, distinct boundaries, cluster points closer together, etc.).
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
I can successfully split a sentence into its individual words and take the average of the polarity scores of every word using this code. It works great. import statistics as s from textblob import TextBlob a = TextBlob("""Thanks, I'll have a read!""") print(a) c=[] for i in a.words: c.append(a.sentiment.polarity) d = s.mean(c) d = 0.25 a.words = WordList(['Thanks', 'I', "'ll", 'have', 'a', 'read']) How do I transfer the above code to a df that looks like this: df text 1 Thanks, I'll have a read! but take the average of the polarity per word? The closest I have is this, which applies a polarity to every sentence in a df: def sentiment_calc(text): try: return TextBlob(text).sentiment.polarity except: return None df_sentences['sentiment'] = df_sentences['text'].apply(sentiment_calc)
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
I'm new to python and virtualenv. I have pip installed and have installed a virtualenv, through which I have downloaded the python NLP library spacy. Now I am having an issue downloading a language library (en). The command I run is: $ python3 -m spacy download en and the error I get is: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 183, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 142, in _get_module_details return _get_module_details(pkg_main_name, error) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 109, in _get_module_details __import__(pkg_name) File "/Users/JoshiMac/Documents/pythonprojects/LangEnv/lib/python3.6/site-packages/spacy/__init__.py", line 4, in <module> from .cli.info import info as cli_info File "/Users/JoshiMac/Documents/pythonprojects/LangEnv/lib/python3.6/site-packages/spacy/cli/__init__.py", line 1, in <module> from .download import download File "/Users/JoshiMac/Documents/pythonprojects/LangEnv/lib/python3.6/site-packages/spacy/cli/download.py", line 11, in <module> from .link import link File "/Users/JoshiMac/Documents/pythonprojects/LangEnv/lib/python3.6/site-packages/spacy/cli/link.py", line 9, in <module> from ..util import prints File "/Users/JoshiMac/Documents/pythonprojects/LangEnv/lib/python3.6/site-packages/spacy/util.py", line 8, in <module> import regex as re File "/Users/JoshiMac/Documents/pythonprojects/LangEnv/lib/python3.6/site-packages/regex.py", line 683, in <module> _pattern_type = type(_compile("", 0, {})) File "/Users/JoshiMac/Documents/pythonprojects/LangEnv/lib/python3.6/site-packages/regex.py", line 436, in _compile pattern_locale = _getlocale()[1] File "/Users/JoshiMac/Documents/pythonprojects/LangEnv/lib/python3.6/locale.py", line 581, in getlocale return 
_parse_localename(localename) File "/Users/JoshiMac/Documents/pythonprojects/LangEnv/lib/python3.6/locale.py", line 490, in _parse_localename raise ValueError('unknown locale: %s' % localename) ValueError: unknown locale: UTF-8
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
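The traceback ends in locale.py, not in spaCy itself: Python cannot parse the shell's locale setting ("unknown locale: UTF-8" is a common macOS terminal default). Exporting a full locale before re-running the download usually resolves it; en_US.UTF-8 is an assumption here - any installed UTF-8 locale should work:

```shell
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
# then retry the model download:
# python3 -m spacy download en
```

Adding the two export lines to ~/.bash_profile makes the fix permanent for new terminal sessions.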
When i am trying to install fuzzywuzzylibrary in my jupyter notebook, i am getting below error. Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authentication Required',))': /simple/fuzzywuzzy/ Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authentication Required',))': /simple/fuzzywuzzy/ Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authentication Required',))': /simple/fuzzywuzzy/ Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authentication Required',))': /simple/fuzzywuzzy/ Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authentication Required',))': /simple/fuzzywuzzy/ Could not find a version that satisfies the requirement fuzzywuzzy (from versions: ) No matching distribution found for fuzzywuzzy Could anyone please help me ??
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
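The "407 Proxy Authentication Required" means pip never authenticates against the corporate proxy. Supplying the proxy with credentials - either per command via `pip install fuzzywuzzy --proxy http://user:password@proxyhost:8080` or once in pip's config file - usually fixes it. In the sketch below, user, password, proxyhost and 8080 are placeholders that must be replaced with your organization's actual values:

```ini
; ~/.pip/pip.conf on Linux/macOS, %APPDATA%\pip\pip.ini on Windows
[global]
proxy = http://user:password@proxyhost:8080
```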
I want to chunk the string to get the groups in a certain height. The original order should be kept and it should also be completly contain all the original words. import nltk height = 2 sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"), ("barked","VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")] pattern = """NP: {<DT>?<JJ>*<NN>} VBD: {<VBD>} IN: {<IN>}""" NPChunker = nltk.RegexpParser(pattern) result = NPChunker.parse(sentence) In [29]: Tree.fromstring(str(result)).pretty_print() S _________________|_____________________________ NP VBD IN NP ________|_________________ | | _____|____ the/DT little/JJ yellow/JJ dog/NN barked/VBD at/IN the/DT cat/NN My approach is kind of brute force like below: In [30]: [list(map(lambda x: x[0], _tree.leaves())) for _tree in result.subtrees(lambda x: x.height()==height)] Out[30]: [['the', 'little', 'yellow', 'dog'], ['barked'], ['at'], ['the', 'cat']] I thought there should exist some direct API or something I can use to do chuncking. Any suggestions are highly appreciated.
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
I'm playing around with sklearn and NLP for the first time, and thought I understood everything I was doing up until I didn't know how to fix this error. Here is the relevant code (largely adapted from http://zacstewart.com/2015/04/28/document-classification-with-scikit-learn.html): import os import numpy as np from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.pipeline import Pipeline, FeatureUnion from sklearn.preprocessing import StandardScaler from sklearn.decomposition import TruncatedSVD from xgboost import XGBClassifier from pandas import DataFrame def read_files(path): for article in os.listdir(path): with open(os.path.join(path, article)) as f: text = f.read() yield os.path.join(path, article), text def build_data_frame(path, classification): rows = [] index = [] for filename, text in read_files(path): rows.append({'text': text, 'class': classification}) index.append(filename) df = DataFrame(rows, index=index) return df data = DataFrame({'text': [], 'class': []}) for path, classification in SOURCES: # SOURCES is a list of tuples data = data.append(build_data_frame(path, classification)) data = data.reindex(np.random.permutation(data.index)) classifier = Pipeline([ ('features', FeatureUnion([ ('text', Pipeline([ ('tfidf', TfidfVectorizer()), ('svd', TruncatedSVD(algorithm='randomized', n_components=300)), ])), ('words', Pipeline([('wscaler', StandardScaler())])), ])), ('clf', XGBClassifier(silent=False)), ]) classifier.fit(data['text'].values, data['class'].values) The data loaded into the DataFrame is preprocessed text with all stopwords, punctuation, unicode, capitals, etc. taken care of. This is the error I'm getting once I call fit on the classifier, where the ... represents one of the documents that should have been vectorized in the pipeline: ValueError: could not convert string to float: ...
I first thought the TfidfVectorizer() is not working, causing an error on the SVD algorithm, but after I extracted each step out of the pipeline and implemented them sequentially, the same error only came up on XGBClassifer.fit(). Even more confusing to me, I tried to piece this script apart step-by-step in the interpreter, but when I tried to import either read_files or build_data_frame, the same ValueError came up with one of my strings, but this was merely after: from classifier import read_files I have no idea how that could be happening, if anyone has any idea what my glaring errors may be, I'd really appreciate it. Trying to wrap my head around these concepts on my own but coming across a problem likes this leaves me feeling pretty incapacitated.
labels: python=1, DeepLearning or NLP=1, Other=0, Machine Learning=0, Mathematics=0, Trash=0
I'm making a neural network. The training output for all pairs is either 0 or 1. I've noticed that if I add only a single training pair with target output '1' and 9 other pairs with '0', my weights after training all become negative, however if I increase the number of '1' target outputs in the training set, I see positive weights as well. A training set that gives all negative weights: INPUT: [[0.46 0.4 0.98039216] [0.58 0. 0.98039216] [0.2 1. 0.39215686] [0.1 0.4 0.45960784] [0.74 0.53333333 0.19607843] [0.48 0.93333333 0. ] [0.38 0.7 0.98039216] [0.02 0.53333333 1. ] [0. 0.03333333 0.88235294] [1. 0.8 0.78431373]] OUTPUT: [[0.][0.][0.][0.][0.][0.][0.][0.][0.][1.]] WEIGHTS BEFORE TRAINING (RANDOM): [[-0.16595599] [ 0.44064899] [-0.99977125]] WEIGHTS AFTER TRAINING: [[-1.48868116] [-4.8662876 ] [-5.42639621]] However, if I change target outputs by one more '1' as such [[0.][0.][0.][0.][0.][0.][0.][0.][0.][1.]] I get a positive weight as well after training: [[ 1.85020129] [-1.9759502 ] [-1.03829837]] What could be the reason for this? Could it be that too many '0' make the '1' insignificant when training? If so, how should I change the approach when training? 
I want to use this training with a training set with around 480 training pairs with output '0' and 20 with '1' (I'm using a sigmoid function:) Full code: from numpy import exp, array, random, dot from collections import defaultdict import csv import numpy as np class NeuralNetwork(): def __init__(self): random.seed(1) self.synaptic_weights = 2 * random.random((3, 1)) - 1 def __sigmoid(self, x): return 1 / (1 + exp(-x)) def __sigmoid_derivative(self, x): return x * (1 - x) def train(self, training_set_inputs, training_set_outputs, number_of_training_iterations): for iteration in range(number_of_training_iterations): output = self.think(training_set_inputs) error = training_set_outputs - output adjustment = training_set_inputs.T.dot(error * self.__sigmoid_derivative(output)) self.synaptic_weights += adjustment def think(self, inputs): return self.__sigmoid(dot(inputs, self.synaptic_weights)) if __name__ == "__main__": neural_network = NeuralNetwork() print ("Random starting synaptic weights: ") print (neural_network.synaptic_weights) training_set_inputs = array([ [0.46,0.4,0.98039216], [0.58,0.0,0.98039216], [0.2,1.0,0.39215686], [0.1,0.4,0.45960784], [0.74,0.53333333,0.19607843], [0.48,0.93333333,0.0], [0.38,0.7,0.98039216], [0.02,0.53333333,1.0], [0.,0.03333333,0.88235294], [1.0,0.8,0.78431373]]) training_set_outputs = array([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]]).T neural_network.train(training_set_inputs, training_set_outputs, 10000) print ("New synaptic weights after training: ") print (neural_network.synaptic_weights) print ("Considering new situation [0.5,0.5,0.5] -> ?: ") test = [0.5,0.5,0.5] print (neural_network.think(array(test))) Any ideas? Thanks
1
1
0
1
0
0
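A hedged sketch of one common fix for the imbalance described above: weight each sample's error by the inverse frequency of its class, so the lone '1' example is not drowned out by the nine '0' examples. This is not the asker's exact script; the input values are rounded for brevity and the weighting scheme is an illustrative assumption.

```python
import numpy as np

np.random.seed(1)

# The asker's inputs, rounded for brevity; the last row is the only '1'.
X = np.array([[0.46, 0.40, 0.98], [0.58, 0.00, 0.98], [0.20, 1.00, 0.39],
              [0.10, 0.40, 0.46], [0.74, 0.53, 0.20], [0.48, 0.93, 0.00],
              [0.38, 0.70, 0.98], [0.02, 0.53, 1.00], [0.00, 0.03, 0.88],
              [1.00, 0.80, 0.78]])
y = np.array([[0.0]] * 9 + [[1.0]])

# Weight each sample inversely to its class frequency: the single positive
# example gets weight 5.0, each negative ~0.56 (a 9x ratio).
n_pos = y.sum()
n_neg = len(y) - n_pos
sample_weight = np.where(y == 1, len(y) / (2 * n_pos), len(y) / (2 * n_neg))

sigmoid = lambda z: 1 / (1 + np.exp(-z))
weights = 2 * np.random.random((3, 1)) - 1

for _ in range(10000):
    out = sigmoid(X @ weights)
    error = (y - out) * sample_weight          # rare class counts ~9x more
    weights += X.T @ (error * out * (1 - out))
```

With a single linear layer and no bias term this particular data may still not be perfectly separable, but the weighted update stops the gradient from being dominated by the zeros.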
I have this code and a list of articles as my dataset. Each row contains an article. I run this code: import gensim docgen = TokenGenerator( raw_documents, custom_stop_words ) # the model has 500 dimensions, the minimum document-term frequency is 20 w2v_model = gensim.models.Word2Vec(docgen, size=500, min_count=20, sg=1) print( "Model has %d terms" % len(w2v_model.wv.vocab) ) w2v_model.save("w2v-model.bin") # To re-load this model, run #w2v_model = gensim.models.Word2Vec.load("w2v-model.bin") def calculate_coherence( w2v_model, term_rankings ): overall_coherence = 0.0 for topic_index in range(len(term_rankings)): # check each pair of terms pair_scores = [] for pair in combinations(term_rankings[topic_index], 2 ): pair_scores.append( w2v_model.similarity(pair[0], pair[1]) ) # get the mean for all pairs in this topic topic_score = sum(pair_scores) / len(pair_scores) overall_coherence += topic_score # get the mean score across all topics return overall_coherence / len(term_rankings) import numpy as np def get_descriptor( all_terms, H, topic_index, top ): # reverse sort the values to sort the indices top_indices = np.argsort( H[topic_index,:] )[::-1] # now get the terms corresponding to the top-ranked indices top_terms = [] for term_index in top_indices[0:top]: top_terms.append( all_terms[term_index] ) return top_terms from itertools import combinations k_values = [] coherences = [] for (k,W,H) in topic_models: # Get all of the topic descriptors - the term_rankings, based on top 10 terms term_rankings = [] for topic_index in range(k): term_rankings.append( get_descriptor( terms, H, topic_index, 10 ) ) # Now calculate the coherence based on our Word2vec model k_values.append( k ) coherences.append( calculate_coherence( w2v_model, term_rankings ) ) print("K=%02d: Coherence=%.4f" % ( k, coherences[-1] ) ) I get this error: raise KeyError("word '%s' not in vocabulary" % word) KeyError: u"word 'business' not in vocabulary" The original code works great with their data set. https://github.com/derekgreene/topic-model-tutorial Could you help me understand what causes this error?
1
1
0
0
0
0
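A stdlib-only sketch of what is most likely happening above: `min_count=20` makes Word2Vec silently drop every term seen fewer than 20 times, so a later `similarity('business', …)` lookup raises KeyError when 'business' was filtered out of this smaller corpus. The toy documents and threshold below are illustrative, not from the tutorial.

```python
from collections import Counter

# Mimic Word2Vec's min_count filtering with plain counting.
docs = [["business", "model", "topic"], ["model", "topic"], ["model"]]
min_count = 2

counts = Counter(tok for doc in docs for tok in doc)
vocab = {tok for tok, c in counts.items() if c >= min_count}

def safe_pair(w1, w2, vocab):
    """Return the pair only if both terms survived min_count filtering."""
    return (w1, w2) if w1 in vocab and w2 in vocab else None

print(safe_pair("model", "topic", vocab))     # both kept
print(safe_pair("business", "model", vocab))  # 'business' was dropped
```

The practical fixes are to lower `min_count`, or to guard each `w2v_model.similarity` call with a vocabulary membership check as above before scoring a pair.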
I'm trying to preprocess words to remove common prefixes like "un" and "re", however all of nltk's common stemmers seem to completely ignore prefixes: from nltk.stem import PorterStemmer, SnowballStemmer, LancasterStemmer PorterStemmer().stem('unhappy') # u'unhappi' SnowballStemmer('english').stem('unhappy') # u'unhappi' LancasterStemmer().stem('unhappy') # 'unhappy' PorterStemmer().stem('reactivate') # u'reactiv' SnowballStemmer('english').stem('reactivate') # u'reactiv' LancasterStemmer().stem('reactivate') # 'react' Isn't part of the job of a stemmer to remove common prefixes as well as suffixes? Is there another stemmer which does this reliably?
1
1
0
0
0
0
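NLTK's stemmers are suffix strippers by design, so a small hand-rolled prefix pass can run before (or instead of) them. This is a hedged sketch with an illustrative prefix list, not an NLTK feature:

```python
# Strip a known prefix, but only if enough of the word remains.
PREFIXES = ("un", "re", "dis", "in")

def strip_prefix(word, prefixes=PREFIXES, min_stem=3):
    """Remove the first matching prefix, keeping at least min_stem chars."""
    for p in prefixes:
        if word.startswith(p) and len(word) - len(p) >= min_stem:
            return word[len(p):]
    return word

print(strip_prefix("unhappy"))     # happy
print(strip_prefix("reactivate"))  # activate
print(strip_prefix("red"))         # too short after stripping -> unchanged
```

A pass like this will still over-strip words such as "inside" or "react"; a curated prefix list, an exception dictionary, or a lemmatizer is safer in production.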
I have a co-occurrence matrix stored in a CSV file which contains the relationship between words and emojis like this: word emo1 emo2 emo3 w1 0.5 0.3 0.2 w2 0.8 0 0 w3 0.2 0.5 0.2 This co-occurrence matrix is huge which has 1584755 rows and 621 columns. I have a Sequential() LSTM model in Keras where I use pre-trained (word2vec) word-embedding. Now I would like to use the co-occurrence matrix as another embedding layer. How can I do that? My current code is something like this: model = Sequential() model.add(Embedding(max_features, embeddings_dim, input_length=max_sent_len, weights=[embedding_weights])) model.add(Dropout(0.25)) model.add(Convolution1D(nb_filter=nb_filter, filter_length=filter_length, border_mode='valid', activation='relu', subsample_length=1)) model.add(MaxPooling1D(pool_length=pool_length)) model.add(LSTM(embeddings_dim)) model.add(Dense(reg_dimensions)) model.add(Activation('sigmoid')) model.compile(loss='mean_absolute_error', optimizer='adam') model.fit( train_sequences , train_labels , nb_epoch=30, batch_size=16) Also, if the co-occurrence matrix is sparse then what would be the best way to use it in the embedding layer?
1
1
0
0
0
0
I was experimenting with a VAE implementation in Tensorflow for MNIST dataset. To start things off, I trained a VAE based on MLP encoder and decoder. It trains just fine, the loss decreases and it generates plausibly looking digits. Here's a code of the decoder of this MLP-based VAE: x = sampled_z x = tf.layers.dense(x, 200, tf.nn.relu) x = tf.layers.dense(x, 200, tf.nn.relu) x = tf.layers.dense(x, np.prod(data_shape)) img = tf.reshape(x, [-1] + data_shape) As a next step, I decided to add convolutional layers. Changing just the encoder worked just fine, but when I use deconvolutions in the decoder (instead of fc layers) I don't get any training at all. The loss function never decreases, and the output is always black. Here's the code of deconvolutional decoder: x = tf.layers.dense(sampled_z, 24, tf.nn.relu) x = tf.layers.dense(x, 7 * 7 * 64, tf.nn.relu) x = tf.reshape(x, [-1, 7, 7, 64]) x = tf.layers.conv2d_transpose(x, 64, 3, 2, 'SAME', activation=tf.nn.relu) x = tf.layers.conv2d_transpose(x, 32, 3, 2, 'SAME', activation=tf.nn.relu) x = tf.layers.conv2d_transpose(x, 1, 3, 1, 'SAME', activation=tf.nn.sigmoid) img = tf.reshape(x, [-1, 28, 28]) This seems bizarre, the code looks just fine to me. I narrowed it down to the deconvolutional layers in the decoder, there's something in there that breaks it. E.g. if I add a fully-connected layer (even without the nonlinearity!) after the last deconvolution, it works again! 
Here's the code: x = tf.layers.dense(sampled_z, 24, tf.nn.relu) x = tf.layers.dense(x, 7 * 7 * 64, tf.nn.relu) x = tf.reshape(x, [-1, 7, 7, 64]) x = tf.layers.conv2d_transpose(x, 64, 3, 2, 'SAME', activation=tf.nn.relu) x = tf.layers.conv2d_transpose(x, 32, 3, 2, 'SAME', activation=tf.nn.relu) x = tf.layers.conv2d_transpose(x, 1, 3, 1, 'SAME', activation=tf.nn.sigmoid) x = tf.contrib.layers.flatten(x) x = tf.layers.dense(x, 28 * 28) img = tf.reshape(x, [-1, 28, 28]) I'm really a little stuck at this point, does anyone have any idea what might be happening here? I use tf 1.8.0, Adam optimizer, 1e-4 learning rate. EDIT: As @Agost pointed out, I should perhaps clarify things about my loss function and the training process. I model the posterior as a Bernoulli distribution and maximizing ELBO as my loss. Inspired by this post. Here's the full code of encoder, decoder, and the loss: def make_prior(): mu = tf.zeros(N_LATENT) sigma = tf.ones(N_LATENT) return tf.contrib.distributions.MultivariateNormalDiag(mu, sigma) def make_encoder(x_input): x_input = tf.reshape(x_input, shape=[-1, 28, 28, 1]) x = conv(x_input, 32, 3, 2) x = conv(x, 64, 3, 2) x = conv(x, 128, 3, 2) x = tf.contrib.layers.flatten(x) mu = dense(x, N_LATENT) sigma = dense(x, N_LATENT, activation=tf.nn.softplus) # softplus is log(exp(x) + 1) return tf.contrib.distributions.MultivariateNormalDiag(mu, sigma) def make_decoder(sampled_z): x = tf.layers.dense(sampled_z, 24, tf.nn.relu) x = tf.layers.dense(x, 7 * 7 * 64, tf.nn.relu) x = tf.reshape(x, [-1, 7, 7, 64]) x = tf.layers.conv2d_transpose(x, 64, 3, 2, 'SAME', activation=tf.nn.relu) x = tf.layers.conv2d_transpose(x, 32, 3, 2, 'SAME', activation=tf.nn.relu) x = tf.layers.conv2d_transpose(x, 1, 3, 1, 'SAME') img = tf.reshape(x, [-1, 28, 28]) img_distribution = tf.contrib.distributions.Bernoulli(img) img = img_distribution.probs img_distribution = tf.contrib.distributions.Independent(img_distribution, 2) return img, img_distribution def main(): mnist = 
input_data.read_data_sets(os.path.join(experiment_dir(EXPERIMENT), 'MNIST_data')) tf.reset_default_graph() batch_size = 128 x_input = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28], name='X') prior = make_prior() posterior = make_encoder(x_input) mu, sigma = posterior.mean(), posterior.stddev() z = posterior.sample() generated_img, output_distribution = make_decoder(z) likelihood = output_distribution.log_prob(x_input) divergence = tf.distributions.kl_divergence(posterior, prior) elbo = tf.reduce_mean(likelihood - divergence) loss = -elbo global_step = tf.train.get_or_create_global_step() optimizer = tf.train.AdamOptimizer(1e-3).minimize(loss, global_step=global_step)
1
1
0
0
0
0
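A framework-free numeric illustration of one likely culprit in the question above, assuming `tf.contrib.distributions.Bernoulli(img)` interprets its argument as logits: the broken deconv decoder ends in a sigmoid activation, so the values get squashed through sigmoid twice, collapsing the output range and flattening the gradients (the trailing dense layer "fixes" it by mapping back to an unbounded logit range).

```python
import numpy as np

sigmoid = lambda z: 1 / (1 + np.exp(-z))

logits = np.linspace(-10, 10, 5)
once = sigmoid(logits)            # spans almost the full (0, 1) range
twice = sigmoid(sigmoid(logits))  # collapses into roughly (0.5, 0.73)

print(once.round(3))
print(twice.round(3))
```

If this is the cause, removing the `tf.nn.sigmoid` from the last `conv2d_transpose` (as the edited full code already does) and letting the Bernoulli consume raw logits should restore training.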
Is there a way to translate unicode emojis to an appropriate ascii emoticon in Python? I know the emoji library which can be used to convert unicode emojis to something like :crying_face:. But what I would need is to convert it to :'( Is there an elegant way to do this without having to translate every possible emoji manually? Another option would be to convert the ascii emojis also to their textual representation, i.e. :'( should become :crying_face:. My intermediate goal is to find a way to transform ascii and unicode emojis to a common representation. My final goal would be to replace emoticons (no matter if unicode or ascii) by the emotion they represent (if they do not represent an emotion, remove them)
1
1
0
0
0
0
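A stdlib sketch of the "common representation" idea from the question above: normalise both ascii emoticons and (a few hand-mapped) unicode emojis to the same `:name:` tokens. A real system could get the unicode side from the `emoji` package's `demojize()`; the tiny dicts here are illustrative stand-ins.

```python
# Map both surface forms to one canonical token per emotion.
UNICODE_TO_NAME = {"\U0001F622": ":crying_face:", "\U0001F600": ":grinning_face:"}
ASCII_TO_NAME = {":'(": ":crying_face:", ":)": ":grinning_face:"}

def normalise(text):
    """Replace every known emoji/emoticon with its :name: token."""
    for emo, name in {**UNICODE_TO_NAME, **ASCII_TO_NAME}.items():
        text = text.replace(emo, name)
    return text

print(normalise("so sad :'( really \U0001F622"))
```

Once everything is a `:name:` token, a second table mapping names to emotions (or to deletion) finishes the job; the manual part shrinks to ascii emoticons only, since those are a small closed set.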
I am new to text mining. I have a CSV file. I need to go through each line, extract some information, and then write it into another CSV file. I am looking for specific information which I have in a dictionary. Consider the sentence below: "the application version is 1.8.2 and the variable skt.len passes the required information. file ReadMe.txt has the specifications." My dictionary is: ["application version", "variable", "file"] I need to extract: application version: 1.8.2 variable: skt.len file: ReadMe.txt What is the best way to extract such information from text? I am playing with NLTK and StanfordCoreNLP features, but I could not extract the information yet. I am thinking of using a regex to extract the application version. Any ideas? PS: I know that this may make the task more complicated, but sentences in each line of the CSV file may have different structures. For example: "application version" in one line may be "app version" in another line. Or "file" in one line may be "filename" in another line.
1
1
0
0
0
0
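For keyword-anchored extraction like this, one regex per dictionary key is often enough, with simple synonyms ("file" vs "filename", "application version" vs "app version") folded into the same pattern. A hedged sketch on the question's example sentence (requires Python 3.8+ for the `:=` operator):

```python
import re

text = ("the application version is 1.8.2 and the variable skt.len passes "
        "the required information. file ReadMe.txt has the specifications.")

# One pattern per key; each captures the value token after the keyword.
patterns = {
    "application version": r"app(?:lication)? version\s+(?:is\s+)?([\w.]+)",
    "variable": r"variable\s+([\w.]+)",
    "file": r"file(?:name)?\s+([\w.]+)",
}

found = {key: m.group(1)
         for key, pat in patterns.items()
         if (m := re.search(pat, text, re.IGNORECASE))}
print(found)
```

This scales with the dictionary rather than with sentence structure; NER-style models only become necessary when the value no longer sits right next to the keyword.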
This: import re title = 'Decreased glucose-6-phosphate dehydrogenase activity along with oxidative stress affects visual contrast sensitivity in alcoholics.' words = list(filter(None, re.split('\W+', title))) for word in words: print(word) results in: Decreased glucose 6 phosphate dehydrogenase activity along with oxidative stress affects visual contrast sensitivity in alcoholics Ideally, I would like to prevent the splitting of words like: glucose-6-phosphate Is there a better way to obtain separate words of a sentence like this in Python? Should I adopt the regular expression or is there something OOTB? Thanks.
1
1
0
0
0
0
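One small change to the pattern handles this: instead of splitting on every non-word character, match tokens that may contain internal hyphens. A sketch on the question's own title:

```python
import re

title = ('Decreased glucose-6-phosphate dehydrogenase activity along with '
         'oxidative stress affects visual contrast sensitivity in alcoholics.')

# Word chars, optionally followed by more "-wordchars" groups, so internal
# hyphens survive but leading/trailing punctuation does not.
words = re.findall(r"\w+(?:-\w+)*", title)
print(words[:3])
```

`re.findall` with a token pattern is usually easier to reason about than `re.split` with a delimiter pattern, because you state what a word *is* rather than everything it is not.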
I have an annotated corpus for the task of Coreference Resolution. Can you let me know how to extract the data from the XML file? I did the following, but it does not work. from lxml import objectify import pandas as pd xml = objectify.parse(open('Dari_Coref_2_coref_level.xml')) root = xml.getroot() df = pd.DataFrame(columns='markable') for i in range(0, 2): obj = root.getchildren()[i].getchildren() row = dict(zip(['markable'], [obj[0].text])) row_s = pd.Series(row) row_s.name = i df = df.append(row_s) print(df) And the structure of my xml file is like: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE markables SYSTEM "markables.dtd"> <markables xmlns="www.eml.org/NameSpaces/coref"> <markable id="markable_1" span="word_1..word_4" mentiontype="ne" coref_class="set_1" mmax_level="coref" coreftype="ident" /> <markable id="markable_3" span="word_33..word_34" mentiontype="ne" coref_class="set_2" mmax_level="coref" coreftype="ident" /> <markable id="markable_2" span="word_5..word_9" mentiontype="np" coref_class="set_1" mmax_level="coref" coreftype="ident" /> <markable id="markable_5" span="word_89..word_90" mentiontype="np" coref_class="set_3" mmax_level="coref" coreftype="ident" /> <markable id="markable_4" span="word_35..word_44" mentiontype="np" coref_class="set_2" mmax_level="coref" coreftype="ident" /> <markable id="markable_7" span="word_124..word_126" mentiontype="ne" coref_class="set_4" mmax_level="coref" coreftype="ident" /> <markable id="markable_6" span="word_91..word_95" mentiontype="np" coref_class="set_3" mmax_level="coref" coreftype="ident" /> </markables>
1
1
0
0
0
0
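Two things bite in files like this: the data lives in *attributes* (the elements have no text, so `obj[0].text` is empty), and the file declares a default namespace that must be spelled out in tag lookups. A stdlib sketch on a shortened copy of the markup:

```python
import xml.etree.ElementTree as ET

xml_text = """<?xml version="1.0" encoding="UTF-8"?>
<markables xmlns="www.eml.org/NameSpaces/coref">
  <markable id="markable_1" span="word_1..word_4" mentiontype="ne"
            coref_class="set_1" mmax_level="coref" coreftype="ident"/>
  <markable id="markable_3" span="word_33..word_34" mentiontype="ne"
            coref_class="set_2" mmax_level="coref" coreftype="ident"/>
</markables>"""

# The default namespace becomes part of every tag name.
NS = "{www.eml.org/NameSpaces/coref}"
root = ET.fromstring(xml_text)
rows = [m.attrib for m in root.iter(NS + "markable")]
print(rows[0]["span"])
```

Each `attrib` dict already looks like a table row, so `pd.DataFrame(rows)` gives the whole corpus as one DataFrame in a single call.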
I am looking for a way to do POS tagging for French sentences with Python. I saw that we could use Stanford CoreNLP, but after several searches on Google I did not find real examples that satisfied me. It would be great to have a piece of code that shows me how to solve my problem.
1
1
0
0
0
0
I have two tensors with 3 dimensions: tensor 1 (bs1, sent_len1, emb_dim) tensor 2 (bs2, sent_len2, emb_dim) bs1 and bs2 are unknown and they are not necessarily equal. I want to multiply these tensors to get an output like this: output (bs1, bs2, sent_len2, sent_len1)
1
1
0
0
0
0
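This is a single `einsum` call: contract the shared `emb_dim` axis and keep both batch axes and both length axes. The sketch below uses NumPy, but `torch.einsum` and `tf.einsum` accept the same subscript string, so it transfers directly.

```python
import numpy as np

bs1, bs2, len1, len2, dim = 2, 3, 4, 5, 6
t1 = np.random.rand(bs1, len1, dim)
t2 = np.random.rand(bs2, len2, dim)

# out[a, b, j, i] = sum_d t1[a, i, d] * t2[b, j, d]
out = np.einsum("aid,bjd->abji", t1, t2)
print(out.shape)  # (2, 3, 5, 4) == (bs1, bs2, sent_len2, sent_len1)
```

Because the subscripts name every axis explicitly, the unknown batch sizes need no broadcasting tricks; each output cell is just the dot product of one sentence position from each batch.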
I am trying to train a Seq2Seq model using LSTM in Keras library of Python. I want to use TF IDF vector representation of sentences as input to the model and getting an error. X = ["Good morning", "Sweet Dreams", "Stay Awake"] Y = ["Good morning", "Sweet Dreams", "Stay Awake"] vectorizer = TfidfVectorizer() vectorizer.fit(X) vectorizer.transform(X) vectorizer.transform(Y) tfidf_vector_X = vectorizer.transform(X).toarray() #shape - (3,6) tfidf_vector_Y = vectorizer.transform(Y).toarray() #shape - (3,6) tfidf_vector_X = tfidf_vector_X[:, :, None] #shape - (3,6,1) since LSTM cells expects ndims = 3 tfidf_vector_Y = tfidf_vector_Y[:, :, None] #shape - (3,6,1) X_train, X_test, y_train, y_test = train_test_split(tfidf_vector_X, tfidf_vector_Y, test_size = 0.2, random_state = 1) model = Sequential() model.add(LSTM(output_dim = 6, input_shape = X_train.shape[1:], return_sequences = True, init = 'glorot_normal', inner_init = 'glorot_normal', activation = 'sigmoid')) model.add(LSTM(output_dim = 6, input_shape = X_train.shape[1:], return_sequences = True, init = 'glorot_normal', inner_init = 'glorot_normal', activation = 'sigmoid')) model.add(LSTM(output_dim = 6, input_shape = X_train.shape[1:], return_sequences = True, init = 'glorot_normal', inner_init = 'glorot_normal', activation = 'sigmoid')) model.add(LSTM(output_dim = 6, input_shape = X_train.shape[1:], return_sequences = True, init = 'glorot_normal', inner_init = 'glorot_normal', activation = 'sigmoid')) adam = optimizers.Adam(lr = 0.001, beta_1 = 0.9, beta_2 = 0.999, epsilon = None, decay = 0.0, amsgrad = False) model.compile(loss = 'cosine_proximity', optimizer = adam, metrics = ['accuracy']) model.fit(X_train, y_train, nb_epoch = 100) The above code throws: Error when checking target: expected lstm_4 to have shape (6, 6) but got array with shape (6, 1) Could someone tell me what's wrong and how to fix it?
1
1
0
0
0
0
address='''No-33-6,BEML Layout,Basaveshwaranagara 8th Main,Kamala Nagar,Near Academy Of Science and Knowledge,Bengaluru,Karnataka 560079''' I tried the pattern below, but .* matches all characters: re.findall('[n][o].*', address) I want to match based on the first term, No, and the last term, a 6-digit pincode/zipcode. Expected output: No-33-6,BEML Layout,Basaveshwaranagara 8th Main,Kamala Nagar,Near Academy Of Science and Knowledge,Bengaluru,Karnataka 560079
1
1
0
0
0
0
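A hedged sketch of one way to anchor both ends: require the match to start at "No" (case-insensitively) followed by a digit, and to end at a 6-digit pincode at the end of the string, with a lazy `.*?` in between.

```python
import re

address = ("No-33-6,BEML Layout,Basaveshwaranagara 8th Main,Kamala Nagar,"
           "Near Academy Of Science and Knowledge,Bengaluru,Karnataka 560079")

# 'no' (any case) + optional separator + digit ... up to a trailing 6-digit pincode.
m = re.search(r"no[-.\s]?\d.*?\d{6}$", address, re.IGNORECASE)
print(m.group(0) if m else None)
```

The `$` anchor is what keeps `.*?` honest here; without it the lazy quantifier would stop at the first 6 digits it finds anywhere in the string.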
I want to know if there is any way that I can un-stem them to a normal form? The problem is that I have thousands of words in different forms e.g. eat, eaten, ate, eating and so on, and I need to count the frequency of each word. All of these - eat, eaten, ate, eating, etc. - will count towards eat, and hence I used stemming. But the next part of the problem requires me to find similar words in the data, and I am using nltk's synsets to calculate Wu-Palmer Similarity among the words. The problem is that nltk's synsets won't work on stemmed words, or at least in this code they won't. check if two words are related to each other How should I do it? Is there a way to un-stem a word?
1
1
0
0
0
0
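Stemming is lossy, so true un-stemming is impossible; the usual workarounds are to lemmatize instead (WordNet lemmas work directly with synsets), or to keep a reverse map from each stem to the surface forms that produced it and pick the most frequent one. A stdlib sketch of the second idea, with a hypothetical stand-in for the real stemmer:

```python
from collections import Counter, defaultdict

def crude_stem(w):
    # hypothetical stand-in for PorterStemmer().stem
    for suf in ("ing", "en", "s"):
        if w.endswith(suf):
            return w[: -len(suf)]
    return w

tokens = ["eating", "eaten", "eats", "eating", "eat"]

# Remember which surface forms collapsed into each stem, with counts.
by_stem = defaultdict(Counter)
for tok in tokens:
    by_stem[crude_stem(tok)][tok] += 1

# "Un-stem" = most frequent original form for that stem.
unstem = {stem: forms.most_common(1)[0][0] for stem, forms in by_stem.items()}
print(unstem)
```

For the synset step specifically, `nltk.stem.WordNetLemmatizer` is the better tool, since it returns dictionary words rather than truncated stems.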
I want to make a list of sentences from a string and then print them out. I don't want to use NLTK to do this. So it needs to split on a period at the end of a sentence and not at decimals, abbreviations, the title of a name, or if the sentence contains a .com. This is my attempt at a regex, which doesn't work (the unescaped quote inside the character class is also a syntax error): import re text = """\ Mr. Smith bought cheapsite.com for 1.5 million dollars, i.e. he paid a lot for it. Did he mind? Adam Jones Jr. thinks he didn't. In any case, this isn't true... Well, with a probability of .9 it isn't. """ sentences = re.split(r' *[\.\?!][\'"\)\]]* *', text) for stuff in sentences: print(stuff) Example output of what it should look like Mr. Smith bought cheapsite.com for 1.5 million dollars, i.e. he paid a lot for it. Did he mind? Adam Jones Jr. thinks he didn't. In any case, this isn't true... Well, with a probability of .9 it isn't.
1
1
0
0
0
0
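A hedged heuristic that produces the example output without NLTK: split only where a `.`, `!` or `?` is followed by whitespace and a capital letter, and block known abbreviations with negative lookbehinds. The abbreviation list here is deliberately tiny and would need extending for real text.

```python
import re

text = ("Mr. Smith bought cheapsite.com for 1.5 million dollars, i.e. he "
        "paid a lot for it. Did he mind? Adam Jones Jr. thinks he didn't. "
        "In any case, this isn't true... Well, with a probability of .9 it "
        "isn't.")

# Split at sentence-final punctuation + space + capital, unless the
# punctuation belongs to a listed abbreviation.
splitter = re.compile(r"(?<=[.!?])(?<!Mr\.)(?<!Jr\.)\s+(?=[A-Z])")
sentences = splitter.split(text)
for s in sentences:
    print(s)
```

Decimals and `.com` never match because they are not followed by whitespace, and `...` survives because only the final dot sits before the split point. The weak spot of any such heuristic is an abbreviation directly before a capitalised name, which is why the lookbehind list matters.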
The following code is a very simple example of using word embedding to predict the labels (see below). The example is taken from here. from numpy import array from keras.preprocessing.text import one_hot from keras.preprocessing.sequence import pad_sequences from keras.models import Sequential from keras.layers import Dense from keras.layers import Flatten from keras.layers.embeddings import Embedding # define documents docs = ['Well done!', 'Good work', 'Great effort', 'nice work', 'Excellent!', 'Weak', 'Poor effort!', 'not good', 'poor work', 'Could have done better.'] # define class labels labels = array([1,1,1,1,1,0,0,0,0,0]) # integer encode the documents vocab_size = 50 encoded_docs = [one_hot(d, vocab_size) for d in docs] print(encoded_docs) # pad documents to a max length of 4 words max_length = 4 padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post') print(padded_docs) # define the model model = Sequential() model.add(Embedding(vocab_size, 8, input_length=max_length)) model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) # compile the model model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc']) # summarize the model print(model.summary()) # fit the model model.fit(padded_docs, labels, epochs=50, verbose=0) # evaluate the model loss, accuracy = model.evaluate(padded_docs, labels, verbose=0) print('Accuracy: %f' % (accuracy*100)) Let us say we have structured data like this: hours_of_revision = [10, 5, 7, 3, 100, 0, 1, 0.5, 4, 0.75] Here every entry aligns with each row showing nicely that one should really spend more time to revise to achieve good marks (-: Just wondering, could one incorporate this into the model to use the text and structured data?
1
1
0
0
0
0
I created a keras LSTM model to predict the next word given a sentence: pretrained_weights = w2v_model.wv.syn0 vocab_size, emdedding_size = pretrained_weights.shape lstm_model = Sequential() lstm_model.add(Embedding(input_dim= vocab_size, output_dim=emdedding_size, weights=[pretrained_weights])) lstm_model.add(LSTM(units=emdedding_size)) lstm_model.add(Dense(units=vocab_size)) lstm_model.add(Activation('softmax')) lstm_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy') lstm_model.fit(X, y, batch_size=128, epochs=3) When X are sentences and y are the next word for each sentence. Now , I have a sentence, and 5 words, and I want to rank them by probability given the sentence. What is the best way to do so?
1
1
0
0
0
0
Python Please explain why these two codes work differently.. Actually I am trying to make kind of AI where in initial generations the individuals will go in random directions. For keeping the code simple I have provided some random directions in Brain myself. There is an Individual class that gives a brain to the individual. It also has a function that returns a child with EXACTLY the same brain (means same directions to go in) as the parent. I have two codes: First: When some directions is changed in the parent, the same thing is changed in the child too (or if changed in child, it gets changed in parent too) which I don't want to happen. Second: This one is not completely mine (and that's why I don't really know why it works) but it works fine. Some direction changed in parent is not changed in the child and vice-versa. Please someone explain me the difference and why first one didn't work. I would really appreciate your answer. First one: class Brain(): def __init__(self): self.directions = [[1, 2], [5, 3], [7, 4], [1, 5]] class Individual(): def __init__(self): self.brain = Brain() def getChild(self): child = Individual() child.brain = self.brain return child parent = Individual() child = parent.getChild() parent.brain.directions[0] = [5, 2] print(parent.brain.directions) print(child.brain.directions) [ [5, 2], [5, 3], [7, 4], [1, 5] ] [ [5, 2], [5, 3], [7, 4], [1, 5] ] Second one: class Brain(): def __init__(self): self.directions = [[1, 2], [5, 3], [7, 4], [1, 5]] def clone(self): clone = Brain() for i, j in enumerate(self.directions): clone.directions[i] = j return clone class Individual(): def __init__(self): self.brain = Brain() def getChild(self): child = Individual() child.brain = self.brain.clone() return child parent = Individual() child = parent.getChild() parent.brain.directions[0] = [5, 2] print(parent.brain.directions) print(child.brain.directions) [ [5, 2], [5, 3], [7, 4], [1, 5] ] [ [1, 2], [5, 3], [7, 4], [1, 5] ]
1
1
0
0
0
0
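The difference above is aliasing versus copying: `child.brain = self.brain` makes both individuals hold the *same* Brain object, while `clone()` builds a new one. A stdlib demonstration of the same effect, reduced to lists:

```python
import copy

parent = [[1, 2], [5, 3]]

shared = parent                 # first snippet: same object, two names
cloned = copy.deepcopy(parent)  # second snippet's clone(), done thoroughly

parent[0] = [9, 9]
print(shared[0])  # change is visible through the alias
print(cloned[0])  # the copy is unaffected
```

One caveat on the second snippet in the question: `clone.directions[i] = j` copies only the outer list, so the inner direction lists are still shared. Rebinding `directions[0] = [5, 2]` doesn't leak to the child, but mutating in place (`directions[0][0] = 5`) would; `copy.deepcopy` avoids both cases.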
I am trying to find city names and person names in unstructured data files. We have many text files. How can I find such strings using pandas or Python? For example, I have to find the strings Ram and Mumbai in an unstructured data file.
1
1
0
0
0
0
I am doing a kind of transfer learning. What I have done is: first train the model on the big datasets and save the weights. Then I train the model on my dataset with the layers frozen. But I see there was some overfitting, so I try to change the model's dropout and then load the weights, but the layer numbering changes when the dropout changes. I am having difficulty changing the dropout. My question is: is it possible to change the model's dropout while loading the weights? My scenario 1 is like this: model defined. train the model. save weights. ... redefine the dropout, nothing else changed in the model. load the weights. I got the error. 2nd scenario: model1 defined. train the model. save weights. load model1 weights into model1. ... model2 defined by changing the dropouts. try to set the weights of model1 on model2 using a for loop, skipping the dropout layer. I got an error. This is the error I got. File "/home/sathiyakugan/PycharmProjects/internal-apps/apps/support-tools/EscalationApp/LSTM_Attention_IMDB_New_open.py", line 343, in <module> NewModel.layers[i].set_weights(layer.get_weights()) File "/home/sathiyakugan/PycharmProjects/Python/venv/lib/python3.5/site-packages/keras/engine/base_layer.py", line 1062, in set_weights str(weights)[:50] + '...') ValueError: You called `set_weights(weights)` on layer "lstm_5" with a weight list of length 1, but the layer was expecting 3 weights. Provided weights: [array([[ 0. , 0. , 0. , ..., 0.... What is the right way to go? Since I am new to Keras, I am struggling to go further.
1
1
0
0
0
0
What I want to know is what the best approach would be to extract meaning from a text. I gave NLTK a read, and it did give me some good information on the basics of NLP. I'm new to NLP, so I'm having a tough time deciding what my direction should be. After reading the NLTK text, here's what I'm thinking would solve my problem: Here is my ideal goal with example sentences: Input: Do X on 8/29/2018 until 9/12/2018 (every Wednesday) and 9/10/2018 (Monday) Output Part of Speech tag in bold (at least what I envision): Do X on 8/29/2018(Date) until(Range) 9/12/2018(Date) (every(Frequency) Wednesday(Day) and 9/10/2018(Date) (Monday)(Day) Next, I would loop through the part of speech tags and chunk the text. My hope is that after chunking the text appropriately, I would then need to do some additional processing in order to figure out what the user exactly wants. Here's what I'm thinking the output would be after a successful meaning extraction from the sentence: 8/29/2018 - 9/12/2018, Wednesdays 9/10/2018, Monday I realize that days of the month, days and dates, etc. can be easily found through a regular expression. But my issue is that the NLTK nltk.pos_tag method would not work for me. (For those who aren't familiar, the method is a part of speech tagger, tagging words like noun, verb etc.) I would most likely have to customize my own pos_tag method? So here's my question. Is tagging each tokenized word first, then chunking the sentences from the tags, considered best practice to extract meaning? I'm guessing I would need some sort of AI classification to learn the chunking part so that in the future I can extract meaning from more than one sentence. Is my approach sane? Have I gone mad? :)
1
1
0
0
0
0
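A stdlib sketch of the rule-based tagging step the question describes: tag dates, ranges and frequency words with regexes and word lists before any learned tagger gets involved. The tag names are the question's own invented categories, not NLTK tags.

```python
import re

text = "Do X on 8/29/2018 until 9/12/2018 (every Wednesday) and 9/10/2018 (Monday)"

DATE = r"\d{1,2}/\d{1,2}/\d{4}"
DAYS = {"Monday", "Tuesday", "Wednesday", "Thursday", "Friday",
        "Saturday", "Sunday"}

tags = []
# Tokenise so a whole date survives as one token.
for tok in re.findall(rf"{DATE}|\w+", text):
    if re.fullmatch(DATE, tok):
        kind = "DATE"
    elif tok == "until":
        kind = "RANGE"
    elif tok == "every":
        kind = "FREQ"
    elif tok in DAYS:
        kind = "DAY"
    else:
        kind = "O"
    tags.append((tok, kind))
print(tags)
```

This custom layer can replace `nltk.pos_tag` for the domain-specific labels, and the subsequent chunking pass then only has to recognise patterns like DATE RANGE DATE or FREQ DAY.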
I am trying to make a tree (nested dictionary) from the output of dependency parser. The sentence is "I shot an elephant in my sleep". I am able to get the output as described on the link: How do I do dependency parsing in NLTK? nsubj(shot-2, I-1) det(elephant-4, an-3) dobj(shot-2, elephant-4) prep(shot-2, in-5) poss(sleep-7, my-6) pobj(in-5, sleep-7) To convert this list of tuples into nested dictionary, I used the following link: How to convert python list of tuples into tree? def build_tree(list_of_tuples): all_nodes = {n[2]:((n[0], n[1]),{}) for n in list_of_tuples} root = {} print all_nodes for item in list_of_tuples: rel, gov,dep = item if gov is not 'ROOT': all_nodes[gov][1][dep] = all_nodes[dep] else: root[dep] = all_nodes[dep] return root This gives the output as follows: {'shot': (('ROOT', 'ROOT'), {'I': (('nsubj', 'shot'), {}), 'elephant': (('dobj', 'shot'), {'an': (('det', 'elephant'), {})}), 'sleep': (('nmod', 'shot'), {'in': (('case', 'sleep'), {}), 'my': (('nmod:poss', 'sleep'), {})})})} To find the root to leaf path, I used the following link: Return root to specific leaf from a nested dictionary tree [Making the tree and finding the path are two separate things]The second objective is to find the root to leaf node path like done Return root to specific leaf from a nested dictionary tree. But I want to get the root-to-leaf (dependency relationship path) So, for instance, when I will call recurse_category(categories, 'an') where categories is the nested tree structure and 'an' is the word in the tree, I should get ROOT-nsubj-dobj (dependency relationship till root) as output.
1
1
0
0
0
0
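For the second objective, a recursive walk over the nested `{word: ((rel, gov), children)}` structure collects the relation labels from ROOT down to the target word. Note that each node contributes its own incoming relation, so for 'an' the path through the example tree is ROOT, dobj, det (the word's ancestors' relations plus its own). A sketch on a subset of the tree:

```python
# Subset of the question's nested dictionary tree.
tree = {"shot": (("ROOT", "ROOT"),
                 {"I": (("nsubj", "shot"), {}),
                  "elephant": (("dobj", "shot"),
                               {"an": (("det", "elephant"), {})})})}

def rel_path(node, target, acc):
    """Depth-first search returning the relation labels down to target."""
    for word, (info, children) in node.items():
        rel = info[0]
        if word == target:
            return acc + [rel]
        found = rel_path(children, target, acc + [rel])
        if found:
            return found
    return None

print(rel_path(tree, "an", []))
```

Joining the result with "-" gives the dash-separated string the question asks for, and the same function works unchanged on the full seven-word tree.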
Hi, I am working on an NLP project where I need to identify entities/organization names in text. However, the words in the string are concatenated with (_ : ,) characters, as shown below: RING_LECO:108_.250X.436X.093V_772_520 I would want to clean the string as below: Ring Leco 108 .250X.436X.093V 772_520 We have removed the special characters between two words (A-Z:A-Z, A-Z:0-9) but retained the _ symbol between 772 and 520. Is there any way that I could do this?
1
1
0
0
0
0
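One regex covers the rule described above: replace `_` or `:` with a space *unless both neighbours are digits*, which is what keeps part numbers like 772_520 intact. A hedged sketch (case normalisation like "RING" to "Ring" would be a separate step):

```python
import re

s = "RING_LECO:108_.250X.436X.093V_772_520"

# First alternative fires when the char BEFORE is not a digit,
# second when the char AFTER is not a digit; 772_520 matches neither.
cleaned = re.sub(r"(?<!\d)[_:]|[_:](?!\d)", " ", s)
print(cleaned)
```

The lookarounds are zero-width, so neighbouring characters are never consumed and adjacent separators are each judged on their own context.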
from librosa.feature import mfcc from librosa.core import load def extract_mfcc(sound): data, frame = load(sound) return mfcc(data, frame) mfcc = extract_mfcc("sound.wav") I would like to get the MFCC of the following sound.wav file, which is 48 seconds long. I understand that len(data) divided by the sample rate gives the length of the audio. But when I compute the MFCC as shown above and get its shape, this is the result: (20, 2086) What do those numbers represent? How can I calculate the time of the audio just from its MFCC? I'm trying to calculate the average MFCC per ms of audio. Any help is appreciated! Thank you :)
1
1
0
0
0
0
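The arithmetic behind the (20, 2086) shape, assuming librosa's defaults: 20 is `n_mfcc` (coefficients per frame) and 2086 is the number of frames, where consecutive frames advance by `hop_length` samples. A librosa-free sketch of how frames map back to time:

```python
# librosa defaults assumed: load() resamples to sr=22050, mfcc hop_length=512.
sr, hop_length, n_frames = 22050, 512, 2086

seconds_per_frame = hop_length / sr
duration = n_frames * seconds_per_frame
ms_per_frame = seconds_per_frame * 1000

print(round(duration, 1))      # close to the known 48-second length
print(round(ms_per_frame, 2))  # time step of each MFCC column
```

So "average MFCC per ms" amounts to dividing each column's values over its ~23 ms span, or interpolating columns onto a 1 ms grid; `librosa.frames_to_time` does the frame-to-seconds conversion directly if librosa is available.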
I am learning natural language processing for bigram topic. At this stage, I am having difficulty in the Python computation, but I try. I will be using this corpus that has not been subjected to tokenization as my main raw dataset. I can generate the bigram results using nltk module. However, my question is how to compute in Python to generate the bigrams containing more than two specific words. More specifically, I wish to find all the bigrams, which are available in corpus_A, that contain words from the word_of_interest. corpus = ["he is not giving up so easily but he feels lonely all the time his mental is strong and he always meet new friends to get motivation and inspiration to success he stands firm for academic integrity when he was young he hope that santa would give him more friends after he is a grown up man he stops wishing for santa clauss to arrival he and his friend always eat out but they clean their hand to remove sand first before eating"] word_of_interest = ['santa', 'and', 'hand', 'stands', 'handy', 'sand'] I want to get the bigram for each of the individual words from the list of word_of_interest. Next, I want to get the frequency for each bigram available based on their appearance in the corpus_A. With the frequency available, I want to sort and print out the bigram based on their probability from highest to lower. I have tried out codes from on-line search but it does not give me an output. The codes are mentioned below: for i in corpus: bigrams_i = BigramCollocationFinder.from_words(corpus, window_size=5) bigram_j = lambda i[x] not in i x += 1 print(bigram_j) Unfortunately, the output did not return what I am planning to achieve. Please advice me. The output that I want will have the bigram with the specific words from the word_of_interest and their probabilities sorted as shown below. [((santa, clauss), 0.89), ((he, and), 0.67), ((stands, firm), 0.34))]
1
1
0
0
0
0
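A stdlib sketch of the whole pipeline asked for above: build bigrams, keep only those touching a word of interest, and rank them by relative frequency. The shortened corpus and two-word interest set are illustrative; the same code runs unchanged on the full lists.

```python
from collections import Counter

corpus = ("he is not giving up so easily but he feels lonely all the time "
          "his mental is strong and he always meet new friends").split()
word_of_interest = {"and", "he"}

# Adjacent-pair bigrams; keep only those containing a word of interest.
bigrams = list(zip(corpus, corpus[1:]))
counts = Counter(bg for bg in bigrams if word_of_interest & set(bg))
total = sum(counts.values())

ranked = sorted(((bg, round(c / total, 2)) for bg, c in counts.items()),
                key=lambda x: x[1], reverse=True)
print(ranked[:3])
```

`nltk.BigramCollocationFinder` with an `apply_ngram_filter` rejecting pairs that miss the interest set achieves the same thing with proper scoring measures, but the filtering idea is identical.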
Here is the code snippet: In [390]: t Out[390]: ['my', 'phone', 'number', 'is', '1111', '1111', '1111'] In [391]: ner_tagger.tag(t) Out[391]: [('my', 'O'), ('phone', 'O'), ('number', 'O'), ('is', 'O'), ('1111\xa01111\xa01111', 'NUMBER')] What I expect is: Out[391]: [('my', 'O'), ('phone', 'O'), ('number', 'O'), ('is', 'O'), ('1111', 'NUMBER'), ('1111', 'NUMBER'), ('1111', 'NUMBER')] As you can see the artificial phone number is joined by \xa0 which is said to be a non-breaking space. Can I separate that by setting the CoreNLP without changing other default rules. The ner_tagger is defined as: ner_tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
1
1
0
0
0
0
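If changing the server-side tokenization rules proves awkward, a small post-processing pass recovers the expected output: split any token glued with non-breaking spaces back into parts and repeat its tag. A stdlib sketch on the question's own result:

```python
tagged = [("my", "O"), ("phone", "O"), ("number", "O"), ("is", "O"),
          ("1111\xa01111\xa01111", "NUMBER")]

# Re-split on U+00A0 and carry the NER tag onto every resulting part.
flat = [(part, tag)
        for token, tag in tagged
        for part in token.split("\xa0")]
print(flat)
```

Tokens without `\xa0` pass through untouched, since `split` returns them as a one-element list.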
I've been trying to plot a graph using networkx of which the nodes' name are Thai language. The problem is it cannot show in Thai and draw_networkx() does not provide parameter for utf_8. Please give me a suggestion. import networkx as nx G=nx.Graph() G.add_node('กิน') G.add_node('หิว') G.add_node('ข้าว') G.add_node('ถั่ว') G.add_node('milk') G.add_edge('กิน','หิว') G.add_edge('กิน','ข้าว') G.add_edge('กิน','ถั่ว') G.add_edge('กิน','milk') pos = nx.spring_layout(G) nx.draw_networkx(G, pos,node_size= 2400) plt.savefig('test.png')
1
1
0
0
0
0
For Skip-gram word2vec, training samples are obtained as follows: Sentence: The fox was running across the maple forest The word fox gives the following pairs for training: fox-run, fox-across, fox-maple, fox-forest, etc., for every word. CBOW w2v uses the reverse approach: run-fox, across-fox, maple-fox, forest-fox, or for the word forest: fox-forest, run-forest, across-forest, maple-forest So we get all the pairs. What's the difference between Skip-gram word2vec and CBOW w2v during training with the gensim library, if we do not specify the target word when training in CBOW mode? Are all pairs of words used in both cases, or not?
1
1
0
1
0
0
When I use SpaCy to identify stopwords, it doesn't work if I use the en_core_web_lg corpus, but it does work when I use en_core_web_sm. Is this a bug, or am I doing something wrong? import spacy nlp = spacy.load('en_core_web_lg') doc = nlp(u'The cat ran over the hill and to my lap') for word in doc: print(f' {word} | {word.is_stop}') Result: The | False cat | False ran | False over | False the | False hill | False and | False to | False my | False lap | False However, when I change this line to use the en_core_web_smcorpus, I get different results: nlp = spacy.load('en_core_web_sm') The | False cat | False ran | False over | True the | True hill | False and | True to | True my | True lap | False
1
1
0
0
0
0
I am trying to run StanfordCoreNLP parser and I have the following code: from pycorenlp import StanfordCoreNLP nlp = StanfordCoreNLP('http://localhost:9000') def depparse(text): parsed="" output = nlp.annotate(text, properties={ 'annotators': 'depparse', 'outputFormat': 'json' }) for i in output["sentences"]: for j in i["basicDependencies"]: parsed=parsed+str(j["dep"]+'('+ j["governorGloss"]+' ')+str(j["dependentGloss"]+')'+' ') return parsed text='I shot an elephant in my sleep' depparse(text) This gives me output as: 'ROOT(ROOT shot) nsubj(shot I) det(elephant an) dobj(shot elephant) case(sleep in) nmod:poss(sleep my) nmod(shot sleep) ' To convert the relationships into tree, I am encountered one stackoverflow post Stanford NLP parse tree format. However, the output of the parser is in "bracketed parse (tree)". Hence, I am not sure how can I achieve it. I tried changing the outputformat as well but it gives an error. I also found Python - Generate a dictionary(tree) from a list of tuples and implemented list_of_tuples = [('ROOT','ROOT', 'shot'),('nsubj','shot', 'I'),('det','elephant', 'an'),('dobj','shot', 'elephant'),('case','sleep', 'in'),('nmod:poss','sleep', 'my'),('nmod','shot', 'sleep')] nodes={} for i in list_of_tuples: rel,parent,child=i nodes[child]={'Name':child,'Relationship':rel} forest=[] for i in list_of_tuples: rel,parent,child=i node=nodes[child] if parent=='ROOT':# this should be the Root Node forest.append(node) else: parent=nodes[parent] if not 'children' in parent: parent['children']=[] children=parent['children'] children.append(node) print forest I got the following output [{'Name': 'shot', 'Relationship': 'ROOT', 'children': [{'Name': 'I', 'Relationship': 'nsubj'}, {'Name': 'elephant', 'Relationship': 'dobj', 'children': [{'Name': 'an', 'Relationship': 'det'}]}, {'Name': 'sleep', 'Relationship': 'nmod', 'children': [{'Name': 'in', 'Relationship': 'case'}, {'Name': 'my', 'Relationship': 'nmod:poss'}]}]}]
1
1
0
0
0
0
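Given the nested-dict forest the question already builds, a short recursive printer renders it as an indented tree, which is often all the "visual" that is needed. This is only a sketch over that existing structure, not a CoreNLP feature.

```python
# Recursive pretty-printer over the nested-dict "forest" structure:
# each node is rendered as "name (relation)", children indented below it.
def print_tree(node, depth=0):
    lines = ["  " * depth + f"{node['Name']} ({node['Relationship']})"]
    for child in node.get("children", []):
        lines.extend(print_tree(child, depth + 1))
    return lines

forest = [{'Name': 'shot', 'Relationship': 'ROOT', 'children': [
    {'Name': 'I', 'Relationship': 'nsubj'},
    {'Name': 'elephant', 'Relationship': 'dobj',
     'children': [{'Name': 'an', 'Relationship': 'det'}]}]}]

for root in forest:
    print("\n".join(print_tree(root)))
```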
I am working with the Gensim library to train some data files using doc2vec. When I try to test the similarity of one of the files using the method model.docvecs.most_similar("file"), I always get all the results above 91% with almost no difference between them (which is not logical), because the files do not have similarities between them, so the results are inaccurate. Here is the code for training the model model = gensim.models.Doc2Vec(vector_size=300, min_count=0, alpha=0.025, min_alpha=0.00025,dm=1) model.build_vocab(it) for epoch in range(100): model.train(it,epochs=model.iter, total_examples=model.corpus_count) model.alpha -= 0.0002 model.min_alpha = model.alpha model.save('doc2vecs.model') model_d2v = gensim.models.doc2vec.Doc2Vec.load('doc2vecs.model') sim = model_d2v.docvecs.most_similar('file1.txt') print(sim) **This is the output:** [('file2.txt', 0.9279470443725586), ('file6.txt', 0.9258157014846802), ('file3.txt', 0.92499840259552), ('file5.txt', 0.9209873676300049), ('file4.txt', 0.9180108308792114), ('file7.txt', 0.9141069650650024)] What am I doing wrong? How could I improve the accuracy of the results?
1
1
0
0
0
0
A user uploads tabular data with information like classes, professors, schedules and such. I want to easily extract that information. I can use an OCR library, but it would simply output the text randomly mixed together, and I would have no idea what anything belongs to. Is there a way to train the OCR a little to only look at a certain part of the image (form) and then label the data, so that when it extracts the text it's all labeled? For example, suppose I had a form with lots of data; I want it to only look at the address section and label it. Or, if it's spreadsheet-like data, I want it to label it by columns. Simply extracting all the text into a string isn't that useful.
1
1
0
0
0
0
I would like to identify all types of numbers in a string. Example: a = 'I 0.34 -345 3/4 3% want to get -0.34 2018-09 all numbers' Result: ['I', '_num', '_num', '_num', '_num', 'want', 'to', 'get', '_num', '_num', 'all', 'numbers'] It is an NLP project, and I wonder if there is a better method to get the result. I could just list all the types and then use regex, but that's not concise. Does anyone have better ideas?
1
1
0
0
0
0
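One compact option is a single anchored pattern that covers every number shape in the example: signed integers and decimals, fractions (3/4), percentages (3%), and dashed dates (2018-09). The pattern below is a sketch tuned to exactly those shapes; other formats (thousands separators, scientific notation) would need extending it.

```python
import re

# Matches an optional sign, digits, then any number of ./- separated digit
# groups (covers 0.34, 3/4, 2018-09), and an optional trailing percent sign.
NUM_RE = re.compile(r'^[+-]?\d+(?:[./-]\d+)*%?$')

def tag_numbers(text):
    return ['_num' if NUM_RE.match(tok) else tok for tok in text.split()]

a = 'I 0.34 -345 3/4 3% want to get -0.34 2018-09 all numbers'
print(tag_numbers(a))
```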
I'm attempting to write a custom attention layer in Keras, but I'm confused about why this error keeps happening after trying lots of methods. Traceback (most recent call last): File "D:/Users/LawLi/PyCharmProjects/fixed_talentDNA/adx.py", line 52, in <module> print(model.predict([tensor1, tensor2, indices])) # (bs1, sl1, sl2) File "D:\Users\LawLi\Anaconda3\lib\site-packages\keras\engine\training.py", line 1172, in predict steps=steps) File "D:\Users\LawLi\Anaconda3\lib\site-packages\keras\engine\training_arrays.py", line 293, in predict_loop ins_batch = slice_arrays(ins, batch_ids) File "D:\Users\LawLi\Anaconda3\lib\site-packages\keras\utils\generic_utils.py", line 507, in slice_arrays return [None if x is None else x[start] for x in arrays] File "D:\Users\LawLi\Anaconda3\lib\site-packages\keras\utils\generic_utils.py", line 507, in <listcomp> return [None if x is None else x[start] for x in arrays] IndexError: index 4 is out of bounds for axis 0 with size 4 This is my test code for the custom layer; it contains a sample prediction and can be run directly. import numpy as np from keras.layers import * from keras.models import Model from keras.utils import to_categorical from keras.layers.merge import * class CustomLayer(Layer): def __init__(self, **kwargs): self.supports_masking = True super(CustomLayer, self).__init__(**kwargs) def build(self, input_shape): assert len(input_shape) == 3 super(CustomLayer, self).build(input_shape) def compute_mask(self, inputs, mask=None): return None def call(self, x, mask=None): tensor1, tensor2, ind = x[0], x[1], x[2] # (bs1, sl1, wd) (bs2, sl2, wd) (bs1, bs2. 
sl1, sl2) tensor2 = K.permute_dimensions(tensor2, [0, 2, 1]) align = K.dot(tensor1, tensor2) align = K.permute_dimensions(align, [0, 2, 1, 3]) # (bs1, bs2, sl1, sl2) align = align + ind align = K.max(align, axis=1) align = K.sum(align, axis=2) align = K.softmax(align, axis=1) weighted_ans = tensor1 * K.expand_dims(align, 2) return K.sum(weighted_ans, axis=1) def compute_output_shape(self, input_shape): t1_shape, t2_shape = input_shape[0], input_shape[1] return t1_shape[0], t1_shape[1], t1_shape[2] # model example t1 = Input(shape=(7, 3)) t2 = Input(batch_shape=(4, 6, 3)) t3 = Input(shape=(4, 7, 6)) output = CustomLayer()([t1, t2, t3]) model = Model([t1, t2, t3], output) # data example tensor1 = np.random.rand(10, 7, 3) # (bs1, sl1, wd) tensor2 = np.random.rand(4, 6, 3) # (bs2, sl2, wd) indices = np.array([0, 1, 3, 2, 0, 1, 2, 2, 3, 1]) # (bs1, 1) indices = to_categorical(indices, num_classes=4) * 999 - 999 # (bs1, bs2) indices = np.expand_dims(indices, axis=2) indices = np.expand_dims(indices, axis=3) indices = np.repeat(indices, 7, axis=2).repeat(6, axis=3) print(model.predict([tensor1, tensor2, indices])) # (bs1, sl1, wd) thank you for help.
1
1
0
0
0
0
I am using keras.backend.argmax() in a Lambda layer. The model compiles fine but throws an error during fit(). ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval. My model: latent_dim = 512 encoder_inputs = Input(shape=(train_data.shape[1],)) encoder_dense = Dense(vocabulary, activation='softmax') encoder_outputs = Embedding(vocabulary, latent_dim)(encoder_inputs) encoder_outputs = LSTM(latent_dim, return_sequences=True)(encoder_outputs) encoder_outputs = Dropout(0.5)(encoder_outputs) encoder_outputs = encoder_dense(encoder_outputs) encoder_outputs = Lambda(K.argmax, arguments={'axis':-1})(encoder_outputs) encoder_outputs = Lambda(K.cast, arguments={'dtype':'float32'})(encoder_outputs) encoder_dense1 = Dense(train_label.shape[1], activation='softmax') decoder_embedding = Embedding(vocabulary, latent_dim) decoder_lstm1 = LSTM(latent_dim, return_sequences=True) decoder_lstm2 = LSTM(latent_dim, return_sequences=True) decoder_dense2 = Dense(vocabulary, activation='softmax') decoder_outputs = encoder_dense1(encoder_outputs) decoder_outputs = decoder_embedding(decoder_outputs) decoder_outputs = decoder_lstm1(decoder_outputs) decoder_outputs = decoder_lstm2(decoder_outputs) decoder_outputs = Dropout(0.5)(decoder_outputs) decoder_outputs = decoder_dense2(decoder_outputs) model = Model(encoder_inputs, decoder_outputs) model.summary() Model summary for easy visualizing: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_7 (InputLayer) (None, 32) 0 _________________________________________________________________ embedding_13 (Embedding) (None, 32, 512) 2018816 _________________________________________________________________ lstm_19 (LSTM) (None, 32, 512) 2099200 
_________________________________________________________________ dropout_10 (Dropout) (None, 32, 512) 0 _________________________________________________________________ dense_19 (Dense) (None, 32, 3943) 2022759 _________________________________________________________________ lambda_5 (Lambda) (None, 32) 0 _________________________________________________________________ lambda_6 (Lambda) (None, 32) 0 _________________________________________________________________ dense_20 (Dense) (None, 501) 16533 _________________________________________________________________ embedding_14 (Embedding) (None, 501, 512) 2018816 _________________________________________________________________ lstm_20 (LSTM) (None, 501, 512) 2099200 _________________________________________________________________ lstm_21 (LSTM) (None, 501, 512) 2099200 _________________________________________________________________ dropout_11 (Dropout) (None, 501, 512) 0 _________________________________________________________________ dense_21 (Dense) (None, 501, 3943) 2022759 ================================================================= Total params: 14,397,283 Trainable params: 14,397,283 Non-trainable params: 0 _________________________________________________________________ I googled for a solution, but almost all the results were about a faulty model. Some recommended not using the functions that cause this issue. However, as you can see, I cannot create this model without K.argmax (if you know any other way, do tell me). How do I solve this issue and hence train my model?
1
1
0
0
0
0
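K.argmax is a discrete selection, so it genuinely has no gradient. A common substitute is a "soft-argmax": softmax the scores (with a temperature to sharpen them toward the hard result) and take the expected index position, which is fully differentiable. The numpy sketch below only illustrates the computation; inside the model it would be expressed with backend ops in the Lambda layer, and whether an expected index is a sensible Embedding input is a separate design question.

```python
import numpy as np

def soft_argmax(logits, temperature=100.0):
    # Sharpen with a temperature, then softmax (shifted for numerical stability)
    z = logits * temperature
    z = z - z.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # Expected index position: a smooth, differentiable stand-in for argmax
    positions = np.arange(logits.shape[-1])
    return (p * positions).sum(axis=-1)

x = np.array([0.1, 0.7, 0.2])
print(soft_argmax(x))   # very close to 1, the hard argmax
```

The other standard escape is to restructure the model so the argmax never sits on the gradient path at all (e.g. feed the encoder's softmax output directly into the next layer instead of its index).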
I have two strings string1 = "apple banna kiwi mango" string2 = "aple banana mango lemon" I want the result of adding these two strings (not concatenating them), i.e. the result should look like result = "apple banana kiwi mango lemon" My current approach is rather simple. Tokenize the multiline string (the above strings are after tokenization), remove any noise (special/newline characters/empty strings). The next step is to identify the cosine similarity of the strings; if it is above 0.9, then I add one of the strings to the final result. Now, here is the problem. It doesn't cover the part where one string contains one half of a word and the other contains the other half (or, in some cases, the correct spelling) of the word. I have also added this function in my script. But again the problem remains. Any help on how to move forward with this is appreciated. import re import math from collections import Counter WORD = re.compile(r'\w+') def text_to_vector(text): words = WORD.findall(text) return Counter(words) def get_cosine(vec1, vec2): intersection = set(vec1.keys()) & set(vec2.keys()) numerator = sum([vec1[x] * vec2[x] for x in intersection]) sum1 = sum([vec1[x]**2 for x in vec1.keys()]) sum2 = sum([vec2[x]**2 for x in vec2.keys()]) denominator = math.sqrt(sum1) * math.sqrt(sum2) if not denominator: return 0.0 else: return float(numerator) / denominator def merge_string(string1, string2): i = 0 while not string2.startswith(string1[i:]): i += 1 sFinal = string1[:i] + string2 return sFinal for item in c: for j in d: vec1 = text_to_vector(item) vec2 = text_to_vector(j) r = get_cosine(vec1, vec2) if r > 0.5: if r > 0.85: final.append(item) break else: sFinal = merge_string(item, j) #print("1.", len(sFinal), len(item), len(j)) if len(sFinal) >= len(item) + len(j) -8: sFinal = merge_string(j, item) final.append(sFinal) #print("2.", len(sFinal), len(item), len(j)) temp.append([item, j]) break
1
1
0
0
0
0
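One way to handle the misspelled/partial-word case is to merge at the token level and collapse near-duplicate tokens with difflib, keeping the longer spelling of each pair. This is a sketch of the idea, not a general solution: the 0.8 cutoff and "longer spelling wins" rule are assumptions that happen to work for the example pair.

```python
import difflib

def merge_token_strings(s1, s2, cutoff=0.8):
    merged = list(s1.split())
    for tok in s2.split():
        # Find an already-merged token that is nearly the same spelling
        close = difflib.get_close_matches(tok, merged, n=1, cutoff=cutoff)
        if close:
            # Near-duplicate pair: keep whichever spelling is longer
            if len(tok) > len(close[0]):
                merged[merged.index(close[0])] = tok
        else:
            merged.append(tok)
    return ' '.join(merged)

print(merge_token_strings("apple banna kiwi mango", "aple banana mango lemon"))
```

Here 'aple' collapses into 'apple', 'banna' is upgraded to 'banana', and 'lemon' (similar to nothing) is appended, giving the desired "apple banana kiwi mango lemon".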
I have a pandas dataframe that looks like the following: Type Keywords ---- -------- Animal [Pigeon, Bird, Raccoon, Dog, Cat] Pet [Dog, Cat, Hamster] Pest [Rat, Mouse, Raccoon, Pigeon] Farm [Chicken, Horse, Cow, Sheep] Predator [Wolf, Fox, Raccoon] Let's say that I have the following string: input = "There is a dead rat and raccoon in my pool" Given that I tokenize the string and remove stop-words so that it becomes input = [Dead, Rat, Raccoon, Pool] I need to go through each row and find the rows that have the highest number of keyword matches. With the given example, the results would look like the following: Type Keywords Matches ---- -------- ------- Animal [Pigeon, Bird, Raccoon, Dog, Cat] 1 Pet [Dog, Cat, Hamster] 0 Pest [Rat, Mouse, Raccoon, Pigeon] 2 Farm [Chicken, Horse, Cow, Sheep] 0 Predator [Wolf, Fox, Raccoon] 1 The output would be the top three Type names that have the highest number of matches. In the above case, since the "Pest" category has the highest number of matches, it would be selected as the highest match. Additionally both the Animal and Predator categories would be selected. The output in order would thus be: output = [Pest, Animal, Predator] Doing this task with nested for loops is easy, but since I have thousands of these kinds of rows, I'm looking for a better solution. (Additionally for some reason I have encountered a lot of bugs when using non in-built functions with pandas, perhaps it's because of vectorization?) I looked at the groupby and isin functions that are inbuilt in pandas, but as far as I could tell they would not be able to get me to the output that I want (I would not be surprised at all if I am incorrect in this assumption). I next investigated the usage of sets and hashmaps with pandas, but unfortunately my coding knowledge and current ability is not yet proficient enough to craft a solid solution. 
This StackOverflow link in particular got me much closer to what I wanted, though it didn't find the top three matching row names. I would greatly appreciate any help or advice.
1
1
0
0
0
0
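The nested loop over tokens can be avoided by turning each keyword list into a set once, so counting matches per row becomes a single set intersection; with pandas this same counting fits into one .map/.apply over a Series of frozensets followed by nlargest(3). A plain-Python sketch of the logic, using the question's own data:

```python
rows = {
    'Animal':   {'Pigeon', 'Bird', 'Raccoon', 'Dog', 'Cat'},
    'Pet':      {'Dog', 'Cat', 'Hamster'},
    'Pest':     {'Rat', 'Mouse', 'Raccoon', 'Pigeon'},
    'Farm':     {'Chicken', 'Horse', 'Cow', 'Sheep'},
    'Predator': {'Wolf', 'Fox', 'Raccoon'},
}

tokens = {'Dead', 'Rat', 'Raccoon', 'Pool'}

# One set-intersection per row instead of a token-by-token loop
matches = {name: len(keywords & tokens) for name, keywords in rows.items()}
top3 = sorted(matches, key=matches.get, reverse=True)[:3]
print(top3)
```

Sorting is stable, so among the tied one-match rows the original order (Animal before Predator) is preserved, matching the expected output.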
I would like someone to correct my understanding of how VADER scores text. I've read an explanation of this process here; however, I cannot match the compound score of test sentences to VADER's output when recreating the process it describes. Let's say we have the sentence: "I like using VADER, its a fun tool to use" The words VADER picks up are 'like' (+1.5 score) and 'fun' (+2.3). According to the documentation, these values are summed (so +3.8) and then normalized to a range between 0 and 1 using the following function: (alpha = 15) x / (x^2 + alpha) With our numbers, this should become: 3.8 / (14.44 + 15) = 0.1290 VADER, however, outputs the returned compound score as follows: Scores: {'neg': 0.0, 'neu': 0.508, 'pos': 0.492, 'compound': 0.7003} Where am I going wrong in my reasoning? Similar questions have been asked several times, however an actual example of VADER classifying has not yet been provided. Any help would be appreciated.
1
1
0
0
0
0
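The discrepancy comes down to a missing square root: VADER's normalization is x / sqrt(x^2 + alpha), which maps sums into (-1, 1), not x / (x^2 + alpha). Redoing the hand calculation with the square root reproduces the reported compound (assuming the summed valence really is 3.8 as described):

```python
import math

x, alpha = 3.8, 15
# VADER's normalization: x / sqrt(x^2 + alpha)
compound = x / math.sqrt(x * x + alpha)
print(round(compound, 4))   # 0.7003, matching VADER's reported compound
```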
I am able to extract topics from an LDA model using gensim. When I print the topics, it displays each topic with 10 words by default. I want to show 15 words per topic. I tried to change it, but I am still getting 10 words per topic. How can I change this default behavior? Here is the code: for n, topic in model.show_topics(num_topics=-1, num_words=15,formatted=False): topic = [word for word, _ in topic] cm = CoherenceModel(topics=[topic], texts=documents, dictionary=dictionary, window_size=10) coherence_values[n] = cm.get_coherence() top_topics = sorted(coherence_values.items(), key=operator.itemgetter(1), reverse=True) result.append((model, top_topics)) and for printing the topics: pprint([lm.show_topic(topicid) for topicid, c_v in top_topics[:8]])
1
1
0
0
0
0
File "Predicting_stock", line 9 def get_data(HistoricalQuotes.csv): ^ SyntaxError: invalid syntax
1
1
0
0
0
0
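The SyntaxError is in the function signature itself: a parameter name must be a plain identifier, and "HistoricalQuotes.csv" contains a dot. The usual fix is to take a filename parameter and pass the path in as a string when calling; the CSV handling below is only a sketch of that shape.

```python
import csv

def get_data(filename):
    # filename is an ordinary string argument, e.g. 'HistoricalQuotes.csv'
    with open(filename, newline='') as f:
        return list(csv.reader(f))

# rows = get_data('HistoricalQuotes.csv')   # called with the path as a string
```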
Okay, So I wrote the following agent for a bot to play tic tac toe. I have used the traditional minimax algorithm without pruning. The thing is that it works perfectly for a 3x3 board. But when I run this on a 4x4 board, it gets stuck computing. I am not able to understand why. I am passing the agent a numpy array perspectiveState, which has 0 for empty, 1 for the agents move, and -1 for the opponents moves. It returns the position of its next move (1). The flow of control starts from the turn() function, which calls the minimax() function. What am I doing wrong here? class MiniMaxAgent: def isMovesLeft(self, perspectiveState): size = perspectiveState.shape[0] #print('!!', np.abs(perspectiveState).sum()) if np.abs(perspectiveState).sum() == size*size: return False return True def evaluate(self, perspectiveState): size = perspectiveState.shape[0] rsum = perspectiveState.sum(axis=0) csum = perspectiveState.sum(axis=1) diagSum = perspectiveState.trace() antiDiagSum = np.fliplr(perspectiveState).trace() if size in rsum or size in csum or size == diagSum or size == antiDiagSum: return 10 if -1*size in rsum or -1*size in csum or -1*size == diagSum or -1*size == antiDiagSum: return -10 return 0 def minimax(self, perspectiveState, isMax): score = self.evaluate(perspectiveState) if score == 10: return score if score == -10: return score if not self.isMovesLeft(perspectiveState): return 0 if isMax: best = -1000 for i in range(perspectiveState.shape[0]): for j in range(perspectiveState.shape[0]): if perspectiveState[i,j]==0: perspectiveState[i,j] = 1 #print('@', isMax) best = max(best, self.minimax(perspectiveState, not isMax)) perspectiveState[i,j] = 0 #print('#', best) return best else: best = 1000; for i in range(perspectiveState.shape[0]): for j in range(perspectiveState.shape[0]): if perspectiveState[i,j]==0: perspectiveState[i,j] = -1 #print('@', isMax) best = min(best, self.minimax(perspectiveState, not isMax)) perspectiveState[i,j] = 0 #print('#', best) return best 
def turn(self, perspectiveState): r,c = perspectiveState.shape bestVal = -1000 bestR, bestC = -1, -1 for i in range(r): for j in range(c): if perspectiveState[i,j] == 0: perspectiveState[i,j] = 1 moveVal = self.minimax(perspectiveState, False) #undo perspectiveState[i,j] = 0 if moveVal > bestVal: bestVal = moveVal bestR = i bestC = j return bestR, bestC
1
1
0
0
0
0
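The 4x4 search is not wrong, just intractable: 3x3 has at most 9! ≈ 363,000 move orderings, while 4x4 approaches 16! ≈ 2·10^13, so the exhaustive recursion appears to hang. The standard remedies are alpha-beta pruning plus a depth cutoff (ideally with a heuristic evaluation at the cutoff; the sketch below just returns the raw win/loss/0 score there). This keeps the question's evaluate/isMovesLeft logic and is demonstrated on 3x3, where perfect play is a draw:

```python
import numpy as np

class Agent:
    def isMovesLeft(self, s):
        return int(np.abs(s).sum()) != s.shape[0] * s.shape[0]

    def evaluate(self, s):
        n = s.shape[0]
        lines = list(s.sum(axis=0)) + list(s.sum(axis=1)) + [s.trace(), np.fliplr(s).trace()]
        if n in lines:
            return 10
        if -n in lines:
            return -10
        return 0

    def minimax(self, state, is_max, alpha=-1000, beta=1000, depth=6):
        score = self.evaluate(state)
        if score in (10, -10) or depth == 0 or not self.isMovesLeft(state):
            return score          # at the cutoff this is just the raw score
        best = -1000 if is_max else 1000
        for i in range(state.shape[0]):
            for j in range(state.shape[0]):
                if state[i, j] == 0:
                    state[i, j] = 1 if is_max else -1
                    val = self.minimax(state, not is_max, alpha, beta, depth - 1)
                    state[i, j] = 0
                    if is_max:
                        best = max(best, val)
                        alpha = max(alpha, val)
                    else:
                        best = min(best, val)
                        beta = min(beta, val)
                    if beta <= alpha:   # prune the remaining siblings
                        return best
        return best

board = np.zeros((3, 3), dtype=int)
print(Agent().minimax(board, True))   # 0: perfect play on 3x3 is a draw
```

Even with pruning, 4x4 usually also needs move ordering and a real positional heuristic at the depth cutoff to play well in reasonable time.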
I am doing a keyphrase classification task, and for this I am working on head noun extraction from keyphrases in Python. The little help available on the internet has not been of much use, and I am struggling with this.
1
1
0
0
0
0
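A common heuristic for English noun phrases is that the head noun is the rightmost noun in the phrase. Given POS-tagged keyphrases (e.g. from nltk.pos_tag), extraction is then a one-liner; the tags in this sketch are hard-coded so it runs without a tagger model, and the heuristic is an assumption that fails on some constructions (e.g. phrases with postmodifiers).

```python
def head_noun(tagged_phrase):
    # Keep only noun tags (NN, NNS, NNP, NNPS) and take the rightmost one
    nouns = [word for word, tag in tagged_phrase if tag.startswith('NN')]
    return nouns[-1] if nouns else None

phrase = [('statistical', 'JJ'), ('machine', 'NN'), ('translation', 'NN')]
print(head_noun(phrase))   # translation
```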
I would like to make a series of files containing the trees in this PDF (http://mica.lif.univ-mrs.fr/d6.clean2-backup.pdf). The names of the files would be the corresponding tree numbers on the left (t0, t1, etc). I have tried to use python to extract the relevant information and trees, but I'm having trouble. To be specific, when I tried extracting the trees as images (using https://nedbatchelder.com/blog/200712/extracting_jpgs_from_pdfs.html), none of the trees showed up (presumably because the trees aren't the right format). However, when I try extracting it all as text (as https://www.geeksforgeeks.org/working-with-pdf-files-in-python/), the trees lose all their formatting (and some of their information, I think). How could I go about getting the files I want from this PDF? Could it be done in Python? Is there another way that's easier? Alternatively, the website (http://mica.lif.univ-mrs.fr/) from which I obtained the PDF has the trees in another form (ex: t27 S##1#l# NP#0#2#l#s NP#0#2#r#s VP##3#l# V##4#l#h V##4#r#h NP#1#5#l#s NP#1#5#r#s VP##3#r# S##1#r#). Is there a good way to convert this form into a good visual in the form of trees? Any help in either of these approaches (or others if people have ideas) would be much appreciated. Thanks!
1
1
0
0
0
0
I'm new to TensorFlow. I installed Python and TensorFlow, and I'm getting the error below after running my sample code. I installed TensorFlow with the command below. I noticed the command seems to be for Mac, but it is the only one I used to install TensorFlow, and the installation succeeded. I did not find a link for Windows, which is why I used the link below. If anyone knows the actual Windows installation link for TensorFlow, please share it and provide a solution for the issue below. pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.5.0-py3-none-any.whl Python 3.7.0, pip 18.0, tensorflow 1.5.0, Windows 10 installation_test.py import tensorflow as tf sess = tf.Session() hello = tf.constant("Hellow") print(sess.run(hello)) a = tf.constant(20) b = tf.constant(22) print('a + b = {0}'.format(sess.run(a+b))) PS F:\tensorflow> python .\installation_test.py PS F:\tensorflow> python .\installation_test.py Traceback (most recent call last): File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 18, in swig_import_helper fp, pathname, description = imp.find_module('_pywrap_tensorflow', [dirname(__file__)]) File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\imp.py", line 297, in find_module raise ImportError(_ERR_MSG.format(name), name=name) ImportError: No module named '_pywrap_tensorflow' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\__init__.py", line 66, in <module> from tensorflow.python import pywrap_tensorflow File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 28, in <module> _pywrap_tensorflow = swig_import_helper() File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", 
line 20, in swig_import_helper import _pywrap_tensorflow ModuleNotFoundError: No module named '_pywrap_tensorflow' During handling of the above exception, another exception occurred: Traceback (most recent call last): File ".\installation_test.py", line 1, in <module> import tensorflow as tf File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\__init__.py", line 24, in <module> from tensorflow.python import * File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\__init__.py", line 72, in <module> raise ImportError(msg) ImportError: Traceback (most recent call last): File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 18, in swig_import_helper fp, pathname, description = imp.find_module('_pywrap_tensorflow', [dirname(__file__)]) File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\imp.py", line 297, in find_module raise ImportError(_ERR_MSG.format(name), name=name) ImportError: No module named '_pywrap_tensorflow' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\__init__.py", line 66, in <module> from tensorflow.python import pywrap_tensorflow File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 28, in <module> _pywrap_tensorflow = swig_import_helper() File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 20, in swig_import_helper import _pywrap_tensorflow ModuleNotFoundError: No module named '_pywrap_tensorflow' Failed to load the native TensorFlow runtime. 
See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/get_started/os_setup.md#import_error for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.
1
1
0
0
0
0
I have a corpus that consists of various messages. I used NLTK to create a series of bi-grams and tri-grams. I created the grams by doing pre-processing like removing stop words and things of the sort. How can I take a bi-gram (or tri) and search to see if it exists in a new message? I would have to pre-process the message at some point wouldn't I? Or, if I can do this another way, during the creation of the n-gram process, is it possible to index the messages and output both the n-grams and which message they apply to?
1
1
0
0
0
0
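Yes: the new message must go through the same preprocessing as the corpus before its n-grams are generated, and an inverted index built during n-gram creation answers "which messages contain this n-gram" directly. A self-contained sketch (the stop-word set and toy corpus are stand-ins; nltk.ngrams does the same thing as the zip trick here):

```python
from collections import defaultdict

def preprocess(msg, stop_words=frozenset({'the', 'a', 'is'})):
    return [w for w in msg.lower().split() if w not in stop_words]

def ngrams(tokens, n):
    # Same output as nltk.ngrams: sliding windows of length n
    return list(zip(*(tokens[i:] for i in range(n))))

corpus = {0: "the model trains fast", 1: "the model converges slowly"}

# Inverted index: n-gram -> set of message ids, built during n-gram creation
index = defaultdict(set)
for msg_id, msg in corpus.items():
    for bg in ngrams(preprocess(msg), 2):
        index[bg].add(msg_id)

# New message: apply the SAME preprocessing, then test membership
new_msg = "our model trains fast on GPUs"
hits = [bg for bg in ngrams(preprocess(new_msg), 2) if bg in index]
print(hits)                         # bigrams shared with the corpus
print(index[('model', 'trains')])   # which messages contain this bigram
```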
I'm testing this basic example from the SpaCy docs and getting some strange results. import spacy nlp = spacy.load('en_core_web_md') tokens = nlp(u'dog cat banana') for token1 in tokens: for token2 in tokens: print(token1.text, token2.text, token1.similarity(token2)) My setup: MacBook Pro macOS 10.13.4 Conda 4.5.9 Python 3.5.5 SpaCy 2.0.12 Expected results: dog dog 1.0 dog cat 0.80168545 dog banana 0.24327646 cat dog 0.80168545 cat cat 1.0 cat banana 0.2815437 banana dog 0.24327646 banana cat 0.2815437 banana banana 1.0 My Results: dog dog 1.0 dog cat 0.0 dog banana 0.0 cat dog 0.0 cat cat 1.0 cat banana -0.0446812 banana dog -7.82874e+17 banana cat -8.24222e+17 banana banana 1.0 I've tried uninstalling & re-installing SpaCy and all of the various models and even SpaCy itself. I've also tried an even simpler example: import spacy nlp = spacy.load('en_core_web_md') cat = nlp(u'cat') dog = nlp(u'dog') print(cat.similarity(dog)) # 0.0
1
1
0
0
0
0
I am using the Gensim wrapper to obtain WordRank embeddings (I am following their tutorial to do this) as follows. from gensim.models.wrappers import Wordrank model = Wordrank.train(wr_path = "models", corpus_file="proc_brown_corp.txt", out_name= "wr_model") model.save("wordrank") model.save_word2vec_format("wordrank_in_word2vec.vec") However, I am getting the following error FileNotFoundError: [WinError 2] The system cannot find the file specified. I am just wondering what I have done wrong, as everything looks correct to me. Please help me. Moreover, I want to know if the way I am saving the model is correct. I saw that Gensim offers the method save_word2vec_format. What is the advantage of using it over directly saving the original WordRank model?
1
1
0
0
0
0
I've been trying out spaCy for a small side-project, and had a few questions & concerns. I noticed that spaCy's named-entity recognition results (with its largest en_vectors_web_lg model) don't seem to be as accurate as that of Google Cloud Natural Language API [1]. Google's API is able to extract more entities, more accurately, most likely because their model is even larger. So, is there a way to improve spaCy's NER results using a different model if possible, or through some other technique? Secondly, Google's API also returns Wikipedia article links for relevant entities. Is this possible with spaCy too, or using some other technique on top of spaCy's NER results? Thirdly, I noticed that spaCy has a similarity() method [2] that uses GloVe word vectors. But being new to it, I'm not sure what's the best way to frequently perform similarity comparison between each document in a set of documents (say 5000-10000 text documents of under 500 characters each) to generate buckets of similar documents? Hoping for someone to have any suggestions or tips. Many thanks! [1] https://cloud.google.com/natural-language/ [2] https://spacy.io/usage/vectors-similarity
1
1
0
0
0
0
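For the third question, one cheap way to bucket 5,000-10,000 short documents without an all-pairs comparison is a greedy single pass: each document joins the first bucket whose representative is above a cosine threshold, otherwise it starts a new bucket. The sketch below uses a toy bag-of-words vector so it runs standalone; with spaCy you would use each doc.vector instead, and the 0.5 threshold is an assumption to tune.

```python
import math

def bow(text):
    v = {}
    for w in text.lower().split():
        v[w] = v.get(w, 0) + 1
    return v

def cosine(a, b):
    num = sum(a[k] * b.get(k, 0) for k in a)
    den = (math.sqrt(sum(x * x for x in a.values()))
           * math.sqrt(sum(x * x for x in b.values())))
    return num / den if den else 0.0

def bucketize(docs, threshold=0.5):
    buckets = []   # list of (representative_vector, [doc indices])
    for i, d in enumerate(docs):
        v = bow(d)
        for rep, members in buckets:
            if cosine(v, rep) >= threshold:
                members.append(i)
                break
        else:
            buckets.append((v, [i]))
    return [members for _, members in buckets]

docs = ["the cat sat", "the cat sat down", "stock prices fell", "stock prices rose"]
print(bucketize(docs))   # [[0, 1], [2, 3]]
```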
Given: import pandas as pd lis1= ('baseball', 'basketball', 'baseball', 'hockey', 'hockey', 'basketball') lis2= ('I had lots of fun', 'This was the most boring sport', "I hit the ball hard", 'the puck went too fast', 'I scored a goal', 'the basket was broken') pd.DataFrame({'topic':lis1, 'review':lis2}) topic review 0 baseball I had lots of fun 1 basketball This was the most boring sport 2 baseball I hit the ball hard 3 hockey the puck went too fast 4 hockey I scored a goal 5 basketball the basket was broken I need this as a pd.DataFrame: lis1= ('baseball', 'basketball', 'hockey') lis2= ("I had lots of fun, I hit the ball hard", "This was the most boring sport, the basket was broken","the puck went too fast I scored a goal") pd.DataFrame({'topic':lis1, 'review':lis2}) topic review 0 baseball I had lots of fun, I hit the ball hard 1 basketball This was the most boring sport, the basket was... 2 hockey the puck went too fast I scored a goal I'm confused because the column I'd like to group by is a string and I'd like to combine the strings together. The strings do not have to be divided by a comma.
1
1
0
0
0
0
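Grouping by a string column works the same as grouping by any other key; the only change is that the aggregation should be a string join rather than a numeric reduction. A minimal sketch with the question's own data:

```python
import pandas as pd

lis1 = ('baseball', 'basketball', 'baseball', 'hockey', 'hockey', 'basketball')
lis2 = ('I had lots of fun', 'This was the most boring sport', 'I hit the ball hard',
        'the puck went too fast', 'I scored a goal', 'the basket was broken')
df = pd.DataFrame({'topic': lis1, 'review': lis2})

# Aggregate each group's reviews with a string join instead of a numeric sum
out = df.groupby('topic', as_index=False)['review'].agg(', '.join)
print(out)
```

Any separator (or none: `' '.join`) works in place of the comma, since the strings do not have to be divided by one.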
I am using the nltk CoreNLPParser with the Stanford NLP server for POS tagging as described in this answer. This tagger treats words with hyphens as multiple words, for example dates like 2007-08 are tagged as CP, :, CP. However, my model uses words with hyphen as one token. Is it possible using the CoreNLPParser to prevent splitting at hyphens?
1
1
0
0
0
0
I am an absolute beginner with chat bots. I am learning on my own and went on to develop a very simple chat bot using Dialogflow. I have Python code for responding to requests from my Dialogflow bot. I have enabled "webhook" in fulfillment and also enabled it in the intent. My ngrok URL is http://ae3df23b.ngrok.io/. I have written a function in my Python code which responds at the ngrok URL that Dialogflow connects to. Now the problem is that it is showing the error "404 Not Found": The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again. Please help me, guys. Thanks in advance. My code is #import necessary packages and libraries import urllib import os import json from flask import Flask from flask import request from flask import make_response app=Flask(__name__) @app.route('/webhook', methods=['POST']) def webhook(): req=request.get_json(silent=True, force=True) print("Request:") print(json.dumps(req, indent=4)) res=makeWebhookResult(req) res=json.dumps(res, indent=4) print(res) r=make_response(res) r.headers['Content-Type']='application/json' return r def makeWebhookResult(req): if req.get("result").get("action")!="interest": return {} result=req.get("result") parameters=result.get("parameters") name=parameters.get("Banknames") bank={'SBI':'10%', 'HDFC Bank':'9%', 'Bank of Baroda':'11%', 'Federal Bank':'8.9%', 'ICICI Bank': '11.5%'} speech='The interest rate of ' + name + " is " + str(bank[name]) print("Response:") print(speech) return { "speech":speech, "displayText":speech, "source":"BankInterestRates" } if __name__ == "__main__": port=int(os.getenv('PORT', 80)) print("Starting app on port %d" % port) app.run(debug=True, port=port, host='0.0.0.0')
1
1
0
0
0
0
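A 404 through ngrok usually just means the requested path has no route: the app above only registers POST /webhook, so visiting http://ae3df23b.ngrok.io/ in a browser (or doing a GET on /webhook) fails, and Dialogflow's fulfillment URL must end in /webhook. A minimal reproduction using Flask's test client (route bodies are stubs, not the real fulfillment logic):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # Optional: make the base URL respond so browser checks don't 404
    return 'Webhook server is running; POST JSON to /webhook.'

@app.route('/webhook', methods=['POST'])
def webhook():
    return '{}'

client = app.test_client()
print(client.get('/').status_code)           # 200
print(client.post('/webhook').status_code)   # 200
print(client.get('/webhook').status_code)    # 405: route exists, wrong method
```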
I have a Python list like the one below documents = ["Human machine interface for lab abc computer applications", "A survey of user opinion of computer system response time", "The EPS user interface management system", "System and human system engineering testing of EPS", "Relation of user perceived response time to error measurement", "The generation of random binary unordered trees", "The intersection graph of paths in trees", "Graph minors IV Widths of trees and well quasi ordering", "Graph minors A survey"] Now I need to stem each word and get another list back. How do I do that?
1
1
0
0
0
0
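With NLTK this is a nested comprehension over the documents: stem every word of every document and join each one back up, keeping the list structure. NLTK's PorterStemmer works without any extra data downloads; other stemmers (Snowball, Lancaster) drop in the same way.

```python
from nltk.stem import PorterStemmer

documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time"]

stemmer = PorterStemmer()
stemmed = [" ".join(stemmer.stem(w) for w in doc.lower().split())
           for doc in documents]
print(stemmed)
```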
I have written the following code to produce bag of words: count_vect = CountVectorizer() final_counts = count_vect.fit_transform(data['description'].values.astype('U')) vocab = count_vect.get_feature_names() print(type(final_counts)) #final_counts is a sparse matrix print("--------------------------------------------------------------") print(final_counts.shape) print("--------------------------------------------------------------") print(final_counts.toarray()) print("--------------------------------------------------------------") print(final_counts[769].shape) print("--------------------------------------------------------------") print(final_counts[769]) print("--------------------------------------------------------------") print(final_counts[769].toarray()) print("--------------------------------------------------------------") print(len(vocab)) print("--------------------------------------------------------------") I am getting following output: <class 'scipy.sparse.csr.csr_matrix'> -------------------------------------------------------------- (770, 10252) -------------------------------------------------------------- [[0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 
0 0 0]] -------------------------------------------------------------- (1, 10252) -------------------------------------------------------------- (0, 4819) 1 (0, 2758) 1 (0, 3854) 2 (0, 3987) 1 (0, 1188) 1 (0, 3233) 1 (0, 981) 1 (0, 10065) 1 (0, 9811) 1 (0, 8932) 1 (0, 9599) 1 (0, 10150) 1 (0, 7716) 1 (0, 10045) 1 (0, 5783) 1 (0, 5500) 1 (0, 5455) 1 (0, 3234) 1 (0, 7107) 1 (0, 6504) 1 (0, 3235) 1 (0, 1625) 1 (0, 3591) 1 (0, 6525) 1 (0, 365) 1 : : (0, 5527) 1 (0, 9972) 1 (0, 4526) 3 (0, 3592) 4 (0, 10214) 1 (0, 895) 1 (0, 10062) 2 (0, 10210) 1 (0, 1246) 1 (0, 9224) 2 (0, 4924) 1 (0, 6336) 2 (0, 9180) 8 (0, 6366) 2 (0, 414) 12 (0, 1307) 1 (0, 9309) 1 (0, 9177) 1 (0, 3166) 1 (0, 396) 1 (0, 9303) 7 (0, 320) 5 (0, 4782) 2 (0, 10088) 3 (0, 4481) 3 -------------------------------------------------------------- [[0 0 0 ... 0 0 0]] -------------------------------------------------------------- 10252 -------------------------------------------------------------- It's clear that there are 770 documents and 10,252 unique words in the corpus. My confusion is why is this line print(final_counts[769]) in my code printing this: (0, 4819) 1 (0, 2758) 1 (0, 3854) 2 (0, 3987) 1 (0, 1188) 1 (0, 3233) 1 (0, 981) 1 (0, 10065) 1 (0, 9811) 1 (0, 8932) 1 (0, 9599) 1 (0, 10150) 1 (0, 7716) 1 (0, 10045) 1 (0, 5783) 1 (0, 5500) 1 (0, 5455) 1 (0, 3234) 1 (0, 7107) 1 (0, 6504) 1 (0, 3235) 1 (0, 1625) 1 (0, 3591) 1 (0, 6525) 1 (0, 365) 1 : : (0, 5527) 1 (0, 9972) 1 (0, 4526) 3 (0, 3592) 4 (0, 10214) 1 (0, 895) 1 (0, 10062) 2 (0, 10210) 1 (0, 1246) 1 (0, 9224) 2 (0, 4924) 1 (0, 6336) 2 (0, 9180) 8 (0, 6366) 2 (0, 414) 12 (0, 1307) 1 (0, 9309) 1 (0, 9177) 1 (0, 3166) 1 (0, 396) 1 (0, 9303) 7 (0, 320) 5 (0, 4782) 2 (0, 10088) 3 (0, 4481) 3 The first index is the document index. I am printing the vector of 769th document (started from 0). So the first index should have been 769 instead of 0, like, (769, 4819) 1 . Why isn't it so?
1
1
0
0
0
0
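A likely explanation, sketched below (assuming SciPy is installed): indexing a CSR matrix with `final_counts[769]` returns a *new* 1×10252 sparse matrix, and the printed coordinates are relative to that new matrix, whose only row is row 0 — the original row number is simply not part of the slice.

```python
# Minimal reproduction: slicing one row out of a CSR matrix yields a new
# 1 x n matrix, so the printed row coordinate is always 0.
import numpy as np
from scipy.sparse import csr_matrix

m = csr_matrix(np.array([[0, 1, 0],
                         [2, 0, 3]]))

row = m[1]        # a NEW 1x3 sparse matrix holding only the second row
print(row)        # prints "(0, 0) 2" and "(0, 2) 3": inside `row`,
                  # the only row index that exists is 0
```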
I want to create an artificial intelligence program using Python. I found out that I need gTTS but it doesn't save mp3 files. Help me, please. #gtts is imported def talkToMe(audio, lgg = 'en'): #print(audio) tts = gTTS(text = audio, lang = lgg) #tts.save('audio.mp3') #doesn't work with open("audio.mp3") as fp: #doesn't work tts.write_to_fp(fp) os.system('mpg123\mpg123.exe audio.mp3') Traceback (most recent call last): File "C:\Users\zigzag\Desktop\gtts_test1\main.py", line 9, in <module> talkToMe("hello") File "C:\Users\zigzag\Desktop\gtts_test1\main.py", line 7, in talkToMe tts.write_to_fp(fp) File "B:\Python36\lib\site-packages\gtts\tts.py", line 187, in write_to_fp part_tk = self.token.calculate_token(part) File "B:\Python36\lib\site-packages\gtts_token\gtts_token.py", line 28, in calculate_token seed = self._get_token_key() File "B:\Python36\lib\site-packages\gtts_token\gtts_token.py", line 62, in _get_token_key a = re.search("a\\\\x3d(-?\d+);", tkk_expr).group(1) AttributeError: 'NoneType' object has no attribute 'group'
1
1
0
0
0
0
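Two separate things appear to be going on here. The `AttributeError` comes from the `gtts_token` dependency failing to parse Google's token page (upgrading gTTS and gTTS-token is the usual remedy — an assumption, not verified here). Independently, the `open()` call cannot work with `tts.write_to_fp`: the file must be opened in binary write mode (`"wb"`), because gTTS writes raw MP3 bytes. A minimal sketch of that second issue, using a hypothetical stand-in class since the real gTTS needs network access:

```python
# FakeTTS is a hypothetical stand-in for gTTS.gTTS, used only to show the
# text-mode vs binary-mode difference; gTTS itself needs network access.
import os
import tempfile

class FakeTTS:
    def write_to_fp(self, fp):
        fp.write(b"\xff\xfb\x90fake mp3 bytes")   # gTTS writes raw bytes

tts = FakeTTS()
path = os.path.join(tempfile.mkdtemp(), "audio.mp3")

text_mode_failed = False
try:
    with open(path, "w") as fp:      # text mode: write() expects str
        tts.write_to_fp(fp)
except TypeError:
    text_mode_failed = True

with open(path, "wb") as fp:         # binary mode: this is what gTTS needs
    tts.write_to_fp(fp)
```

Note also that the question's `open("audio.mp3")` defaults to read-only text mode, which cannot be written to at all.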
I have a dataset where I tagged the noun phrases. How to find these tags and extract the data from inside the tag. در همین حال <coref coref_coref_class="set_0" coref_mentiontype="ne" markable_scheme="coref" coref_coreftype="ident"> نجیب الله خواجه عمری </coref> <coref coref_coref_class="set_0" coref_mentiontype="np" markable_scheme="coref" coref_coreftype="ident"> سرپرست وزارت تحصیلات عالی افغانستان </coref> گفت که def ex_feature(text): for w in text: if w.startswith("<coref") and w.endswith("</coref>"): print(w)
1
1
0
0
0
0
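One issue in `ex_feature`: if `text` is a list of tokens, no single token both starts with `<coref` and ends with `</coref>`, so nothing is ever printed. A sketch that works on the raw string instead, using only the `re` module (the helper name `extract_corefs` is mine, not from the question):

```python
import re

sample = ('در همین حال <coref coref_coref_class="set_0" coref_mentiontype="ne" '
          'markable_scheme="coref" coref_coreftype="ident"> نجیب الله خواجه عمری '
          '</coref> <coref coref_coref_class="set_0" coref_mentiontype="np" '
          'markable_scheme="coref" coref_coreftype="ident"> سرپرست وزارت تحصیلات '
          'عالی افغانستان </coref> گفت که')

def extract_corefs(text):
    """Return (mention_text, attribute_dict) for every <coref ...> span."""
    spans = []
    for attrs, inner in re.findall(r'<coref([^>]*)>(.*?)</coref>', text, re.S):
        spans.append((inner.strip(), dict(re.findall(r'(\w+)="([^"]*)"', attrs))))
    return spans

mentions = extract_corefs(sample)
```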
I'm in need of suggestions on how to extract keywords from a large document. The keywords should be in line with what we have defined as the intended search results. For example, given a document about a company, I need the owner's name, where the office is situated, and what the operating industry is, and the defined set of words would be {owner, director, office, industry...}-(1) The intended output has to be something like {Mr. Smith James, Main Street, Financial Banking}-(2) I was looking for a method related to Semantic Similarity, where sentences containing words similar to the given corpus (1) would be extracted, and POS tagging would be used to extract nouns from those sentences. It would be useful if further resources could be provided that support this approach.
1
1
0
0
0
0
I have a text file, df.txt, with the following lines: This is sentence 1 This is sentence 2 This is sentence 3 This is sentence 4 This is sentence 5 This is sentence 6 I would like to get another text file as This is sentence 1 This is sentence 2 This is sentence 3 This is sentence 4 This is sentence 5 This is sentence 6 I tried: import itertools block = '' with open('df.txt', 'r') as file: for i, value in enumerate(itertools.islice(file, 2)): block += value print(block) which is not close: This is sentence 1 This is sentence 2 I assume a similar post exists here, but I could not find one. Thank you for your help.
1
1
0
0
0
0
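One stdlib-only way to get the desired output, sketched under the assumption that every block of two lines should be separated by a single blank line (`group_lines` is a name introduced here):

```python
def group_lines(lines, block_size=2):
    """Join lines in blocks of `block_size`, separated by one blank line."""
    blocks = []
    for i in range(0, len(lines), block_size):
        blocks.append('\n'.join(line.rstrip('\n')
                                for line in lines[i:i + block_size]))
    return '\n\n'.join(blocks) + '\n'

# stand-in for reading df.txt
lines = ['This is sentence %d\n' % i for i in range(1, 7)]
result = group_lines(lines)
```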
I found this algorithm but it appears the creator didn't test whether there's cases where there's no path. It seems the length of the open_list gets bigger and bigger if there's no path and I don't know the solution. This is my first post so sorry for any mistakes I've made and help is much appreciated. class Node(): """A node class for A* Pathfinding""" def __init__(self, parent=None, position=None): self.parent = parent self.position = position self.g = 0 self.h = 0 self.f = 0 def __eq__(self, other): return self.position == other.position def astar(maze, start, end): """Returns a list of tuples as a path from the given start to the given end in the given maze""" # Create start and end node start_node = Node(None, start) start_node.g = start_node.h = start_node.f = 0 end_node = Node(None, end) end_node.g = end_node.h = end_node.f = 0 # Initialize both open and closed list open_list = [] closed_list = [] # Add the start node open_list.append(start_node) # Loop until you find the end while len(open_list) > 0: # Get the current node current_node = open_list[0] current_index = 0 for index, item in enumerate(open_list): if item.f < current_node.f: current_node = item current_index = index # Pop current off open list, add to closed list open_list.pop(current_index) closed_list.append(current_node) # Found the goal if current_node == end_node: path = [] current = current_node while current is not None: path.append(current.position) current = current.parent return path[::-1] # Return reversed path # Generate children children = [] for new_position in [(0, -1), (0, 1), (-1, 0), (1, 0), (-1, -1), (-1, 1), (1, -1), (1, 1)]: # Adjacent squares # Get node position node_position = (current_node.position[0] + new_position[0], current_node.position[1] + new_position[1]) # Make sure within range if node_position[0] > (len(maze) - 1) or node_position[0] < 0 or node_position[1] > (len(maze[len(maze)-1]) -1) or node_position[1] < 0: continue # Make sure walkable terrain if 
maze[node_position[0]][node_position[1]] != 0: continue # Create new node new_node = Node(current_node, node_position) # Append children.append(new_node) # Loop through children for child in children: # Child is on the closed list for closed_child in closed_list: if child == closed_child: continue # Create the f, g, and h values child.g = current_node.g + 1 child.h = ((child.position[0] - end_node.position[0]) ** 2) + ((child.position[1] - end_node.position[1]) ** 2) child.f = child.g + child.h # Child is already in the open list for open_node in open_list: if child == open_node and child.g > open_node.g: continue # Add the child to the open list open_list.append(child) def main(): maze = [[0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]] start = (0, 0) end = (7, 6) path = astar(maze, start, end) return path print(main())
1
1
0
0
0
0
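Two things appear to go wrong in the posted loop: (1) the `continue` inside `for closed_child in closed_list` only advances that *inner* loop — it never skips the child — so already-visited children keep flooding `open_list` (the open-list check has the same flaw), and (2) there is no `return` after the `while` loop, so an unreachable goal never terminates. A condensed, heap-based rewrite of the same search (not the original author's code) that returns `None` when no path exists — note the posted maze has a solid wall in column 4, so its goal is in fact unreachable:

```python
import heapq
import itertools

def astar(maze, start, end):
    """A* over a 0/1 grid with 8-connectivity; returns None if unreachable."""
    rows, cols = len(maze), len(maze[0])
    h = lambda p: (p[0] - end[0]) ** 2 + (p[1] - end[1]) ** 2  # as in the post
    tie = itertools.count()                      # tiebreaker so the heap
    open_heap = [(h(start), next(tie), 0, start, None)]  # never compares nodes
    came_from, g_score, closed = {}, {start: 0}, set()
    while open_heap:
        _, _, g, pos, parent = heapq.heappop(open_heap)
        if pos in closed:
            continue
        closed.add(pos)
        came_from[pos] = parent
        if pos == end:                           # rebuild the path
            path = []
            while pos is not None:
                path.append(pos)
                pos = came_from[pos]
            return path[::-1]
        for dr, dc in [(0,-1),(0,1),(-1,0),(1,0),(-1,-1),(-1,1),(1,-1),(1,1)]:
            nxt = (pos[0] + dr, pos[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if maze[nxt[0]][nxt[1]] != 0 or nxt in closed:
                continue
            ng = g + 1
            if g_score.get(nxt, float("inf")) <= ng:
                continue                         # a cheaper route is known
            g_score[nxt] = ng
            heapq.heappush(open_heap, (ng + h(nxt), next(tie), ng, nxt, pos))
    return None                                  # open list exhausted: no path

walled = [[0, 0, 0, 0, 1, 0, 0, 0, 0, 0] for _ in range(10)]  # wall in col 4
```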
I have a .txt file with 3 columns: word position, word and tag (NN, VB, JJ, etc.). Example of txt file: 1 i PRP 2 want VBP 3 to TO 4 go VB I want to find the frequency of the word and tag as a pair in the list in order to find the most frequently assigned tag to a word. Example of Results: 3 (food, NN), 2 (Brave, ADJ) My idea is to start by opening the file from the folder, read the file line by line and split, set a counter using dictionary and print with the most common to uncommon in descending order. My code is extremely rough (I'm almost embarrassed to post it): file=open("/Users/Desktop/Folder1/trained.txt") wordcount={} for word in file.read().split(): from collections import Counter c = Counter() for d in dicts.values(): c += Counter(d) print(c.most_common()) file.close() Obviously, i'm getting no results. Anything will help. Thanks. UPDATE: so i got this code posted on here which worked, but my results are kinda funky. here's the code (the author removed it so i don't know who to credit): file=open("/Users/Desktop/Folder1/trained.txt").read().split(' ') d = {} for i in file: if i[1:] in d.keys(): d[i[1:]] += 1 else: d[i[1:]] = 1 print (sorted(d.items(), key=lambda x: x[1], reverse=True)) here are my results: [('', 15866), ('\t.\t.', 9479), ('\ti\tPRP', 7234), ('\tto\tTO', 4329), ('\tlike\tVB', 2533), ('\tabout\tIN', 2518), ('\tthe\tDT', 2389), ('\tfood\tNN', 2092), ('\ta\tDT', 2053), ('\tme\tPRP', 1870), ('\twant\tVBP', 1713), ('\twould\tMD', 1507), ('0\t.\t.', 1427), ('\teat\tVB', 1390), ('\trestaurant\tNN', 1371), ('\tuh\tUH', 1356), ('1\t.\t.', 1265), ('\ton\tIN', 1237), ("\t'd\tMD", 1221), ('\tyou\tPRP', 1145), ('\thave\tVB', 1127), ('\tis\tVBZ', 1098), ('\ttell\tVB', 1030), ('\tfor\tIN', 987), ('\tdollars\tNNS', 959), ('\tdo\tVBP', 956), ('\tgo\tVB', 931), ('2\t.\t.', 912), ('\trestaurants\tNNS', 899), there seem to be a mix of good results with words and other results with space or random numbers, anyone know a way to remove what aren't real words? 
Also, I know \t is supposed to signify a tab; is there a way to remove that as well? You guys really helped a lot.
1
1
0
0
0
0
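The stray keys in that output come from blank lines and rows that don't split into exactly three fields. A stdlib sketch that splits on any whitespace (tabs included) and skips malformed rows (`count_word_tags` is a name introduced here):

```python
from collections import Counter

def count_word_tags(lines):
    """Count (word, tag) pairs; skip blank lines and malformed rows."""
    counts = Counter()
    for line in lines:
        parts = line.split()      # splits on any run of whitespace, tabs too
        if len(parts) != 3:       # e.g. '' or '0\t.' -> skip, don't count
            continue
        _, word, tag = parts
        counts[(word, tag)] += 1
    return counts

# stand-in for the lines of trained.txt
sample = ['1\ti\tPRP', '2\twant\tVBP', '3\tto\tTO', '4\tgo\tVB', '', '5\ti\tPRP']
counts = count_word_tags(sample)
```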
i have the following code in my Jupyter : import pandas as pd import quandl df=quandl.get('WIKI/GOOGL') print(df.head()) #upto here its working but here comes the error df=df[['Adj. Open','Adj. High','Adj. Low','Adj. Close','Adj. Volume',]] df['HL_PCT']=(df['Adj. High']-df['Adj. Low'])/df['Adj. Close'] df['PCT_change']=(df['Adj. Close']-df['Adj. Open'])/df['Adj. Open'] df=df[['Adj. Close','HL_PCT','PCT_change','Adj.Volume']] print(df.head()) this generates the following error: \local\programs\python\python37-32\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-11-c981ac0a05ec> in <module>() 2 df['HL_PCT']=(df['Adj. High']-df['Adj. Low'])/df['Adj. Close']*100.0 3 df['PCT_change']=(df['Adj. Close']-df['Adj. Open'])/df['Adj. Open']*100.0 ----> 4 df=df[['Adj. 
Close','HL_PCT','PCT_change','Adj.Volume']] 5 print(df.head()) c:\users\xyz\appdata\local\programs\python\python37-32\lib\site- packages\pandas\core\frame.py in __getitem__(self, key) 2680 if isinstance(key, (Series, np.ndarray, Index, list)): 2681 # either boolean or fancy integer index -> 2682 return self._getitem_array(key) 2683 elif isinstance(key, DataFrame): 2684 return self._getitem_frame(key) c:\users\xyz\appdata\local\programs\python\python37-32\lib\site-packages\pandas\core\frame.py in _getitem_array(self, key) 2724 return self._take(indexer, axis=0) 2725 else: -> 2726 indexer = self.loc._convert_to_indexer(key, axis=1) 2727 return self._take(indexer, axis=1) 2728 c:\users\xyz\appdata\local\programs\python\python37-32\lib\site-packages\pandas\core\indexing.py in _convert_to_indexer(self, obj, axis, is_setter) 1325 if mask.any(): 1326 raise KeyError('{mask} not in index' -> 1327 .format(mask=objarr[mask])) 1328 1329 return com._values_from_object(indexer) KeyError: "['Adj.Volume'] not in index" can you help me? ​
1
1
0
1
0
0
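The immediate cause of the `KeyError` looks like a missing space: earlier lines select `'Adj. Volume'`, and `'Adj.Volume'` is a different string (the `SettingWithCopyWarning` is a separate, non-fatal issue). A small reproduction, assuming pandas is installed:

```python
import pandas as pd

df = pd.DataFrame({'Adj. Close': [1.0], 'HL_PCT': [0.1],
                   'PCT_change': [0.2], 'Adj. Volume': [100.0]})

try:
    df[['Adj. Close', 'HL_PCT', 'PCT_change', 'Adj.Volume']]   # missing space
    typo_worked = True
except KeyError:
    typo_worked = False                    # same KeyError as in the post

fixed = df[['Adj. Close', 'HL_PCT', 'PCT_change', 'Adj. Volume']]
```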
Is it possible to tokenize emojis like :), :(, ;~( properly using the spaCy Python library? e.g. If I run the following code: import spacy nlp = spacy.load('en') doc = nlp("Hello bright world :)") And then visualize the doc with displaCy: It incorrectly parses world :) as one token. How can I modify spaCy so it recognizes these additional symbols? Thanks. edit: Found the following: https://github.com/ines/spacymoji but I think it only supports Unicode emojis like ✨ and not ASCII ones like :)?
1
1
0
0
0
0
Problem: I am trying to extract a list of proper nouns from a job description, such as the following. text = "Civil, Mechanical, and Industrial Engineering majors are preferred." I want to extract the following from this text: Civil Engineering Mechanical Engineering Industrial Engineering This is one case of the problem, so use of application-specific information will not work. For instance, I cannot have a list of majors and then try to check if parts of the names of those majors are in the sentence along with the word "major" since I need this for other sentences as well. Attempts: 1. I have looked into spacy dependency-parsing, but parent-child relationships do not show up between each Engineering type (Civil,Mechanical,Industrial) and the word Engineering. import spacy nlp = spacy.load('en_core_web_sm') doc = nlp(u"Civil, Mechanical, and Industrial Engineering majors are preferred.") print( "%-15s%-15s%-15s%-15s%-30s" % ( "TEXT","DEP","HEAD TEXT","HEAD POS","CHILDREN" ) ) for token in doc: if not token.text in ( ',','.' ): print( "%-15s%-15s%-15s%-15s%-30s" % ( token.text ,token.dep_ ,token.head.text ,token.head.pos_ ,','.join( str(c) for c in token.children ) ) ) ...outputting... TEXT DEP HEAD TEXT HEAD POS CHILDREN Civil amod majors NOUN ,,Mechanical Mechanical conj Civil ADJ ,,and and cc Mechanical PROPN Industrial compound Engineering PROPN Engineering compound majors NOUN Industrial majors nsubjpass preferred VERB Civil,Engineering are auxpass preferred VERB preferred ROOT preferred VERB majors,are,. I have also tried using nltk pos tagging, but I get the following... import nltk nltk.pos_tag( nltk.word_tokenize( 'Civil, Mechanical, and Industrial Engineering majors are preferred.' 
) ) [('Civil', 'NNP'), (',', ','), ('Mechanical', 'NNP'), (',', ','), ('and', 'CC'), ('Industrial', 'NNP'), ('Engineering', 'NNP'), ('majors', 'NNS'), ('are', 'VBP'), ('preferred', 'VBN'), ('.', '.')] The types of engineering and the word Engineering all come up as NNP (proper nouns), so any kind of RegexpParser pattern I can think of does not work. Question: Does anyone know of a way - in Python 3 - to extract these noun phrase pairings? EDIT: Addition Examples The following examples are similar to the first example, except these are verb-noun / verb-propernoun versions. text="Experience with testing and automating API’s/GUI’s for desktop and native iOS/Android" Extract: testing API’s/GUI’s automation API’s/GUI’s text="Design, build, test, deploy and maintain effective test automation solutions" Extract: Design test automation solutions build test automation solutions test test automation solutions deploy test automation solutions maintain test automation solutions
1
1
0
0
0
0
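Since a robust solution needs a parser, here is only a heuristic sketch in plain Python over the NLTK-style (token, tag) pairs shown above: treat the NNP immediately before the common noun as the shared head and distribute it over the earlier NNP conjuncts. `expand_conjuncts` is a name introduced here, and the rule is tuned to this sentence shape, not general:

```python
def expand_conjuncts(tagged):
    """Distribute a shared head noun over comma/and-separated NNP conjuncts."""
    mods, head = [], None
    for i, (tok, tag) in enumerate(tagged):
        if tag == 'NNP':
            nxt = tagged[i + 1][1] if i + 1 < len(tagged) else None
            if nxt in ('NNS', 'NN'):   # NNP right before a common noun = head
                head = tok
            else:
                mods.append(tok)       # earlier conjunct, e.g. 'Civil'
        elif tag not in (',', 'CC') and head:
            break                      # the noun phrase has ended
    return ['%s %s' % (m, head) for m in mods] if head else []

tagged = [('Civil', 'NNP'), (',', ','), ('Mechanical', 'NNP'), (',', ','),
          ('and', 'CC'), ('Industrial', 'NNP'), ('Engineering', 'NNP'),
          ('majors', 'NNS'), ('are', 'VBP'), ('preferred', 'VBN'), ('.', '.')]
pairs = expand_conjuncts(tagged)
```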
My model always predict under probability 0.5 for all pixels. I dropped all images without ships and have tried focal loss,iou loss,weighted loss to deal with imbalance . But the result is same.After few batches the masks i predicted gradually became all zeros. Here is my notebook: enter link description here Kaggle discussion:enter link description here In the notebook , basically what i did is : (1)discard all samples where there is no ship (2)build a plain u-net (3)define three custom loss function(iouloss,focal_binarycrossentropy,biased_crossentropy), all of which i have tried. (4)train and submit #define different losses to try def iouloss(y_true,y_pred): intersection = K.sum(y_true * y_pred, axis=-1) sum_ = K.sum(y_true + y_pred, axis=-1) jac = intersection / (sum_ - intersection) return 1 - jac def focal_binarycrossentropy(y_true,y_pred): #focal loss with gamma 8 t1=K.binary_crossentropy(y_true, y_pred) t2=tf.where(tf.equal(y_true,0),t1*(y_pred**8),t1*((1-y_pred)**8)) return t2 def biased_crossentropy(y_true,y_pred): #apply 1000 times heavier punishment to ship pixels t1=K.binary_crossentropy(y_true, y_pred) t2=tf.where(tf.equal(y_true,0),t1*1000,t1) return t2 ... #try different loss function unet.compile(loss=iouloss, optimizer="adam", metrics=[ioumetric]) or unet.compile(loss=focal_binarycrossentropy, optimizer="adam", metrics=[ioumetric]) or unet.compile(loss=biased_crossentropy, optimizer="adam", metrics=[ioumetric]) ... #start training unet.train_on_batch(x=image_batch,y=mask_batch)
1
1
0
0
0
0
I created an AI in python/pygame but even after spending hours of debugging, I could not find why the individuals(dots) are not getting mutated. After few generations, all the individuals just overlap each other and follow the same exact path. But after mutation they should move a little bit differently. Here is what a population size of 10 looks like after every 2-3 generations.. Image 1 Image 2 Image 3 As you can see, just after few generations they just overlap and all the individuals in the population move together, following exact same path! We need mutations!!! I would be really grateful to you if you could find any mistake. Thank! I saw the code from: https://www.youtube.com/watch?v=BOZfhUcNiqk&t and tried to make it in python. Here's my code import pygame, random import numpy as np pygame.init() width = 800 height = 600 screen = pygame.display.set_mode((width, height)) pygame.display.set_caption("The Dots") FPS = 30 clock = pygame.time.Clock() gameExit = False grey = [30, 30, 30] white = [255, 255, 255] black = [0, 0, 0] red = [255, 0, 0] goal = [400, 10] class Dot(): def __init__(self): self.x = int(width/2) self.y = int(height - 150) self.r = 3 self.c = black self.xVel = self.yVel = 0 self.xAcc = 0 self.yAcc = 0 self.dead = False self.steps = 0 self.reached = False self.brain = Brain(200) def show(self): pygame.draw.circle(screen, self.c, [int(self.x), int(self.y)], self.r) def update(self): if (self.x >= width or self.x <= 0 or self.y >= height or self.y <= 0): self.dead = True elif (np.sqrt((self.x-goal[0])**2 + (self.y-goal[1])**2) < 5): self.reached = True if not self.dead and not self.reached: if len(self.brain.directions) > self.steps: self.xAcc = self.brain.directions[self.steps][0] self.yAcc = self.brain.directions[self.steps][1] self.steps += 1 self.xVel += self.xAcc self.yVel += self.yAcc if self.xVel > 5: self.xVel = 5 if self.yVel > 5: self.yVel = 5 self.x += self.xVel self.y += self.yVel else: self.dead = True def calculateFitness(self): 
distToGoal = np.sqrt((self.x-goal[0])**2 + (self.y-goal[1])**2) self.fitness = 1/(distToGoal**2) return self.fitness def getChild(self): child = Dot() child.brain = self.brain return child class Brain(): def __init__(self, size): self.size = size self.directions = [] self.randomize() def randomize(self): self.directions.append((np.random.normal(size=(self.size, 2))).tolist()) self.directions = self.directions[0] def mutate(self): for i in self.directions: rand = random.random() if rand < 1: i = np.random.normal(size=(1, 2)).tolist()[0] class Population(): def __init__(self, size): self.size = size self.dots = [] self.fitnessSum = 0 for i in range(self.size): self.dots.append(Dot()) def show(self): for i in self.dots: i.show() def update(self): for i in self.dots: i.update() def calculateFitness(self): for i in self.dots: i.calculateFitness() def allDead(self): for i in self.dots: if not i.dead and not i.reached: return False return True def calculateFitnessSum(self): self.fitnessSum = 0 for i in self.dots: self.fitnessSum += i.fitness def SelectParent(self): rand = random.uniform(0, self.fitnessSum) runningSum = 0 for i in self.dots: runningSum += i.fitness if runningSum > rand: return i def naturalSelection(self): newDots = [] self.calculateFitnessSum() for i in self.dots: parent = self.SelectParent() newDots.append(parent.getChild()) self.dots = newDots def mutate(self): for i in self.dots: i.brain.mutate() test = Population(100) while not gameExit: for event in pygame.event.get(): if event.type == pygame.QUIT: gameExit = True screen.fill(white) if test.allDead(): #Genetic Algorithm test.calculateFitness() test.naturalSelection() test.mutate() else: test.update() test.show() pygame.draw.circle(screen, red, goal, 4) clock.tick(FPS) pygame.display.update() pygame.quit() Thanks for any help!
1
1
0
0
0
0
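One reason the mutations never show up: in `Brain.mutate`, the statement `i = ...` inside `for i in self.directions` rebinds the loop variable without touching the list, and `rand < 1` fires on every gene anyway, so the intended mutation rate is meaningless. On top of that, `getChild` hands the child the *same* `Brain` object, so clones can never diverge. A sketch of both fixes (`get_child` is a stand-in name for the method):

```python
import copy
import random

class Brain:
    def __init__(self, size):
        self.size = size
        self.directions = [[random.gauss(0, 1), random.gauss(0, 1)]
                           for _ in range(size)]

    def mutate(self, rate=0.01):
        for i in range(len(self.directions)):     # assign by index --
            if random.random() < rate:            # rebinding `i` does nothing
                self.directions[i] = [random.gauss(0, 1), random.gauss(0, 1)]

def get_child(parent_brain):
    return copy.deepcopy(parent_brain)   # child must not share the list

random.seed(0)
parent = Brain(200)
child = get_child(parent)
child.mutate(rate=1.0)                   # rate=1.0 mutates every gene
```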
I downloaded en_core_web_lg (en_core_web_lg-2.0.0), loaded it, and used it with spaCy. But it seems to miss lots of basic common stop words such as "be", "a", etc. Am I missing the correct version? import nltk n = nltk.corpus.stopwords.words('english') "be" in n O/P: True import spacy nlp = spacy.load("en_core_web_lg") nlp.vocab["be"].is_stop O/P: False
1
1
0
0
0
0
I want to find out the trigrams of a corpus but with the restriction that at least two words of the trigrams are not proper nouns. This is my code so far. def collocation_finder(text,window_size): ign = stopwords.words('english') #Clean the text finder = TrigramCollocationFinder.from_words(text, window_size) finder.apply_freq_filter(2) finder.apply_word_filter(lambda w: len(w) < 2 or w.lower() in ign) finder.apply_word_filter(lambda w: next(iter(w)) in propernouns) trig_mes = TrigramAssocMeasures() #Get trigrams based on raw frequency collocs = finder.nbest(trig_mes.raw_freq,10) scores = finder.score_ngrams( trig_mes.raw_freq) return(collocs) Where propernouns is a list of all the proper nouns in the corpus. The thing is that my last word filter the one that was supposed to make sure that I don't go over my restriction. Any ideas?
1
1
0
0
0
0
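`apply_word_filter` drops a trigram when *any* single word matches, so it cannot express "at most one proper noun". In NLTK that whole-ngram condition belongs in `apply_ngram_filter`, e.g. `finder.apply_ngram_filter(lambda w1, w2, w3: sum(w in propernouns for w in (w1, w2, w3)) > 1)`. The predicate itself, illustrated in plain Python with a made-up proper-noun set:

```python
propernouns = {'London', 'Mary'}     # stand-in for the corpus-derived list

def too_many_propernouns(trigram, limit=1):
    """True when the trigram has more than `limit` proper nouns."""
    return sum(w in propernouns for w in trigram) > limit

trigrams = [('Mary', 'went', 'home'),     # 1 proper noun -> kept
            ('Mary', 'met', 'London'),    # 2 proper nouns -> dropped
            ('went', 'back', 'home')]     # 0 proper nouns -> kept
kept = [t for t in trigrams if not too_many_propernouns(t)]
```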
I am trying to figure out whether I can use min_df, max_df and max_features at the same time as arguments of the TfidfVectorizer class from scikit-learn. I perfectly understand what each of them is for. I passed data to TfidfVectorizer(), fixing min_df = 0.05 and max_df = 0.95, meaning that terms appearing in less than 5% of my documents are ignored, and the same with those appearing in more than 95% of my documents (as explained in Understanding min_df and max_df in scikit CountVectorizer). Like this, my data after TF-IDF has 360 columns. However, this is far too many, so I would like to set max_features = 100. But when I print the shape of the transformed data, I still get 360 columns instead of the 100 I expected. I also tried setting just max_features = 100 to check whether, without the other parameters, it would return just 100 columns, but it didn't; it actually has 952 columns. I read the documentation and it says this parameter is supposed to return the top max_features terms, but I can't observe that. Does anyone have a clue what is going on?
1
1
0
1
0
0
I would like to parse a document using spaCy and apply a token filter so that the final spaCy document does not include the filtered tokens. I know that I can take the sequence of tokens filtered, but I am insterested in having the actual Doc structure. text = u"This document is only an example. " \ "I would like to create a custom pipeline that will remove specific tokesn from the final document." doc = nlp(text) def keep_token(tok): # This is only an example rule return tok.pos_ not not in {'PUNCT', 'NUM', 'SYM'} final_tokens = list(filter(keep_token, doc)) # How to get a spacy.Doc from final_tokens? I tried to reconstruct a new spaCy Doc from the tokens lists but the API is not clear how to do it.
1
1
0
0
0
0
Is there a way to customize this stopWords = set(stopwords.words('english')), or any other way, so that I can use a text file with stop words from my language in Python's NLTK? If my text file were my_stop_words.txt, how can I tell NLTK to take this set of words instead of the set for 'english'? Thanks a lot!
1
1
0
0
0
0
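`stopwords.words('english')` just returns a list, so nothing stops you from building your own set from a file and using it in its place. A stdlib sketch (the temporary file stands in for my_stop_words.txt; the stop words are made up):

```python
import os
import tempfile

def load_stopwords(path, encoding='utf-8'):
    """Read one stop word per line into a set, skipping blank lines."""
    with open(path, encoding=encoding) as fh:
        return {line.strip() for line in fh if line.strip()}

# a temporary file stands in for my_stop_words.txt
path = os.path.join(tempfile.mkdtemp(), 'my_stop_words.txt')
with open(path, 'w', encoding='utf-8') as fh:
    fh.write('a\nto\nje\nna\n')

stop_words = load_stopwords(path)
tokens = ['je', 'this', 'na', 'text']
filtered = [t for t in tokens if t not in stop_words]
```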
Y = Dense(2)(Y) Z = LSTM(128, return_sequences=False)(X) Z = Dense(2)(Z) M = concatenate([Y, Z,Y+Z]) M=Dense(4)(M) M = Dense(2)(M) # Add a softmax activation M = Activation('softmax')(M) # Create Model instance which converts sentence_indices into X. model = Model(inputs=sentence_indices, outputs=M) return model The given block is my code. Here, I have given the partial code. What I want is I want to merge layers using M = concatenate([Y, Z]), it is working fine. Then I thought of adding more variables to the Dense layer so I add M = concatenate([Y, Z,Y+Z]); however, it's not working. It gave me this error: Traceback (most recent call last): File "/home/sathiyakugan/PycharmProjects/internal-apps/apps/support-tools/EscalationApp/IMDBmodified.py", line 213, in <module> model = buildModel((maxLen,), word_to_vec_map, word_to_index) File "/home/sathiyakugan/PycharmProjects/internal-apps/apps/support-tools/EscalationApp/IMDBmodified.py", line 206, in buildModel model = Model(inputs=sentence_indices, outputs=M) File "/home/sathiyakugan/PycharmProjects/Python/venv/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper return func(*args, **kwargs) File "/home/sathiyakugan/PycharmProjects/Python/venv/lib/python3.5/site-packages/keras/engine/network.py", line 91, in __init__ self._init_graph_network(*args, **kwargs) File "/home/sathiyakugan/PycharmProjects/Python/venv/lib/python3.5/site-packages/keras/engine/network.py", line 235, in _init_graph_network self.inputs, self.outputs) File "/home/sathiyakugan/PycharmProjects/Python/venv/lib/python3.5/site-packages/keras/engine/network.py", line 1412, in _map_graph_network tensor_index=tensor_index) File "/home/sathiyakugan/PycharmProjects/Python/venv/lib/python3.5/site-packages/keras/engine/network.py", line 1399, in build_map node_index, tensor_index) File "/home/sathiyakugan/PycharmProjects/Python/venv/lib/python3.5/site-packages/keras/engine/network.py", line 1399, in build_map node_index, tensor_index) 
File "/home/sathiyakugan/PycharmProjects/Python/venv/lib/python3.5/site-packages/keras/engine/network.py", line 1399, in build_map node_index, tensor_index) File "/home/sathiyakugan/PycharmProjects/Python/venv/lib/python3.5/site-packages/keras/engine/network.py", line 1399, in build_map node_index, tensor_index) File "/home/sathiyakugan/PycharmProjects/Python/venv/lib/python3.5/site-packages/keras/engine/network.py", line 1371, in build_map node = layer._inbound_nodes[node_index] AttributeError: 'NoneType' object has no attribute '_inbound_nodes' Could you please help me to resolve this problem?
1
1
0
1
0
0
I have been having problems using custom extension attributes with the recently improved Matcher (spaCy 2.012). Even a simple example (mostly copied from here) is not working as I expected: import spacy from spacy.tokens import Token from spacy.matcher import Matcher nlp = spacy.load('en') text = 'I have apple. I have had nothing.' doc = nlp(text) def on_match(matcher, doc, id, matches): print('Matched!', matches) Token.set_extension('is_fruit', getter=lambda token: token.text in ('apple', 'banana')) pattern1 = [{'LEMMA': 'have'}, {'_': {'is_fruit': True}}] matcher = Matcher(nlp.vocab) matcher.add('HAVING_FRUIT', on_match, pattern1) matches = matcher(doc) print(matches) This gives the following output: [(13835066833201802823, 1, 2), (13835066833201802823, 5, 6), (13835066833201802823, 6, 7)] In other words, the rule correctly matches on the span 'have' (1, 2), but incorrectly matches 'have' (5, 6) and 'had' (6, 7). Furthermore, the callback function is not called. The custom attribute appears to be ignored. When I add a new pattern, as follows: Token.set_extension('nope', default=False) pattern2 = [{'LEMMA': 'nothing'}] matcher.add('NADA', on_match, pattern2) matches = matcher(doc) print(matches) I get the following output: [(12682145344353966206, 1, 2), (12682145344353966206, 5, 6), (12682145344353966206, 6, 7)] Matched! [(12682145344353966206, 1, 2), (12682145344353966206, 5, 6), (12682145344353966206, 6, 7), (5033951595686580046, 7, 8)] [(12682145344353966206, 1, 2), (12682145344353966206, 5, 6), (12682145344353966206, 6, 7), (5033951595686580046, 7, 8)] The first rule functions as above. Then the second rule triggers, along with the callback function (which prints the message). There is an additional correct match for the new pattern along with the correct and erroneous matches from the first rule. So, I have a few questions: why does pattern1 match incorrectly? (i.e. why does the _ custom attribute constraint not apply?) 
why does the callback function not work on the first call? why does it work upon addition of a new rule? In my own code, when using custom attributes as constraints in subsequent patterns, these patterns match on ALL tokens. I assume this is related to the behaviour exhibited by the code above.
1
1
0
0
0
0
I'm writing a text classification system in Python. This is what I'm doing to canonicalize each token: lem, stem = WordNetLemmatizer(), PorterStemmer() for doc in corpus: for word in doc: lemma = stem.stem(lem.lemmatize(word)) The reason I don't want to just lemmatize is because I noticed that WordNetLemmatizer wasn't handling some common inflections. In the case of adverbs, for example, lem.lemmatize('walking') returns walking. Is it wise to perform both stemming and lemmatization? Or is it redundant? Do researchers typically do one or the other, and not both?
1
1
0
1
0
0
I am implementing my own perceptron algorithm in python wihtout using numpy or scikit yet. I wanted to get the basics right before proceeding to machine learning specific modules. I wrote the code as given below. used iris data set to classify based on sepal length and petal size. updating the weights at the end of each training set learning rate, number of iterations for training provided to the algorithm from client Issues: My training algorithm degrades instead of improving over time. Can someone please explain what i am doing incorrectly. This is my error set across iteration number, as you can see the error is actually increasing. { 0: 0.01646885885483229, 1: 0.017375368112097056, 2: 0.018105024923841584, 3: 0.01869233173693685, 4: 0.019165059856726563, 5: 0.01954556263697238, 6: 0.019851832477317588, 7: 0.02009835160930562, 8: 0.02029677690109266, 9: 0.020456491062436744 } import pandas as panda import matplotlib.pyplot as plot import random remote_location = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data' class Perceptron(object): def __init__(self, epochs, learning_rate, weight_range = None): self.epochs = epochs self.learning_rate = learning_rate self.weight_range = weight_range if weight_range else [-1, 1] self.weights = [] self._x_training_set = None self._y_training_set = None self.number_of_training_set = 0 def setup(self): self.number_of_training_set = self.setup_training_set() self.initialize_weights(len(self._x_training_set[0]) + 1) def setup_training_set(self): """ Downloading training set data from UCI ML Repository - Iris DataSet """ data = panda.read_csv(remote_location) self._x_training_set = list(data.iloc[0:, [0,2]].values) self._y_training_set = [0 if i.lower()!='iris-setosa' else 1 for i in data.iloc[0:, 4].values] return len(self._x_training_set) def initialize_weights(self, number_of_weights): random_weights = [random.uniform(self.weight_range[0], self.weight_range[1]) for i in range(number_of_weights)] 
self.weights.append(-1) # setting up bias unit self.weights.extend(random_weights) def draw_initial_plot(self, _x_data, _y_data, _x_label, _y_label): plot.xlabel(_x_label) plot.ylabel(_y_label) plot.scatter(_x_data,_y_data) plot.show() def learn(self): self.setup() epoch_data = {} error = 0 for epoch in range(self.epochs): for i in range(self.number_of_training_set): _x = self._x_training_set[i] _desired = self._y_training_set[i] _weight = self.weights guess = _weight[0] ## setting up the bias unit for j in range(len(_x)): guess += _weight[j+1] * _x[j] error = _desired - guess ## i am going to reset all the weights if error!= 0 : ## resetting the bias unit self.weights[0] = error * self.learning_rate for j in range(len(_x)): self.weights[j+1] = self.weights[j+1] + error * self.learning_rate * _x[j] #saving error at the end of the training set epoch_data[epoch] = error # print(epoch_data) self.draw_initial_plot(list(epoch_data.keys()), list(epoch_data.values()),'Epochs', 'Error') def runMyCode(): learning_rate = 0.01 epochs = 15 random_generator_start = -1 random_generator_end = 1 perceptron = Perceptron(epochs, learning_rate, [random_generator_start, random_generator_end]) perceptron.learn() runMyCode() plot with epoch 6 plot with epoch > 6
1
1
0
1
0
0
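Two plausible culprits in the posted update rule: the bias is overwritten (`self.weights[0] = error * self.learning_rate`) instead of incremented, and the error is computed on the raw linear guess with no step activation, so it is a regression residual rather than a perceptron error. A minimal corrected perceptron on a linearly separable toy problem (the function names are introduced here, not taken from the post):

```python
import random

def train_perceptron(samples, epochs=200, lr=0.5, seed=0):
    rng = random.Random(seed)
    n = len(samples[0][0])
    w = [rng.uniform(-0.5, 0.5) for _ in range(n + 1)]   # w[0] is the bias
    for _ in range(epochs):
        for x, target in samples:
            guess = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            pred = 1 if guess >= 0 else 0        # step activation
            error = target - pred                # in {-1, 0, +1}
            w[0] += lr * error                   # increment -- don't overwrite
            for j in range(n):
                w[j + 1] += lr * error * x[j]
    return w

def predict(w, x):
    return 1 if w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)) >= 0 else 0

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND
w = train_perceptron(data)
```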
To make a comparable study, I am working with data that has already been tokenised (not with spacy). I need to use these tokens as input to ensure that I work with the same data across the board. I wish to feed these tokens into spaCy's tagger, but the following fails: import spacy nlp = spacy.load('en', disable=['tokenizer', 'parser', 'ner', 'textcat']) sent = ['I', 'like', 'yellow', 'bananas'] doc = nlp(sent) for i in doc: print(i) with the following trace Traceback (most recent call last): File "C:/Users/bmvroy/.PyCharm2018.2/config/scratches/scratch_6.py", line 6, in <module> doc = nlp(sent) File "C:\Users\bmvroy\venv\lib\site-packages\spacy\language.py", line 346, in __call__ doc = self.make_doc(text) File "C:\Users\bmvroy\venv\lib\site-packages\spacy\language.py", line 378, in make_doc return self.tokenizer(text) TypeError: Argument 'string' has incorrect type (expected str, got list) First of all, I'm not sure why spaCy tries to tokenize the input as I disabled the tokenizer in the load() statement. Second, evidently this is not the way to go. I am looking for a way to feed the tagger a list of tokens. Is that possible with spaCy? I tried the solution provided by @aab combined with info from the documentation but to no avail: from spacy.tokens import Doc from spacy.lang.en import English from spacy.pipeline import Tagger nlp = English() tagger = Tagger(nlp.vocab) words = ['Listen', 'up', '.'] spaces = [True, False, False] doc = Doc(nlp.vocab, words=words, spaces=spaces) processed = tagger(doc) print(processed) This code didn't run, and gave the following error: processed = tagger(doc) File "pipeline.pyx", line 426, in spacy.pipeline.Tagger.__call__ File "pipeline.pyx", line 438, in spacy.pipeline.Tagger.predict AttributeError: 'bool' object has no attribute 'tok2vec'
1
1
0
0
0
0
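For the spaCy question above, a `Doc` can be constructed directly from pre-tokenized words, bypassing the tokenizer entirely; the `AttributeError` in the second attempt is consistent with `Tagger(nlp.vocab)` creating an untrained component with no model behind it. A sketch, assuming spaCy v2-style APIs; the model name `en_core_web_sm` in the comments is an example, not taken from the question.

```python
import spacy
from spacy.tokens import Doc

# Build a Doc directly from the pre-tokenized words, so spaCy's own
# tokenizer is never invoked. spacy.blank('en') needs no downloaded model.
nlp = spacy.blank('en')
words = ['I', 'like', 'yellow', 'bananas']
doc = Doc(nlp.vocab, words=words)

print([t.text for t in doc])  # the tokens are preserved exactly as given

# With a trained pipeline installed, the same Doc can be run through just
# the tagger component (sketch, assuming a v2-style pipeline):
#   nlp = spacy.load('en_core_web_sm')
#   doc = Doc(nlp.vocab, words=words)
#   nlp.get_pipe('tagger')(doc)
#   print([(t.text, t.tag_) for t in doc])
```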
I've tried several methods of loading the Google News word2vec vectors (https://code.google.com/archive/p/word2vec/):

    en_nlp = spacy.load('en', vector=False)
    en_nlp.vocab.load_vectors_from_bin_loc('GoogleNews-vectors-negative300.bin')

The above gives:

    MemoryError: Error assigning 18446744072820359357 bytes

I've also tried with the .gz-packed vectors, and loading and saving them with gensim to a new format:

    from gensim.models.word2vec import Word2Vec
    model = Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
    model.save_word2vec_format('googlenews2.txt')

This file then contains the words and their word vectors on each line. I tried to load them with:

    en_nlp.vocab.load_vectors('googlenews2.txt')

but it returns "0". What is the correct way to do this?

Update: I can load my own created file into spaCy. I use a test.txt file with "string 0.0 0.0 ...." on each line. Then I compress this txt file with bzip2 to test.txt.bz2. Then I create a spaCy-compatible binary file:

    spacy.vocab.write_binary_vectors('test.txt.bz2', 'test.bin')

which I can load into spaCy:

    nlp.vocab.load_vectors_from_bin_loc('test.bin')

This works! However, when I do the same process for googlenews2.txt, I get the following error:

    lib/python3.6/site-packages/spacy/cfile.pyx in spacy.cfile.CFile.read_into (spacy/cfile.cpp:1279)()
    OSError:
1
1
0
0
0
0
I have a large csv with thousands of comments from my blog that I'd like to do sentiment analysis on using TextBlob and NLTK. I'm using the Python script from https://wafawaheedas.gitbooks.io/twitter-sentiment-analysis-visualization-tutorial/sentiment-analysis-using-textblob.html, but modified for Python 3.

    '''
    uses TextBlob to obtain sentiment for unique tweets
    '''
    from importlib import reload
    import csv
    from textblob import TextBlob
    import sys

    # to force utf-8 encoding on entire program
    # sys.setdefaultencoding('utf8')

    alltweets = csv.reader(open("/path/to/file.csv", 'r', encoding="utf8", newline=''))
    sntTweets = csv.writer(open("/path/to/outputfile.csv", "w", newline=''))

    for row in alltweets:
        blob = TextBlob(row[2])
        print(blob.sentiment.polarity)
        if blob.sentiment.polarity > 0:
            sntTweets.writerow([row[0], row[1], row[2], row[3], blob.sentiment.polarity, "positive"])
        elif blob.sentiment.polarity < 0:
            sntTweets.writerow([row[0], row[1], row[2], row[3], blob.sentiment.polarity, "negative"])
        elif blob.sentiment.polarity == 0.0:
            sntTweets.writerow([row[0], row[1], row[2], row[3], blob.sentiment.polarity, "neutral"])

However, when I run this, I continually get

    $ python3 sentiment.py
    Traceback (most recent call last):
      File "sentiment.py", line 17, in <module>
        blob = TextBlob(row[2])
    IndexError: list index out of range

I know what the error means, but I'm not sure what I need to do to fix it. Any thoughts on what I'm missing? Thanks!
1
1
0
0
0
0
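For the TextBlob question above, the `IndexError` is consistent with blank or short rows in the CSV reaching `row[2]`. A sketch that skips such rows follows; `score_rows` and `polarity_label` are hypothetical helper names, and the scorer is injectable so the filtering logic can be exercised without TextBlob installed.

```python
def polarity_label(polarity):
    """Map a polarity score to the three labels used in the question."""
    if polarity > 0:
        return 'positive'
    if polarity < 0:
        return 'negative'
    return 'neutral'

def score_rows(rows, scorer=None):
    """Yield the four input columns plus polarity and label per valid row.

    Rows with fewer than four columns (e.g. blank lines in the CSV) are
    skipped -- short rows are the usual cause of the IndexError above.
    """
    if scorer is None:
        from textblob import TextBlob  # imported lazily; needs textblob installed
        scorer = lambda text: TextBlob(text).sentiment.polarity
    for row in rows:
        if len(row) < 4:
            continue
        polarity = scorer(row[2])
        yield [row[0], row[1], row[2], row[3], polarity, polarity_label(polarity)]

# Usage against the CSVs from the question (paths are placeholders):
#   import csv
#   with open("/path/to/file.csv", newline='', encoding='utf8') as fin, \
#        open("/path/to/outputfile.csv", 'w', newline='') as fout:
#       csv.writer(fout).writerows(score_rows(csv.reader(fin)))
```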