Columns: text (string, 0 to 27.6k characters) | python (int64, 0 or 1) | DeepLearning or NLP (int64, 0 or 1) | Other (int64, 0 or 1) | Machine Learning (int64, 0 or 1) | Mathematics (int64, 0 or 1) | Trash (int64, 0 or 1)
Each record that follows is one question text, then its six 0/1 labels in the column order above.
I am solving a multi-class classification problem using Keras, but I suspect the accuracy is poor because of weak word embeddings for my data (it is domain-specific). Keras has its own Embedding layer, which is trained in a supervised way along with the rest of the model. So I have two questions: Can I use word2vec embeddings in the Embedding layer of Keras, given that word2vec is a form of unsupervised/self-supervised learning? If yes, can I then apply transfer learning to a pre-trained word2vec model to add knowledge of my domain-specific vocabulary?
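Both parts of this question are commonly handled by building an embedding matrix from a trained word2vec model and handing it to the Embedding layer with trainable=True so it can be fine-tuned on the domain data. The sketch below is only an illustration, not the asker's code; the corpus, the file name "pretrained_w2v.bin", and the tokenizer setup are assumptions.

    import numpy as np
    from gensim.models import KeyedVectors
    from keras.layers import Embedding
    from keras.preprocessing.text import Tokenizer

    texts = ["domain specific sentence one", "another domain sentence"]  # placeholder corpus
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(texts)
    vocab_size = len(tokenizer.word_index) + 1

    w2v = KeyedVectors.load_word2vec_format("pretrained_w2v.bin", binary=True)  # hypothetical path
    embedding_dim = w2v.vector_size

    # Build a matrix whose row i holds the word2vec vector of the word with index i.
    embedding_matrix = np.zeros((vocab_size, embedding_dim))
    for word, idx in tokenizer.word_index.items():
        if word in w2v:
            embedding_matrix[idx] = w2v[word]

    embedding_layer = Embedding(input_dim=vocab_size,
                                output_dim=embedding_dim,
                                weights=[embedding_matrix],  # initialise from word2vec
                                trainable=True)              # allow fine-tuning on domain data

Setting trainable=False would instead freeze the pre-trained vectors; leaving it True is what gives the transfer-learning effect asked about.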
1
1
0
0
0
0
I'm writing a simple A* algorithm for finding the shortest path. But I need something more complicated. The agent can only go forward and rotate(90 deg). Will it influence at path or I can use simple A*? Thanks for all. def astar(maze, start, end): start_node = Node(None, start) start_node.g = start_node.h = start_node.f = 0 end_node = Node(None, end) end_node.g = end_node.h = end_node.f = 0 open_list = [] closed_list = [] open_list.append(start_node) while len(open_list) > 0: current_node = open_list[0] current_index = 0 for index, item in enumerate(open_list): if item.f < current_node.f: current_node = item current_index = index open_list.pop(current_index) closed_list.append(current_node) if current_node == end_node: path = [] current = current_node while current is not None: path.append(current.position) current = current.parent return path[::-1] children = [] for new_position in [(0, -1), (0, 1), (-1, 0), (1, 0), (-1, -1), (-1, 1), (1, -1), (1, 1)]: node_position = (current_node.position[0] + new_position[0], current_node.position[1] + new_position[1]) if node_position[0] > (len(maze) - 1) or node_position[0] < 0 or node_position[1] > (len(maze[len(maze)-1]) -1) or node_position[1] < 0: continue if maze[node_position[0]][node_position[1]] != 0: continue new_node = Node(current_node, node_position) children.append(new_node) for child in children: for closed_child in closed_list: if child == closed_child: continue child.g = current_node.g + 1 child.h = ((child.position[0] - end_node.position[0]) ** 2) + ((child.position[1] - end_node.position[1]) ** 2) child.f = child.g + child.h for open_node in open_list: if child == open_node and child.g > open_node.g: continue open_list.append(child)
1
1
0
0
0
0
Hi I'm getting this error when running the AmazonReviews tutorial from NLP with H2O Tutorial http://docs.h2o.ai/h2o-tutorials/latest-stable/h2o-world-2017/nlp/index.html when I try to run : # Train Word2Vec Model from h2o.estimators.word2vec import H2OWord2vecEstimator # This takes time to run - left commented out #w2v_model = H2OWord2vecEstimator(vec_size = 100, model_id = "w2v.hex") #w2v_model.train(training_frame=words) # Pre-trained model available on s3: https://s3.amazonaws.com/tomk/h2o-world/megan/w2v.hex w2v_model = h2o.load_model("https://s3.amazonaws.com/tomk/h2o-world/megan/w2v.hex") I get the following error : --------------------------------------------------------------------------- H2OResponseError Traceback (most recent call last) <ipython-input-22-a55d2503e18d> in <module> 7 8 # Pre-trained model available on s3: https://s3.amazonaws.com/tomk/h2o-world/megan/w2v.hex ----> 9 w2v_model = h2o.load_model("https://s3.amazonaws.com/tomk/h2o-world/megan/w2v.hex") ~\Anaconda3\lib\site-packages\h2o\h2o.py in load_model(path) 989 """ 990 assert_is_type(path, str) --> 991 res = api("POST /99/Models.bin/%s" % "", data={"dir": path}) 992 return get_model(res["models"][0]["model_id"]["name"]) 993 ~\Anaconda3\lib\site-packages\h2o\h2o.py in api(endpoint, data, json, filename, save_to) 101 # type checks are performed in H2OConnection class 102 _check_connection() --> 103 return h2oconn.request(endpoint, data=data, json=json, filename=filename, save_to=save_to) 104 105 ~\Anaconda3\lib\site-packages\h2o\backend\connection.py in request(self, endpoint, data, json, filename, save_to) 400 auth=self._auth, verify=self._verify_ssl_cert, proxies=self._proxies) 401 self._log_end_transaction(start_time, resp) --> 402 return self._process_response(resp, save_to) 403 404 except (requests.exceptions.ConnectionError, requests.exceptions.HTTPError) as e: ~\Anaconda3\lib\site-packages\h2o\backend\connection.py in _process_response(response, save_to) 723 # Client errors (400 = "Bad Request", 404 = "Not Found", 412 = "Precondition Failed") 724 if status_code in {400, 404, 412} and isinstance(data, (H2OErrorV3, H2OModelBuilderErrorV3)): --> 725 raise H2OResponseError(data) 726 727 # Server errors (notably 500 = "Server Error") H2OResponseError: Server error java.lang.IllegalArgumentException: Error: Cannot find persist manager for scheme https Request: POST /99/Models.bin/ data: {'dir': 'https://s3.amazonaws.com/tomk/h2o-world/megan/w2v.hex'}
1
1
0
0
0
0
I am trying to get word2vec to work in python3, however as my dataset is too large to easily fit in memory I am loading it via an iterator (from zip files). However when I run it I get the error Traceback (most recent call last): File "WordModel.py", line 85, in <module> main() File "WordModel.py", line 15, in main word2vec = gensim.models.Word2Vec(data,workers=cpu_count()) File "/home/thijser/.local/lib/python3.7/site-packages/gensim/models/word2vec.py", line 783, in __init__ fast_version=FAST_VERSION) File "/home/thijser/.local/lib/python3.7/site-packages/gensim/models/base_any2vec.py", line 759, in __init__ self.build_vocab(sentences=sentences, corpus_file=corpus_file, trim_rule=trim_rule) File "/home/thijser/.local/lib/python3.7/site-packages/gensim/models/base_any2vec.py", line 936, in build_vocab sentences=sentences, corpus_file=corpus_file, progress_per=progress_per, trim_rule=trim_rule) File "/home/thijser/.local/lib/python3.7/site-packages/gensim/models/word2vec.py", line 1591, in scan_vocab total_words, corpus_count = self._scan_vocab(sentences, progress_per, trim_rule) File "/home/thijser/.local/lib/python3.7/site-packages/gensim/models/word2vec.py", line 1576, in _scan_vocab total_words += len(sentence) TypeError: object of type 'generator' has no len() Here is the code: import zipfile import os from ast import literal_eval from lxml import etree import io import gensim from multiprocessing import cpu_count def main(): data = TrainingData("/media/thijser/Data/DataSets/uit2") print(len(data)) word2vec = gensim.models.Word2Vec(data,workers=cpu_count()) word2vec.save('word2vec.save') class TrainingData: size=-1 def __init__(self, dirname): self.data_location = dirname def __len__(self): if self.size<0: for zipfile in self.get_zips_in_folder(self.data_location): for text_file in self.get_files_names_from_zip(zipfile): self.size=self.size+1 return self.size def __iter__(self): #might not fit in memory otherwise yield self.get_data() def get_data(self): for zipfile in self.get_zips_in_folder(self.data_location): for text_file in self.get_files_names_from_zip(zipfile): yield self.preproccess_text(text_file) def stripXMLtags(self,text): tree=etree.parse(text) notags=etree.tostring(tree, encoding='utf8', method='text') return notags.decode("utf-8") def remove_newline(self,text): text.replace("\ "," ") return text def preproccess_text(self,text): text=self.stripXMLtags(text) text=self.remove_newline(text) return text def get_files_names_from_zip(self,zip_location): files=[] archive = zipfile.ZipFile(zip_location, 'r') for info in archive.infolist(): files.append(archive.open(info.filename)) return files def get_zips_in_folder(self,location): zip_files = [] for root, dirs, files in os.walk(location): for name in files: if name.endswith((".zip")): filepath=root+"/"+name zip_files.append(filepath) return zip_files main() for d in data: for dd in d : print(type(dd)) Does show me that dd is of the type string and contains the correct preprocessed strings (with length somewhere between 50 and 5000 words each).
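The traceback ('generator' has no len()) usually means Word2Vec received a generator of generators rather than a restartable iterable of token lists: __iter__ here yields the generator object itself as a single item, and each "sentence" is a raw string instead of a list of words. A hedged sketch of the usual shape of the fix is below; it is a drop-in replacement for __iter__ only, and it assumes the zip/XML helper methods from the question stay unchanged.

    from gensim.utils import simple_preprocess

    class TrainingData:
        # __init__, __len__ and the zip/XML helpers stay exactly as in the question.

        def __iter__(self):
            # gensim iterates over the corpus several times (vocab scan + each epoch),
            # so this must produce a fresh iterator on every call and yield one
            # tokenised document (a list of words) per item, not a generator object
            # and not a raw string.
            for zip_path in self.get_zips_in_folder(self.data_location):
                for text_file in self.get_files_names_from_zip(zip_path):
                    yield simple_preprocess(self.preproccess_text(text_file))

With that change, gensim.models.Word2Vec(data, workers=cpu_count()) can scan the vocabulary and then re-iterate for training without exhausting a single generator.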
1
1
0
1
0
0
How does spaCy keep track of character and token offsets during tokenization? In spaCy, there's a Span object that stores the start and end offsets of a token/span (https://spacy.io/api/span#init). There's a _recalculate_indices method that seems to retrieve token_by_start and token_by_end, but that looks like all the recalculation does. When looking at extraneous spaces, it does some smart alignment of the spans. Does it recalculate after every regex execution, does it keep track of each character's movement, or does it do a span search after the regexes have run?
1
1
0
0
0
0
I need to expand contractions using NLP, e.g. what's to what is, it's to it is, etc. I want to use this to preprocess raw sentences. Actually, I am also confused about whether I should do this at all, or simply remove the apostrophe and convert what's to whats; either way, is will be removed as a stop word in a later step. On the other hand, should we treat whats and what as the same lemma, or should we use a stemmer to cut the s off? By the way, I don't think abbreviation is the right term here, but English is not my strong point, so please tell me the formal NLP or linguistics term used for forms like what's, how's, etc.
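Forms like what's and it's are usually called contractions, and expanding them is typically done with a small lookup table plus one regular-expression pass before tokenization. The snippet below is only an illustrative sketch; the mapping is deliberately tiny and would need to be extended for real text.

    import re

    # Minimal, illustrative contraction map; extend as needed.
    CONTRACTIONS = {
        "what's": "what is",
        "it's": "it is",
        "how's": "how is",
        "don't": "do not",
        "i'm": "i am",
    }

    pattern = re.compile(r"\b(" + "|".join(re.escape(k) for k in CONTRACTIONS) + r")\b",
                         flags=re.IGNORECASE)

    def expand_contractions(text):
        # Replace each matched contraction with its expansion, leaving everything else alone.
        return pattern.sub(lambda m: CONTRACTIONS[m.group(0).lower()], text)

    print(expand_contractions("What's the plan? It's raining."))
    # -> "what is the plan? it is raining."

Whether to expand or just strip the apostrophe depends on the downstream step; if stop words are removed later anyway, both routes end up dropping the auxiliary verb.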
1
1
0
0
0
0
I am training a gensim doc2vec model on a text file 'full_texts.txt' that contains ~1600 documents. Once I have trained the model, I wish to use similarity methods over words and over whole documents. However, since this is my first time using gensim, I am unable to work out a solution. When I look up similarity by word as shown below, I get an error that the word doesn't exist in the vocabulary; my other question is how to check similarity for entire documents. I have read a lot of questions around this, like this one, and looked up the documentation, but I am still not sure what I am doing wrong. from gensim.models import Doc2Vec from gensim.models.doc2vec import TaggedLineDocument from gensim.models.doc2vec import TaggedDocument tagdocs = TaggedLineDocument('full_texts.txt') d2v_mod = Doc2Vec(min_count=3,vector_size = 200, workers = 2, window = 5, epochs = 30,dm=0,dbow_words=1,seed=42) d2v_mod.build_vocab(tagdocs) d2v_mod.train(tagdocs,total_examples=d2v_mod.corpus_count,epochs=20) d2v_mod.wv.similar_by_word('overdraft',topn=10) KeyError: "word 'overdraft' not in vocabulary"
1
1
0
0
0
0
My data iterator currently runs on the CPU as device=0 argument is deprecated. But I need it to run on the GPU with the rest of the model etc. Here is my code: pad_idx = TGT.vocab.stoi["<blank>"] model = make_model(len(SRC.vocab), len(TGT.vocab), N=6) model = model.to(device) criterion = LabelSmoothing(size=len(TGT.vocab), padding_idx=pad_idx, smoothing=0.1) criterion = criterion.to(device) BATCH_SIZE = 12000 train_iter = MyIterator(train, device, batch_size=BATCH_SIZE, repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)), batch_size_fn=batch_size_fn, train=True) valid_iter = MyIterator(val, device, batch_size=BATCH_SIZE, repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)), batch_size_fn=batch_size_fn, train=False) #model_par = nn.DataParallel(model, device_ids=devices) The above code gives this error: The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu. The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu. I have tried passing in 'cuda' as an argument instead of device=0 but I receive this error: <ipython-input-50-da3b1f7ed907> in <module>() 10 train_iter = MyIterator(train, 'cuda', batch_size=BATCH_SIZE, 11 repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)), ---> 12 batch_size_fn=batch_size_fn, train=True) 13 valid_iter = MyIterator(val, 'cuda', batch_size=BATCH_SIZE, 14 repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)), TypeError: __init__() got multiple values for argument 'batch_size' I have also tried passing in device as an argument. Device being defined as device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') But receive the same error as just above. Any suggestions would be much appreciated, thanks.
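The second error in this question (multiple values for argument 'batch_size') is a symptom of passing the device positionally: in torchtext's Iterator-style classes the second positional argument is batch_size, so 'cuda' collides with the batch_size keyword. A hedged sketch of the usual fix, assuming MyIterator subclasses torchtext.data.Iterator as in the annotated-Transformer code this appears to be based on, and that train, val, BATCH_SIZE and batch_size_fn are defined as in the question, is to pass the device as a keyword:

    import torch

    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

    # device must be a keyword argument; the second positional slot is batch_size.
    train_iter = MyIterator(train, batch_size=BATCH_SIZE, device=device,
                            repeat=False,
                            sort_key=lambda x: (len(x.src), len(x.trg)),
                            batch_size_fn=batch_size_fn, train=True)
    valid_iter = MyIterator(val, batch_size=BATCH_SIZE, device=device,
                            repeat=False,
                            sort_key=lambda x: (len(x.src), len(x.trg)),
                            batch_size_fn=batch_size_fn, train=False)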
1
1
0
1
0
0
I've been working on a project regarding NLP and I'm using Stanford Core NLP library for it, but it's parser function doesn't seem to work. I run the code and it always gets hung up, not responding for hours. Tried changing the way i pass directory address to the function, tried re-downloading the Stanford Core NLP files again. from nltk.tokenize import sent_tokenize import re import os import itertools from nltk.corpus import wordnet as wn from stanfordcorenlp import StanfordCoreNLP import json sentences = [] sents_clauses = [] def feature_extraction(): print("Directory Access") os.chdir('C://Users/mohdm/Documents/FYP/stanford-corenlp-full-2018-10-05/') print("Directory Accessed") CORE_NLP_DIR = os.getcwd() print(CORE_NLP_DIR) print("Setting Parser") PARSER = StanfordCoreNLP(CORE_NLP_DIR, memory='4g', lang='en') print("Parser Set") Actual Output: Code Started Directory Access Directory Accessed C:\Users\mohdm\Documents\FYP\stanford-corenlp-full-2018-10-05 Setting Parser Expected Output: Code Started Directory Access Directory Accessed C:\Users\mohdm\Documents\FYP\stanford-corenlp-full-2018-10-05 Setting Parser Parser Set
1
1
0
0
0
0
I am not able to download "enron_mail_20150507.tar.gz" by running "python startup.py". I get the following error and don't know how to fix it. downloading the Enron dataset (this may take a while) to check on progress, you can cd up one level, then execute <ls -lthr> Enron dataset should be last item on the list, along with its current size download will complete at about 423 MB Traceback (most recent call last): File "startup.py", line 36, in urllib.urlretrieve(url, filename="../enron_mail_20150507.tar.gz") File "C:\Python27\lib\urllib.py", line 98, in urlretrieve return opener.retrieve(url, filename, reporthook, data) File "C:\Python27\lib\urllib.py", line 245, in retrieve fp = self.open(url, data) File "C:\Python27\lib\urllib.py", line 213, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 350, in open_http h.endheaders(data) File "C:\Python27\lib\httplib.py", line 1049, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 893, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 855, in send self.connect() File "C:\Python27\lib\httplib.py", line 832, in connect self.timeout, self.source_address) File "C:\Python27\lib\socket.py", line 557, in create_connection for res in getaddrinfo(host, port, 0, SOCK_STREAM): IOError: [Errno socket error] [Errno 11001] getaddrinfo failed I tried changing the url in "startup.py" to "http://www.cs.cmu.edu/~enron/enron_mail_20150507.tar.gz", but that does not work either. If anybody has downloaded it using Python on Windows, please show me how; I would really appreciate it. I also tried downloading it manually, but the file kept downloading even after 1.1 GB had been transferred, so I got worried and stopped it. How large is the "enron_mail_20150507.tar.gz" file? Where do I put the file after it is downloaded? In ud120-projects? Please help me, I'm stuck.
1
1
0
1
0
0
So I have a dataset for an NLP problem which contains data in the following format: code,body,result 2552272216,Does honey changes black hair into white ?,[Greying Hair] 2552210209,"Hello doctor,my mother was diagnosed with depression at the age of 36 due to over thinking about the family problems. Which caused her depression which caused several other mental problems and made her condition worse which resulted into a brain stroke and she passed away. Now my question iscan it happen with me or to my sister also at some point of.",[Depression] Using pd.read_csv I read these lines with ',' as the delimiter, but I want the last column to be read as a list and not as a string. Please help! import numpy as np import matplotlib.pyplot as plt import pandas as pd import json # Importing the dataset dataset = pd.read_csv('case_study_lybrate.csv', delimiter=',', quoting=1, skipinitialspace=True)
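One common way to get the last column back as a Python list is a per-column converter applied while reading; values like [Greying Hair] are not quoted Python literals, so ast.literal_eval cannot parse them, but stripping the brackets and splitting on commas can. A hedged sketch, assuming the file layout shown above and that the last column is named "result":

    import pandas as pd

    def to_list(cell):
        # Turn a string like "[Greying Hair]" or "[A, B]" into ["Greying Hair"] / ["A", "B"].
        return [part.strip() for part in cell.strip("[]").split(",") if part.strip()]

    dataset = pd.read_csv("case_study_lybrate.csv",
                          delimiter=",",
                          quoting=1,                        # as in the question
                          skipinitialspace=True,
                          converters={"result": to_list})   # parse the last column cell by cell

    print(dataset["result"].head())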
1
1
0
1
0
0
How can I get the corresponding verbs and nouns for adverbs and adjectives in Python? Simple succession and precedence may not be very accurate, since there may be stopwords in between, like "to" in "I am delighted to learn...". I can't find any library, or even a problem statement formalised as such. My code so far is below; now I want to return the corresponding verb for each adverb and the corresponding noun for each adjective in the sentence. Please help. Code: def pos_func(input_text): #pos tagging code: text=input_text tokens=tokenize_words(text) tagged=pos_tag(tokens) pos_store(tagged) def pos_store(tagged): verbs=[] adjectives=[] adverbs=[] nouns=[] for tag in tagged: pos=tag[1] if pos[0]=='V': verbs.append(tag[0]) elif pos[0]=='N': nouns.append(tag[0]) elif pos[0]=='J': adjectives.append(tag[0]) elif pos[0:2]=='RB': adverbs.append(tag[0]) def tokenize_words(text): tokens = TreebankWordTokenizer().tokenize(text) contractions = ["n't", "'ll", "'m"] fix = [] for i in range(len(tokens)): for c in contractions: if tokens[i] == c: fix.append(i) fix_offset = 0 for fix_id in fix: idx = fix_id - 1 - fix_offset tokens[idx] = tokens[idx] + tokens[idx+1] del tokens[idx+1] fix_offset += 1 return tokens
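One way to formalise this is with a dependency parse rather than token adjacency: an adjective modifying a noun is normally attached to that noun, and an adverb modifying a verb is attached to that verb, so the head of the modifier is the word being asked for. The sketch below uses spaCy rather than the question's NLTK code and is only an illustration of the idea; the sentence is made up.

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("I am truly delighted to learn quickly from the very patient teacher.")

    for token in doc:
        # Adjective -> the word it modifies (usually a noun)
        if token.pos_ == "ADJ":
            print("adjective", repr(token.text), "modifies", repr(token.head.text))
        # Adverb -> the word it modifies (usually a verb or adjective)
        elif token.pos_ == "ADV":
            print("adverb", repr(token.text), "modifies", repr(token.head.text))

Because the link goes through the parse tree, intervening stopwords such as "to" do not break the pairing the way simple adjacency would.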
1
1
0
0
0
0
Going through the NLTK book, it's not clear how to generate a dependency tree from a given sentence. The relevant section of the book: sub-chapter on dependency grammar gives an example figure but it doesn't show how to parse a sentence to come up with those relationships - or maybe I'm missing something fundamental in NLP? EDIT: I want something similar to what the stanford parser does: Given a sentence "I shot an elephant in my sleep", it should return something like: nsubj(shot-2, I-1) det(elephant-4, an-3) dobj(shot-2, elephant-4) prep(shot-2, in-5) poss(sleep-7, my-6) pobj(in-5, sleep-7)
1
1
0
0
0
0
I have the following frozen inference graph for semantic segmentation using DeepLab (download graph here). I am converting this graph to tflite format with: tflite_convert \ --output_file=test2.lite \ --graph_def_file=frozen_inference_graph_3mbvoc.pb \ --input_arrays=ImageTensor \ --output_arrays=SemanticPredictions \ --input_shapes=1,450,600,3 \ --inference_input_type=QUANTIZED_UINT8 \ --inference_type=FLOAT \ --mean_values=128 \ --std_dev_values=128 After conversion the graph looks as follows (download it here). My question is: how do I obtain a graph similar to Google's published DeepLab graph (available here)? To make the question clearer, in the image below the graph on the left is my tflite graph and the graph on the right is Google's DeepLab graph. How do I obtain results similar to the graph on the right?
1
1
0
1
0
0
I am using Python and spaCy as my NLP library. I am new to NLP work and I hope for some guidance in order to extract tabular information from a text. My goal is to find what type of expenses are frozen or not. Any guidance would be highly appreciated. TYPE_OF_EXPENSE FROZEN? NOT_FROZEN? purchase order frozen null capital frozen null consulting frozen null business meetings frozen null external hires frozen null KM&L null not frozen travel null not frozen import spacy nlp = spacy.load('en_core_web_sm') doc = nlp(u'Non-revenue-generating purchase order expenditures will be frozen. All capital related expenditures are frozen effectively for Q4. Following spending categories are frozen: Consulting, (including existing engagements), Business meetings. Please note that there is a hiring freeze for external hires, subcontractors and consulting services. KM&L expenditure will not be frozen. Travel cost will not be on ‘freeze’.) My ultimate goal is to extract all this table into an excel file. Even if you can advise for few of the categories above I would be deeply grateful. Thank you very much in advance.
1
1
0
0
0
0
I am trying to create an autoencoder from scratch for my dataset. It is a variational autoencoder for feature extraction. I am pretty new to machine learning and I would like to know how to feed my input data to the autoencoder. My data is a time series data. It looks like below: array([[[ 10, 0, 10, ..., 10, 0, 0], ..., [ 0, 12, 32, ..., 2, 2, 2]], [[ 0, 3, 7, ..., 7, 3, 0], ..... [ 0, 2, 3, ..., 3, 4, 6]], [[1, 3, 1, ..., 0, 10, 2], ..., [2, 11, 12, ..., 1, 1, 8]]], dtype=int64) It is a stack of arrays and the shape is (3, 1212, 700). And where do I pass the label? The examples online are simple and there is no detailed description as to how to feed the data in reality. Any examples or explanations will be highly helpful.
1
1
0
1
0
0
I have a dataframe of Inspection results & Violations that looks like: Results Violations Pass w/ Conditions 3. MANAGEMENT, FOOD EMPLOYEE AND CONDITIONAL E Pass 36. THERMOMETERS PROVIDED & ACCURATE Comment... What I need to do is have python loop through this pandas dataframe specifically in the violations column and identify all scenarios of 'Starts with a number and ends with Comments:' I was able to use regex to strip the number with this line of code df_new['Violations'] = df_new['Violations'].map(lambda x: x.lstrip('0123456789.- ').rstrip('[^a-zA-Z]Comments[^a-zA-Z]')) As you can see I tried to implement the comments closing end via the rstrip regex command but that does not appear to do anything. Output then looks like this Results Violations 0 Pass w/ Conditions MANAGEMENT, FOOD EMPLOYEE AND CONDITIONAL EMPL... 1 Pass THERMOMETERS PROVIDED & ACCURATE - Comments: 4... What is the regex command to basically say: Look for a number and delete everything between the number and Comments: Is there a simple way to do this?
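A single regular expression can drop both the leading violation number and everything from "Comments:" onward; lstrip/rstrip work character by character rather than on patterns, which is why the rstrip attempt above has no visible effect. A hedged sketch follows; the two sample rows are fabricated to resemble the layout shown in the question.

    import pandas as pd

    df_new = pd.DataFrame({
        "Results": ["Pass w/ Conditions", "Pass"],
        "Violations": [
            "3. MANAGEMENT, FOOD EMPLOYEE AND CONDITIONAL EMPLOYEE - Comments: some note",
            "36. THERMOMETERS PROVIDED & ACCURATE - Comments: 4...",
        ],
    })

    # Remove "<number>. " at the start and "- Comments: ..." through the end in one pass.
    df_new["Violations"] = df_new["Violations"].str.replace(
        r"^\s*\d+\.\s*|\s*-?\s*Comments:.*$", "", regex=True
    )

    print(df_new["Violations"].tolist())
    # -> ['MANAGEMENT, FOOD EMPLOYEE AND CONDITIONAL EMPLOYEE',
    #     'THERMOMETERS PROVIDED & ACCURATE']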
1
1
0
0
0
0
I want to implement a word2vec using Keras. This is how I prepared my training data: encoded = tokenizer.texts_to_sequences(data) sequences = list() for i in range(1, len(encoded)): sent = encoded[i] _4grams = list(nltk.ngrams(sent, n=4)) for gram in _4grams: sequences.append(gram) # split into X and y elements sequences = np.array(sequences) X, y = sequences[:, 0:3], sequences[:, 3] X = to_categorical(X, num_classes=vocab_size) y = to_categorical(y, num_classes=vocab_size) Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, y, test_size=0.3, random_state=42) The following is my model in Keras: model = Sequential() model.add(Dense(50, input_shape=Xtrain.shape)) model.add(Dense(Ytrain.shape[1])) model.add(Activation("softmax")) Xtrain (6960, 3, 4048) _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_22 (Dense) (None, 6960, 3, 50) 202450 _________________________________________________________________ dense_23 (Dense) (None, 6960, 3, 4048) 206448 _________________________________________________________________ activation_10 (Activation) (None, 6960, 3, 4048) 0 ================================================================= Total params: 408,898 Trainable params: 408,898 Non-trainable params: 0 _________________________________________________________________ None I got the error: history = model.fit(Xtrain, Ytrain, epochs=10, verbose=1, validation_data=(Xtest, Ytest)) Error when checking input: expected dense_22_input to have 4 dimensions, but got array with shape (6960, 3, 4048) I'm confused on how to prepare and feed my training data to a Keras neural network?
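The shape error comes from input_shape=Xtrain.shape, which includes the sample dimension, so Keras ends up expecting a 4-D input. One way around it, keeping the one-hot 4-gram setup from the question, is to give the first layer only the per-sample shape and flatten the three context positions before the softmax. This is a hedged sketch, assuming Xtrain, Ytrain, Xtest and Ytest are built exactly as shown above.

    from keras.models import Sequential
    from keras.layers import Dense, Flatten, Activation

    # Xtrain: (num_samples, 3, vocab_size); Ytrain: (num_samples, vocab_size)
    model = Sequential()
    model.add(Dense(50, input_shape=Xtrain.shape[1:]))  # per-sample shape: (3, vocab_size)
    model.add(Flatten())                                # collapse the 3 context positions
    model.add(Dense(Ytrain.shape[1]))
    model.add(Activation("softmax"))

    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    history = model.fit(Xtrain, Ytrain, epochs=10, verbose=1,
                        validation_data=(Xtest, Ytest))

A more memory-friendly alternative would be to feed integer word indices into an Embedding layer instead of one-hot encoding the context words.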
1
1
0
0
0
0
I'm trying to go through a list of (mostly) Arabic sentences, and remove those that are not Arabic. I've got a hack for telling if a character is Arabic or not: Arabic has no case, so if the character is alpha but isn't upper case or lower case, it's Arabic. I've got the code below, which works, but the language identification part is very slow, compared to the other filter. It doesn't seem to me like it's doing anything particularly complex, so I don't understand why it's taking so long. (The corpus is size is about 300K sentences before filtering.) Is there something I can do to make it more efficient? Thanks! def test_lang(string): """Takes a string and determines if it is written in Arabic characters or foreign, by testing whether the first character has a case attribute. This is intended for Arabic texts that may have English or French words added. If it encounters another case-less language (Chinese for instance), it will falsely identify it as Arabic.""" if not string or not string.isalpha(): return None char = string[0] if char.isalpha() and not (char.islower() or char.isupper()): lang = 'AR' else: lang = 'FW' return lang ... # remove sentences that are in English or French - THIS IS SLOW (takes a few mins) for sent in sents: if sent and test_lang(sent[0]) != 'AR': sents.remove(sent) # remove clearly MSA sentences -- THIS IS FAST (takes a few seconds) msa_features = ['ليس','لست','ليست','ليسوا','الذي','الذين','التي','ماذا', 'عن'] p = re.compile('|'.join(msa_features)) for sent in sents: if re.search(p, sent): sents.remove(sent)
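The slow part is most likely not test_lang itself but sents.remove(sent) inside the loop: each remove is a linear scan of a ~300K-element list, and mutating a list while iterating over it also silently skips elements. Building new lists keeps both filters linear. A sketch using the question's own sents list and test_lang helper:

    import re

    # Keep only sentences whose first character looks Arabic (per test_lang above).
    sents = [sent for sent in sents if sent and test_lang(sent[0]) == 'AR']

    # Then drop clearly MSA sentences, again without any remove() calls.
    msa_features = ['ليس', 'لست', 'ليست', 'ليسوا', 'الذي', 'الذين', 'التي', 'ماذا', 'عن']
    p = re.compile('|'.join(msa_features))
    sents = [sent for sent in sents if not p.search(sent)]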
1
1
0
0
0
0
I am writing a program which analyses online reviews and based on the ratings, stores the review into review_text and the corresponding rating into review_label as either positive(4 & 5 stars) or negative(1, 2 & 3 stars). Tried the following codes to add the review text and review label information of each review without any success. rev = ['review_text', 'review_label'] for file in restaurant_urls: url_rev= file html_r_r=requests.get(url_rev).text doc_rest=html_r_r soup_restaurant_content= BeautifulSoup(doc_rest, 'html.parser') star_text = soup_restaurant_content.find('img').get('alt') if star_text in ['1-star','2-star','3-star']: rev['review_label'].append('Negative') elif star_text in ['4-star','5-star']: rev['review_label'].append('Positive') else: print('check') rev['review_text'].append(soup_restaurant_content.find('p','text').get_text()) I want the reviews to be stored in the list rev with the review text stored in column review_text and the review label (whether positive or negative) under review_label. It would look something like 'review_text' 'review_label' review_1 positive review_2 negative
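As written, rev is a plain list, so rev['review_label'] raises a TypeError; a dictionary of lists (or a list of tuples) gives the two-column structure described. A hedged sketch of the collection step, keeping the scraping calls from the question and assuming restaurant_urls and the BeautifulSoup selectors behave as shown; note it skips a review entirely when the rating is unrecognised so the two columns stay aligned.

    import requests
    from bs4 import BeautifulSoup

    rev = {"review_text": [], "review_label": []}   # two parallel columns

    for url_rev in restaurant_urls:                  # restaurant_urls as defined in the question
        html_r_r = requests.get(url_rev).text
        soup_restaurant_content = BeautifulSoup(html_r_r, "html.parser")

        star_text = soup_restaurant_content.find("img").get("alt")
        if star_text in ["1-star", "2-star", "3-star"]:
            rev["review_label"].append("Negative")
        elif star_text in ["4-star", "5-star"]:
            rev["review_label"].append("Positive")
        else:
            continue                                 # no recognised rating: skip this review

        rev["review_text"].append(soup_restaurant_content.find("p", "text").get_text())

    # rev can then be turned into a two-column table, e.g. pandas.DataFrame(rev).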
1
1
0
0
0
0
I have created a Flask service that accepts requests with camera URLs as parameters and finds objects (table, chair, etc.) in the camera frame. I have written the Flask code for accepting POST requests: @app.route('/rest/detectObjects', methods=['GET','POST']) def detectObjects(): ... json_result = function_call_for_detecting_objects() ... return Inside the function, it loads the TF model for object detection and returns the result. A large number of requests needs to be processed simultaneously by the Flask server, so I need to execute the function on the GPU, because camera access and image processing for object detection take a lot of time and CPU. I have a 4 GB GeForce GTX 1050 Ti/PCIe/SSE2. How can I make my Python script use the GPU for this?
1
1
0
0
0
0
So lately I've been playing around with a Wikipedia dump. I preprocessed it and trained Word2Vec on it with gensim. Does anyone know whether there is a single call within spaCy that would perform tokenization, sentence recognition, part-of-speech tagging, lemmatization, dependency parsing, and named entity recognition all at once? I have not been able to find clear documentation. Thank you.
1
1
0
0
0
0
I'm using the VADER SentimentAnalyzer to obtain polarity scores. I used the probability scores for positive/negative/neutral before, but I just realized that the "compound" score, ranging from -1 (most negative) to 1 (most positive), would provide a single measure of polarity. I wonder how the "compound" score is computed. Is it calculated from the [pos, neu, neg] vector?
1
1
0
0
0
0
I wanted to pre-train BERT on data from my own language, since the multilingual BERT model (which includes my language) does not perform well. Since full pre-training costs a lot, I decided to fine-tune it on its own two tasks: masked language modeling and next sentence prediction. There are existing implementations for fine-tuning on different downstream tasks (NER, sentiment analysis, etc.), but I couldn't find any for fine-tuning on BERT's own pre-training tasks. Is there an implementation I missed? If not, where should I start? I need some initial help.
1
1
0
0
0
0
I have a corpus of sentences in a specific domain. I am looking for an open-source package that I can give the data to and that will build a good, reliable language model (meaning, given a context, it knows the probability of each next word). Is there such a project? I saw this GitHub repo, https://github.com/rafaljozefowicz/lm, but it didn't work for me.
1
1
0
0
0
0
I'm working on a POS Tagger using Python and Keras. The data I've got is using the STTS Tags, but I'm supposed to create a Tagger for the universal tagset. So I need to translate this. First I thought of making a dictionary and simply search replace the tags, but then I saw the option of setting a tagset using the TaggedCorpusReader. (e.g. 'brown') But I miss a list of possible tagsets that can be used there. Can I use the STTS Tagset somehow or do I have to make a dictionary myself? Example Source: Code #3 : map corpus tags to the universal tagset https://www.geeksforgeeks.org/nlp-customization-using-tagged-corpus-reader/ corpus = TaggedCorpusReader(filePath, "standard_pos_tagged.txt", tagset='STTS') #?? doesn't work sadly # .... trainingCorpus.tagged_sents(tagset='universal')[1] In the end it looked something like this: (big thanks to alexis) with open(resultFileName, "w") as output: for sent in stts_corpus.tagged_sents(): for word, tag in sent: try: newTag = mapping_dict[tag]; output.write(word+"/"+newTag+" ") except: print("except " + str(word) + " - " + str(tag)) output.write(" ")
1
1
0
0
0
0
I would like to extract "all" the noun phrases from a sentence. I'm wondering how I can do it. I have the following code: doc2 = nlp("what is the capital of Bangladesh?") for chunk in doc2.noun_chunks: print(chunk) Output: 1. what 2. the capital 3. bangladesh Expected: the capital of Bangladesh I have tried answers from spacy doc and StackOverflow. Nothing worked. It seems only cTakes and Stanford core NLP can give such complex NP. Any help is appreciated.
1
1
0
0
0
0
I want to update a model with new entities. I'm loading the "pt" NER model, and trying to update it. Before doing anything, I tried this phrase: "meu nome é Mário e hoje eu vou para academia". (in English this phrase is "my name is Mário and today I'm going to go to gym). Before the whole process I got this: Entities [('Mário', 'PER')] Tokens [('meu', '', 2), ('nome', '', 2), ('é', '', 2), ('Mário', 'PER', 3), ('e', '', 2), ('hoje', '', 2), ('eu', '', 2), ('vou', '', 2), ('pra', '', 2), ('academia', '', 2)] Ok, Mário is a name and it's correct. But I want the model recognize "hoje (today)" as DATE, then I ran the script below. After I ran the script, I've tried the same setence and got this: Entities [('hoje', 'DATE')] Tokens [('meu', '', 2), ('nome', '', 2), ('é', '', 2), ('Mário', '', 2), ('e', '', 2), ('hoje', 'DATE', 3), ('eu', '', 2), ('vou', '', 2), ('pra', '', 2), ('academia', '', 2)] The model is recognizing "hoje" as DATE, but totally forgot about Mário as Person. from __future__ import unicode_literals, print_function import plac import random from pathlib import Path import spacy from spacy.util import minibatch, compounding # training data TRAIN_DATA = [ ("Infelizmente não, eu briguei com meus amigos hoje", {"entities": [(45, 49, "DATE")]}), ("hoje foi um bom dia.", {"entities": [(0, 4, "DATE")]}), ("ah não sei, hoje foi horrível", {"entities": [(12, 16, "DATE")]}), ("hoje eu briguei com o Mário", {"entities": [(0, 4, "DATE")]}) ] @plac.annotations( model=("Model name. Defaults to blank 'en' model.", "option", "m", str), output_dir=("Optional output directory", "option", "o", Path), n_iter=("Number of training iterations", "option", "n", int), ) def main(model="pt", output_dir="/model", n_iter=100): """Load the model, set up the pipeline and train the entity recognizer.""" if model is not None: nlp = spacy.load(model) # load existing spaCy model print("Loaded model '%s'" % model) else: nlp = spacy.blank("pt") # create blank Language class print("Created blank 'en' model") doc = nlp("meu nome é Mário e hoje eu vou pra academia") print("Entities", [(ent.text, ent.label_) for ent in doc.ents]) print("Tokens", [(t.text, t.ent_type_, t.ent_iob) for t in doc]) # create the built-in pipeline components and add them to the pipeline # nlp.create_pipe works for built-ins that are registered with spaCy if "ner" not in nlp.pipe_names: ner = nlp.create_pipe("ner") nlp.add_pipe(ner, last=True) # otherwise, get it so we can add labels else: ner = nlp.get_pipe("ner") # add labels for _, annotations in TRAIN_DATA: for ent in annotations.get("entities"): ner.add_label(ent[2]) # get names of other pipes to disable them during training other_pipes = [pipe for pipe in nlp.pipe_names if pipe != "ner"] with nlp.disable_pipes(*other_pipes): # only train NER # reset and initialize the weights randomly – but only if we're # training a new model if model is None: nlp.begin_training() for itn in range(n_iter): random.shuffle(TRAIN_DATA) losses = {} # batch up the examples using spaCy's minibatch batches = minibatch(TRAIN_DATA, size=compounding(4.0, 32.0, 1.001)) for batch in batches: texts, annotations = zip(*batch) nlp.update( texts, # batch of texts annotations, # batch of annotations drop=0.5, # dropout - make it harder to memorise data losses=losses, ) print("Losses", losses) # test the trained model # for text, _ in TRAIN_DATA: doc = nlp("meu nome é Mário e hoje eu vou pra academia") print("Entities", [(ent.text, ent.label_) for ent in doc.ents]) print("Tokens", [(t.text, t.ent_type_, t.ent_iob) for 
t in doc]) # save model to output directory if output_dir is not None: output_dir = Path(output_dir) if not output_dir.exists(): output_dir.mkdir() nlp.to_disk(output_dir) print("Saved model to", output_dir) # test the saved model print("Loading from", output_dir) nlp2 = spacy.load(output_dir) # for text, _ in TRAIN_DATA: # doc = nlp2(text) # print("Entities", [(ent.text, ent.label_) for ent in doc.ents]) # print("Tokens", [(t.text, t.ent_type_, t.ent_iob) for t in doc])
1
1
0
1
0
0
The AI must predict the next number in a given sequence of incremental integers using Python, but so far I haven't gotten the intended result. I tried changing the learning rate and iterations but so far without any luck. The next number is supposed to be predicted based on this PATTERN: First number in the sequence (1) is a random int in the interval of [2^0 (current index), 2^1(next index) and so on so forth... The AI should be able to make the decision of which number to choose from the interval The problem I encountered is implementing the pattern mentioned above into the AI so it can predict the n+1, since I am fairly new to machine learning I don't know how to feed the AI that pattern and which libraries I have to work with. This is the code I used: import numpy as np # Init sequence data =\ [ [1, 3, 7, 8, 21, 49, 76, 224, 467, 514, 1155, 2683, 5216, 10544, 51510, 95823, 198669, 357535, 863317, 1811764, 3007503, 5598802, 14428676, 33185509, 54538862, 111949941, 227634408, 400708894, 1033162084, 2102388551, 3093472814, 7137437912, 14133072157, 20112871792, 42387769980, 100251560595, 146971536592, 323724968937, 1003651412950, 1458252205147, 2895374552463, 7409811047825, 15404761757071, 19996463086597, 51408670348612, 119666659114170, 191206974700443, 409118905032525, 611140496167764, 2058769515153876, 4216495639600700, 6763683971478124, 9974455244496710, 30045390491869460, 44218742292676575, 138245758910846492, 199976667976342049, 525070384258266191] ] X = np.matrix(data)[:, 0] y = np.matrix(data)[:, 1] def J(X, y, theta): theta = np.matrix(theta).T m = len(y) predictions = X * theta sqError = np.power((predictions-y), [2]) return 1/(2*m) * sum(sqError) dataX = np.matrix(data)[:, 0:1] X = np.ones((len(dataX), 2)) X[:, 1:] = dataX # gradient descent function def gradient(X, y, alpha, theta, iters): J_history = np.zeros(iters) m = len(y) theta = np.matrix(theta).T for i in range(iters): h0 = X * theta delta = (1 / m) * (X.T * h0 - X.T * y) theta = theta - alpha * delta J_history[i] = J(X, y, theta.T) return J_history, theta print(' '+40*'=') # Theta initialization theta = np.matrix([np.random.random(), np.random.random()]) # Learning rate alpha = 0.02 # Iterations iters = 1000000 print(' == Model summary == Learning rate: {} Iterations: {} Initial theta: {} Initial J: {:.2f} ' .format(alpha, iters, theta, J(X, y, theta).item())) print('Training model... ') # Train model and find optimal Theta value J_history, theta_min = gradient(X, y, alpha, theta, iters) print('Done, Model is trained') print(' Modelled prediction function is: y = {:.2f} * x + {:.2f}' .format(theta_min[1].item(), theta_min[0].item())) print('Cost is: {:.2f}'.format(J(X, y, theta_min.T).item())) # Calculate the predicted profit def predict(pop): return [1, pop] * theta_min # Now p = len(data) print(' '+40*'=') print('Initial sequence was: ', *np.array(data)[:, 1]) print(' Next numbers should be: {:,.1f}' .format(predict(p).item()))
1
1
0
1
0
0
I want to use Python Stanford NER module but keep getting an error,I searched it on internet but got nothing. Here is the basic usage with error. import ner tagger = ner.HttpNER(host='localhost', port=8080) tagger.get_entities("University of California is located in California, United States") Error Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> tagger.get_entities("University of California is located in California, United States") File "C:\Python27\lib\site-packages er\client.py", line 81, in get_entities tagged_text = self.tag_text(text) File "C:\Python27\lib\site-packages er\client.py", line 165, in tag_text c.request('POST', self.location, params, headers) File "C:\Python27\lib\httplib.py", line 1057, in request self._send_request(method, url, body, headers) File "C:\Python27\lib\httplib.py", line 1097, in _send_request self.endheaders(body) File "C:\Python27\lib\httplib.py", line 1053, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 897, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 859, in send self.connect() File "C:\Python27\lib\httplib.py", line 836, in connect self.timeout, self.source_address) File "C:\Python27\lib\socket.py", line 575, in create_connection raise err error: [Errno 10061] No connection could be made because the target machine actively refused it Using windows 10 with latest Java installed
1
1
0
0
0
0
I need to make a list of all n-grams starting from the head of the string, for each integer n from 1 to M, and then return a tuple of M such lists. def letter_n_gram_tuple(s, M): s = list(s) output = [] for i in range(0, M+1): output.append(s[i:]) return(tuple(output)) For letter_n_gram_tuple("abcd", 3) the output should be: (['a', 'b', 'c', 'd'], ['ab', 'bc', 'cd'], ['abc', 'bcd']). However, my output is: (['a', 'b', 'c', 'd'], ['b', 'c', 'd'], ['c', 'd'], ['d']). Should I use string slicing and then save the slices into the list?
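Yes, slicing the string into length-n windows for each n is enough; the code above slices off prefixes of the character list instead of building n-grams. A short sketch that produces the expected output:

    def letter_n_gram_tuple(s, M):
        # For each n from 1 to M, collect every contiguous substring of length n.
        output = []
        for n in range(1, M + 1):
            output.append([s[i:i + n] for i in range(len(s) - n + 1)])
        return tuple(output)

    print(letter_n_gram_tuple("abcd", 3))
    # -> (['a', 'b', 'c', 'd'], ['ab', 'bc', 'cd'], ['abc', 'bcd'])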
1
1
0
0
0
0
Is there any option to add custom punctuation marks, which aren't included in the default punctuation rules? (https://github.com/explosion/spaCy/blob/develop/spacy/lang/de/punctuation.py) I am using spaCy's Matcher class (https://spacy.io/usage/rule-based-matching) and the attribute "IS_PUNCT" to remove punctuation from my text. from spacy.matcher import Matcher # instantiate Matcher matcher = Matcher(nlp.vocab) # define pattern pattern = [{"IS_PUNCT": False}] # add pattern to matcher matcher.add("Cleaning", None, pattern) I would like to customize the punctuation rules to be able to remove "|" from my texts with the Matcher.
1
1
0
0
0
0
I am working on my paper, and one of the tasks is to extract the company name and location from the sentence of the following type: "Google shares resources with Japan based company." Here, I want the output to be "Google Japan". The sentence structure may also be varied like "Japan based company can access the resources of Google". I have tried an Attention based NN, but the error rate is around 0.4. Can anyone give me a little bit of hint about which model I should use? And I printed out the validation process like this: validation print And I got the graphs of the loss and accuracy: lass and accuracy It shows that the val_acc is 0.99. Is this mean my model is pretty good at predicting? But why do I get 0.4 error rate when I use my own validation function to show error rate? I am very new to ML. What does the val_acc actually mean? Here is my model: encoder_input = Input(shape=(INPUT_LENGTH,)) decoder_input = Input(shape=(OUTPUT_LENGTH,)) encoder = Embedding(input_dict_size, 64, input_length=INPUT_LENGTH, mask_zero=True)(encoder_input) encoder = LSTM(64, return_sequences=True, unroll=True)(encoder) encoder_last = encoder[:, -1, :] decoder = Embedding(output_dict_size, 64, input_length=OUTPUT_LENGTH, mask_zero=True)(decoder_input) decoder = LSTM(64, return_sequences=True, unroll=True)(decoder, initial_state=[encoder_last, encoder_last]) attention = dot([decoder, encoder], axes=[2, 2]) attention = Activation('softmax')(attention) context = dot([attention, encoder], axes=[2, 1]) decoder_combined_context = concatenate([context, decoder]) output = TimeDistributed(Dense(64, activation="tanh"))(decoder_combined_context) # equation (5) of the paper output = TimeDistributed(Dense(output_dict_size, activation="softmax"))(output) model = Model(inputs=[encoder_input, decoder_input], outputs=[output]) model.compile(optimizer='adam', loss="binary_crossentropy", metrics=['accuracy']) es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=200, min_delta=0.0005)
1
1
0
0
0
0
I've been testing different python lemmatizers for a solution I'm building out. One difficult problem I've faced is that stemmers are producing non english words which won't work for my use case. Although stemmers get "politics" and "political" to the same stem correctly, I'd like to do this with a lemmatizer, but spacy and nltk are producing different words for "political" and "politics". Does anyone know of a more powerful lemmatizer? My ideal solution would look like this: from nltk.stem import WordNetLemmatizer lemmatizer = WordNetLemmatizer() print("political = ", lemmatizer.lemmatize("political")) print("politics = ", lemmatizer.lemmatize("politics")) returning: political = political politics = politics Where I want to return: political = politics politics = politics
1
1
0
0
0
0
I have a dataframe called "data" like that : id email_body 1 text_1 2 text_2 3 text_3 4 text_4 5 text_5 6 text_6 7 text_7 8 text_8 9 text_9 10 text_10 I'm using the following code to extract from the different rows , the full name(s), the first name(s) and the last name(s) which are contained in the different "text_i" : import nltk from nameparser.parser import HumanName from nltk.corpus import wordnet def get_human_names(text): tokens = nltk.tokenize.word_tokenize(text) pos = nltk.pos_tag(tokens) sentt = nltk.ne_chunk(pos, binary = False) person_list = [] lastname = [] firstname = [] person = [] name = "" for subtree in sentt.subtrees(filter=lambda t: t.label() == 'PERSON'): for leaf in subtree.leaves(): person.append(leaf[0]) if len(person) > 1: #avoid grabbing lone surnames for part in person: name += part + ' ' if name[:-1] not in person_list: person_list.append(name[:-1]) for person in person_list: person_split = person.split(" ") for name in person_split: if wordnet.synsets(name): if(name in person): person_list.remove(person) break firstname = [i.split(' ')[0] for i in person_list] lastname = [i.split(' ')[1] for i in person_list] name = '' person = [] return person_list, firstname, lastname names = data.email_body.apply(get_human_names) columns = ['names','firstname','lastname' ] data_2 = pd.DataFrame([names[0],names[1],names[2]], columns = columns) data_2 I'm obtaining the following dataset : id names firstname lastname 0 [Lesley Kirchman, Milap Majmundar, Segoe UI] [Lesley, Milap, Segoe] [Kirchman, Majmundar, UI] 1 [Gerrit Boerman, Lesley Kirchman, Segoe UI] [Gerrit, Lesley, Segoe] [Boerman, Kirchman, UI] 2 [Lesley Kirchman] [Lesley] [Kirchman] You can observe that I have only the 3 first rows, how to apply the function to the whole initial dataframe "data" and thus obtain a resulting dataframe with 10 rows ? Regards,
1
1
0
0
0
0
I have about 1200 TV show categories, like Drama, News, Sports, Sports-non event, Drama Medical, Drama Crime, etc. How do I use NLP to form groups such that Drama, Drama Medical and Drama Crime group together, Sports, Sports-non event, etc. group together, and so on? Basically the end goal is to reduce the 1200 categories to very few broad categories. So far I have used bag of words to build a dictionary with 146 words.
1
1
0
0
0
0
I've selected the features from my data set and then when I try to select those features from my data set, I get this error. Why is this happening? dataset = pd.read_csv('Banking Dataset.csv') LabelEncoder1 = LabelEncoder() independent_variables[:,1] = LabelEncoder1.fit_transform(independent_variables[:,1]) LabelEncoder2 = LabelEncoder() independent_variables[:,2] = LabelEncoder2.fit_transform(independent_variables[:,2]) onehotencoder = OneHotEncoder(categorical_features=[1]) independent_variables = onehotencoder.fit_transform(independent_variables).toarray() X_train, X_test, Y_train,Y_test = train_test_split(independent_variables,target_values ,test_size=0.25,random_state=0) c = DecisionTreeClassifier(min_samples_split=100) features =["CreditScore","Geography","Gender","Age","Tenure","Balance","NumOfProducts","HasCrCard","IsActiveMember","EstimatedSalary"] X = X_train(features) Output: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use arr[tuple(seq)] instead of arr[seq]. In the future this will be interpreted as an array index, arr[np.array(seq)], which will result either in an error or a different result. X_train=X_train[features] Traceback (most recent call last): X_train=X_train[features] IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices Process finished with exit code 1
1
1
0
0
0
0
I am trying to find a specific word using a regular expression. The issue is that whenever I tried using the word boundary "\b", it didn't work accurately, and if I don't use any regex at all, the search also returns longer forms that contain the word, like 'অশুভৰ' and 'অশুভ_লক্ষণ'. I want to eliminate those results and match only that word exactly. This is the string: "মেকুৰীয়ে ৰাস্তা কাটিলে অশুভ বুলি ধৰা হয়, দুৱাৰডলিত বহাটো অশুভনীয়, যি লক্ষণ অশুভৰ পৰিচায়ক"
1
1
0
0
0
0
I am working on an NLP project and I hope to tokenize sentences and get counts of different tokens. Sometimes I hope a few words to be a phrase and do not count the words inside the phrase. I have found CountVectorizer in scikit-learn useful in counting phrases, but I could not figure out how to remove the words inside the phrases. For example: words = ['cat', 'dog', 'walking', 'my dog'] example = ['I was walking my dog and cat in the park'] vect = CountVectorizer(vocabulary=words, ngram_range=(1,2)) dtm = vect.fit_transform(example) print(dtm) I got: >>> vect.get_feature_names() ['cat', 'dog', 'walking', 'my dog'] >>> print(dtm) (0, 0) 1 (0, 1) 1 (0, 2) 1 (0, 3) 1 What I want is: >>> print(dtm) (0, 0) 1 (0, 2) 1 (0, 3) 1 But I want to keep 'dog' in the dictionary because it may appear on its own in other text.
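A common workaround is to collapse each multi-word phrase into a single token before vectorizing, so the inner word is no longer visible to the unigram counter for that occurrence but stays in the vocabulary for texts where it appears alone. This is only a sketch of that idea, built on the same toy example as the question:

    import re
    from sklearn.feature_extraction.text import CountVectorizer

    words = ['cat', 'dog', 'walking', 'my_dog']      # the phrase becomes one token
    example = ['I was walking my dog and cat in the park']

    # Replace each phrase with its single-token form before counting.
    phrases = {'my dog': 'my_dog'}
    def collapse_phrases(text):
        for phrase, token in phrases.items():
            text = re.sub(r'\b' + re.escape(phrase) + r'\b', token, text)
        return text

    example = [collapse_phrases(t) for t in example]

    vect = CountVectorizer(vocabulary=words)
    dtm = vect.fit_transform(example)
    print(dtm)
    # dog (index 1) gets no count here, but still counts in texts where it occurs on its own:
    #   (0, 0) 1
    #   (0, 2) 1
    #   (0, 3) 1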
1
1
0
0
0
0
I have successfully downloaded the 1B word language model trained using a CNN-LSTM (https://github.com/tensorflow/models/tree/master/research/lm_1b), and I would like to be able to input sentences or partial sentences to get the probability of each subsequent word in the sentence. For example, if I have a sentence like, "An animal that says ", I'd like to know the probability of the next word being "woof" vs. "meow". I understand that running the following produces the LSTM embeddings: bazel-bin/lm_1b/lm_1b_eval --mode dump_lstm_emb \ --pbtxt data/graph-2016-09-10.pbtxt \ --vocab_file data/vocab-2016-09-10.txt \ --ckpt 'data/ckpt-*' \ --sentence "An animal that says woof" \ --save_dir output That will produce files lstm_emb_step_*.npy where each file is the LSTM embedding for each word in the sentence. How can I transform these into probabilities over the trained model to be able to compare P(woof|An animal that says) vs. P(meow|An animal that says)? Thanks in advance.
1
1
0
0
0
0
from sklearn.feature_extraction.text import CountVectorizer getting this error from sklearn.feature_extraction.text import CountVectorizer File "C:\Users\Anaconda3\lib\site-packages\sklearn\__init__.py", line 57, in <module> from .base import clone File "C:\Users\Anaconda3\lib\site-packages\sklearn\base.py", line 12, in <module> from .utils.fixes import signature File "C:\Users\Anaconda3\lib\site-packages\sklearn\utils\__init__.py", line 11, in <module> from .validation import (as_float_array, File "C:\Users\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 18, in <module> from ..utils.fixes import signature File "C:\Users\\Anaconda3\lib\site-packages\sklearn\utils\fixes.py", line 291, in <module> from scipy.sparse.linalg import lsqr as sparse_lsqr from .eigen import * File "C:\Users\Anaconda3\lib\site-packages\scipy\sparse\linalg\eigen\__init__.py", line 11, in <module> from .arpack import * File "C:\Users\Anaconda3\lib\site-packages\scipy\sparse\linalg\eigen\arpack\__init__.py", line 22, in <module> from .arpack import * File "C:\Users\Anaconda3\lib\site-packages\scipy\sparse\linalg\eigen\arpack\arpack.py", line 45, in <module> from . import _arpack ImportError: DLL load failed: The specified module could not be found.
1
1
0
0
0
0
I'm creating a very basic AI with Tensorflow, and am using the code from the official docs/tutorial. Here's my full code: from __future__ import absolute_import, division, print_function import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] train_images = train_images / 255.0 train_labels = train_labels / 255.0 plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) plt.show() The issue is on this line: plt.xlabel(class_names[train_labels[i]]) TypeError: list indices must be integers or slices, not numpy.float64 No problem, change the numpy.float64 to int using .item() plt.xlabel(class_names[train_labels[i.item()]]) AttributeError: 'int' object has no attribute 'item' Was it an int in the first place? This is running on Python 3.7, with Tensorflow 1.13.1.
1
1
0
0
0
0
I have a corpus of English sentences sentences = [ "Mary had a little lamb.", "John has a cute black pup.", "I ate five apples." ] and a grammar (for the sake of simplicity) grammar = (''' NP: {<NNP><VBZ|VBD><DT><JJ>*<NN><.>} # NP ''') I wish to filter out the sentences which don't conform to the grammar. Is there a built-in NLTK function which can achieve this? In the above example, first two sentences follow the pattern of my grammar, but not the last one.
1
1
0
1
0
0
I have some sensors which collect data from a cement factory and send it to AWS IoT. The data is then run through a pre-trained model, which predicts the quality of the cement based on some parameters. The data arrives at one-second intervals. Since the data is coming in real time, I want to train the model incrementally in real time. Can anybody suggest how to train the model continuously?
1
1
0
1
0
0
Suppose that I have a file with thousands of skills, from A to Z. I would like to create a model that can group similar skills together (for example, neural network and SVM could be grouped together). I know that I can use NLP for this problem, but I'm not sure which algorithm would give the best result. I'm new to NLP, so any help is greatly appreciated. My first thought was to use semantic similarity: I could use pre-trained word embeddings, e.g. word2vec or other implementations, to map the skills into a vector space and compute distances between the embeddings. But I'm not sure about this. Can you give me a link or show me how to do it to get the best result? Take a look at the data[1]: https://i.stack.imgur.com/jGRI0.png <class 'pandas.core.frame.DataFrame'> RangeIndex: 36943 entries, 0 to 36942 Data columns (total 1 columns): Skills 36942 non-null object dtypes: object(1) memory usage: 288.7+ KB None Skills 0 .NET 1 .NET CLR 2 .NET Compact Framework 3 .NET Framework 4 .NET Remoting
1
1
0
1
0
0
I'm trying to create an AI to predict the outcome of FRC competition matches using tensorflow and TFLearn. Here is the relevant code: x = np.load("FRCPrediction/matchData.npz")["x"] y = np.load("FRCPrediction/matchData.npz")["y"] def buildModel(): net = tflearn.input_data(shape = [None, 36]) net = tflearn.fully_connected(net, 64) net = tflearn.dropout(net, 0.5) net = tflearn.fully_connected(net, 128, activation = "linear") net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy') net = tflearn.fully_connected(net, 1, activation = "linear") model = tflearn.DNN(net) return model model = buildModel() BATCHSIZE = 128 model.fit(x, y, batch_size = BATCHSIZE) It is failing with error: --------------------------------- Run id: 67BLHP Log directory: /tmp/tflearn_logs/ --------------------------------- Training samples: 36024 Validation samples: 0 -- --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-7-1b097e6d2ec5> in <module>() 1 for i in range(EPOCHS): ----> 2 history = model.fit(x, y, batch_size = BATCHSIZE) 3 print(history) 4 frames /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata) 1126 'which has shape %r' % 1127 (np_val.shape, subfeed_t.name, -> 1128 str(subfeed_t.get_shape()))) 1129 if not self.graph.is_feedable(subfeed_t): 1130 raise ValueError('Tensor %s may not be fed.' % subfeed_t) ValueError: Cannot feed value of shape (128,) for Tensor 'TargetsData/Y:0', which has shape '(?, 128)' Any help is much appreciated.
1
1
0
1
0
0
I have a matrix of word embeddings; it looks like ([["word1","word2"...],["word6","word5"....],[...],[....]......]), where the inner arrays are sentences and each "word" is an embedding of shape (100,). Not all sentences have the same length, but I want them all to have the same length, so I need to pad short sentences and trim long ones. How can I do it?
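One straightforward way, assuming the data is a list of sentences where each sentence is a sequence of (100,) vectors, is to truncate long sentences and zero-pad short ones to a fixed max_len with NumPy. A hedged sketch with a tiny fabricated example:

    import numpy as np

    def pad_or_trim(sentences, max_len, dim=100):
        # sentences: list of sentences, each a list/array of word vectors of shape (dim,)
        out = np.zeros((len(sentences), max_len, dim), dtype=np.float32)
        for i, sent in enumerate(sentences):
            sent = np.asarray(sent, dtype=np.float32)[:max_len]  # trim if too long
            out[i, :len(sent)] = sent                            # zeros remain as padding
        return out

    # Two fabricated "sentences" of random 100-d vectors, lengths 3 and 7.
    sents = [np.random.rand(3, 100), np.random.rand(7, 100)]
    padded = pad_or_trim(sents, max_len=5)
    print(padded.shape)   # (2, 5, 100)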
1
1
0
0
0
0
So, I'm making my own home assistant and I'm trying to make a multi-intent classification system. However, I cannot find a way to split the query said by the user into the multiple different intents in the query. For example: I have my data for one of my intents (same format for all) {"intent_name": "music.off" , "examples": ["turn off the music" , "kill the music" , "cut the music"]} and the query said by the user would be: 'dim the lights, cut the music and play Black Mirror on tv' I want to split the sentence into their individual intents such as : ['dim the lights', 'cut the music', 'play black mirror on tv'] however, I can't just use re.split on the sentence with and and , as delimiters to split with as if the user asks : 'turn the lights off in the living room, dining room, kitchen and bedroom' this will be split into ['turn the lights off in the living room', 'kitchen', 'dining room', 'bedroom'] which would not be usable with my intent detection this is my problem, thank you in advance UPDATE okay so I've got this far with my code, it can get the examples from my data and identify the different intents inside as I wished however it is not splitting the parts of the original query into their individual intents and is just matching. import nltk import spacy import os import json #import difflib #import substring #import re #from fuzzysearch import find_near_matches #from fuzzywuzzy import process text = "dim the lights, shut down the music and play White Collar" commands = [] def get_matches(): for root, dirs, files in os.walk("./data"): for filename in files: f = open(f"./data/{filename}" , "r") file_ = f.read() data = json.loads(file_) choices.append(data["examples"]) for set_ in choices: command = process.extract(text, set_ , limit=1) commands.append(command) print(f"all commands : {commands}") this returns [('dim the lights') , ('turn off the music') , ('play Black Mirror')] which is the correct intents but I have no way of knowing which part of the query relates to each intent - this is the main problem my data is as follows , very simple for now until I figure out a method: play.json {"intent_name": "play.device" , "examples" : ["play Black Mirror" , "play Netflix on tv" , "can you please stream Stranger Things"]} music.json {"intent_name": "music.off" , "examples": ["turn off the music" , "cut the music" , "kill the music"]} lights.json {"intent_name": "lights.dim" , "examples" : ["dim the lights" , "turn down the lights" , "lower the brightness"]}
1
1
0
0
0
0
Pretty New to Neural Networks and AI. Was following up a blog to creating up a Digit Recognition System. Stuck right here: File "main.py", line 61, in <module> X: batch_x, Y: batch_y, keep_prob: dropout File "C:\Users\umara\AppData\Local\Programs\Python\Python37\lib\site- packages\tensorflow\python\client\session.py", line 929, in run run_metadata_ptr) File "C:\Users\umara\AppData\Local\Programs\Python\Python37\lib\site- packages\tensorflow\python\client\session.py", line 1128, in _run str(subfeed_t.get_shape()))) ValueError: Cannot feed value of shape (128, 28, 28, 1) for Tensor 'Placeholder:0', which has shape '(?, 784)' I've tried in these samples as well : n_train = [d.reshape(28,28, 1) for d in mnist.train.num_examples] test_features =[d.reshape(28, 28, 1) for d in mnist.test.images] n_validation =[d.reshape(28, 28, 1) for d in mnist.validation.num_examples] Code : import numpy as np from PIL import Image import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data #Import data from MNIST DATA SET and save it in a folder mnist = input_data.read_data_sets("MNIST_data/",one_hot=True) #n_train = [d.reshape(28, 28, 1) for d in mnist.train.num_examples] n_train = mnist.train.num_examples #train_features = #test_features = [d.reshape(28, 28, 1) for d in mnist.test.images] #n_validation = [d.reshape(28, 28, 1) for d in mnist.validation.num_examples] n_validation = mnist.validation.num_examples n_test = mnist.test.num_examples n_input = 784 n_hidden1 = 522 n_hidden2 = 348 n_hidden3 = 232 n_output = 10 learning_rate = 1e-4 n_iterations = 1000 batch_size = 128 dropout = 0.5 #X = tf.placeholder(tf.float32,[None, 28, 28, 1]) #X = tf.placeholder("float", [None, n_input]) X = tf.placeholder(tf.float32 , [None , 784]) #X = tf.reshape(X , [-1 , 784]) Y = tf.placeholder("float", [None, n_output]) keep_prob = tf.placeholder(tf.float32) weights = { 'w1': tf.Variable(tf.truncated_normal([n_input, n_hidden1], stddev=0.1)), 'w2': tf.Variable(tf.truncated_normal([n_hidden1, n_hidden2], stddev=0.1)), 'w3': tf.Variable(tf.truncated_normal([n_hidden2, n_hidden3], stddev=0.1)), 'out': tf.Variable(tf.truncated_normal([n_hidden3, n_output], stddev=0.1)), } biases = { 'b1': tf.Variable(tf.constant(0.1, shape=[n_hidden1])), 'b2': tf.Variable(tf.constant(0.1, shape=[n_hidden2])), 'b3': tf.Variable(tf.constant(0.1, shape=[n_hidden3])), 'out': tf.Variable(tf.constant(0.1, shape=[n_output])) } layer_1 = tf.add(tf.matmul(X, weights['w1']), biases['b1']) layer_2 = tf.add(tf.matmul(layer_1, weights['w2']), biases['b2']) layer_3 = tf.add(tf.matmul(layer_2, weights['w3']), biases['b3']) layer_drop = tf.nn.dropout(layer_3, keep_prob) output_layer = tf.matmul(layer_drop, weights['out']) + biases['out'] cross_entropy = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( labels=Y, logits=output_layer )) train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) correct_pred = tf.equal(tf.argmax(output_layer, 1), tf.argmax(Y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) for i in range(n_iterations): batch_x, batch_y = mnist.train.next_batch(batch_size) batch_x = np.reshape(batch_x,(-1,28,28,1)) sess.run(train_step, feed_dict={ X: batch_x, Y: batch_y, keep_prob: dropout }) # print loss and accuracy (per minibatch) if i % 100 == 0: minibatch_loss, minibatch_accuracy = sess.run( [cross_entropy, accuracy], feed_dict={X: batch_x, Y: batch_y, keep_prob: 1.0} ) print( "Iteration", str(i), "\t| Loss =", 
str(minibatch_loss), "\t| Accuracy =", str(minibatch_accuracy) ) test_accuracy = sess.run(accuracy, feed_dict={X: mnist.test.images, Y: mnist.test.labels, keep_prob: 1.0}) print(" Accuracy on test set:", test_accuracy) img = np.invert(Image.open("n55.png").convert('L')).ravel() prediction = sess.run(tf.argmax(output_layer, 1), feed_dict={X: [img]}) print ("Prediction for test image:", np.squeeze(prediction))
1
1
0
0
0
0
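For the digit-recognition question above, the mismatch comes from reshaping the batch to (128, 28, 28, 1) while the placeholder X was declared as (?, 784). A minimal sketch of the fix is simply to keep the batch flat so it matches the placeholder; the rest of the training loop is unchanged.
for i in range(n_iterations):
    batch_x, batch_y = mnist.train.next_batch(batch_size)
    # batch_x already has shape (batch_size, 784), which matches X; do not reshape it
    sess.run(train_step, feed_dict={X: batch_x, Y: batch_y, keep_prob: dropout})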
I am a new learner of AI. My assignment requires me to write a program in Python that plays the Game of Nim optimally (using the NegaMax algorithm). If you're not familiar with the game, here is a brief description: Nim is a simple two-player game. We start with a pile of n matches, where n ≥ 3. Two players, Max and Min, take turns to remove k matches from the pile, where k = 1, k = 2, or k = 3. The player who takes the last match loses. This is what I have already written: def NegaMax(state, turn, bestmove): max = -100000000000 if state == 1: if turn == 0: return (-1,bestmove) else: return (1,bestmove) for move in range(1, 4): if state-move > 0: m = NegaMax(state-move, 1-turn, bestmove) m1 = -m[0] if m1 > max: max = m1 bestmove = move return (max,bestmove) def play_nim(state): turn = 0 bestmove = 0 while state != 1: [evaluation,move] = NegaMax(state, turn, bestmove) print(str(state) + ": " + ("MAX" if not turn else "MIN") + " takes " + str(move)) state -= move turn = 1 - turn print("1: " + ("MAX" if not turn else "MIN") + " loses") No matter what number of state I put in, both Min and Max always takes 1 match in every round. I think the problem is that the evaluation is wrong, but I cannot see where I did wrong. Any help would be appreciated! Thanks!
1
1
0
0
0
0
I'm using text classification to classify Arabic dialects; so far I have 4 dialects. However, now I want the classifier to detect the formal (standard, grammatical) register of those dialects, which is called MSA (Modern Standard Arabic). Should I use grammatical analysis, build a language model, or do the same as I did with the dialects by collecting MSA tweets and then training on them?
1
1
0
1
0
0
According to the documentation I can load a sense-tagged corpus in NLTK as such: >>> from nltk.corpus import wordnet_ic >>> brown_ic = wordnet_ic.ic('ic-brown.dat') >>> semcor_ic = wordnet_ic.ic('ic-semcor.dat') I can also get the definition, pos, offset, and examples as such: >>> wn.synset('dog.n.01').examples >>> wn.synset('dog.n.01').definition But how can I get the frequency of a synset from a corpus? To break down the question: first, how do I count how many times a synset occurs in a sense-tagged corpus? The next step is to divide that count by the total number of occurrences of all synsets for the particular lemma.
1
1
0
0
0
0
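A sketch of one way to approach the synset-frequency question above using the SemCor-derived counts that NLTK's WordNet exposes through Lemma.count(); this sidesteps iterating over a tagged corpus directly and is only an approximation of what the question asks.
from nltk.corpus import wordnet as wn

def synset_relative_freq(synset, lemma_name):
    # occurrences of this synset for the given lemma (SemCor-derived counts)
    target = sum(l.count() for l in synset.lemmas() if l.name() == lemma_name)
    # occurrences of all synsets of that lemma
    total = sum(l.count() for s in wn.synsets(lemma_name)
                for l in s.lemmas() if l.name() == lemma_name)
    return target / total if total else 0.0

print(synset_relative_freq(wn.synset('dog.n.01'), 'dog'))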
I have to compare one spacy document to a list of spacy documents and want to get a list of similarity scores as an output. Of course, I can do this using a for loop, but I'm looking for some optimized solution like numpy offers to broadcast etc. I have one document against a list of documents: oneDoc = 'Hello, I want to be compared with a list of documents' listDocs = ["I'm the first one", "I'm the second one"] spaCy offers us a document similarity function: oneDoc = nlp(oneDoc) listDocs = nlp(listDocs) similarity_score = np.zeros(len(listDocs)) for i, doc in enumerate(listDocs): similarity_score[i] = oneDoc.similarity(doc) Since one document is compared with a list of two documents, the similarity score would be like this: [0.7, 0.8] I'm looking for a way to avoid this for loop. In other words, I want to vectorize this function.
1
1
0
0
0
0
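A sketch for the broadcasting question above, assuming spaCy's Doc.similarity is cosine similarity over Doc.vector (which it is for vector-based models). Stacking the document vectors lets NumPy compute all scores at once; listDocs is assumed here to already be a list of parsed Doc objects, since nlp() must be called per string.
import numpy as np

one_vec = oneDoc.vector
mat = np.vstack([doc.vector for doc in listDocs])            # (n_docs, dim)
norms = np.linalg.norm(mat, axis=1) * np.linalg.norm(one_vec)
similarity_score = mat @ one_vec / np.where(norms == 0, 1, norms)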
I am trying to learn to make chatbots in Google Colab. I found that there are no vectors present in spaCy 'en'. Whenever I check the length of the vectors using nlp.vocab.vectors_length, it always returns 0. I have tried running "spacy.cli.download('en')" to install it once again in Colab, but the vector length is still zero and the shape of the vectors is also (0,0). Here is the code: import spacy nlp = spacy.load('en') print(nlp.vocab.vectors_length) The expected output was 300 but it is always 0. Can someone please tell me what the problem is? I am a total beginner with the spaCy library and natural language processing. Any help would be appreciated.
1
1
0
0
0
0
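The likely cause of the question above is that the small English model ships without real word vectors; a sketch of the usual fix is to install and load a medium or large model, which does include vectors.
# in Colab, download a model with vectors first:
#   !python -m spacy download en_core_web_md
import spacy

nlp = spacy.load("en_core_web_md")
print(nlp.vocab.vectors_length)   # typically 300 for the md/lg models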
I have so far used the stanfordnlp library in Python and I have tokenized and POS-tagged a dataframe of text. I would now like to try to extract noun phrases. I have tried two different things, and I am having problems with both: From what I can see, the stanfordnlp Python library doesn't seem to offer NP chunking out of the box, at least I haven't been able to find a way to do it. I have tried making a new dataframe of all words in order with their POS tags, and then checking if nouns are repeated. However, this is very crude and quite complicated for me. I have been able to do it with English text using NLTK, so I have also tried to use the Stanford CoreNLP API in NLTK. My problem in this regard is that I need a Danish model when setting CoreNLP up with Maven (which I am very inexperienced with). For problem 1 of this text, I have been using the Danish model found here. This doesn't seem to be the kind of model I am asked to find - again, I don't exactly know what I am doing, so apologies if I am misunderstanding something here. My questions then are (1) whether it is in fact possible to do chunking of NPs in stanfordnlp in Python, (2) whether I can somehow pass the POS-tagged+tokenized+lemmatized words from stanfordnlp to NLTK and do the chunking there, or (3) whether it is possible to set up CoreNLP in Danish and then use the CoreNLP API with NLTK. Thank you, and apologies for my lack of clarity here.
1
1
0
0
0
0
I have a data-set with a lot of lists of lists of tokenized words. for example: ['apple','banana','tomato'] ['tomato','tree','pikachu'] I have around 40k lists like those, and I want to count the 10 most common words from all of the 40k lists together. Anyone have any idea?
1
1
0
0
0
0
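A minimal sketch for the most-common-words question above, assuming the 40k lists are gathered in one iterable called token_lists (a hypothetical name).
from collections import Counter

counts = Counter(word for tokens in token_lists for word in tokens)
print(counts.most_common(10))   # the 10 most frequent words across all lists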
Given is a list of text files. Each text file describes a topic. Input is a mental concept that I describe with a few sentences. The text files contain umlauts. The algorithm should output the files and probability for each that the concept described is being dealt with. My Pseudocode: split the concept by the space literal and put words into an array, while omitting stopwords iterate over each text file split by the space literal and put words into an array, while omitting stopwords i = 0 iterate over vector if vectorword in concept i++ determine percentage by using i/vectorcount * 100 save the percentage in a dictionary filename - percentage sort dictionary by percentage descendingly output Drawbacks I see in this approach: The output would not include similar words but only the words used. The code is redundant, iterating over each text file should only be done once and then one should work with a faster approach, like a database
1
1
0
0
0
0
I am new to tensorflow so I am trying to get my hands dirty by working on a binary classification problem on kaggle. I have trained the model using sigmoid function and got a very good accuracy when tested but when I try to export the prediction to df for submission, I get the error below...I have attached the code and the prediction and the output, please suggest what I am doing wrong, I suspect it has to do with my sigmoid function, thanks. This is output of the predictions....the expected is 1s and 0s INFO:tensorflow:Restoring parameters from ./movie_review_variables Prections are [[3.8743019e-07] [9.9999821e-01] [1.7650980e-01] ... [9.9997473e-01] [1.4901161e-07] [7.0333481e-06]] #Importing tensorflow import tensorflow as tf #defining hyperparameters learning_rate = 0.01 training_epochs = 1000 batch_size = 100 num_labels = 2 num_features = 5000 train_size = 20000 #defining the placeholders and encoding the y placeholder X = tf.placeholder(tf.float32, shape=[None, num_features]) Y = tf.placeholder(tf.int32, shape=[None]) y_oneHot = tf.one_hot(Y, 1) #defining the model parameters -- weight and bias W = tf.Variable(tf.zeros([num_features, 1])) b = tf.Variable(tf.zeros([1])) #defining the sigmoid model and setting up the learning algorithm y_model = tf.nn.sigmoid(tf.add(tf.matmul(X, W), b)) cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_model, labels=y_oneHot) train_optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) #defining operation to measure success rate correct_prediction = tf.equal(tf.argmax(y_model, 1), tf.argmax(y_oneHot, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) #saving variables saver = tf.train.Saver() #executing the graph and saving the model variables with tf.Session() as sess: #new session tf.global_variables_initializer().run() #Iteratively updating parameter batch by batch for step in range(training_epochs * train_size // batch_size): offset = (step * batch_size) % train_size batch_xs = x_train[offset:(offset + batch_size), :] batch_labels = y_train[offset:(offset + batch_size)] #run optimizer on batch err, _ = sess.run([cost, train_optimizer], feed_dict={X:batch_xs, Y:batch_labels}) if step % 1000 ==0: print(step, err) #print ongoing result #Print final learned parameters w_val = sess.run(W) print('w', w_val) b_val = sess.run(b) print('b', b_val) print('Accuracy', accuracy.eval(feed_dict={X:x_test, Y:y_test})) save_path = saver.save(sess, './movie_review_variables') print('Model saved in path {}'.format(save_path)) #creating csv file for kaggle submission with tf.Session() as sess: saver.restore(sess, './movie_review_variables') predictions = sess.run(y_model, feed_dict={X: test_data_features}) subm2 = pd.DataFrame(data={'id':test['id'],'sentiment':predictions}) subm2.to_csv('subm2nlp.csv', index=False, quoting=3) print("I am done predicting") INFO:tensorflow:Restoring parameters from ./movie_review_variables --------------------------------------------------------------------------- Exception Traceback (most recent call last) <ipython-input-85-fd74ed82109c> in <module>() 5 # print('Prections are {}'.format(predictions)) 6 ----> 7 subm2 = pd.DataFrame(data={'id':test['id'], 'sentiment':predictions}) 8 subm2.to_csv('subm2nlp.csv', index=False, quoting=3) 9 print("I am done predicting") Exception: Data must be 1-dimensional
1
1
0
1
0
0
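For the Kaggle submission question above, the "Data must be 1-dimensional" error comes from passing the (n, 1)-shaped prediction array into the DataFrame. A hedged sketch of the usual fix is to flatten the array and threshold the sigmoid outputs; the 0.5 cut-off is an assumption.
import pandas as pd

flat = predictions.ravel()                 # (n, 1) -> (n,)
labels = (flat > 0.5).astype(int)          # sigmoid scores -> 0/1 sentiment
subm2 = pd.DataFrame({'id': test['id'], 'sentiment': labels})
subm2.to_csv('subm2nlp.csv', index=False, quoting=3)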
I'm a Keras beginner, and am trying to build the simplest possible autoencoder. It consists of three layers: an input layer, an encoded representation layer, and an output layer. My data (training and validation images) are an ndarray where each image is 214x214x3 (pixels x pixels x RGB channels). I thought I could just use the input shape of the images in the Input layer, but somehow I keep encountering errors. I tried flattening the data, and that works just fine. I can of course just do that, and reshape the output, but I'm curious why this doesn't work. # Shape and size of single image input_shape = x_tr.shape[1:] # --> (214, 214, 3) input_size = x_tr[0].size # Size of encoded representation encoding_dim = 32 compression_factor = float(input_size / encoding_dim) # Build model autoencoder = Sequential() autoencoder.add(Dense(encoding_dim, input_shape=input_shape, activation='relu')) autoencoder.add(Dense(input_shape, activation='softmax')) input_img = Input(shape=(input_shape,)) encoder_layer = autoencoder.layers[0] encoder = Model(input_img, encoder_layer(input_img)) autoencoder.compile(optimizer='adadelta', loss='mean_squared_error') autoencoder.fit(x_tr, x_tr, epochs=50, batch_size=32, shuffle=True, verbose=1, validation_data=(x_va, x_va), callbacks=[TensorBoard(log_dir='/tmp/autoencoder2')]) I get this error: TypeError: unsupported operand type(s) for +: 'int' and 'tuple' I gather that it's not expecting the input shape to look like that, but am unsure of how to fix it to accept input in the shape of 214x214x3 rather than a vector of length 137388.
1
1
0
0
0
0
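For the autoencoder question above, Dense expects a flat feature dimension, so the (214, 214, 3) tuple cannot be used directly as the size of a Dense layer. A minimal sketch that keeps image-shaped input and output by flattening inside the model; layer sizes follow the question, and the sigmoid output activation is an assumption.
from keras.models import Sequential
from keras.layers import Dense, Flatten, Reshape

input_shape = (214, 214, 3)
flat_size = 214 * 214 * 3
encoding_dim = 32

autoencoder = Sequential([
    Flatten(input_shape=input_shape),          # (214, 214, 3) -> (137388,)
    Dense(encoding_dim, activation='relu'),    # encoded representation
    Dense(flat_size, activation='sigmoid'),    # reconstruct the flat image
    Reshape(input_shape),                      # back to (214, 214, 3)
])
autoencoder.compile(optimizer='adadelta', loss='mean_squared_error')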
I need to extract the names of institutes from the given data. Institute names will look similar (Anna University, Mashsa Institute of Techology, Banglore School of Engineering, Model Engineering College). There will be a lot of similar data, and I want to extract these names from text. How can I create a model to extract these names from the data (I need to extract them from resumes/CVs)? I tried adding a new NER label in spaCy, but even after training, the loss doesn't decrease and the predictions are wrong. That is why I want to make a new model just for this.
1
1
0
0
0
0
I want to make a tensor of 0.9 with a specific shape. In TensorFlow there is this command: tf.ones_like() So I pass it a tensor of the shape I want and I get my tensor of ones. I want to do the same but with another value like 0.9, so I want to specify the shape like in tf.ones_like but have my tensor filled with the number 0.9. How can I do it?
1
1
0
1
0
0
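Two equivalent one-liners for the question above, assuming x is the reference tensor whose shape should be copied.
import tensorflow as tf

t = tf.fill(tf.shape(x), 0.9)     # tensor of 0.9 with the same shape as x
t = 0.9 * tf.ones_like(x)         # same idea, reusing ones_like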
I'm doing dialect text classification and I have this code: from sklearn.naive_bayes import MultinomialNB from sklearn.feature_extraction.text import CountVectorizer vectorizerN = CountVectorizer(analyzer='char',ngram_range=(3,4)) XN = vectorizerN.fit_transform(X_train) vectorizerMX = CountVectorizer(vocabulary=a['vocabs']) MX = vectorizerMX.fit_transform(X_train) from sklearn.pipeline import FeatureUnion combined_features = FeatureUnion([('CountVectorizer', MX),('CountVect', XN)]) combined_features.transform(test_data) When I run this code I get this error: TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' I was following the code in this post: Merging CountVectorizer in Scikit-Learn feature extraction Also, how can I train and predict afterwards?
1
1
0
0
0
0
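For the FeatureUnion question above, the error comes from passing already-transformed matrices (MX, XN) where FeatureUnion expects transformer objects. A hedged sketch that passes the vectorizers themselves and then trains/predicts with MultinomialNB; y_train is assumed to hold the dialect labels.
from sklearn.pipeline import FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

combined_features = FeatureUnion([
    ('char_ngrams', CountVectorizer(analyzer='char', ngram_range=(3, 4))),
    ('vocab_counts', CountVectorizer(vocabulary=a['vocabs'])),
])

X_train_feats = combined_features.fit_transform(X_train)
X_test_feats = combined_features.transform(test_data)

clf = MultinomialNB().fit(X_train_feats, y_train)
predictions = clf.predict(X_test_feats)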
I'm doing dialect text classification and I'm using CountVectorizer with naive Bayes. There are too many features: I have collected 20k tweets across 4 dialects, with 5000 tweets per dialect, and the total number of features is 43k. I was thinking that might be why I'm overfitting, because the accuracy dropped a lot when I tested on new data. So how can I limit the number of features to avoid overfitting the data?
1
1
0
0
0
0
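A minimal sketch for the feature-count question above: CountVectorizer can cap the vocabulary and drop rare terms, which is one common way to shrink 43k features. The exact numbers are assumptions to tune with cross-validation, and tweets stands for the training texts.
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(max_features=5000,  # keep only the 5000 most frequent terms
                             min_df=2)           # drop terms seen in fewer than 2 tweets
X = vectorizer.fit_transform(tweets)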
I am looking for a way of creating a pandas DataFrame and then add it in an excel file using pandas from a list of dictionary. The first dictionary has 3 values (integer) and the second one has one value which correspond to a set of words. The key for the two dictionaries are the same but to be sure there is not error in the excel file I prefer to have them in the DataFrame. d1 = {'1': ['45', '89', '96'], '2': ['78956', '50000', '100000'], '3': ['0', '809', '656']} d2 = {'1': ['connaître', 'rien', 'trouver', 'être', 'emmerder', 'rien', 'suffire', 'mettre', 'multiprise'], '2': ['trouver', 'être', 'emmerder'], '3' : ['con', 'ri', 'trou', 'êt', 'emmer',]} I am getting error at each tentative and i am really block and I need a solution df = pd.read_csv(sys.argv[1], na_values=['no info', '.'], encoding='Cp1252', delimiter=';') df1 = pd.DataFrame(d1).T.reset_index() df1['value1_d2'] = '' # iterate over the dict and add the lists of words in the new column for k,v in d2.items(): df1.at[int(k) - 1, 'value1_d2'] = v #print(df1) df1.columns = ['id','value_1_Dict1','value_2_Dict1','value_3_Dict1',' value_2_Dict2'] cols = df1.columns.tolist() cols = cols[-1:] + cols[:-1] df1 = df1[cols] print(df1) df = pd.concat([df, df1], axis = 1) df.to_excel('exit.xlsx') I do not have an error but the filling of the dataframe start after the real columns like in the example and I have more then 2000 lines Expected output: I add it in an existing file : score freq **value1_d2 id value1 value2 value3 ** 0 0.5 2 **['connaître', 'rien', 'trouver'] 1 45 89 96 ** 1 0.8 5 ** ['trouver', 'être', 'emmerder'] 2 78956 5000 100000 ** 2 0.1 5 **['con', 'ri', 'trou', 'êt', 'emmer',] 3 0 809 65 ** When trying to add to excel file I have the following error, I want to start writing from the first column so that the key will be the same. Is there a way to solve it using pandas (I have to use pandas for this seminar. Thank you.
1
1
0
0
0
0
Can anyone shed some light on the difference between the neural pipeline used in the new native Python StanfordNLP package: https://stanfordnlp.github.io/stanfordnlp/ and the python wrapper to the Java coreNLP package https://stanfordnlp.github.io/CoreNLP/? Are these two different implementations? I saw that the StanfordNLP package has native neural implementations but also had a wrapper to the CoreNLP package and was wondering why you would need this wrapper if everything was migrated to python anyway?
1
1
0
0
0
0
I have a string(a Javadoc comment) that contains <code>...</code> tags. It looks something like this, <code>System.out</code>. @param project The project to display a description of. Must not be <code>null;</code>. I want to be able to remove comma(,), full stop(.) and semi-colon(;) between the <code>..</code> tags. It should look something like this: <code>Systemout</code>. @param project The project to display a description of. Must not be <code>null</code>. I have tried the following: from bs4 import BeautifulSoup var = '''Prints the description of a project (if there is one) to <code>System.out</code>. @param project The project to display a description of. Must not be <code>null;</code>.''' soup = BeautifulSoup(var, 'html.parser') for a in soup.find_all('code'): print (a.string) But this is extracting the text in between. I don't really know to remove the comma, full stop, and semicolon and append it back to the original string. Any help will be greatly appreciated! SOLUTION matches = re.sub('<code>(.*?)</code>', lambda m: "<code>{}</code>".format( m.group(1).replace(".","").replace(",","").replace(";","")), var, flags=re.DOTALL)
1
1
0
0
0
0
A list containing text strings (fulltexts of newspaper articles) cannot be successfully deduplicated. The only solution is to find the most common sentences, select list items containing these sentences, and then do the deduplication at the level of these sublists. After reading through the myriad of similar questions here, I still have no solution. Here are four different methods that I have tried: 1] x = list(dict.fromkeys(lst)) 2] x = set(lst) 3] from iteration_utilities import unique_everseen x = list(unique_everseen(lst)) 4] using pandas df = df.drop_duplicates(subset=['article_body'], keep='first') All these return the same amount of list items. However, when I check frequency distribution of the most common 'sentences' and search for one. I still find around 45 hits as this sentence appears in several texts, some of them being identical. when these texts are all lumped into one list, I can them use the x = list(dict.fromkeys(lst)). This results in only 9 list items. How is this possible? df = pd.read_json('UK data/2010-11.json') len(df) 13288 df = df.drop_duplicates(subset=['article_body'], keep='first') len(df) 6118 lst = df['article_body'].tolist() len(lst) 6118 # taking this solution as a reference point, here it returns 6118 at the level # of the whole list len(list(dict.fromkeys(lst))) 6118 from nltk.tokenize import sent_tokenize searchStr = 'Lines close at midnight.' found = [] for text in lst: sentences = sent_tokenize(text) for sentence in sentences: if sentence == searchStr: found.append(text) len(found) 45 # when the function is used only on a subset of the full-texts, it can suddenly # identify more duplicates len(list(dict.fromkeys(found))) 9 EDIT: Please check the full demonstration in jupyter notebook available here: https://colab.research.google.com/drive/1EF6PL8aduZIO--Ok0hGMzLWFIquz6F_L I would expect that using the very same function on the full list would result in removing ALL duplicates, but this is clearly not the case. Why cannot I remove the duplicates from the whole list? How can I assure that each list item is compared with all the others?
1
1
0
0
0
0
I'm trying to install 'polyglot' using the below command pip install polyglot But I'm getting the below error Command "python setup.py egg_info" failed with error code 1 in C:\Users\K~1.SHA\AppData\Local\Temp\pip-install-tcez0ptg\polyglot\ My python version is Python 3.6.4 Since I'm new to python I tried the below commands which I found online but they haven't helped python -m pip install --upgrade pip python -m pip install --upgrade setuptools pip install --upgrade pip setuptools wheel How can I install polyglot successfully? Any help on this is appreciated.
1
1
0
0
0
0
I want to train a simple sentiment classifier on the IMDB dataset using pretrained GLoVe vectors, an LSTM and final dense layer with sigmoid activation. The problem I have is that the obtained accuracy is relatively low: 78% . This is lower than the 82% accuracy when using a trainable embedding layer instead of GLoVe vectors. I think the main reason for this is because only 67.9% of words in the dataset are found in the GLoVe file (I am using the 6B corpus). I looked at some words which were not found in the GLoVe file and some examples are : grandmother's twin's Basically a lot of words that have a quote are not found in the GLoVe file. I wonder if the data needs to be preprocessed differently. Currently, the preprocessing is taken care by the function imdb.load_data(). I tried using the larger 42B words corpus, but that only resulted in 76.5% coverage. I wonder if the data ought to be tokenized differently to get a good coverage. The code is this: load_embeddings.py from numpy import asarray import time def load_embeddings(filename): start_time = time.time() embeddings_index = dict() f = open(filename, encoding = 'utf8') for line in f: values = line.split() word = values[0] embedding_vector = asarray(values[1:], dtype='float32') embeddings_index[word] = embedding_vector f.close() end_time = time.time() print('Loaded %s word vectors in %f seconds' % (len(embeddings_index), end_time- start_time)) return embeddings_index train.py from __future__ import print_function import numpy as np from keras.preprocessing import sequence from keras.models import Sequential from keras.layers import Dense, Embedding from keras.layers import LSTM from keras.datasets import imdb from load_embeddings import load_embeddings maxlen = 80 batch_size = 32 print('Loading data...') (x_train, y_train), (x_test, y_test) = imdb.load_data() print(len(x_train), 'train sequences') print(len(x_test), 'test sequences') print('Pad sequences (samples x time)') x_train = sequence.pad_sequences(x_train, maxlen=maxlen) x_test = sequence.pad_sequences(x_test, maxlen=maxlen) print('x_train shape:', x_train.shape) print('x_test shape:', x_test.shape) word_to_index = imdb.get_word_index() vocab_size = len(word_to_index) print('Vocab size : ', vocab_size) words_freq_list = [] for (k,v) in imdb.get_word_index().items(): words_freq_list.append((k,v)) sorted_list = sorted(words_freq_list, key=lambda x: x[1]) print("50 most common words: ") print(sorted_list[0:50]) # dimensionality of word embeddings EMBEDDING_DIM = 100 # Glove file GLOVE_FILENAME = 'data/glove.6B.100d.txt' # Word from this index are valid words. 
i.e 3 -> 'the' which is the # most frequent word INDEX_FROM = 3 word_to_index = {k:(v+INDEX_FROM-1) for k,v in imdb.get_word_index().items()} word_to_index["<PAD>"] = 0 word_to_index["<START>"] = 1 word_to_index["<UNK>"] = 2 embeddings_index = load_embeddings(GLOVE_FILENAME) # create a weight matrix for words in training docs embedding_matrix = np.zeros((vocab_size+INDEX_FROM, EMBEDDING_DIM)) # unknown words are mapped to zero vector embedding_matrix[0] = np.array(EMBEDDING_DIM*[0]) embedding_matrix[1] = np.array(EMBEDDING_DIM*[0]) embedding_matrix[2] = np.array(EMBEDDING_DIM*[0]) for word, i in word_to_index.items(): embedding_vector = embeddings_index.get(word) if embedding_vector is not None: embedding_matrix[i] = embedding_vector # uncomment below to see which words were not found # else : # print(word, ' not found in GLoVe file.') nonzero_elements = np.count_nonzero(np.count_nonzero(embedding_matrix, axis=1)) coverage = nonzero_elements / vocab_size print('Coverage = ',coverage) # Build and train model print('Build model...') model = Sequential() model.add(Embedding(vocab_size+INDEX_FROM, EMBEDDING_DIM, weights=[embedding_matrix], trainable=False, name= 'embedding')) model.add(LSTM(EMBEDDING_DIM, dropout=0.2, recurrent_dropout=0.2, name = 'lstm')) model.add(Dense(1, activation='sigmoid', name='out')) # try using different optimizers and different optimizer configs model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) print('Train...') model.fit(x_train, y_train, batch_size=batch_size, epochs=10, validation_data=(x_test, y_test)) score, acc = model.evaluate(x_test, y_test, batch_size=batch_size) print('Test score:', score) print('Test accuracy:', acc)
1
1
0
0
0
0
I am trying to do the following: reticulate::use_python(python = "/usr/bin/python3", required = TRUE) py_discover_config(): python: /usr/bin/python3 libpython: [NOT FOUND] pythonhome: /usr:/usr version: 3.7.1 (default, Oct 22 2018, 11:21:55) [GCC 8.2.0] numpy: /home/ssolun/.local/lib/python3.7/site-packages/numpy numpy_version: 1.16.3 NOTE: Python version was forced by RETICULATE_PYTHON reticulate::py_config(): Error in initialize_python(required_module, use_environment) : Python shared library not found, Python bindings not loaded. cleanNLP::cnlp_init_spacy(): Error: Python not available See reticulate::use_python() to set python path, then retry Please advise how to solve these errors? I am trying to init spacy for NLP analysis.
1
1
0
0
0
0
I need to cluster sentences according to common n-grams they contain. I am able to extract n-grams easily with nltk but I have no idea how to perform clustering based on n-gram overlap. That is why I couldn't write such a real code, first of all I am sorry for it. I wrote 6 simple sentences and expected output to illustrate the problem. import nltk Sentences= """I would like to eat pizza with her. She would like to eat pizza with olive. There are some sentences must be clustered. These sentences must be clustered according to common trigrams. The quick brown fox jumps over the lazy dog. Apples are red, bananas are yellow.""" sent_detector = nltk.data.load('tokenizers/punkt/'+'English'+'.pickle') sentence_tokens = sent_detector.tokenize(sentences.strip()) mytrigrams=[] for sentence in sentence_tokens: trigrams=ngrams(sentence.lower().split(), 3) mytrigrams.append(list(trigrams)) After this I have no idea (I am not even sure whether this part is okay.) how to cluster them according to common trigrams. I tried to do with itertools-combinations but I got lost, and I didn't know how to generate multiple clusters, since the number of clusters can not be known without comparing each sentence with each other. The expected output is given below, thanks in advance for any help. Cluster1: 'I would like to eat pizza with her.' 'She would like to eat pizza with olive.' Cluster2: 'There are some sentences must be clustered.' 'These sentences must be clustered according to common trigrams.' Sentences do not belong to any cluster: 'The quick brown fox jumps over the lazy dog.' 'Apples are red, bananas are yellow.' EDIT: I have tried with combinations one more time, but it didn't cluster at all, just returned the all sentence pairs. (obviously I did something wrong). from itertools import combinations new_dict = {k: v for k, v in zip(sentence_tokens, mytrigrams)} common=[] no_cluster=[] sentence_pairs=combinations(new_dict.keys(), 2) for keys, values in new_dict.items(): for values in sentence_pairs: sentence1= values[0] sentence2= values[1] #print(sentence1, sentence2) if len(set(sentence1) & set(sentence2))!=0: common.append((sentence1, sentence2)) else: no_cluster.append((sentence1, sentence2)) print(common) But even if this code worked it would not give the output I expect, as I don't know how to generate multiple clusters based on common n-grams
1
1
0
0
0
0
I have four strings: A = "eat apple" B = "eat apples" C = "eats apple" D = "eats apples" The four strings mean the same thing and differ only slightly in their construction. Is there some Python code that can detect that those four strings are the same or highly similar? Thanks.
1
1
0
0
0
0
(code output screenshot) How does the value for "good movie" come out to be 0.707107? According to me it should be: 1/1*ln(5/2) = 0.91629. from sklearn.feature_extraction.text import TfidfVectorizer import pandas as pd texts = [ "good movie", "not a good movie", "did not like", "i like it", "good one" ] # using default tokenizer in TfidfVectorizer tfidf = TfidfVectorizer(min_df=2, max_df=0.5, ngram_range=(1, 2)) features = tfidf.fit_transform(texts) pd.DataFrame( features.todense(), columns=tfidf.get_feature_names() )
1
1
0
1
0
0
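A short check relevant to the question above: by default TfidfVectorizer L2-normalizes each row (norm='l2') and smooths the idf, so the first document keeps two surviving features ('movie' and 'good movie') with equal weight, and each becomes 1/sqrt(2) ≈ 0.707107 after normalization rather than the raw idf value.
import numpy as np

row = np.asarray(features.todense())[0]
print(np.linalg.norm(row))          # 1.0 -> each row is L2-normalized
print(1 / np.sqrt(2))               # 0.7071... -> two equal-weight features in that row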
I want to extract all country and nationality mentions from text using nltk, I used POS tagging to extract all GPE labeled tokens but the results were not satisfying. abstract="Thyroid-associated orbitopathy (TO) is an autoimmune-mediated orbital inflammation that can lead to disfigurement and blindness. Multiple genetic loci have been associated with Graves' disease, but the genetic basis for TO is largely unknown. This study aimed to identify loci associated with TO in individuals with Graves' disease, using a genome-wide association scan (GWAS) for the first time to our knowledge in TO.Genome-wide association scan was performed on pooled DNA from an Australian Caucasian discovery cohort of 265 participants with Graves' disease and TO (cases) and 147 patients with Graves' disease without TO (controls). " sent = nltk.tokenize.wordpunct_tokenize(abstract) pos_tag = nltk.pos_tag(sent) nes = nltk.ne_chunk(pos_tag) places = [] for ne in nes: if type(ne) is nltk.tree.Tree: if (ne.label() == 'GPE'): places.append(u' '.join([i[0] for i in ne.leaves()])) if len(places) == 0: places.append("N/A") The results obtained are : ['Thyroid', 'Australian', 'Caucasian', 'Graves'] Some are nationalities but others are just nouns. So what am I doing wrong or is there another way to extract such info?
1
1
0
0
0
0
This was the question I got from an onsite interview with a tech firm, and one that I think ultimately killed my chances. You're given a sentence, and a dictionary that has words as keys and parts of speech as values. The goal is to write a function in which when you're given a sentence, change each word to its part of speech given in the dictionary in order. We can assume that all the stuffs in sentence are present as keys in dictionary. For instance, let's assume that we're given the following inputs: sentence='I am done; Look at that, cat!' dictionary={'!': 'sentinel', ',': 'sentinel', 'I': 'pronoun', 'am': 'verb', 'Look': 'verb', 'that': 'pronoun', 'at': 'preposition', ';': 'preposition', 'done': 'verb', ',': 'sentinel', 'cat': 'noun', '!': 'sentinel'} output='pronoun verb verb sentinel verb preposition pronoun sentinel noun sentinel' The tricky part was catching sentinels. If part of speech didn't have sentinels, this can be easily done. Is there an easy way of doing it? Any library?
1
1
0
0
0
0
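A hedged sketch for the interview question above: tokenize with a regex that keeps each punctuation mark as its own token, then map every token through the dictionary. This assumes every token (sentinels included) appears as a key, as the question states.
import re

def to_pos(sentence, dictionary):
    tokens = re.findall(r"\w+|[^\w\s]", sentence)   # words, plus each punctuation mark
    return ' '.join(dictionary[tok] for tok in tokens)

# each token (including ',' ';' '!') is replaced by its dictionary value, in order
print(to_pos('I am done; Look at that, cat!', dictionary))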
It's not a new question, references I found without any solution working for me first and second. I'm a newbie to PyTorch, facing AttributeError: 'Field' object has no attribute 'vocab' while creating batches of the text data in PyTorch using torchtext. Following up the book Deep Learning with PyTorch I wrote the same example as explained in the book. Here's the snippet: from torchtext import data from torchtext import datasets from torchtext.vocab import GloVe TEXT = data.Field(lower=True, batch_first=True, fix_length=20) LABEL = data.Field(sequential=False) train, test = datasets.IMDB.splits(TEXT, LABEL) print("train.fields:", train.fields) print() print(vars(train[0])) # prints the object TEXT.build_vocab(train, vectors=GloVe(name="6B", dim=300), max_size=10000, min_freq=10) # VOCABULARY # print(TEXT.vocab.freqs) # freq # print(TEXT.vocab.vectors) # vectors # print(TEXT.vocab.stoi) # Index train_iter, test_iter = data.BucketIterator.splits( (train, test), batch_size=128, device=-1, shuffle=True, repeat=False) # -1 for cpu, None for gpu # Not working (FROM BOOK) # batch = next(iter(train_iter)) # print(batch.text) # print() # print(batch.label) # This also not working (FROM Second solution) for i in train_iter: print (i.text) print (i.label) Here's the stacktrace: AttributeError Traceback (most recent call last) <ipython-input-33-433ec3a2ca3c> in <module>() 7 8 ----> 9 for i in train_iter: 10 print (i.text) 11 print (i.label) /anaconda3/lib/python3.6/site-packages/torchtext/data/iterator.py in __iter__(self) 155 else: 156 minibatch.sort(key=self.sort_key, reverse=True) --> 157 yield Batch(minibatch, self.dataset, self.device) 158 if not self.repeat: 159 return /anaconda3/lib/python3.6/site-packages/torchtext/data/batch.py in __init__(self, data, dataset, device) 32 if field is not None: 33 batch = [getattr(x, name) for x in data] ---> 34 setattr(self, name, field.process(batch, device=device)) 35 36 @classmethod /anaconda3/lib/python3.6/site-packages/torchtext/data/field.py in process(self, batch, device) 199 """ 200 padded = self.pad(batch) --> 201 tensor = self.numericalize(padded, device=device) 202 return tensor 203 /anaconda3/lib/python3.6/site-packages/torchtext/data/field.py in numericalize(self, arr, device) 300 arr = [[self.vocab.stoi[x] for x in ex] for ex in arr] 301 else: --> 302 arr = [self.vocab.stoi[x] for x in arr] 303 304 if self.postprocessing is not None: /anaconda3/lib/python3.6/site-packages/torchtext/data/field.py in <listcomp>(.0) 300 arr = [[self.vocab.stoi[x] for x in ex] for ex in arr] 301 else: --> 302 arr = [self.vocab.stoi[x] for x in arr] 303 304 if self.postprocessing is not None: AttributeError: 'Field' object has no attribute 'vocab' If not using BucketIterator, what else I can use to get a similar output?
1
1
0
0
0
0
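For the torchtext question above, the usual cause of "'Field' object has no attribute 'vocab'" is that only TEXT got a vocabulary; the LABEL field needs one as well before batching. A minimal sketch of the missing step, reusing the question's own objects:
TEXT.build_vocab(train, vectors=GloVe(name="6B", dim=300), max_size=10000, min_freq=10)
LABEL.build_vocab(train)   # without this, iterating the BucketIterator fails on the label field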
I'm doing dialect text classification. The problem is that some tweets can be classified as both dialect A and dialect B; how can I do that? I want to do it and then automatically calculate the accuracy, I don't want to do it manually. When I don't classify them as both A and B, I get many misclassified texts. In the training data, though, they're not labelled as both dialect A and B, but separately.
1
1
0
0
0
0
I have a dataset which consists of 50 subfolders and each of these subfolders has 20-30 files without extension. What I wanted to do is tokenizing the texts in the files for each subfolders, and write it to file with subfolder's name. For example; Let's say subfolder1 has 25 files and I want to tokenize those 25 files together, and write it to a file named "subfolder1". And I want to do it for all the subfolders in the main folder. I have tried some pieces of this code but it gives PermissionError since it can not read a folder. mainfolder="path\\to\\mainfolder" def ls(path): return [os.path.join(path, item) for item in os.listdir(path)] def load_file_sents(path): return [sent.lower() for sent in tokenize.sent_tokenize(open(path).read())] def load_collection_sents(path): sents = [] for f in ls(path): sents.extend(load_file_sents(f)) return sents def get_sentences(path): """ loads sentences from the given path (collection or file) """ sents = [] try: # treat as a single file open(path).read() sents = load_file_sents(path) except IOError: # it's a directory! sents = load_collection_sents(path) return sents def get_toks(path): return [tokenize.word_tokenize(sent) for sent in get_sentences(path)] get_toks(mainfolder) This is the error it gives: PermissionError Traceback (most recent call last) <ipython-input-52-a6f316499b2c> in get_sentences(path) 37 # treat as a single file ---> 38 open(path).read() 39 sents = load_file_sents(path) PermissionError: [Errno 13] Permission denied: I have tried merging the first two functions into one, and make sure it will read files, but this time it just returned tokens of the first file of the first subfolder. If you know how to solve this issue or a better way to do it, your help would be greatly appreciated! Thanks.
1
1
0
0
0
0
I'm following this tutorial on Matrix Factorization for Movie Recommendations in Python using Singular Value Decomposition (SVD): here Using SVD, a dataset is approximated using SVD into three components: M ≈ U ⋅ S ⋅ Vt So you go from left (M) to the three components and back again, now you can use approx. M as a recommendation matrix. Now, i want to use train/test validation sets on this matrix, because you need to find the optimal k (number) approximation for M. How does one apply a separate test set on a trained model to get the predictions for the unseen test set? What is the math / algorithm for this? Thanks
1
1
0
1
0
0
Let's suppose that I have a document like that: document = ["This is a document which has to be splitted OK/Right?"] and I would like to split this document (for start) wherever I encounter ' ' or '/'. So the document above should be transformed to the following one: document = ["This is a document", "which has to be splitted", "OK", "Right?"] How can I do this? Keep in mind that there may be other special characters etc in the text and I do not want to remove them for now.
1
1
0
0
0
0
From this post I found how to remove everything from a text than spaces and alphanumeric: Python: Strip everything but spaces and alphanumeric. In this way: re.sub(r'([^\s\w]|_)+', '', document) I wanted basically to remove all the special characters. However, now I want to do the same (i.e. to remove all the special characters) but without removing the following special characters: / How do I do this?
1
1
0
0
0
0
I do the following: re.sub(r'[^ A-Za-z0-9/]+', '', document) to remove every character which is not alphanumeric, space, newline, or forward slash. So I basically I want to remove all special characters except for the newline and the forward slash. However, I do not want to remove the accented letters which various languages have such as in French, German etc. But if I run the code above then for example the word Motörhead becomes Motrhead and I do not want to do this. So how do I run the code above but without removing the accented letters? UPDATE: @MattM below has suggested a solution which does work for languages such as English, French, German etc but it certainly does not work for languages such as Polish where all the accented letters were still removed.
1
1
0
0
0
0
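A sketch for the accented-letters question above: in Python 3, \w is Unicode-aware, so building the character class from \w and \s (which covers newlines) plus the forward slash keeps letters such as ö or ł while still stripping other special characters. The underscore handling is an assumption carried over from the earlier pattern.
import re

clean = re.sub(r"[^\w\s/]|_", "", document)   # keeps accented letters, spaces, newlines and '/'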
I am learning Natural Language Processing using NLTK. I came across the code using PunktSentenceTokenizer whose actual use I cannot understand in the given code. The code is given : import nltk from nltk.corpus import state_union from nltk.tokenize import PunktSentenceTokenizer train_text = state_union.raw("2005-GWBush.txt") sample_text = state_union.raw("2006-GWBush.txt") custom_sent_tokenizer = PunktSentenceTokenizer(train_text) #A tokenized = custom_sent_tokenizer.tokenize(sample_text) #B def process_content(): try: for i in tokenized[:5]: words = nltk.word_tokenize(i) tagged = nltk.pos_tag(words) print(tagged) except Exception as e: print(str(e)) process_content() So, why do we use PunktSentenceTokenizer. And what is going on in the line marked A and B. I mean there is a training text and the other a sample text, but what is the need for two data sets to get the Part of Speech tagging. Line marked as A and B is which I am not able to understand. PS : I did try to look in the NLTK book but could not understand what is the real use of PunktSentenceTokenizer
1
1
0
0
0
0
I want to extract features from a pre-trained GloVe embedding, but I get a KeyError for certain words. Here is the list of word tokens. words1=['nuclear','described', 'according', 'called','physics', 'account','interesting','holes','theoretical','like','space','radiation','property','impulsed','darkfield'] I get a KeyError for the words 'impulsed' and 'darkfield', probably because these are unseen words. How can I avoid this error? Here is my full code: gloveFile = "glove.6B.50d.txt" import numpy as np def loadGloveModel(gloveFile): print ("Loading Glove Model") with open(gloveFile, encoding="utf8" ) as f: content = f.readlines() model = {} for line in content: splitLine = line.split() word = splitLine[0] embedding = np.array([float(val) for val in splitLine[1:]]) model[word] = embedding print ("Done.",len(model)," words loaded!") return model model = loadGloveModel(gloveFile) words1=['nuclear','described', 'according', 'called','physics', 'account','interesting','holes','theoretical','like','space','radiation','property','impulsed','darkfield'] import numpy as np vector_2 = np.mean([model[word] for word in words1],axis=0) ## this line raises the KeyError (for 'impulsed') Is there any way to skip these unseen words?
1
1
0
0
0
0
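A minimal sketch for the KeyError question above: look up only the words that exist in the loaded GloVe dictionary, and fall back to a zero vector when none of them are found (50 matches the glove.6B.50d dimensionality used in the question).
import numpy as np

known = [model[word] for word in words1 if word in model]   # skip unseen words
vector_2 = np.mean(known, axis=0) if known else np.zeros(50)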
I'm trying to write my own chatbot with the RASA framework. Right now I'm just playing around with it and I have the following piece of code for training purposes. from rasa.nlu.training_data import load_data from rasa.nlu.config import RasaNLUModelConfig from rasa.nlu.model import Trainer from rasa.nlu import config training_data = load_data("./data/nlu.md") trainer = Trainer(config.load("config.yml")) interpreter = trainer.train(training_data) model_directory = trainer.persist("./models/nlu",fixed_model_name="current") Now, I read that if I wanted to test it I should do something like this. from rasa.nlu.evaluate import run_evaluation run_evaluation("nlu.md", model_directory) But this code is not available anymore in rasa.nlu.evaluate nor in rasa.nlu.test! What's the way, then, of testing a RASA model?
1
1
0
0
0
0
I want to tokenize text with gensim.utils.tokenize(). And I want to add some phrases that would be recognized as single tokens, for example: 'New York', 'Long Island'. Is it possible with gensim? If not, what other libraries is it possible to use?
1
1
0
0
0
0
I am using Spacy for text tokenization and getting stuck with it: import spacy nlp = spacy.load("en_core_web_sm") mytext = "This is some sentence that spacy will not appreciate" doc = nlp(mytext) for token in doc: print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_, token.shape_, token.is_alpha, token.is_stop) returns something that seems to me to say that tokenisation was succesful: This this DET DT nsubj Xxxx True False is be VERB VBZ ROOT xx True True some some DET DT det xxxx True True sentence sentence NOUN NN attr xxxx True False that that ADP IN mark xxxx True True spacy spacy NOUN NN nsubj xxxx True False will will VERB MD aux xxxx True True not not ADV RB neg xxx True True appreciate appreciate VERB VB ccomp xxxx True False but on the other hand [token.text for token in doc[2].lefts] returns an empty list. Is there a bug in lefts/rights? Beginner at natural language processing, hope I am not falling into a conceptual trap. Using Spacy v'2.0.4'.
1
1
0
0
0
0
I'm trying to analyze some data from app reviews. I want to use nltk's FreqDist to see the most frequently occurring phrases in a file. It can be a single token or key phrases. I don't want to tokenize the data because that would give me most frequent tokens only. But right now, the FreqDist function is processing each review as one string, and is not extracting the words in each review. df = pd.read_csv('Positive.csv') def pre_process(text): translator = str.maketrans("", "", string.punctuation) text = text.lower().strip().replace(" ", " ").replace("’", "").translate(translator) return text df['Description'] = df['Description'].map(pre_process) df = df[df['Description'] != ''] word_dist = nltk.FreqDist(df['Description']) ('Description' is the body/message of the reviews.) For example, I want to get something like Most Frequent terms: "I like", "useful", "very good app" But instead I'm getting Most Frequent terms: "I really enjoy this app because bablabla" (entire review) And that's why when I'm plotting the FreqDist I get this:
1
1
0
1
0
0
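A sketch for the frequent-phrases question above: tokenize each review first and count n-grams, so FreqDist sees phrases instead of whole reviews. Bigrams are used here as an example; unigrams or trigrams work the same way.
from nltk import word_tokenize, ngrams, FreqDist

tokens = [tok for review in df['Description'] for tok in word_tokenize(review)]
phrase_dist = FreqDist(ngrams(tokens, 2))     # change 2 to 1 or 3 for other phrase lengths
print(phrase_dist.most_common(20))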
I used pytesseract to identify text from an image: pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe' Then I used the code below to extract the text: textImg = pytesseract.image_to_string(Image.open(imgLoc+"/"+imgName)) print(textImg) text_file = open(imgLoc+"/"+"oriText.txt", "w") text_file.write(textImg) text_file.close() (My input image and an image of my output text file were attached.) Is there any way to identify the text from the image more clearly?
1
1
0
0
0
0
I'm using NLTK as an interface for the Stanford NER Tagger. My question is: are there any options to get the NER result in IOB format using NLTK? I've read this question but it's for Java users. NLTK version: 3.4 Java version: jdk1.8.0_211/bin Stanford NER model: english.conll.4class.distsim.crf.ser.gz Input: My name is Donald Trumph Expected output: My/O name/O is/O Donald/B-PERSON Trumph/I-PERSON
1
1
0
0
0
0
I am trying to create a simple model to predict the next word in a sentence. I have a big .txt file that contains sentences seperated by ' '. I also have a vocabulary file which lists every unique word in my .txt file and a unique ID. I used the vocabulary file to convert the words in my corpus to their corresponding IDs. Now I want to make a simple model which reads the IDs from txt file and find the word pairs and how many times this said word pairs were seen in the corpus. I have managed to write to code below: tuples = [[]] #array for word tuples to be stored in data = [] #array for tuple frequencies to be stored in data.append(0) #tuples array starts with an empty element at the beginning for some reason. # Adding zero to the beginning of the frequency array levels the indexes of the two arrays with open("markovData.txt") as f: contentData = f.readlines() contentData = [x.strip() for x in contentData] lineIndex = 0 for line in contentData: tmpArray = line.split() #split line to array of words tupleIndex = 0 tmpArrayIndex = 0 for tmpArrayIndex in range(len(tmpArray) - 1): #do this for every word except the last one since the last word has no word after it. if [tmpArray[tmpArrayIndex], tmpArray[tmpArrayIndex + 1]] in tuples: #if the word pair is was seen before data[tuples.index([tmpArray[tmpArrayIndex], tmpArray[tmpArrayIndex + 1]])] += 1 #increment the frequency of said pair else: tuples.append([tmpArray[tmpArrayIndex], tmpArray[tmpArrayIndex + 1]]) #if the word pair is never seen before data.append(1) #add the pair to list and set frequency to 1. #print every 1000th line to check the progress lineIndex += 1 if ((lineIndex % 1000) == 0): print(lineIndex) with open("markovWindowSize1.txt", 'a', encoding="utf8") as markovWindowSize1File: #write tuples to txt file for tuple in tuples: if (len(tuple) > 0): # if tuple is not epmty markovWindowSize1File.write(str(element[0]) + "," + str(element[1]) + " ") markovWindowSize1File.write(" ") markovWindowSize1File.write(" ") #blank spaces between two data #write frequencies of the tuples to txt file for element in data: markovWindowSize1File.write(str(element) + " ") markovWindowSize1File.write(" ") markovWindowSize1File.write(" ") This code seems to be working well for the first couple thousands of lines. Then things start to get slower because the tuple list keeps getting bigger and I have to search the whole tuple list to check if the next word pair was seen before or not. I managed to get the data of 50k lines in 30 minutes but I have much bigger corpuses with millions of lines. Is there a way to store and search for the word pairs in a more efficient way? Matrices would probably work a lot faster but my unique word count is about 300.000 words. Which means I have to create a 300k*300k matrix with integers as data type. Even after taking advantage of symmetric matrices, it would require a lot more memory than what I have. I tried using memmap from numpy to store the matrix in disk rather than memory but it required about 500 GB free disk space. Then I studied the sparse matrices and found out that I can just store the non-zero values and their corresponding row and column numbers. Which is what I did in my code. Right now, this model works but it is very bad at guessing the next word correctly ( about 8% success rate). I need to train with bigger corpuses to get better results. What can I do to make this word pair finding code more efficient? Thanks. 
Edit: Thanks to everyone answered, I am now able to process my corpus of ~500k lines in about 15 seconds. I am adding the final version of the code below for people with similiar problems: import numpy as np import time start = time.time() myDict = {} # empty dict with open("markovData.txt") as f: contentData = f.readlines() contentData = [x.strip() for x in contentData] lineIndex = 0 for line in contentData: tmpArray = line.split() #split line to array of words tmpArrayIndex = 0 for tmpArrayIndex in range(len(tmpArray) - 1): #do this for every word except the last one since the last word has no word after it. if (tmpArray[tmpArrayIndex], tmpArray[tmpArrayIndex + 1]) in myDict: #if the word pair is was seen before myDict[tmpArray[tmpArrayIndex], tmpArray[tmpArrayIndex + 1]] += 1 #increment the frequency of said pair else: myDict[tmpArray[tmpArrayIndex], tmpArray[tmpArrayIndex + 1]] = 1 #if the word pair is never seen before #add the pair to list and set frequency to 1. #print every 1000th line to check the progress lineIndex += 1 if ((lineIndex % 1000) == 0): print(lineIndex) end = time.time() print(end - start) keyText= "" valueText = "" for key1,key2 in myDict: keyText += (str(key1) + "," + str(key2) + " ") valueText += (str(myDict[key1,key2]) + " ") with open("markovPairs.txt", 'a', encoding="utf8") as markovPairsFile: markovPairsFile.write(keyText) with open("markovFrequency.txt", 'a', encoding="utf8") as markovFrequencyFile: markovFrequencyFile.write(valueText)
1
1
0
0
0
0
I installed spacy with pip and wanted to load spacy. The following python code with spacy: import spacy nlp = spacy.load('de',disable=['parser', 'tagger','ner']) nlp.max_length = 1198623 Unfortunately, the code is throwing the following error: OSError: [E050] Can't find model 'de'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.
1
1
0
0
0
0
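For the "Can't find model 'de'" question above, the shortcut 'de' must either be linked or replaced by the full package name; a sketch of the usual fix is to install the German model and load it by its package name.
# install once: python -m spacy download de_core_news_sm
import spacy

nlp = spacy.load('de_core_news_sm', disable=['parser', 'tagger', 'ner'])
nlp.max_length = 1198623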
The dataset contains different items row-wise, and the columns contain the recorded samples, where half belong to the positive class and the other half to the negative class. Now, I want to create and train a model to classify an unseen item sample as positive or negative. Question: How do I handle (use) such a dataset? Also, any recommendation for a model, given that the number of rows is more than 50k and there are 12 positive and 12 negative columns? From this data, a model is to be created that can classify x (or y or z) as positive or negative based on the value provided. For example, if the value provided for x is 12, then the model evaluates x as positive.
1
1
0
1
0
0
I realize that this is slightly outside the realm of what sort of questions are normally asked here, so please forgive that. I have been tasked with an open ended technical screening for a job as a data scientist. This is my first job that has asked for something like this, so I want to make sure that I am submitting really good work. I was given a dataset and asked to identify the problem and how to use machine learning to solve it, give stats on the target feature, pre-process the data data, model the data, and interpret the results. I am looking for feedback about if I am missing anything huge in my results. High level feedback is fine. Hopefully some of you are data scientists and have either had to complete a technical screening like this or have had to review one and can offer some valuable feedback to an up-and-coming data scientist. Thank you! Github Link to Project
1
1
0
1
0
0
After I trained my model for the toxic challenge at Keras the accuracy of the prediction is bad. I'm not sure if I'm doing something wrong, but the accuracy during the training period was pretty good ~0.98. How I trained import sys, os, re, csv, codecs, numpy as np, pandas as pd import matplotlib.pyplot as plt from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation from keras.layers import Bidirectional, GlobalMaxPool1D from keras.models import Model from keras import initializers, regularizers, constraints, optimizers, layers train = pd.read_csv('train.csv') list_classes = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"] y = train[list_classes].values list_sentences_train = train["comment_text"] max_features = 20000 tokenizer = Tokenizer(num_words=max_features) tokenizer.fit_on_texts(list(list_sentences_train)) list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train) maxlen = 200 X_t = pad_sequences(list_tokenized_train, maxlen=maxlen) inp = Input(shape=(maxlen, )) embed_size = 128 x = Embedding(max_features, embed_size)(inp) x = LSTM(60, return_sequences=True,name='lstm_layer')(x) x = GlobalMaxPool1D()(x) x = Dropout(0.1)(x) x = Dense(50, activation="relu")(x) x = Dropout(0.1)(x) x = Dense(6, activation="sigmoid")(x) model = Model(inputs=inp, outputs=x) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) batch_size = 32 epochs = 2 print(X_t[0]) model.fit(X_t,y, batch_size=batch_size, epochs=epochs, validation_split=0.1) model.save("m.hdf5") This is how I predict model = load_model('m.hdf5') list_sentences_train = np.array(["I love you Stackoverflow"]) max_features = 20000 tokenizer = Tokenizer(num_words=max_features) tokenizer.fit_on_texts(list(list_sentences_train)) list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train) maxlen = 200 X_t = pad_sequences(list_tokenized_train, maxlen=maxlen) print(X_t) print(model.predict(X_t)) Output [[ 1.97086316e-02 9.36032447e-05 3.93966911e-03 5.16672269e-04 3.67353857e-03 1.28102733e-03]]
1
1
0
1
0
0
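For the toxic-comment question above, the predictions look off because a brand-new Tokenizer is fitted on the single test sentence, so its word indices no longer match the ones used in training. A hedged sketch of the usual remedy is to persist the training tokenizer and reuse it at prediction time; the pickle filename is an assumption.
import pickle
from keras.preprocessing.sequence import pad_sequences

# right after fitting the tokenizer during training
with open('tokenizer.pkl', 'wb') as f:
    pickle.dump(tokenizer, f)

# at prediction time: reuse the same tokenizer instead of fitting a new one
with open('tokenizer.pkl', 'rb') as f:
    tokenizer = pickle.load(f)

seqs = tokenizer.texts_to_sequences(["I love you Stackoverflow"])
X_t = pad_sequences(seqs, maxlen=200)
print(model.predict(X_t))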
I am stuck with a basic thing but I could not figure out how to make it work. My apologies if it is something super basic. It is just that I am very new to Spacy and do not know how to do this. Could not find any resource on the internet as well. I have a bunch of sentences like so a = "<sos> Hello There! <eos>" I am using this following lines of code to tokenize it using Spacy import spacy nlp = spacy.load('en_core_web_sm') for token in nlp(a): print(token.text) What it prints is something like this < sos > Hello There ! < eos > As you can see, it parsed the <sos> and <eos> metatags. How can I avoid that? The output I would like to see is something like the following <sos> Hello There ! <eos> I could not figure out how to achieve this. Any help will be great. Thanks in advance
1
1
0
0
0
0
I have created a model with input_shape (28,28,3), and I have a numpy array of data of shape (28500, 784) that I need to fit into the model. Can anyone help me fit the training data into my model? model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28,28,3))) x_train.shape output = (28500, 784)
1
1
0
0
0
0
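For the Conv2D question above, note that 784 = 28 * 28 * 1, so the flat data cannot become (28, 28, 3) without inventing channels. A minimal sketch, continuing the question's own snippet, is to reshape to a single channel and declare the model's input accordingly.
x_train = x_train.reshape(-1, 28, 28, 1)   # 784 = 28 * 28 * 1, so one channel

model.add(Conv2D(32, kernel_size=(3, 3), activation='relu',
                 input_shape=(28, 28, 1)))  # match the single-channel data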
I'm trying to cluster some descriptions using LSI. As the dataset that I have is too long, I'm clustering based on the vectors obtained from the models instead of using the similarity matrix, which requires too much memory, and if I pick a sample, the matrix generated doesn't correspond to a square (this precludes the use of MDS). However, after running the model and looking for the vectors I'm getting different vector's lengths in the descriptions. Most of them have a length of 300 (the num_topics argument in the model), but some few, with the same description, present a length of 299. Why is this happening? Is there a way to correct it? dictionary = gensim.corpora.Dictionary(totalvocab_lemmatized) dictionary.compactify() corpus = [dictionary.doc2bow(text) for text in totalvocab_lemmatized] ###tfidf model tfidf = gensim.models.TfidfModel(corpus, normalize = True) corpus_tfidf = tfidf[corpus] ###LSI model lsi = gensim.models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=300) vectors =[] for n in lemmatized[:100]: vec_bow = dictionary.doc2bow(n) vec_lsi = lsi[vec_bow] print(len(vec_lsi))
1
1
0
0
0
0
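For the LSI question above, lsi[vec_bow] returns a sparse list of (topic, weight) pairs and silently drops topics whose weight is (near) zero, which is why a few documents come back with 299 entries. A sketch of the usual fix is to densify each vector to the full topic count:
from gensim import matutils

dense_vec = matutils.sparse2full(vec_lsi, lsi.num_topics)   # always length 300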
I have a list of strings like below. I would like to see similarity between list1 and list2 using Doc2Vec. list1 = [['i','love','machine','learning','its','awesome'],['i', 'love', 'coding', 'in', 'python'],['i', 'love', 'building', 'chatbots']] list2 = ['i', 'love', 'chatbots']
1
1
0
0
0
0
I need my function to return True if the first word of an input is a verb. I tried this but it did not work (i.e. it didn't return anything even though the first word was a verb). Can someone show me what I'm doing wrong, and an example of the correct way to do this? Thank you! def All(): what_person_said = input() what_person_said_wt = nltk.word_tokenize(what_person_said) result = nltk.pos_tag(what_person_said_wt[0]) if result == 'VB': print ("First word is a verb") return True
1
1
0
0
0
0
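A sketch of a working version of the function above: pos_tag must receive a list of tokens (passing a single string tags each character), and it returns (word, tag) pairs, so the tag of the first pair is what should be compared; startswith('VB') also catches VBD, VBG, and the other verb tags.
import nltk

def first_word_is_verb(sentence):
    tokens = nltk.word_tokenize(sentence)
    if not tokens:
        return False
    first_tag = nltk.pos_tag(tokens)[0][1]   # tag of the first word
    if first_tag.startswith('VB'):
        print("First word is a verb")
        return True
    return False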