Dataset columns: text (string, length 0 to 27.6k), python (int64, 0–1), DeepLearning or NLP (int64, 0–1), Other (int64, 0–1), Machine Learning (int64, 0–1), Mathematics (int64, 0–1), Trash (int64, 0–1). Each record below is a question text followed by its six label values in that order.
I am setting up NLP preprocessing using a pretrained FastText model to query and save word vectors. I ran into FileNotFoundError: [Errno 2] No such file or directory: 'fasttext': 'fasttext' and am unable to resolve it at this point. This is for an NLP clinical text similarity project that I am working on. I double-checked to make sure all the files and folders are present in the directory. I also want to note that I used both FloydHub and Google Colab to make sure it wasn't an environment issue. I went through the process twice and ended up with the same error. A second set of eyes can definitely help. The code to replicate the command fasttext print-vectors model.bin < words.txt >> vectors.vec is below: with open(VOCAB_FILE) as f_vocab: with open(OUTPUT_FILE, 'a') as f_output: subprocess.run( [FASTTEXT_EXECUTABLE, 'print-word-vectors', PRETRAINED_MODEL_FILE], stdin=f_vocab, stdout=f_output) The traceback I am getting is below: FileNotFoundError Traceback (most recent call last) <ipython-input-150-7b469ee34f75> in <module>() 4 [FASTTEXT_EXECUTABLE, 'print-word-vectors', PRETRAINED_MODEL_FILE], 5 stdin=f_vocab, ----> 6 stdout=f_output) /usr/local/lib/python3.6/subprocess.py in run(input, timeout, check, *popenargs, **kwargs) 401 kwargs['stdin'] = PIPE 402 --> 403 with Popen(*popenargs, **kwargs) as process: 404 try: 405 stdout, stderr = process.communicate(input, timeout=timeout) /usr/local/lib/python3.6/subprocess.py in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, encoding, errors) 707 c2pread, c2pwrite, 708 errread, errwrite, --> 709 restore_signals, start_new_session) 710 except: 711 # Cleanup if the child failed starting. /usr/local/lib/python3.6/subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, restore_signals, start_new_session) 1342 if errno_num == errno.ENOENT: 1343 err_msg += ': ' + repr(err_filename) -> 1344 raise child_exception_type(errno_num, err_msg, err_filename) 1345 raise child_exception_type(err_msg) 1346 FileNotFoundError: [Errno 2] No such file or directory: 'fasttext': 'fasttext' The expected outcome is to be able to query and save FastText vectors. The code snippet above was taken from a GitHub repo and was used on Kaggle's Quora Question Pairs.
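The error comes from subprocess rather than from the model or vocabulary files: Popen raises ENOENT when the executable named in the first list element cannot be found on PATH. A minimal check along these lines (reusing the same FASTTEXT_EXECUTABLE, VOCAB_FILE, OUTPUT_FILE and PRETRAINED_MODEL_FILE variables from the question) may help confirm whether the fasttext binary is actually visible to the notebook; the fallback path is only a hypothetical example.

```python
import shutil
import subprocess

FASTTEXT_EXECUTABLE = "fasttext"  # assumption: same value used in the failing call

# shutil.which returns None when the name is not an executable on PATH
resolved = shutil.which(FASTTEXT_EXECUTABLE)
print("fasttext resolves to:", resolved)

if resolved is None:
    # Fall back to an absolute path to a compiled binary, e.g. after
    # `git clone https://github.com/facebookresearch/fastText && cd fastText && make`
    FASTTEXT_EXECUTABLE = "/content/fastText/fasttext"  # hypothetical location

with open(VOCAB_FILE) as f_vocab, open(OUTPUT_FILE, "a") as f_output:
    subprocess.run(
        [FASTTEXT_EXECUTABLE, "print-word-vectors", PRETRAINED_MODEL_FILE],
        stdin=f_vocab,
        stdout=f_output,
    )
```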
1
1
0
0
0
0
The code below is an example training loop for spaCy's named entity recognition (NER). for itn in range(100): random.shuffle(train_data) for raw_text, entity_offsets in train_data: doc = nlp.make_doc(raw_text) gold = GoldParse(doc, entities=entity_offsets) nlp.update([doc], [gold], drop=0.5, sgd=optimizer) nlp.to_disk("/model") According to the spaCy documentation, drop is the dropout rate. Can somebody explain in detail what it means?
1
1
0
0
0
0
I want to read a text file and find out how many times each word is repeated per line. This is my text file: خواب خودرو چگونه محاسبه می گردد؟ برای دریافت آن چه باید كرد؟ مهلت زمانی تامین قطعه پس از درخواست مشتری چند روز است؟ آیا در مراجعه مجدد برای ایرادی كه پس از تعمیرات رفع نشده است باید هزینه ای پرداخت گردد؟ چرا؟ چرا توزیع قطعات در نمایندگی ها مختلف شهر متفاوت است؟ I want to produce output like this: line# word#1 word#2 word#3 ... 1 2 0 1 2 0 0 2 . . . I want to write a function to do this; I can't get the CountVectorizer function to work for Persian text.
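Whitespace tokenization plus collections.Counter is language-agnostic, so a per-line word count can be built without CountVectorizer. A minimal sketch, assuming the file is UTF-8 encoded and that words are separated by spaces (the file name is hypothetical):

```python
from collections import Counter

def per_line_word_counts(path):
    with open(path, encoding="utf-8") as f:
        lines = [line.split() for line in f if line.strip()]
    # The full vocabulary defines the columns of the output table
    vocab = sorted({w for words in lines for w in words})
    rows = []
    for words in lines:
        counts = Counter(words)
        rows.append([counts.get(w, 0) for w in vocab])
    return vocab, rows

vocab, rows = per_line_word_counts("questions.txt")  # hypothetical file name
print(vocab)
for i, row in enumerate(rows, start=1):
    print(i, row)
```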
1
1
0
0
0
0
I am trying to understand, how to use BERT for QnA and found a tutorial on how to start on PyTorch (here). Now I would like to use these snippets to get started, but i do not understand how to project the output back on the example text. text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" (...) # Predict the start and end positions logits with torch.no_grad(): start_logits, end_logits = questionAnswering_model(tokens_tensor, segments_tensors) # Or get the total loss start_positions, end_positions = torch.tensor([12]), torch.tensor([14]) multiple_choice_loss = questionAnswering_model( tokens_tensor, segments_tensors, start_positions=start_positions, end_positions=end_positions) start_logits (shape: [1, 16]):tensor([[ 0.0196, 0.1578, 0.0848, 0.1333, -0.4113, -0.0241, -0.1060, -0.3649, 0.0955, -0.4644, -0.1548, 0.0967, -0.0659, 0.1055, -0.1488, -0.3649]]) end_logits (shape: [1, 16]):tensor([[ 0.1828, -0.2691, -0.0594, -0.1618, 0.0441, -0.2574, -0.2883, 0.2526, -0.0551, -0.0051, -0.1572, -0.1670, -0.1219, -0.1831, -0.4463, 0.2526]]) If my assumption is correct start_logits and end_logits need to be projected back on text, but how do i compute this? Additionally do you have any resources/guides/tutorials you could recommend to continue further into QnA (except google-research/bert github and the paper for bert)? Thank you in advance.
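Assuming tokens_tensor was built by tokenizing text with the BERT tokenizer from the same tutorial, the start and end logits index into that token sequence, so the usual way to project them back is to take the argmax of each and join the tokens in between. A rough sketch; the tokenizer variable is assumed to be the pytorch-pretrained-bert BertTokenizer used earlier in the tutorial:

```python
import torch

tokens = tokenizer.tokenize(text)  # same `text` with [CLS]/[SEP] markers as above

start_index = torch.argmax(start_logits, dim=1).item()
end_index = torch.argmax(end_logits, dim=1).item()

# The span between the two indices (inclusive) is the predicted answer
answer_tokens = tokens[start_index:end_index + 1]
answer = " ".join(answer_tokens).replace(" ##", "")  # merge WordPiece sub-tokens
print(answer)
```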
1
1
0
0
0
0
I am creating a camera application using OpenCV and pyautogui, but the recog() function never gets called. from utils import CFEVideoConf, image_resize def recog(): cap = cv2.VideoCapture(0) save_path = 'saved-media/video.avi' frames_per_seconds = 24.0 config = CFEVideoConf(cap, filepath=save_path, res='720p') out = cv2.VideoWriter(save_path, config.video_type, frames_per_seconds, config.dims) while (True): # Capture frame-by-frame ret, frame = cap.read() out.write(frame) # Display the resulting frame cv2.imshow('frame',frame) if cv2.waitKey(20) & 0xFF == ord('q'): op = pyautogui.confirm("") if op == 'OK': print("Out") break cap.release() out.release() cv2.destroyAllWindows() opt =pyautogui.confirm(text= 'Chose an option', title='Camcorder', buttons=['Record', 'Capture', 'Exit']) if opt == 'START': print("Starting the app") recog() if opt == 'Exit': print("Quit the app") Please correct the mistakes if there are any.
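One likely culprit is that pyautogui.confirm returns the label of the button that was pressed ('Record', 'Capture' or 'Exit'), while the code compares against 'START', which no button ever returns, so recog() is never reached. A hedged sketch of the dispatch logic with the comparisons matching the button labels (recog is the function defined above; the Capture branch is a placeholder):

```python
import pyautogui

opt = pyautogui.confirm(text='Choose an option', title='Camcorder',
                        buttons=['Record', 'Capture', 'Exit'])

if opt == 'Record':          # was 'START', which no button ever returns
    print("Starting the app")
    recog()
elif opt == 'Capture':
    print("Capture not implemented yet")  # placeholder branch
elif opt == 'Exit':
    print("Quit the app")
```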
1
1
0
0
0
0
I am using the latest version of spacy_hunspell with Portuguese dictionaries. And, I realized that when I have inflected verbs containing special characters, such as the acute accent (`) and the tilde (~), the spellchecker fails to retrieve the correct verification: import hunspell spellchecker = hunspell.HunSpell('/usr/share/hunspell/pt_PT.dic', '/usr/share/hunspell/pt_PT.aff') #Verb: fazer spellchecker.spell('fazer') # True, correct spellchecker.spell('faremos') # True, correct spellchecker.spell('fará') # False, incorrect spellchecker.spell('fara') # True, incorrect spellchecker.spell('farão') # False, incorrect #Verb: andar spellchecker.spell('andar') # True, correct spellchecker.spell('andamos') # True, correct spellchecker.spell('andará') # False, incorrect spellchecker.spell('andara') # True, correct #Verb: ouvir spellchecker.spell('ouvir') # True, correct spellchecker.spell('ouço') # False, incorrect Another problem is when the verb is irregular, like ir: spellchecker.spell('vamos') # False, incorrect spellchecker.spell('vai') # False, incorrect spellchecker.spell('iremos') # True, correct spellchecker.spell('irá') # False, incorrect As far as noticed, the problem does not happen with nouns with special characters: spellchecker.spell('coração') # True, correct spellchecker.spell('órgão') # True, correct spellchecker.spell('óbvio') # True, correct spellchecker.spell('pivô') # True, correct Any suggestions?
1
1
0
0
0
0
I have around 20k documents with 60–150 words each. Out of these 20k documents, there are 400 for which the similar documents are known; these 400 documents serve as my test data. I am trying to find similar documents for these 400 test documents using gensim doc2vec. The paper "Distributed Representations of Sentences and Documents" says that "The combination of PV-DM and PV-DBOW often work consistently better (7.42% in IMDB) and therefore recommended." So I would like to combine the vectors from these two methods, compute cosine similarity against all the train documents, and select the top 5 with the smallest cosine distance. What is an effective way to combine the vectors of these two methods: adding, averaging, or something else? After combining the two vectors I can normalise each vector and then compute the cosine distance.
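In the paper the two paragraph vectors are concatenated rather than added or averaged, and gensim's own IMDB doc2vec example follows the same pattern. A minimal sketch, assuming two already-trained Doc2Vec models (model_dm with dm=1 and model_dbow with dm=0) over the same corpus, and that test_doc_words and train_docs are token lists:

```python
import numpy as np
from sklearn.preprocessing import normalize

def combined_vector(words):
    # Infer one vector per model and concatenate them into a single representation
    v_dm = model_dm.infer_vector(words)
    v_dbow = model_dbow.infer_vector(words)
    return np.concatenate([v_dm, v_dbow])

test_vec = normalize(combined_vector(test_doc_words).reshape(1, -1))
train_vecs = normalize(np.vstack([combined_vector(d) for d in train_docs]))

# Cosine similarity reduces to a dot product after L2 normalisation
similarities = train_vecs @ test_vec.ravel()
top5 = np.argsort(-similarities)[:5]
```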
1
1
0
0
0
0
I am planning to build an AI system that learns from a corpus (text file) and answers user questions, in short a chatbot without any predefined data. So far I have web-scraped some data, stored it as a text file, and used a TF-IDF (cosine similarity) approach to make the system answer questions, but the accuracy is only moderate. def response(user_response): robo_response='' sent_tokens.append(user_response) TfidfVec = TfidfVectorizer(tokenizer=LemNormalize, stop_words='english') tfidf = TfidfVec.fit_transform(sent_tokens) vals = cosine_similarity(tfidf[-1], tfidf) idx=vals.argsort()[0][-2] flat = vals.flatten() flat.sort() req_tfidf = flat[-2] if(req_tfidf==0): robo_response=robo_response+"cant understand" return robo_response else: robo_response = robo_response+sent_tokens[idx] return robo_response This is the TF-IDF method I used. Is there any other way to build a system that does this more accurately?
1
1
0
0
0
0
Hello I have been trying to contextual extract word embedding using the novel XLNet but without luck. Running on Google Colab with TPU I would like to note that I get this error when I use TPU so thus I switch to GPU to avoid the error xlnet_config = xlnet.XLNetConfig(json_path=FLAGS.model_config_path) AttributeError: module 'xlnet' has no attribute 'XLNetConfig' However I get another error when I use GPU run_config = xlnet.create_run_config(is_training=True, is_finetune=True, FLAGS=FLAGS) AttributeError: use_tpu I will post the whole code below: I am using a small sentence as an input till it work and I switch to big data then Main Code: import sentencepiece as spm import numpy as np import tensorflow as tf from prepro_utils import preprocess_text, encode_ids import xlnet import sentencepiece as spm text = "The metamorphic rocks of western Crete form a series some 9000 to 10,000 ft." sp_model = spm.SentencePieceProcessor() sp_model.Load("/content/xlnet_cased_L-24_H-1024_A-16/spiece.model") text = preprocess_text(text) ids = encode_ids(sp_model, text) #print('ids',ids) # some code omitted here... # initialize FLAGS # initialize instances of tf.Tensor, including input_ids, seg_ids, and input_mask # XLNetConfig contains hyperparameters that are specific to a model checkpoint. xlnet_config = xlnet.XLNetConfig(json_path=FLAGS.model_config_path) **ERROR 1 HERE** from absl import flags import sys FLAGS = flags.FLAGS # RunConfig contains hyperparameters that could be different between pretraining and finetuning. run_config = xlnet.create_run_config(is_training=True, is_finetune=True, FLAGS=FLAGS) **ERROR 2 HERE** xp = [] xp.append(ids) input_ids = np.asarray(xp) xlnet_model = xlnet.XLNetModel( xlnet_config=xlnet_config, run_config=run_config, input_ids=input_ids, seg_ids=None, input_mask=None) embed1=tf.train.load_variable('../data/xlnet_cased_L-24_H-1024_A-16/xlnet_model.ckpt','model/transformer/word_embedding/lookup_table:0')` Before the main code I'm cloning Xlnet from GitHub and so on (I will also post it) ! pip install sentencepiece #Download the pretrained XLNet model and unzip only needs to be done once ! wget https://storage.googleapis.com/xlnet/released_models/cased_L-24_H-1024_A-16.zip ! unzip cased_L-24_H-1024_A-16.zip ! git clone https://github.com/zihangdai/xlnet.git SCRIPTS_DIR = 'xlnet' #@param {type:"string"} DATA_DIR = 'aclImdb' #@param {type:"string"} OUTPUT_DIR = 'proc_data/imdb' #@param {type:"string"} PRETRAINED_MODEL_DIR = 'xlnet_cased_L-24_H-1024_A-16' #@param {type:"string"} CHECKPOINT_DIR = 'exp/imdb' #@param {type:"string"} train_command = "python xlnet/run_classifier.py \ --do_train=True \ --do_eval=True \ --eval_all_ckpt=True \ --task_name=imdb \ --data_dir="+DATA_DIR+" \ --output_dir="+OUTPUT_DIR+" \ --model_dir="+CHECKPOINT_DIR+" \ --uncased=False \ --spiece_model_file="+PRETRAINED_MODEL_DIR+"/spiece.model \ --model_config_path="+PRETRAINED_MODEL_DIR+"/xlnet_config.json \ --init_checkpoint="+PRETRAINED_MODEL_DIR+"/xlnet_model.ckpt \ --max_seq_length=128 \ --train_batch_size=8 \ --eval_batch_size=8 \ --num_hosts=1 \ --num_core_per_host=1 \ --learning_rate=2e-5 \ --train_steps=4000 \ --warmup_steps=500 \ --save_steps=500 \ --iterations=500" ! {train_command}
1
1
0
0
0
0
I am working on a way to classify mail by using Keras. I read the mail that have already been classified, tokenize them to create a dictionary which is link to a folder. So I created a dataframe with pandas: data = pd.DataFrame(list(zip(lst, lst2)), columns=['text', 'folder']) The text column is where reside all the words present in an email and the folder column is the class (the path) that the email belongs to. Thanks to that I created my model which gives me those results: 3018/3018 [==============================] - 0s 74us/step - loss: 0.0325 - acc: 0.9950 - val_loss: 0.0317 - val_acc: 0.9950 On 100 Epoch The evaluation of my model 755/755 [==============================] - 0s 28us/step Test score: 0.0316697002592071 Test accuracy: 0.995000006268356 So the last that I need to do is predict the class of a random mail but the model.predict_classes(numpy.array) call gives me a 2D array full of integer but I still don't know to which "folder/class" it belongs. Here is my code: #lst contains all the words in the mail #lst2 the class/path of lst data = pd.DataFrame(list(zip(lst, lst2)), columns=['text', 'folder']) train_size = int(len(data) * .8) train_posts = data['text'][:train_size] train_tags = data['folder'][:train_size] test_posts = data['text'][train_size:] test_tags = data['folder'][train_size:] num_labels = 200 #The numbers of total classes #the way I tokenize and encode my data tokenizer = Tokenizer(num_words=len(lst)) tokenizer.fit_on_texts(pd.concat([train_posts, test_posts], axis = 1)) x_train = tokenizer.texts_to_matrix(train_posts, mode=TOKENISER_MODE) x_test = tokenizer.texts_to_matrix(test_posts, mode=TOKENISER_MODE) encoder = preprocessing.LabelBinarizer() encoder.fit(train_tags) y_train = encoder.transform(train_tags) y_test = encoder.transform(test_tags) #my model, vocab_size = len(lst) = number of the words present in the mails model = Sequential() model.add(Dense(16, input_shape=(vocab_size,))) model.add(Activation('elu')) model.add(Dropout(0.2)) model.add(Dense(32)) model.add(Activation('elu')) model.add(Dropout(0.2)) model.add(Dense(16)) model.add(Activation('elu')) model.add(Dropout(0.2)) model.add(Dense(num_labels)) model.add(Activation('sigmoid')) model.summary() #compile training and evaluate model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) history = model.fit(x_train, y_train, batch_size=batch_size, epochs=100, verbose=1, validation_data=(x_test, y_test)) score = model.evaluate(x_test, y_test, batch_size=batch_size, verbose=1) print('Test score:', score[0]) print('Test accuracy:', score[1]) #read the random file sentences = read_files("mail.eml") sentences = ' '.join(sentences) sentences = sentences.lower() salut = unidecode.unidecode(sentences) #predict pred = model.predict_classes(salut, batch_size=batch_size, verbose=1) print(pred) The actual output of pred: [125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 125 
125 125 125 ... 125] (the remainder of the array is just the value 125 repeated for every position). I don't know why, but every time I run it the output is filled with the same number. The output I am looking for is something like ['medecine/AIDS/', 'help/', 'project/classification/'], sorted by the probability of being the right class. The read_files call is just a function that reads the mail and returns a list of all the words present in it.
Is there a way to obtain the class of the mail with model.predict_classes() or do I need to use something else?
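Two things seem to be going on: predict_classes is being called on a raw string instead of a matrix built with the same Tokenizer used for training, and the integer it returns is an index into the classes learned by the LabelBinarizer, not a folder name. A hedged sketch of the prediction step reusing the fitted tokenizer, encoder, model, TOKENISER_MODE and read_files from the code above:

```python
import numpy as np

# Vectorise the new mail exactly like the training data
words = read_files("mail.eml")
salut = unidecode.unidecode(" ".join(words).lower())
x_new = tokenizer.texts_to_matrix([salut], mode=TOKENISER_MODE)  # note: a list of one document

probabilities = model.predict(x_new)[0]            # one probability per class
best_first = np.argsort(-probabilities)            # class indices, most likely first

# encoder.classes_ holds the folder names in the order used by the binarizer
for idx in best_first[:3]:
    print(encoder.classes_[idx], probabilities[idx])
```

As a side note, for a single-label multi-class problem the more usual setup is a softmax output layer with categorical_crossentropy loss; the sigmoid/binary_crossentropy combination used here may also be part of why every prediction collapses onto the same class.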
1
1
0
1
0
0
I'm new to TensorFlow and AI, so I'm having trouble researching my question; either that, or my question hasn't been answered. I'm trying to make a text classifier that puts websites into categories based on their keywords. I have at minimum 5,000 sites and at maximum 37,000 sites to train with. What I'm trying to accomplish is this: after the model is trained, I want it to continue to train as it makes predictions about the category a website belongs in. The keywords the model is trained on are chosen by clients, so they can always be different from the rest of the websites in a category. How can I make TensorFlow retrain its model based on corrections made by me when its prediction is inaccurate? Basically, I want it to keep training forever.
1
1
0
0
0
0
I get a UserWarning thrown every time I execute this function. Here user_input is a list of words, and article_sentences a list of lists of words. I've tried to remove all stop words out of the list beforehand but this didn't change anything. def generate_response(user_input): sidekick_response = '' article_sentences.append(user_input) word_vectorizer = TfidfVectorizer(tokenizer=get_processed_text, stop_words='english') all_word_vectors = word_vectorizer.fit_transform(article_sentences) # this is the problematic line similar_vector_values = cosine_similarity(all_word_vectors[-1], all_word_vectors) similar_sentence_number = similar_vector_values.argsort()[0][-2] this is a part of a function for a simple chatbot I found here: https://stackabuse.com/python-for-nlp-creating-a-rule-based-chatbot/ it should return a sorted list of sentences sorted by how much they match the user_input, which it does but it also throws this UserWarning: Your stop_words may be inconsistent with your preprocessing. Tokenizing the stop words generated tokens ['ha', 'le', 'u', 'wa'] not in stop_words.
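The warning only says that the built-in 'english' stop words are compared against tokens after the custom get_processed_text tokenizer has processed them, so lemmatized forms like 'wa' and 'ha' no longer match the raw stop-word list. One hedged workaround is to run the stop-word list through the same tokenizer before handing it to the vectorizer (get_processed_text is the function from the question and is assumed to return a list of tokens):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction import text

# Process the built-in stop words with the same function used as tokenizer
processed_stop_words = set()
for word in text.ENGLISH_STOP_WORDS:
    processed_stop_words.update(get_processed_text(word))

word_vectorizer = TfidfVectorizer(tokenizer=get_processed_text,
                                  stop_words=list(processed_stop_words))
```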
1
1
0
0
0
0
I wanted to transform a dataset or create a new one that takes a dataset column with labels as input which automatically has sequences of strings according to a pre-defined length (and pads if necessary). The example below should demonstrate what I mean. I was able to manually create a new dataframe based on ngrams. This is obviously computationally expensive and creates many columns with repetitive words. text labels 0 from dbl visual com david b lewis subject comp... 5 1 from johan blade stack urc tue nl johan wevers... 11 2 from mzhao magnus acs ohio state edu min zhao ... 6 3 from lhawkins annie wellesley edu r lee hawkin... 14 4 from seanmcd ac dal ca subject powerpc ruminat... 4 for example for sequence length 4 into something like this: text labels 0 from dbl visual com 5 1 david b lewis subject 5 2 comp windows x frequently 5 3 asked questions <PAD> <PAD> 5 4 from johan blade stack 11 5 urc tue nl johan 11 6 wevers subject re <PAD> 11 7 from mzhao magnus acs 6 8 ohio state edu min 6 9 zhao subject composite <PAD> 6 As explained I was able to create a new dataframe based on ngrams. I could theoretically delete every n-rows afterwards again. df = pd.read_csv('data.csv') longform = pd.DataFrame(columns=['text', 'labels']) for idx, content in df.iterrows(): name_words = (i.lower() for i in content[0].split()) ngramlis = list(ngrams(name_words,20)) longform = longform.append( [{'words': ng, 'labels': content[1]} for ng in ngramlis], ignore_index=True ) longform['text_new'] = longform['words'].apply(', '.join) longform['text_new'] = longform['text_new'].str.replace(',', '') This is really bad code which is why I am quite confident that someone might come up with a better solutions. Thanks in advance!
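Rather than generating every n-gram and discarding most of them, the text can be cut into non-overlapping chunks of the desired length and padded. A rough sketch, assuming df has the 'text' and 'labels' columns shown above:

```python
import pandas as pd

SEQ_LEN = 4

def chunk_row(text, label, seq_len=SEQ_LEN, pad="<PAD>"):
    words = text.lower().split()
    chunks = []
    for start in range(0, len(words), seq_len):
        piece = words[start:start + seq_len]
        piece += [pad] * (seq_len - len(piece))   # pad the last, shorter chunk
        chunks.append((" ".join(piece), label))
    return chunks

rows = [chunk for _, content in df.iterrows()
        for chunk in chunk_row(content["text"], content["labels"])]
longform = pd.DataFrame(rows, columns=["text", "labels"])
```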
1
1
0
0
0
0
So I'm using the spaCy library (NLP) to assign certain attributes to my data, but it's a lot of data (100,000+ questions and answers) and it takes about a minute to assign attributes to all of it. I was wondering if I could save the data with the computed attributes somewhere, so that the next time I run the script it doesn't need to spend all that time re-annotating and can instead load the already-processed data from disk.
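spaCy can serialize processed Doc objects so the annotations only have to be computed once; in spaCy 2.2+ the DocBin container is the usual way to do this. A sketch, assuming nlp is your loaded pipeline and texts is the list of raw strings:

```python
import spacy
from spacy.tokens import DocBin

nlp = spacy.load("en_core_web_sm")

# First run: process everything and write the annotated docs to disk
doc_bin = DocBin(store_user_data=True)
for doc in nlp.pipe(texts):
    doc_bin.add(doc)
with open("processed_docs.spacy", "wb") as f:
    f.write(doc_bin.to_bytes())

# Later runs: load the annotations instead of recomputing them
with open("processed_docs.spacy", "rb") as f:
    docs = list(DocBin().from_bytes(f.read()).get_docs(nlp.vocab))
```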
1
1
0
0
0
0
In spaCy, I am not able to get the output I expect for named entities. My string value spans multiple lines. Please check the code below: from spacy import displacy from collections import Counter import en_core_web_sm nlp = en_core_web_sm.load() m = (u"""Release the container 6th August USG11223 USG12224 USG21113""") doc = nlp(m) print([(X.text, X.label_) for X in doc.ents]) OUTPUT: [('6th August', 'DATE')] But I want output like ['USG11223', 'USG12224', 'USG21113', '6th August']
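The pretrained English model has never seen codes like USG11223, so it is unlikely to tag them as entities; tokens with such a regular shape are usually easier to catch with a rule-based Matcher alongside the statistical DATE entity. A hedged sketch (the regex is an assumption about what the container codes look like, and the REGEX operator needs spaCy 2.1+):

```python
import en_core_web_sm
from spacy.matcher import Matcher

nlp = en_core_web_sm.load()
matcher = Matcher(nlp.vocab)
# One-token pattern: 'USG' followed by digits (assumed format of the container codes)
matcher.add("CONTAINER_CODE", None, [{"TEXT": {"REGEX": r"^USG\d+$"}}])

doc = nlp(u"""Release the container 6th August USG11223 USG12224 USG21113""")

results = [doc[start:end].text for _, start, end in matcher(doc)]
results += [ent.text for ent in doc.ents if ent.label_ == "DATE"]
print(results)  # expected: ['USG11223', 'USG12224', 'USG21113', '6th August']
```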
1
1
0
0
0
0
In chapter seven of the book "TensorFlow Machine Learning Cookbook", while pre-processing the data the author uses scikit-learn's fit_transform function to get the tf-idf features of the text for training, and passes all of the text data to the function before separating it into train and test sets. Is that correct, or must we separate the data first and then call fit_transform on the train set and transform on the test set?
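Fitting on the full corpus lets document frequencies from the test set leak into the features, so the usual practice is to fit only on the training split. A minimal sketch of that pattern with a toy corpus:

```python
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer

texts = ["first document", "second document", "third one", "another text"]  # toy corpus
labels = [0, 1, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(texts, labels, random_state=0)

vectorizer = TfidfVectorizer()
X_train_tfidf = vectorizer.fit_transform(X_train)  # learn vocabulary/idf on train only
X_test_tfidf = vectorizer.transform(X_test)        # reuse the fitted vocabulary/idf
```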
1
1
0
1
0
0
I am trying to create cluster out of text contained in an excel file but I'm getting the error "AttributeError: 'int' object has no attribute 'lower'". Sample.xlsx is a file containing data like this: I have created a list called corpus which has unique text according to each row and I get that problem while vectorizing the corpus. '''python import pandas as pd import numpy as np data=pd.read_excel('sample.xlsx') idea=data.iloc[:,0:1] #Selecting the first column that has text. #Converting the column of data from excel sheet into a list of documents, where each document corresponds to a group of sentences. corpus=[] for index,row in idea.iterrows(): corpus.append(row['_index_text_data']) #Count Vectoriser then tidf transformer from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer() X = vectorizer.fit_transform(corpus) #ERROR AFTER EXECUTING THESE #LINES #vectorizer.get_feature_names() #print(X.toarray()) from sklearn.feature_extraction.text import TfidfTransformer transformer = TfidfTransformer(smooth_idf=False) tfidf = transformer.fit_transform(X) print(tfidf.shape ) from sklearn.cluster import KMeans num_clusters = 5 #Change it according to your data. km = KMeans(n_clusters=num_clusters) km.fit(tfidf) clusters = km.labels_.tolist() idea={'Idea':corpus, 'Cluster':clusters} #Creating dict having doc with the corresponding cluster number. frame=pd.DataFrame(idea,index=[clusters], columns=['Idea','Cluster']) # Converting it into a dataframe. print(" ") print(frame) #Print the doc with the labeled cluster number. print(" ") print(frame['Cluster'].value_counts()) #Print the counts of doc belonging `#to each cluster.` Expected result : Error: "AttributeError: 'int' object has no attribute 'lower'"
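CountVectorizer calls .lower() on every document, so this AttributeError usually means some cells in the first column are numbers (or NaN) rather than strings. A hedged fix is to cast the column to str before building the corpus:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

data = pd.read_excel('sample.xlsx')
idea = data.iloc[:, 0:1]

# Force every cell to a string so the vectorizer's .lower() call cannot fail
corpus = idea.iloc[:, 0].fillna("").astype(str).tolist()

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)
```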
1
1
0
0
0
0
I have created a model using this dataset and I would like to insert some sentences to see how they would be classified. How can I do that? Here is the code that makes the model: from sklearn.datasets import fetch_20newsgroups from sklearn.naive_bayes import MultinomialNB from sklearn.feature_extraction.text import TfidfVectorizer from sklearn import metrics cats = ['sci.space','rec.autos'] newsgroups_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'), categories = cats) newsgroups_test = fetch_20newsgroups(subset='test', remove=('headers', 'footers', 'quotes'), categories = cats) vectors_test = vectorizer.transform(newsgroups_test.data) vectorizer = TfidfVectorizer() vectors = vectorizer.fit_transform(newsgroups_train.data) clf = MultinomialNB(alpha=.01) clf.fit(vectors, newsgroups_train.target) vectors_test = vectorizer.transform(newsgroups_test.data) pred = clf.predict(vectors_test) metrics.f1_score(newsgroups_test.target, pred, average='macro') the accuracy it returns is: 0.97 which shows that there is overfitting. As mentioned, I would like to test how the classification of unseen data would occur. How can I proceed? Example I tried: texts = ["The space shuttle is made in 2018", "The exhaust is noisy.", "the windows are transparent."] text_features = tfidf.transform(texts) predictions = model.predict(text_features) for text, predicted in zip(texts, predictions): print('"{}"'.format(text)) print(" - Predicted as: '{}'".format(id_to_category[predicted])) print("") #this does not work as it is It should classify each sentence to one of the two (sci.space, rec.autos) categories. Furthermore, any other suggestions you may have about the whole code are welcome. I want to learn these processes very well.
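With the objects already defined in this script, classifying new sentences is a matter of reusing the fitted vectorizer and clf (the tfidf, model and id_to_category names in the attempted example are not defined here, which is why it fails); newsgroups_train.target_names maps the predicted integers back to category names. A sketch:

```python
texts = ["The space shuttle is made in 2018",
         "The exhaust is noisy.",
         "the windows are transparent."]

text_features = vectorizer.transform(texts)   # same TfidfVectorizer fitted on the train set
predictions = clf.predict(text_features)

for text, predicted in zip(texts, predictions):
    print('"{}" - Predicted as: {}'.format(text, newsgroups_train.target_names[predicted]))
```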
1
1
0
0
0
0
I created the class webviewThread with a run function that takes two arguments, "self, openWhat", but it raises an error at runtime. Here is my code: class webviewThread(Thread): def run(self,openWhat): if openWhat=="facebook": webview.create_window('Facebook', 'http://www.fb.com') webview.start() elif openWhat=="youtube": webview.create_window('Facebook', 'http://www.youtube.com') webview.start() webObj=webviewThread() def openfacebook(): webObj.start("facebook") I am passing the value of the argument, but it gives an error.
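Thread.start() takes no arguments and calls run() without any either, so the usual pattern is to hand the parameter to the constructor and store it on the instance. A hedged sketch (note that pywebview may additionally require webview.start() to run on the main thread, which this sketch does not address):

```python
from threading import Thread
import webview  # pywebview, as in the original code

class WebviewThread(Thread):
    def __init__(self, open_what):
        super().__init__()
        self.open_what = open_what   # stash the parameter for run() to use

    def run(self):                   # run() must accept only self
        if self.open_what == "facebook":
            webview.create_window('Facebook', 'http://www.fb.com')
        elif self.open_what == "youtube":
            webview.create_window('YouTube', 'http://www.youtube.com')
        webview.start()

def open_facebook():
    WebviewThread("facebook").start()   # start() then calls run() internally
```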
1
1
0
0
0
0
I'm working on a project to read the text and make a prediction of the outcome. As part of cleaning the data I am trying to remove all of the stopwords. When I try to do this, I need the output to be in a datafram format but I am running into issues there. So, after much cleaning I got the data to the point where it looks like this. The labels are in a different dataframe that I would have to merge but that is besides the point. What I am trying to do now is remove all of the stopwords from each string in each row. After some research the code I am using looks like this: import nltk from nltk.corpus import stopwords nltk.download('stopwords') stop_words = set(stopwords.words('english')) ht_comments_only_no_stop['All_Comments'] = ht_comments_only_summary['All_Comments'].apply(lambda x: [item for item in x if item not in stop_words]) The ht_comments_only_summary is basically what you see in the first picture above. The problem is that now when I try looking at "ht_comments_only_no_stop" I see: But what I need is the output to just look like the first picture in dataframe format minus all the stopwords under the "All_Comments" column. Any help would be greatly appreciated.
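Because All_Comments appears to hold whole strings, the list comprehension iterates over characters rather than words, and it also turns each cell into a list instead of a string. Splitting, filtering, and re-joining keeps the column as text; a rough sketch reusing the column names from the question:

```python
import nltk
from nltk.corpus import stopwords

nltk.download('stopwords')
stop_words = set(stopwords.words('english'))

def remove_stopwords(text):
    # split into words, drop stop words, and join back into a single string
    return " ".join(w for w in str(text).split() if w.lower() not in stop_words)

ht_comments_only_no_stop = ht_comments_only_summary.copy()
ht_comments_only_no_stop['All_Comments'] = (
    ht_comments_only_no_stop['All_Comments'].apply(remove_stopwords)
)
```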
1
1
0
0
0
0
I am working on an NLP task that requires using a corpus of the language called Yoruba. Yoruba is a language that has diacritics (accents) and under dots in its alphabets. For instance, this is a Yoruba string: "ọmọàbúròẹlẹ́wà", and I need to remove the accents and keep the under dots. I have tried using the unidecode library in Python, but it removes accents and under dots. import unidecode ac_stng = "ọmọàbúròẹlẹ́wà" unac_stng = unidecode.unidecode(ac_stng) I expect the output to be "ọmọaburoẹlẹwa". However, when I used the unidecode library in Python, I got "omoaburoelewa".
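Unicode normalization gives finer control than unidecode here: NFD decomposition splits each character into a base letter plus combining marks, so the tonal accents (U+0300 grave, U+0301 acute) can be dropped while the under dot (U+0323, combining dot below) is kept. A sketch:

```python
import unicodedata

# Combining marks to strip (grave and acute tone accents); the dot below (U+0323) is kept
TONE_MARKS = {"\u0300", "\u0301"}

def strip_tones(text):
    decomposed = unicodedata.normalize("NFD", text)
    kept = "".join(ch for ch in decomposed if ch not in TONE_MARKS)
    return unicodedata.normalize("NFC", kept)

ac_stng = "ọmọàbúròẹlẹ́wà"
print(strip_tones(ac_stng))  # expected: ọmọaburoẹlẹwa
```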
1
1
0
0
0
0
How would one go about extracting text from documents such as a job application and sorting it into a clean data set with features such as DOB, SSN, address, and so on, with each field in the application serving as a column of the data set?
1
1
0
1
0
0
I am working on a sentiment analysis problem. I tried to use autocorrect, but that requires more computing power than I have access to because of the size of the corpus. So I came up with a different approach: create a dictionary of {key = 'incorrect', value = 'correct'} pairs and then manually correct all the words. The problem is how to get the misspelled words into that dictionary in the first place. Is the linked question the same as my problem? (Rather than misspelled words, should I look for OOV words?) If not, please suggest a better method. Code used for autocorrect: !pip install autocorrect from autocorrect import spell train['text'] = [' '.join([spell(i) for i in x.split()]) for x in train['text']]
1
1
0
1
0
0
I am trying to do anaphora resolution and for that below is my code. first i navigate to the folder where i have downloaded the stanford module. Then i run the command in command prompt to initialize stanford nlp module java -mx4g -cp "*;stanford-corenlp-full-2017-06-09/*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000 After that i execute below code in Python from pycorenlp import StanfordCoreNLP nlp = StanfordCoreNLP('http://localhost:9000') I want to change the sentence Tom is a smart boy. He know a lot of thing. into Tom is a smart boy. Tom know a lot of thing. and there is no tutorial or any help available in Python. All i am able to do is annotate by below code in Python coreference resolution output = nlp.annotate(sentence, properties={'annotators':'dcoref','outputFormat':'json','ner.useSUTime':'false'}) and by parsing for coref coreferences = output['corefs'] i get below JSON coreferences {u'1': [{u'animacy': u'ANIMATE', u'endIndex': 2, u'gender': u'MALE', u'headIndex': 1, u'id': 1, u'isRepresentativeMention': True, u'number': u'SINGULAR', u'position': [1, 1], u'sentNum': 1, u'startIndex': 1, u'text': u'Tom', u'type': u'PROPER'}, {u'animacy': u'ANIMATE', u'endIndex': 6, u'gender': u'MALE', u'headIndex': 5, u'id': 2, u'isRepresentativeMention': False, u'number': u'SINGULAR', u'position': [1, 2], u'sentNum': 1, u'startIndex': 3, u'text': u'a smart boy', u'type': u'NOMINAL'}, {u'animacy': u'ANIMATE', u'endIndex': 2, u'gender': u'MALE', u'headIndex': 1, u'id': 3, u'isRepresentativeMention': False, u'number': u'SINGULAR', u'position': [2, 1], u'sentNum': 2, u'startIndex': 1, u'text': u'He', u'type': u'PRONOMINAL'}], u'4': [{u'animacy': u'INANIMATE', u'endIndex': 7, u'gender': u'NEUTRAL', u'headIndex': 4, u'id': 4, u'isRepresentativeMention': True, u'number': u'SINGULAR', u'position': [2, 2], u'sentNum': 2, u'startIndex': 3, u'text': u'a lot of thing', u'type': u'NOMINAL'}]} Any help on this?
1
1
0
0
0
0
I'm working on an NLP task where I compare a dataframe of articles with input words. The main goal is to classify a text when a certain group of words is found. I've tried to extract the values of the dictionary, convert them into a list, and then apply stemming to it. The problem is that later I'll need another step to split and compare according to the keys, so I think it is more practical to work directly on the dictionary. search = {'Tecnology' : ['computer', 'digital', 'sistem'], 'Economy' : ['bank', 'money']} words_list = list() for key in search.keys(): words_list.append(search[key]) search_values = [val for sublist in words_list for val in sublist] search_values_stem = [stemmer.stem(word) for word in search_values] I expect a stemmed dictionary that I can compare directly with the stemmed column of articles.
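Stemming can be applied to the dictionary values in place with a dict comprehension, which keeps the category keys available for the later comparison step. A small sketch using NLTK's SnowballStemmer (any stemmer with a .stem method would do):

```python
from nltk.stem import SnowballStemmer

stemmer = SnowballStemmer("english")

search = {'Tecnology': ['computer', 'digital', 'sistem'],
          'Economy': ['bank', 'money']}

# Same keys, but every value list is stemmed
search_stemmed = {key: [stemmer.stem(w) for w in words]
                  for key, words in search.items()}
print(search_stemmed)
```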
1
1
0
0
0
0
When extracting keywords from a text, I realized that I mostly get back the same word in different forms. Is there a way to make each word show up only once? Example: updated updates update updating | research researched researchers | files filed file Code (the Summa TextRank package is used here): k_words = keywords.keywords((str(document)), words=10, ratio=0.2, language='english')
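One hedged post-processing option is to stem each returned keyword and keep only the first keyword seen per stem, so inflected variants collapse onto a single entry:

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def dedupe_keywords(keyword_list):
    seen, unique = set(), []
    for kw in keyword_list:
        stem = stemmer.stem(kw)
        if stem not in seen:        # keep only the first surface form per stem
            seen.add(stem)
            unique.append(kw)
    return unique

# k_words is a newline-joined string by default, so split it into a list first
print(dedupe_keywords(["updated", "updates", "update", "research", "researched"]))
```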
1
1
0
0
0
0
I have to identify cities in a document (it contains only characters). I do not want to maintain an entire vocabulary of city names, as that is not a practical solution, and I do not have an Azure Text Analytics API account. I have already tried spaCy: I ran NER, identified geolocation entities, and passed that output to spellchecker() to train the model. The issue is that NER works on sentences, whereas my input is individual words. I am relatively new to this field.
1
1
0
0
0
0
In most cases, I am finding that polarity_scores returning output as "Neutral" whereas there should be some % of negative and positive sentiments highlighted e.g. consider the following cases, I found {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0} for all the 3 cases mentioned below. case 1: the renewal manager is not qualified at all for the job case 2: John was very transparent and extremely diligent in providing information and setting up meetings for collaboration case 3: "Still do not have access to the product ordered. It has been more than a week since docs were signed" Code: from nltk.sentiment.vader import SentimentIntensityAnalyzer sid = SentimentIntensityAnalyzer() a = "Our sales representative, Tom, was very attentive to our needs." sid.polarity_scores(a) Output: {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0} I expect some % of negative and positive sentiments highlighted taking above examples versus getting 'neu' = 1.0 and 'compound' = 0.0. Can anyone advise how to get better results matching to the actual sentiment of the given text string? I am willing to explore other libraries or packages if they are better than Vader. Thanks for advising.
1
1
0
0
0
0
I'm trying to download Google's new pretrained multilingual universal sentence encoder that was just published July this year. I have followed the test found at their website using Colab and works well, but when I try to do it locally it hangs forever while trying to download it (code copied from tf's site): import tensorflow as tf import tensorflow_hub as hub import numpy as np import tf_sentencepiece # Some texts of different lengths. english_sentences = ["dog", "Puppies are nice.", "I enjoy taking long walks along the beach with my dog."] italian_sentences = ["cane", "I cuccioli sono carini.", "Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane."] japanese_sentences = ["犬", "子犬はいいです", "私は犬と一緒にビーチを散歩するのが好きです"] #hangs here: embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-multilingual/1") I have installed all dependencies and packages. Other simpler models work (English sentence encoder for example), only happens with this new one. Any ideas? Thank you all!
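The multilingual module is large, so a download that looks like a hang is common; giving tensorflow_hub a persistent cache directory at least makes the download reusable across runs and lets you watch its progress on disk. A hedged sketch (the cache path is an arbitrary choice):

```python
import os

# Tell tensorflow_hub where to keep (and reuse) downloaded modules
os.environ["TFHUB_CACHE_DIR"] = "/tmp/tfhub_cache"   # any writable directory

import tensorflow_hub as hub
import tf_sentencepiece  # required for the multilingual encoder

embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-multilingual/1")
```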
1
1
0
0
0
0
I am a beginner with NLP and am using the spaCy Python library for my project. Here is my requirement: I have a JSON file with all country names, and I need to parse the document and get the gold-medal count for each country. A sample sentence is given below: "Czech Republic won 5 gold medals at olympics. Slovakia won 0 medals olympics" I am able to fetch the country names but not their medal counts. My code is given below; please help me proceed further. import json from spacy.lang.en import English from spacy.matcher import PhraseMatcher with open("C:\Python36\srclcl\countries.json") as f: COUNTRIES = json.loads(f.read()) nlp = English() nlp.add_pipe(nlp.create_pipe('sentencizer')) doc = nlp("Czech Republic won 5 gold medals at olympics. Slovakia won 0 medals olympics") matcher = PhraseMatcher(nlp.vocab) patterns = list(nlp.pipe(COUNTRIES)) matcher.add("COUNTRY", None, *patterns) for sent in doc.sents: subdoc = nlp(sent.text) matches = matcher(subdoc) print (sent.text) for match_id, start, end in matches: print(subdoc[start:end].text) The same should also work if the text is something like "Czech Republic won 5 gold medals at olympics in 1995. Slovakia won 0 medals olympics".
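Since each sentence mentions one country and one medal count, a rough heuristic is to take the first number-like token in the same sentence as the matched country (token.like_num covers digits and spelled-out numbers). This sketch extends the loop above and only assumes the sentences are phrased like the samples:

```python
medal_counts = {}
for sent in doc.sents:
    subdoc = nlp(sent.text)
    for match_id, start, end in matcher(subdoc):
        country = subdoc[start:end].text
        # first numeric token in the sentence is assumed to be the medal count
        numbers = [tok.text for tok in subdoc if tok.like_num]
        if numbers:
            medal_counts[country] = int(numbers[0])

print(medal_counts)  # e.g. {'Czech Republic': 5, 'Slovakia': 0}
```

For the variant with a year ("... at olympics in 1995"), taking the first number still picks the medal count; a more robust rule might instead look at the token immediately before "gold" or "medals".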
1
1
0
0
0
0
I have a bit of code that uses newspaper to go take a look at various media outlets and download articles from them. This has been working fine for a long time but has recently started acting up. I can see what the problem is but as I'm new to Python I'm not sure about the best way to address it. Basically (I think) I need to make a modification to keep the occasional malformed web address from crashing the script entirely and instead allow it to dispense with that web address and move on to the others. The origins of the error is when I attempt to download an article using: article.download() Some articles (they change every day obviously) will throw the following error but the script continues to run: Traceback (most recent call last): File "C:\Anaconda3\lib\encodings\idna.py", line 167, in encode raise UnicodeError("label too long") UnicodeError: label too long The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Anaconda3\lib\site-packages ewspaper\mthreading.py", line 38, in run func(*args, **kargs) File "C:\Anaconda3\lib\site-packages ewspaper\source.py", line 350, in download_articles html = network.get_html(url, config=self.config) File "C:\Anaconda3\lib\site-packages ewspaper etwork.py", line 39, in get_html return get_html_2XX_only(url, config, response) File "C:\Anaconda3\lib\site-packages ewspaper etwork.py", line 60, in get_html_2XX_only url=url, **get_request_kwargs(timeout, useragent)) File "C:\Anaconda3\lib\site-packages\requests\api.py", line 72, in get return request('get', url, params=params, **kwargs) File "C:\Anaconda3\lib\site-packages\requests\api.py", line 58, in request return session.request(method=method, url=url, **kwargs) File "C:\Anaconda3\lib\site-packages\requests\sessions.py", line 502, in request resp = self.send(prep, **send_kwargs) File "C:\Anaconda3\lib\site-packages\requests\sessions.py", line 612, in send r = adapter.send(request, **kwargs) File "C:\Anaconda3\lib\site-packages\requests\adapters.py", line 440, in send timeout=timeout File "C:\Anaconda3\lib\site-packages\urllib3\connectionpool.py", line 600, in urlopen chunked=chunked) File "C:\Anaconda3\lib\site-packages\urllib3\connectionpool.py", line 356, in _make_request conn.request(method, url, **httplib_request_kw) File "C:\Anaconda3\lib\http\client.py", line 1107, in request self._send_request(method, url, body, headers) File "C:\Anaconda3\lib\http\client.py", line 1152, in _send_request self.endheaders(body) File "C:\Anaconda3\lib\http\client.py", line 1103, in endheaders self._send_output(message_body) File "C:\Anaconda3\lib\http\client.py", line 934, in _send_output self.send(msg) File "C:\Anaconda3\lib\http\client.py", line 877, in send self.connect() File "C:\Anaconda3\lib\site-packages\urllib3\connection.py", line 166, in connect conn = self._new_conn() File "C:\Anaconda3\lib\site-packages\urllib3\connection.py", line 141, in _new_conn (self.host, self.port), self.timeout, **extra_kw) File "C:\Anaconda3\lib\site-packages\urllib3\util\connection.py", line 60, in create_connection for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File "C:\Anaconda3\lib\socket.py", line 733, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): UnicodeError: encoding with 'idna' codec failed (UnicodeError: label too long) The next bit is supposed to then parse and run natural language processing on each article and write certain elements to a dataframe so I then have: for paper in papers: for 
article in paper.articles: article.parse() print(article.title) article.nlp() if article.publish_date is None: d = datetime.now().date() else: d = article.publish_date.date() stories.loc[i] = [paper.brand, d, datetime.now().date(), article.title, article.summary, article.keywords, article.url] i += 1 (This might be a little sloppy too but that's a problem for another day) This runs fine until it gets to one of those URLs with the error and then tosses an article exception and the script crashes: C:\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:709: UserWarning: Corrupt EXIF data. Expecting to read 2 bytes but only got 0. warnings.warn(str(msg)) ArticleException Traceback (most recent call last) <ipython-input-17-2106485c4bbb> in <module>() 4 for paper in papers: 5 for article in paper.articles: ----> 6 article.parse() 7 print(article.title) 8 article.nlp() C:\Anaconda3\lib\site-packages ewspaper\article.py in parse(self) 183 184 def parse(self): --> 185 self.throw_if_not_downloaded_verbose() 186 187 self.doc = self.config.get_parser().fromstring(self.html) C:\Anaconda3\lib\site-packages ewspaper\article.py in throw_if_not_downloaded_verbose(self) 519 if self.download_state == ArticleDownloadState.NOT_STARTED: 520 print('You must `download()` an article first!') --> 521 raise ArticleException() 522 elif self.download_state == ArticleDownloadState.FAILED_RESPONSE: 523 print('Article `download()` failed with %s on URL %s' % ArticleException: So what's the best way to keep this from terminating my script? Should I address it in the download stage where I'm getting the unicode error or at the parse stage by telling it to overlook those bad addresses? And how would I go about implementing that correction? Really appreciate any advice.
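A common way to keep one bad URL from killing the whole run is to wrap the per-article work in a try/except and skip the failures; newspaper raises ArticleException for articles whose download failed. This is only a sketch of the loop, reusing papers, stories, i and datetime from the code above:

```python
from newspaper.article import ArticleException

for paper in papers:
    for article in paper.articles:
        try:
            article.parse()
            article.nlp()
        except (ArticleException, UnicodeError) as e:
            # Log the bad URL and move on instead of crashing the whole run
            print("Skipping {}: {}".format(article.url, e))
            continue

        d = article.publish_date.date() if article.publish_date else datetime.now().date()
        stories.loc[i] = [paper.brand, d, datetime.now().date(), article.title,
                          article.summary, article.keywords, article.url]
        i += 1
```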
1
1
0
0
0
0
I want to build a word cloud containing multiple word structures (not just one word). In any given text we will have bigger frequencies for unigrams than bigrams. Actually, the n-gram frequency decreases when n increases for the same text. I want to find a magic number or a method to obtain comparative results between unigrams and bigrams, trigrams, n-grams. There is any magic number as a multiplier for n-gram frequency in order to be comparable with a unigram? A solution that I have now in mind is to make a top for any n-gram (1, 2, 3, ...) and use the first z positions for any category of n-grams.
1
1
0
0
0
0
I'm trying to reproduce a study into sentiment analysis which uses dependency structures which were generated using the Stanford NLP library, the issue is that the study is from 2011 and I've found that than the Standford library used Stanford Dependencies but it now uses Universal Dependencies which gives different results (see https://nlp.stanford.edu/software/stanford-dependencies.shtml#English). My query is whether one can still use Stanford dependencies in the stanfordnlp library in Python?
1
1
0
0
0
0
I am trying to build the input for the saved model from BERT-SQuAD given that I have got all the elements for the input. I fine-tuned a question answering model by running of run_squad.py in Google bert, then I exported the model with export_saved_model. Now when I have a new context and question, I can't build the correct input that can get return output from the model. Code to export the model: #export the model def serving_input_receiver_fn(): feature_spec = { "unique_ids": tf.FixedLenFeature([], tf.int64), "input_ids": tf.FixedLenFeature([FLAGS.max_seq_length], tf.int64), "input_mask": tf.FixedLenFeature([FLAGS.max_seq_length], tf.int64), "segment_ids": tf.FixedLenFeature([FLAGS.max_seq_length], tf.int64), } serialized_tf_example = tf.placeholder(dtype=tf.string, shape=FLAGS.predict_batch_size, name='input_example_tensor') receiver_tensors = {'examples': serialized_tf_example} features = tf.parse_example(serialized_tf_example, feature_spec) return tf.estimator.export.ServingInputReceiver(features, receiver_tensors) estimator = tf.contrib.tpu.TPUEstimator( use_tpu=FLAGS.use_tpu, model_fn=model_fn, config=run_config, train_batch_size=FLAGS.train_batch_size, predict_batch_size=FLAGS.predict_batch_size) estimator._export_to_tpu = False ## !!important to add this estimator.export_saved_model( export_dir_base ="C:/Users/ZitongZhou/Desktop/qa/bert_squad/servemodel", serving_input_receiver_fn = serving_input_receiver_fn) The way I loaded the model: export_dir = 'servemodel' subdirs = [x for x in Path(export_dir).iterdir() if x.is_dir() and 'temp' not in str(x)] latest = str(sorted(subdirs)[-1]) predict_fn = predictor.from_saved_model(latest) I got the eval_features from the run_squad.py. The way I tried to build the input: feature_spec = { "unique_ids": np.asarray(eval_features[0].unique_id).tolist(), "input_ids": np.asarray(eval_features[0].input_ids).tolist(), "input_mask": np.asarray(eval_features[0].input_mask).tolist(), "segment_ids": np.asarray(eval_features[0].segment_ids).tolist() } serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[1], name='input_example_tensor') receiver_tensors = {'examples': serialized_tf_example} features = tf.parse_example(serialized_tf_example, feature_spec) out = predict_fn({'examples':[str(feature_spec)]}) I expect to get a prediction 'out' so I can extract the answer to the question from it. The traceback I got: Traceback (most recent call last): File "<ipython-input-51-0c3b618a8f48>", line 11, in <module> features = tf.parse_example(serialized_tf_example, feature_spec) File "C:\Users\ZitongZhou\Anaconda3\envs lp\lib\site-packages\tensorflow \python\ops\parsing_ops.py", line 580, in parse_example return parse_example_v2(serialized, features, example_names, name) File "C:\Users\ZitongZhou\Anaconda3\envs lp\lib\site-packages\tensorflow \python\ops\parsing_ops.py", line 803, in parse_example_v2 [VarLenFeature, SparseFeature, FixedLenFeature, FixedLenSequenceFeature]) File "C:\Users\ZitongZhou\Anaconda3\envs lp\lib\site-packages\tensorflow \python\ops\parsing_ops.py", line 299, in _features_to_raw_params raise ValueError("Invalid feature %s:%s." 
% (key, feature)) ValueError: Invalid feature input_ids:[101, 1005, 2129, 2214, 2003, 19523, 6562, 1005, 102, 1005, 19523, 11233, 2003, 2274, 2086, 2214, 1005, 102, 0, 0, 0, ...] (the rest of the list is zero padding up to max_seq_length).
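The serving signature built by serving_input_receiver_fn expects serialized tf.train.Example protos (that is what tf.parse_example consumes), not a Python dict of raw lists, so the features have to be packed into an Example first. A hedged sketch of that packing, reusing eval_features and predict_fn from the code above:

```python
import tensorflow as tf

def make_example(feature):
    def int64_list(values):
        return tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))

    return tf.train.Example(features=tf.train.Features(feature={
        "unique_ids": int64_list([feature.unique_id]),
        "input_ids": int64_list(feature.input_ids),
        "input_mask": int64_list(feature.input_mask),
        "segment_ids": int64_list(feature.segment_ids),
    }))

serialized = make_example(eval_features[0]).SerializeToString()
out = predict_fn({"examples": [serialized]})
```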
1
1
0
0
0
0
I want to split a sentence into a list of words. For English and European languages this is easy, just use split() >>> "This is a sentence.".split() ['This', 'is', 'a', 'sentence.'] But I also need to deal with sentences in languages such as Chinese that don't use whitespace as word separator. >>> u"这是一个句子".split() [u'\u8fd9\u662f\u4e00\u4e2a\u53e5\u5b50'] Obviously that doesn't work. How do I split such a sentence into a list of words? UPDATE: So far the answers seem to suggest that this requires natural language processing techniques and that the word boundaries in Chinese are ambiguous. I'm not sure I understand why. The word boundaries in Chinese seem very definite to me. Each Chinese word/character has a corresponding unicode and is displayed on screen as an separate word/character. So where does the ambiguity come from. As you can see in my Python console output Python has no problem telling that my example sentence is made up of 5 characters: 这 - u8fd9 是 - u662f 一 - u4e00 个 - u4e2a 句 - u53e5 子 - u5b50 So obviously Python has no problem telling the word/character boundaries. I just need those words/characters in a list.
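Chinese word segmentation is ambiguous because a word is often made of several characters and the same character sequence can be split in different ways, which is why dedicated segmenters exist; the jieba library is one commonly used option, sketched below:

```python
# pip install jieba
import jieba

sentence = u"这是一个句子"
words = jieba.lcut(sentence)   # lcut returns a plain list of segmented words
print(words)                   # e.g. ['这是', '一个', '句子'] (exact split depends on jieba's dictionary)
```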
1
1
0
0
0
0
Given a DBpedia resource, I want to find the entire taxonomy till root. For example, if I were to say in plain English, for Barack Obama I want to know the entire taxonomy which goes as Barack Obama → Politician → Person → Being. I have written the following recursive function for the same: import requests import json from SPARQLWrapper import SPARQLWrapper, JSON sparql = SPARQLWrapper("http://dbpedia.org/sparql") def get_taxonomy(results,entity,hypernym_list): '''This recursive function keeps on fetching the hypernyms of the DBpedia resource recursively till the highest concept or root is reached''' if entity == 'null': return hypernym_list else : query = ''' SELECT ?hypernyms WHERE {<'''+entity+'''> <http://purl.org/linguistics/gold/hypernym> ?hypernyms .} ''' sparql.setQuery(query) sparql.setReturnFormat(JSON) results = sparql.query().convert() for result in results["results"]["bindings"]: hypernym_list.append(result['hypernyms']['value']) if len(results["results"]["bindings"]) == 0: return get_taxonomy(results,'null',hypernym_list) return get_taxonomy(results,results["results"]["bindings"][0]['hypernyms']['value'],hypernym_list) def get_taxonomy_of_resource(dbpedia_resource): list_for_hypernyms=[] results = {} results["results"]={} results["results"]["bindings"]=[1,2,3] taxonomy_list = get_taxonomy(results,dbpedia_resource,list_for_hypernyms) return taxonomy_list The code works for the following input: get_taxonomy_of_resource('http://dbpedia.org/resource/Barack_Obama') Output: ['http://dbpedia.org/resource/Politician', 'http://dbpedia.org/resource/Person', 'http://dbpedia.org/resource/Being'] Problem : But for following output it only gives hypernym till one level above and stops: get_taxonomy_of_resource('http://dbpedia.org/resource/Steve_Jobs') Output: ['http://dbpedia.org/resource/Entrepreneur'] Research: On doing some research on their site dbpedia.org/page/<term> I realized that the reason it stopped at Entrepreneur is that when I click on this resource on their site, it takes me to resource 'Entrepreneurship' and state its hypernym as 'Process'. So now my problem has been directed to the question: How do I know that Entrepreneur is directing to Entrepreneurship even though both are valid DBpedia entities? My recursive function fails due to this as in next iteration it attempts to find hypernym for Entrepreneur rather than Entrepreneurship. Any help is duly appreciated
1
1
0
0
0
0
I am building Dataflow job to get data from cloud storage and pass it to NLP API to perform sentiment analysis and import the result to BigQuery The Job ran successfully localy (I didn't use data flow runner) import apache_beam as beam import logging from google.cloud import language from google.cloud.language import enums from google.cloud.language import types PROJECT = 'ProjectName' schema = 'name : STRING, id : STRING, date : STRING,title : STRING, text: STRING,magnitude : STRING, score : STRING' src_path = "gs://amazoncustreview/sentimentSource.csv" class Sentiment(beam.DoFn): def process(self, element): element = element.split(",") client = language.LanguageServiceClient() document = types.Document(content=element[2], type=enums.Document.Type.PLAIN_TEXT) sentiment = client.analyze_sentiment(document).document_sentiment return [{ 'name': element[0], 'title': element[1], 'magnitude': sentiment.magnitude, 'score': sentiment.score }] def main(): BUCKET = 'BucKet name' argv = [ '--project={0}'.format(PROJECT), '--staging_location=gs://{0}/staging/'.format(BUCKET), '--temp_location=gs://{0}/staging/'.format(BUCKET), '--runner=DataflowRunner', '--job_name=examplejob2', '--save_main_session' ] p = beam.Pipeline(argv=argv) (p | 'ReadData' >> beam.io.textio.ReadFromText(src_path) | 'ParseCSV' >> beam.ParDo(Sentiment()) | 'WriteToBigQuery' >> beam.io.WriteToBigQuery('{0}:Dataset.table'.format(PROJECT), write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND) ) p.run() if __name__ == '__main__': main() this the error I get I have tried to import different version of Google Cloud Language but all my trial failed. Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 773, in run self._load_main_session(self.local_staging_directory) File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 489, in _load_main_session pickler.load_session(session_file) File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 280, in load_session return dill.load_session(file_path) File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 410, in load_session module = unpickler.load() File "/usr/lib/python2.7/pickle.py", line 864, in load dispatch[key](self) File "/usr/lib/python2.7/pickle.py", line 1139, in load_reduce value = func(*args) File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 828, in _import_module return getattr(__import__(module, None, None, [obj]), obj) ImportError: No module named language_v1.gapic
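"ImportError: No module named language_v1.gapic" on the workers usually means google-cloud-language is not installed in the Dataflow worker environment, even though it is installed locally. One hedged fix is to declare it so the workers install it at startup, for example via a requirements file passed in the pipeline options (the pinned version is only an assumption; use whatever works locally):

```python
# requirements.txt (shipped alongside the pipeline code):
# google-cloud-language==1.3.0

argv = [
    '--project={0}'.format(PROJECT),
    '--staging_location=gs://{0}/staging/'.format(BUCKET),
    '--temp_location=gs://{0}/staging/'.format(BUCKET),
    '--runner=DataflowRunner',
    '--job_name=examplejob2',
    '--save_main_session',
    '--requirements_file=requirements.txt',  # tells Dataflow to pip-install these on each worker
]
```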
1
1
0
0
0
0
Python 3.6: I am using spaCy on a column of text in a pandas DataFrame. The text does have "special characters" and I need to keep them. nlp requires unicode input. I am getting the error from nlp shown below; any help would be very much appreciated. # -*- coding: utf-8 -*- import spacy nlp = spacy.load("en_core_web_sm") df['TextCol'] = df['TextCol'].str.encode('utf-8') def function(row): doc = nlp(unicode(text)) df.apply(function, axis=1) Error returned from nlp: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2
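In Python 3, strings are already unicode, so encoding the column to bytes and then calling unicode() is what triggers the ASCII decode error (unicode() also does not exist in Python 3, and text is undefined inside the function). A hedged rewrite that just passes the string column straight to spaCy:

```python
import spacy
import pandas as pd

nlp = spacy.load("en_core_web_sm")

def process_row(row):
    # row['TextCol'] is already a unicode str in Python 3; no encode/decode needed
    doc = nlp(str(row['TextCol']))
    return doc

df['Doc'] = df.apply(process_row, axis=1)
```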
1
1
0
0
0
0
My dataframe has thousands of rows. It look like this: import pandas as pd import numpy as np text = ['please send us a dm...','…could you please dm me','dm me plz…','i dmed u yesterday…','dm me asap thx', 'i send a dm to u now', 'thx u r so nice dming u now', 'just sent u a dm'] df = pd.DataFrame({"text": text}) text 0 please send us a dm... 1 …could you please dm me 2 dm me plz… 3 i dmed u yesterday… 4 dm me asap thx 5 i send a dm to u now 6 thx u r so nice dming u now 7 just sent u a dm I wrote a function to replace abbreviation in column 'text'. def convert(dataframe, column): dataframe[column] = dataframe[column].apply(lambda x: x.replace(" dm ", " direct message ")) dataframe[column] = dataframe[column].apply(lambda x: x.replace(" dming ", " direct message ")) dataframe[column] = dataframe[column].apply(lambda x: x.replace(" dmed ", " direct message ")) dataframe[column] = dataframe[column].apply(lambda x: x.replace(" plz ", " please ")) dataframe[column] = dataframe[column].apply(lambda x: x.replace(" thx ", " thanks ")) dataframe[column] = dataframe[column].apply(lambda x: x.replace(" u ", " you ")) dataframe[column] = dataframe[column].apply(lambda x: x.replace(" asap ", " as soon as possible ")) dataframe[column] = dataframe[column].apply(lambda x: x.replace("...", " ")) dataframe[column] = dataframe[column].apply(lambda x: x.replace("…", " ")) However, my code is not working properly, so it can't fully replace all of the abbreviations in my dataframe. convert(df, 'text') text 0 please send us a dm 1 could you please direct message me 2 dm me plz 3 i direct message you yesterday 4 dm me as soon as possible thx 5 i send a direct message to you now 6 thx you r so nice direct message you now 7 just sent you a dm The desired final output would look like this: text 0 please send us a direct message 1 could you please direct message me 2 direct message me plz 3 i direct message you yesterday 4 direct message me as soon as possible thanks 5 i send a direct message to you now 6 thanks you r so nice direct message you now 7 just sent you a direct message I can't figure out why my code is not working.
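Padding each abbreviation with spaces means a match is missed whenever the word sits at the start or end of the string or is followed by punctuation (as in the trailing "dm", "plz" and "thx" rows). Word-boundary regexes avoid that; a sketch using a single replacement dictionary on the df from the question:

```python
import re

replacements = {
    "dm": "direct message", "dming": "direct message", "dmed": "direct message",
    "plz": "please", "thx": "thanks", "u": "you", "asap": "as soon as possible",
}

def convert(dataframe, column):
    # strip the ellipsis characters first, then replace whole words only
    dataframe[column] = dataframe[column].str.replace(r"(\.\.\.|…)", " ", regex=True)
    pattern = r"\b(" + "|".join(sorted(replacements, key=len, reverse=True)) + r")\b"
    dataframe[column] = dataframe[column].str.replace(
        pattern, lambda m: replacements[m.group(0)], regex=True)
    return dataframe

convert(df, "text")
```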
1
1
0
0
0
0
In the description of the fastText library for Python (https://github.com/facebookresearch/fastText/tree/master/python), several arguments can be passed when training a supervised model, among them: ws: size of the context window; wordNgrams: max length of word ngram. If I understand it right, both of them are responsible for taking the surrounding words of a word into account, but what is the clear difference between them?
1
1
0
0
0
0
I am trying to print the bigrams for a text in Python 3.5. The text is already pre-processed and split into individual words. I tried two different ways (shown below); neither works. The first: ninety_seven=df.loc[97] nine_bi=ngrams(ninety_seven,2) print(nine_bi) This outputs: < generator object ngrams at 0x0B4F9E70> The second is: ninety_seven=df.loc[97] bigrm = list(nltk.bigrams(ninety_seven)) print(*map(' '.join, bigrm), sep=', ') This outputs: TypeError: sequence item 0: expected str instance, list found df.loc[97] is [car, chip, indication, posted, flight, post, flight] I want it to print as: car chip, chip indication, indication posted, posted flight, flight post, post flight
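The first attempt only needs the generator materialised with list(), and the second error suggests the row yields a list rather than individual strings; a sketch assuming the tokens can first be pulled out as a flat list of strings:

import nltk

# hypothetical flat token list; if df.loc[97] is a one-cell row, something like
# tokens = df.loc[97][0] (or df.loc[97, 'colname']) may be needed first
tokens = ['car', 'chip', 'indication', 'posted', 'flight', 'post', 'flight']

bigrm = list(nltk.bigrams(tokens))                 # materialise the generator
print(*map(' '.join, bigrm), sep=', ')
# car chip, chip indication, indication posted, posted flight, flight post, post flight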
1
1
0
0
0
0
NLTK's multi-word tokenizer (MWETokenizer) is case sensitive. I want it to work for both upper and lower case. tk.add_mwe(('The', 'questions')) works for the phrase "The questions" but fails for the phrase "the questions". Please give a solution or an alternative.
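A sketch of one workaround: register the expression in lower case and lower-case the tokens before matching; if the original casing must be preserved in the output, another option is to add both cased variants with extra add_mwe calls.

from nltk.tokenize import MWETokenizer

tk = MWETokenizer(separator='_')
tk.add_mwe(('the', 'questions'))                       # registered in lower case

for sent in ["The questions were asked", "the questions were asked"]:
    tokens = tk.tokenize([w.lower() for w in sent.split()])
    print(tokens)                                      # ['the_questions', 'were', 'asked'] both times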
1
1
0
0
0
0
I have a list of properties, name:value style. The name and value could be anything. I would like to generate grammatically correct descriptive text that considers the entire set of name:value pairs. The generator should be smart enough to recognize the type of the property based on the property name and generate appropriate text. For example: name:John age: 26 height:6ft 2inches eyecolor: blue profession: cowboy Expected output - something along the lines of: John is a cowboy, aged 26. His height is 6ft 2 inches and he has blue eyes. ApplicationName: Google maps Developer: Google Usage: Geo navigation available on: desktop, notebook, tablet, mobile phones competition: Apple, Facebook, Microsoft Expected output: Google Maps is a geo navigation app developed by Google. It is available on desktops, notebooks, tablets and mobile phones. Its main competitors are Apple, Yahoo and Facebook maps. How should I approach this problem? Would this be a machine learning problem? Or can I implement this using plain NLP without the need for ML? Any pointers appreciated.
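Plain templating (no ML) can already cover cases like these, as long as the property names can be mapped to a known record type; a minimal sketch with a hypothetical person template:

def describe_person(p):
    # hypothetical template chosen when keys like 'name'/'age'/'profession' are present
    return ("{name} is a {profession}, aged {age}. "
            "Their height is {height} and they have {eyecolor} eyes.".format(**p))

person = {'name': 'John', 'age': 26, 'height': '6ft 2inches',
          'eyecolor': 'blue', 'profession': 'cowboy'}
print(describe_person(person))
# John is a cowboy, aged 26. Their height is 6ft 2inches and they have blue eyes.

For fully open-ended property sets, rule-based NLG toolkits or learned models become relevant, but one template per recognised record type is a common first step.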
1
1
0
0
0
0
I am analyzing call records and trying to use doc2vec, but I can't find the appropriate way to apply it. I have tried converting words to their roots, and later I will try to get rid of stop words (which are also rooted). I want to understand what each conversation is about (which can be a few words or more). Can you suggest a concrete approach or a sample project?
1
1
0
0
0
0
I'm trying to input a sentence and classify it as a 1 or 0. I have data with two columns, the first is the sentence text (e.g. "This is a sentence") and the second column is a classification (e.g. 0 or 1). I have predicted values that I'm trying to interpret, only I can't seem to understand the X axis of my graph and why my Regression line looks like the way it does. import nltk import pandas as pd import numpy as np import matplotlib.pyplot as plt from os import listdir from os.path import isfile, join from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import CountVectorizer from sklearn.metrics import roc_auc_score, mean_squared_error, r2_score from sklearn import linear_model X_train, X_test, Y_train, Y_test = train_test_split(labor_data['text'],labor_data['label_one'],random_state=0) vect = CountVectorizer(ngram_range=(1,1),min_df=0,max_df=.25).fit(X_train) X_train_vectorized = vect.transform(X_train) lr_model = linear_model.LinearRegression() lr_model.fit(X_train_vectorized,Y_train) lr_predictions = lr_model.predict(vect.transform(X_test)) plt.scatter(X_test, Y_test, color='black') plt.plot(X_test, lr_predictions, color='blue', linewidth=3) plt.xticks(()) plt.yticks(()) plt.show() I understand the Y Axis is values, but don't understand the X axis or my regression line. I know my lr_predictions are values between 0 and 1, as are all the values on the plot. But shouldn't the line be a downward sloped straight line? Graph https://imgur.com/a/k9JUKC9
1
1
0
1
0
0
I'm trying to filter my dataset which contains nearly 50K articles. From each article I want to filter out stop words and punctuation. But the process is taking long time. I've already filtered the dataset and it took 6 hours. Now I've got another dataset to filter which contains 300K articles. I'm using python in anaconda environment. PC configuration: 7th Gen. Core i5, 8GB RAM and NVIDIA 940MX GPU. To filter my dataset I've wrote a code which takes each article in dataset, tokenize words and then remove stop words, punctuations and numbers. def sentence_to_wordlist(sentence, filters="!\"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t ?,।!‍.'0123456789০১২৩৪৫৬৭৮৯‘\u200c–“”…‘"): translate_dict = dict((c, ' ') for c in filters) translate_map = str.maketrans(translate_dict) wordlist = sentence.translate(translate_map).split() global c,x; return list(filter(lambda x: x not in stops, wordlist)) Now I want to reduce the time for this process. Is there any way to optimize this?
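A few generic speed-ups that usually help here, sketched with a shortened filter string and an English stop list as stand-ins for the real ones: build the translation table once instead of on every call, keep the stop words in a set, and spread the work over CPU cores (the GPU does not help with this kind of string processing).

from multiprocessing import Pool
from nltk.corpus import stopwords      # stand-in for the real `stops` list

FILTERS = "!\"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t0123456789"   # extend with the Bengali symbols as needed
TRANSLATE_MAP = str.maketrans({c: ' ' for c in FILTERS})     # built once, not per article
STOPS = set(stopwords.words('english'))                      # set lookup is O(1); a list is O(n)

def sentence_to_wordlist(sentence):
    return [w for w in sentence.translate(TRANSLATE_MAP).split() if w not in STOPS]

def clean_all(articles):
    with Pool() as pool:                                     # parallelise across cores
        return pool.map(sentence_to_wordlist, articles)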
1
1
0
0
0
0
I am classifying text with 2 categories. One is imperatives, and the other one is non-imperatives. I prepared my text in the way the Naive Bayes Classifier needs. But now I also need to use SVM. What should I do here? (I need to classify the text and calculate the accuracy, too.) Thank you for reading and trying to answer my questions. all_words_list = [word for (sent, cat) in train for word in sent] all_words = nltk.FreqDist(all_words_list) word_items = all_words.most_common(1000) word_features = [word for (word, count) in word_items] def document_features(document, word_features): document_words = set(document) features = {} for word in word_features: features['contains({})'.format(word)] = (word in document_words) return features featuresets = [(document_features(d, word_features), c) for (d, c) in train] train_set, test_set = featuresets[360:], featuresets[:360] classifier = nltk.NaiveBayesClassifier.train(train_set) print (nltk.classify.accuracy(classifier, test_set))
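One way to reuse exactly the same feature dictionaries with an SVM is NLTK's scikit-learn wrapper; a sketch that assumes the train_set and test_set built above:

import nltk
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.svm import LinearSVC

# same featuresets as for Naive Bayes, just a different backend classifier
svm_classifier = SklearnClassifier(LinearSVC())
svm_classifier.train(train_set)
print(nltk.classify.accuracy(svm_classifier, test_set))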
1
1
0
0
0
0
Given a 3D tensor, say batch x sentence length x embedding dim: a = torch.rand((10, 1000, 96)) and an array (or tensor) of actual lengths for each sentence: lengths = torch.randint(1000, (10,)) which outputs tensor([ 370., 502., 652., 859., 545., 964., 566., 576., 1000., 803.]) How do I fill tensor 'a' with zeros after a certain index along dimension 1 (sentence length), according to tensor 'lengths'? I want something like: a[ : , lengths : , : ] = 0 One way of doing it (slow if the batch size is big enough): for i_batch in range(10): a[ i_batch , lengths[i_batch ] : , : ] = 0
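A vectorised sketch that zeroes every position at or beyond the per-sentence length in one shot, using a broadcasted boolean mask instead of the Python loop:

import torch

a = torch.rand((10, 1000, 96))
lengths = torch.randint(1, 1000, (10,))

# mask[i, j] is True while j < lengths[i]
mask = torch.arange(a.size(1))[None, :] < lengths[:, None]    # shape (10, 1000)
a = a * mask.unsqueeze(-1).type_as(a)                         # broadcast over the embedding dim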
1
1
0
0
0
0
I'm building a classifier for a QA bot, and have a dataset for 8k questions, and 149 different Answers. I got some problems when training my model; the "loss" won't go down as I expected so I am asking for your help... Here is my method: I use word2vec to get a word's vector, then use a GRU-based network to get the vector of sentence The w2v model has been trained with wiki data, and works well on another of my NLP projects. The GRU code is from my senior, I think it works well, too. # Part of the code for getting sentence vector input_size = 400 hidden_dim = 400 num_layers = 1 gru = nn.GRU(input_size, hidden_dim,num_layers,batch_first = True) h0 = torch.rand(num_layers, 7187, hidden_dim) # (num_layers, batch, hidden_dim) # shape of input [dataset_len,max_sentence_len,input_feature] inputSet = torch.tensor(x_train,dtype = torch.float) sentenceVecs, hidden = gru(inputSet,h0) sentenceVecs = sentenceVecs[:,-1, :] and here is my classifier model from argparse import Namespace args = Namespace( dataset_file = 'dataset/waimai_10k_tw.pkl', model_save_path='torchmodel/pytorch_bce.model', # Training hyper parameters batch_size = 100, learning_rate = 0.002, min_learning_rate = 0.002, num_epochs=200, ) class JWP(nn.Module): def __init__(self, n_feature, n_hidden, n_hidden2, n_hidden3, n_output): super(JWP, self).__init__() self.hidden = nn.Linear(n_feature, n_hidden) self.hidden2 = nn.Linear(n_hidden, n_hidden2) self.hidden3 = nn.Linear(n_hidden2, n_hidden3) self.out = nn.Linear(n_hidden3, n_output) def forward(self, x, apply_softmax=False): x = F.relu(self.hidden(x).squeeze()) x = F.relu(self.hidden2(x).squeeze()) x = F.relu(self.hidden3(x).squeeze()) # if(apply_softmax): x = torch.softmax(self.out(x)) else: x = self.out(x) return x training code lr = args.learning_rate min_lr = args.min_learning_rate def adjust_learning_rate(optimizer, epoch): global lr if epoch % 10 == 0 and epoch != 0: lr = lr * 0.65 if(lr < min_lr): lr = min_lr for param_group in optimizer.param_groups: param_group['lr'] = lr if __name__ == "__main__": EPOCH = args.num_epochs net = JWP(400,325,275,225,149) # net = JWP(400,250,149) # net = JWP(400,149) print(net) optimizer = torch.optim.SGD(net.parameters(), lr=lr) loss_func = torch.nn.CrossEntropyLoss() for t in range(EPOCH): adjust_learning_rate(optimizer,t) """ Train phase """ net.train() TrainLoss = 0.0 # Train batch for step,(batchData, batchTarget) in enumerate(trainDataLoader): optimizer.zero_grad() out = net(batchData) loss = loss_func(out,batchTarget) TrainLoss = TrainLoss + loss loss.backward() optimizer.step() TrainLoss = TrainLoss / (step+1) # epoch loss """ Result """ print( "epoch:",t+1 , "train_loss:",round(TrainLoss.item(),3), "LR:",lr ) Is it that my model is too simple or do I simply use the wrong method? The loss is always stuck at around 4.6 and I can't lower it any more... epoch: 2898 train_loss: 4.643 LR: 0.002 epoch: 2899 train_loss: 4.643 LR: 0.002 epoch: 2900 train_loss: 4.643 LR: 0.002 epoch: 2901 train_loss: 4.643 LR: 0.002
1
1
0
0
0
0
I am new to NLP and trying to do some pre-processing steps on my data for a classification task. I have already done most of the cleaning but there still are some special characters within the text that I am now trying to remove. The text is in a Dataframe and is already tokenized and lemmatized, converted to lowercase, with no stopwords and no punctuation. Each text record is represented by a list of words. ['​‘the', 'redwood', 'massacre’', 'five', 'adventurous', 'friend', 'visiting', 'legendary', 'murder', 'site', 'redwood', 'hallmark', 'exciting', 'thrilling', 'camping', 'weekend', 'away', 'soon', 'discover', 'they’re', 'people', 'mysterious', 'location', 'fun', 'camping', 'expedition', 'soon', 'turn', 'nightmare', 'sadistically', 'stalked', 'mysterious', 'unseen', 'killer'] I tried the following code and other solutions as well but I can't understand why the output splits the words into single letters instead of just removing the special character, leaving the words in a compact format. def remove_character(text): new_text=[word.replace('€','') for word in text] return new_text df["Column_name"]=df["Column_name"].apply(lambda x:remove_character(x)) After applying the function this is the output on the same text record: "['[', ""'"", 'â', '', '‹', 'â', '', '˜', 't', 'h', 'e', ""'"", ',', ' ', ""'"", 'r', 'e', 'd', 'w', 'o', 'o', 'd', ""'"", ',', ' ', ""'"", 'm', 'a', 's', 's', 'a', 'c', 'r', 'e', 'â', '', '™', ""'"", ',', ' ', ""'"", 'f', 'i', 'v', 'e', ""'"", ',', ' ', ""'"", 'a', 'd', 'v', 'e', 'n', 't', 'u', 'r', 'o', 'u', 's', ""'"", ',', ' ', ""'"", 'f', 'r', 'i', 'e', 'n', 'd', ""'"", ',', ' ', ""'"", 'v', 'i', 's', 'i', 't', 'i', 'n', 'g', ""'"", ',', ' ', ""'"", 'l', 'e', 'g', 'e', 'n', 'd', 'a', 'r', 'y', ""'"", ',', ' ', ""'"", 'm', 'u', 'r', 'd', 'e', 'r', ""'"", ',', ' ', ""'"", 's', 'i', 't', 'e', ""'"", ',', ' ', ""'"", 'r', 'e', 'd', 'w', 'o', 'o', 'd', ""'"", ',', ' ', ""'"", 'h', 'a', 'l', 'l', 'm', 'a', 'r', 'k', ""'"", ',', ' ', ""'"", 'e', 'x', 'c', 'i', 't', 'i', 'n', 'g', ""'"", ',', ' ', ""'"", 't', 'h', 'r', 'i', 'l', 'l', 'i', 'n', 'g', ""'"", ',', ' ', ""'"", 'c', 'a', 'm', 'p', 'i', 'n', 'g', ""'"", ',', ' ', ""'"", 'w', 'e', 'e', 'k', 'e', 'n', 'd', ""'"", ',', ' ', ""'"", 'a', 'w', 'a', 'y', ""'"", ',', ' ', ""'"", 's', 'o', 'o', 'n', ""'"", ',', ' ', ""'"", 'd', 'i', 's', 'c', 'o', 'v', 'e', 'r', ""'"", ',', ' ', ""'"", 't', 'h', 'e', 'y', 'â', '', '™', 'r', 'e', ""'"", ',', ' ', ""'"", 'p', 'e', 'o', 'p', 'l', 'e', ""'"", ',', ' ', ""'"", 'm', 'y', 's', 't', 'e', 'r', 'i', 'o', 'u', 's', ""'"", ',', ' ', ""'"", 'l', 'o', 'c', 'a', 't', 'i', 'o', 'n', ""'"", ',', ' ', ""'"", 'f', 'u', 'n', ""'"", ',', ' ', ""'"", 'c', 'a', 'm', 'p', 'i', 'n', 'g', ""'"", ',', ' ', ""'"", 'e', 'x', 'p', 'e', 'd', 'i', 't', 'i', 'o', 'n', ""'"", ',', ' ', ""'"", 's', 'o', 'o', 'n', ""'"", ',', ' ', ""'"", 't', 'u', 'r', 'n', ""'"", ',', ' ', ""'"", 'n', 'i', 'g', 'h', 't', 'm', 'a', 'r', 'e', ""'"", ',', ' ', ""'"", 's', 'a', 'd', 'i', 's', 't', 'i', 'c', 'a', 'l', 'l', 'y', ""'"", ',', ' ', ""'"", 's', 't', 'a', 'l', 'k', 'e', 'd', ""'"", ',', ' ', ""'"", 'm', 'y', 's', 't', 'e', 'r', 'i', 'o', 'u', 's', ""'"", ',', ' ', ""'"", 'u', 'n', 's', 'e', 'e', 'n', ""'"", ',', ' ', ""'"", 'k', 'i', 'l', 'l', 'e', 'r', ""'"", ']']"
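The single-letter output is the usual symptom of the column holding the string representation of a list (for example after a round trip through CSV), so iterating over it walks over characters instead of words; a sketch that parses the string back into a real list first (the column name is taken from the snippet above, the tiny frame is hypothetical):

import ast
import pandas as pd

# hypothetical frame whose cell is a stringified token list, as read back from CSV
df = pd.DataFrame({"Column_name": ["['the', 'redwood', 'massacre€']"]})

def remove_character(tokens):
    if isinstance(tokens, str):
        tokens = ast.literal_eval(tokens)      # turn the string back into a real list
    return [word.replace('€', '').replace('\u200b', '') for word in tokens]

df["Column_name"] = df["Column_name"].apply(remove_character)
print(df["Column_name"].iloc[0])               # ['the', 'redwood', 'massacre']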
1
1
0
0
0
0
I've been trying to install Python package pyrouge for a while. Finally by following all these steps here I installed. It was the most helpful answer related to pyrouge I have seen so far. It does not give any error, I can import Rouge155 successfully. However when I try to do the same test as in step 8(with the same code), I got FileNotFoundError. I compared the given output in the answer and my output, and I think it can not find the file 'rouge_conf.xml'. I checked, the file was created. Since I don't have enough rep, I can not ask this as a comment, so I have to open a new question. Do you know what is the problem exactly, and how to fix? (win10, python 3.7). Thanks in advance for any help. This is the error I get(you can compare with the link): 2019-06-18 21:14:14,361 [MainThread ] [INFO ] Writing summaries. 2019-06-18 21:14:14,362 [MainThread ] [INFO ] Processing summaries. Saving system files to C:\Users\admin\AppData\Local\Temp\tmp86sm5x3u\system and model files to C:\Users\admin\AppData\Local\Temp\tmp86sm5x3u\model. 2019-06-18 21:14:14,363 [MainThread ] [INFO ] Processing files in systems. 2019-06-18 21:14:14,363 [MainThread ] [INFO ] Processing text.001.txt. 2019-06-18 21:14:14,365 [MainThread ] [INFO ] Saved processed files to C:\Users\admin\AppData\Local\Temp\tmp86sm5x3u\system. 2019-06-18 21:14:14,366 [MainThread ] [INFO ] Processing files in references. 2019-06-18 21:14:14,367 [MainThread ] [INFO ] Processing text.A.001.txt. 2019-06-18 21:14:14,369 [MainThread ] [INFO ] Saved processed files to C:\Users\admin\AppData\Local\Temp\tmp86sm5x3u\model. 2019-06-18 21:14:14,374 [MainThread ] [INFO ] Written ROUGE configuration to C:\Users\admin\AppData\Local\Temp\tmpirzhwufa\rouge_conf.xml 2019-06-18 21:14:14,374 [MainThread ] [INFO ] Running ROUGE with command perl D:\study\ROUGE-1.5.5\ROUGE-1.5.5.pl -e D:\study\ROUGE-1.5.5\data -c 95 -2 -1 -U -r 1000 -n 4 -w 1.2 -a -m C:\Users\admin\AppData\Local\Temp\tmpirzhwufa\rouge_conf.xml Traceback (most recent call last): File "<ipython-input-21-732ec1e402fb>", line 1, in <module> runfile('C:/Users/admin/Desktop/somefolder/untitled0.py', wdir='C:/Users/admin/Desktop/somefolder') File "C:\Users\admin\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "C:\Users\admin\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/admin/Desktop/somefolder/untitled0.py", line 16, in <module> output = r.convert_and_evaluate() File "C:\Users\admin\Anaconda3\lib\site-packages\pyrouge-0.1.3-py3.7.egg\pyrouge\Rouge155.py", line 368, in convert_and_evaluate rouge_output = self.evaluate(system_id, rouge_args) File "C:\Users\admin\Anaconda3\lib\site-packages\pyrouge-0.1.3-py3.7.egg\pyrouge\Rouge155.py", line 343, in evaluate rouge_output = check_output(command, env=env).decode("UTF-8") File "C:\Users\admin\Anaconda3\lib\subprocess.py", line 395, in check_output **kwargs).stdout File "C:\Users\admin\Anaconda3\lib\subprocess.py", line 472, in run with Popen(*popenargs, **kwargs) as process: File "C:\Users\admin\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 143, in __init__ super(SubprocessPopen, self).__init__(*args, **kwargs) File "C:\Users\admin\Anaconda3\lib\subprocess.py", line 775, in __init__ restore_signals, start_new_session) File "C:\Users\admin\Anaconda3\lib\subprocess.py", line 1178, in _execute_child startupinfo) FileNotFoundError: 
[WinError 2] The system cannot find the file specified Edit: Today, I ran the same code again, weirdly the error changed to CalledProcessError. Which is the same error written here. Here is the output: 2019-06-19 16:00:15,115 [MainThread ] [INFO ] Writing summaries. ... The same as the first one... 2019-06-19 16:00:15,129 [MainThread ] [INFO ] Running ROUGE with command perl D:\study\ROUGE-1.5.5\ROUGE-1.5.5.pl -e D:\study\ROUGE-1.5.5\data -c 95 -2 -1 -U -r 1000 -n 4 -w 1.2 -a -m C:\Users\admin\AppData\Local\Temp\tmpgyd8zauc\rouge_conf.xml Traceback (most recent call last): File "<ipython-input-2-732ec1e402fb>", line 1, in <module> runfile('C:/Users/admin/Desktop/somefolder/untitled0.py', wdir='C:/Users/admin/Desktop/somefolder') File "C:\Users\admin\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "C:\Users\admin\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/admin/Desktop/somefolder/untitled0.py", line 16, in <module> output = r.convert_and_evaluate() File "C:\Users\admin\Anaconda3\lib\site-packages\pyrouge-0.1.3-py3.7.egg\pyrouge\Rouge155.py", line 368, in convert_and_evaluate rouge_output = self.evaluate(system_id, rouge_args) File "C:\Users\admin\Anaconda3\lib\site-packages\pyrouge-0.1.3-py3.7.egg\pyrouge\Rouge155.py", line 343, in evaluate rouge_output = check_output(command, env=env).decode("UTF-8") File "C:\Users\admin\Anaconda3\lib\subprocess.py", line 395, in check_output **kwargs).stdout File "C:\Users\admin\Anaconda3\lib\subprocess.py", line 487, in run output=stdout, stderr=stderr) CalledProcessError: Command '['perl ', 'D:\\study\\ROUGE-1.5.5\\ROUGE-1.5.5.pl', '-e', 'D:\\study\\ROUGE-1.5.5\\data', '-c', '95', '-2', '-1', '-U', '-r', '1000', '-n', '4', '-w', '1.2', '-a', '-m', 'C:\\Users\\admin\\AppData\\Local\\Temp\\tmpgyd8zauc\\rouge_conf.xml']' returned non-zero exit status 255.
1
1
0
0
0
0
I am trying to train an NLP model on one set, save the vocab and the model, then apply it to a separate validation set. The code is running, but how can I be sure it is working as I expect? In other words, I have saved a vocab and model from the training set, then I created the TfidfVectorizer with the saved vocabulary, and finally I use "fit_transform" on the new, validation notes. Is this applying only the trained vocab and model? Is it not "learning" anything new from the validation set? Training, then load the vocab and model and apply to the validation set: train_vector = tfidf_vectorizer.fit_transform(training_notes) pickle.dump(tfidf_vectorizer.vocabulary_, open('./vocab/' + '_vocab.pkl', 'wb')) X_train = train_vector.toarray() y_train = np.array(train_data['ref_std']) model.fit(X_train, y_train) dump(model, './model/' + '.joblib') train_prediction = model.predict(X_train) vocab = pickle.load(open('./vocab/' + '_vocab.pkl', 'rb')) tfidf_vectorizer = TfidfVectorizer(vocabulary = vocab) valid_vector = tfidf_vectorizer.fit_transform(validation_notes) X_valid = valid_vector.toarray() y_valid = np.array(validation_data['ref_std']) model = load('./model/' + '.joblib') valid_prediction = model.predict(X_valid)
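A sketch of the usual pattern, reusing the variable names above: pickling only vocabulary_ and then calling fit_transform on the validation notes does relearn the IDF weights from them; persisting the whole fitted vectorizer and calling transform avoids learning anything from the validation set.

import pickle
from sklearn.feature_extraction.text import TfidfVectorizer

# fit on the training notes only, then persist the *fitted* vectorizer
tfidf_vectorizer = TfidfVectorizer()
X_train = tfidf_vectorizer.fit_transform(training_notes)
pickle.dump(tfidf_vectorizer, open('./vocab/tfidf_vectorizer.pkl', 'wb'))

# later: reload it and only transform() the validation notes,
# so neither the vocabulary nor the IDF weights are re-learned from them
tfidf_vectorizer = pickle.load(open('./vocab/tfidf_vectorizer.pkl', 'rb'))
X_valid = tfidf_vectorizer.transform(validation_notes)
valid_prediction = model.predict(X_valid.toarray())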
1
1
0
0
0
0
I would like to classify comments based on NLP algorithm (tf-idf). I managed to classify these clusters but I want to visualize them graphically (histogram, scatter plot...) import collections from nltk import word_tokenize from nltk.corpus import stopwords from nltk.stem import PorterStemmer from sklearn.cluster import KMeans from sklearn.feature_extraction.text import TfidfVectorizer from pprint import pprint import matplotlib.pyplot as plt import pandas as pd import nltk import pandas as pd import string data = pd.read_excel (r'C:\Users\cra\One\intern\Book2.xlsx') def word_tokenizer(text): #tokenizes and stems the text tokens = word_tokenize(text) stemmer = PorterStemmer() tokens = [stemmer.stem(t) for t in tokens if t not in stopwords.words('english')] return tokens #tfidf convert text data to vectors def cluster_sentences(sentences, nb_of_clusters=5): tfidf_vectorizer = TfidfVectorizer(tokenizer=word_tokenizer, stop_words=stopwords.words('english'),#enlever stopwords max_df=0.95,min_df=0.05, lowercase=True) tfidf_matrix = tfidf_vectorizer.fit_transform(sentences) kmeans = KMeans(n_clusters=nb_of_clusters) kmeans.fit(tfidf_matrix) clusters = collections.defaultdict(list) for i, label in enumerate(kmeans.labels_): clusters[label].append(i) return dict(clusters) if __name__ == "__main__": sentences = data.Comment nclusters= 20 clusters = cluster_sentences(sentences, nclusters) #dictionary of #cluster and the index of the comment in the dataframe for cluster in range(nclusters): print ("cluster ",cluster,":") for i,sentence in enumerate(clusters[cluster]): print ("\tsentence ",i,": ",sentences[sentence]) result that I got for example : cluster 6 : sentence 0 : 26 RIH DP std sentence 1 : 32 RIH DP std sentence 2 : 68 RIH Liner with DP std in hole sentence 3 : 105 RIH DP std sentence 4 : 118 RIH std no of DP in hole sentence 5 : 154 RIH DP std Could you help me please! thank you
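For a quick visual check, one common sketch is to project the tf-idf matrix to two dimensions and colour the points by cluster; this assumes cluster_sentences is adjusted to also return tfidf_matrix and kmeans (or that the two are recomputed outside the function):

from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

coords = PCA(n_components=2).fit_transform(tfidf_matrix.toarray())   # 2-D just for plotting
plt.scatter(coords[:, 0], coords[:, 1], c=kmeans.labels_, cmap='tab20', s=10)
plt.title('Comments projected to 2-D, coloured by k-means cluster')
plt.show()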
1
1
0
0
0
0
I am trying to use a pre-trained BERT model for fine tuning with SST2 data processor. But when I give the checkpoint of the pre-trained model, it is showing that "Key output_bias not found in checkpoint". I thought it might be due to errors in the pre-trained BERT model checkpoint. So I did the pre-training again. But, I am still facing the same issue. TASK = 'STS' #@param {type:\"string\"} TASK_DATA_DIR = 'glue_data/STS-B/'# + TASK output_dir = 'trained_model/observation' tf.gfile.MakeDirs(output_dir) BERT_MODEL = path + 'multi_cased_L-12_H-768_A-12/' VOCAB_FILE = os.path.join(BERT_MODEL, 'vocab.txt') CONFIG_FILE = os.path.join(BERT_MODEL, 'bert_config.json') INIT_CHECKPOINT = os.path.join(BERT_MODEL, 'bert_model.ckpt') DO_LOWER_CASE = BERT_MODEL.startswith('cased') tokenizer = tokenization.FullTokenizer(vocab_file=VOCAB_FILE, do_lower_case=DO_LOWER_CASE) TRAIN_BATCH_SIZE = 1 EVAL_BATCH_SIZE = 8 PREDICT_BATCH_SIZE = 8 LEARNING_RATE = 2e-5 NUM_TRAIN_EPOCHS = 3.0 MAX_SEQ_LENGTH = 128 processors = { "sts": run_classifier.StsProcessor, } processor = processors[TASK.lower()]() label_list = processor.get_labels() The error is: NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error: Key output_bias not found in checkpoint [[node save/RestoreV2 (defined at /home/subraas3/.conda/envs/tensorflow_13/lib/python3.7/ site-packages/tensorflow_estimator/python/estimator/estimator.py:1403) ]] [[node save/RestoreV2 (defined at /home/subraas3/.conda/envs/tensorflow_13/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py:1403) ]]
1
1
0
0
0
0
I am practising NLP and checking using the below function what are the most frequent words per category and then observe how some sentences would be classified. The results are surprisingly wrong (Do you have to suggest another way of doing this helpful part of finding most frequent words per category?): #The function def show_top10(classifier, vectorizer, categories): ... feature_names = np.asarray(vectorizer.get_feature_names()) ... for i, category in enumerate(categories): ... top10 = np.argsort(classifier.coef_[i])[-10:] ... print("%s: %s" % (category, " ".join(feature_names[top10]))) #Using the function on the data show_top10(clf, vectorizer, newsgroups_train.target_names) #The results seem to be logical #the most frequent words by category are these: rec.autos: think know engine don new good just like cars car rec.motorcycles: riding helmet don know ride bikes dod like just bike sci.space: don earth think orbit launch moon just like nasa space #Now, testing these sentences, we see that they are classified wrong and not based #on the above most frequent words texts = ["wheelie", "stars are shining", "galaxy"] text_features = vectorizer.transform(texts) predictions = clf.predict(text_features) for text, predicted in zip(texts, predictions): print('"{}"'.format(text)) print(" - Predicted as: '{}'".format(newsgroup_train.target_names[predicted])) print("") and the results are: "wheelie" - Predicted as: 'rec.motorcycles' "stars are shining" - Predicted as: 'sci.space' "galaxy" - Predicted as: 'rec.motorcycles' The word galaxy is mentioned many times in the space texts. Why it can't classify it correctly? The code of the classification can be seen below if needed. from sklearn.datasets import fetch_20newsgroups from sklearn.naive_bayes import MultinomialNB from sklearn.feature_extraction.text import TfidfVectorizer from sklearn import metrics cats = ['sci.space','rec.autos','rec.motorcycles'] newsgroups_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'), categories = cats) newsgroups_test = fetch_20newsgroups(subset='test', remove=('headers', 'footers', 'quotes'), categories = cats) vectorizer = TfidfVectorizer(max_features = 1000,max_df = 0.5, min_df = 5, stop_words='english') vectors = vectorizer.fit_transform(newsgroups_train.data) vectors_test = vectorizer.transform(newsgroups_test.data) clf = MultinomialNB(alpha=.01) clf.fit(vectors, newsgroups_train.target) vectors_test = vectorizer.transform(newsgroups_test.data) pred = clf.predict(vectors_test) Maybe is due to the fact that the accuracy score is 0.77 which renders some to be misclassified. How do you suggest to make the model to perform better? Actually SVM would be what I would like to use but gives worse results and gives as more frequent words just "00" in every category.
1
1
0
0
0
0
I am trying to extract text between two iterators. I have tried using the span() function on them to find the start and end spans. How do I proceed further to extract the text between these spans? start_matches = start_pattern.finditer(filter_lines) end_matches = end_pattern.finditer(filter_lines) for s_match in start_matches : s_cargo=s_match.span() for e_match in end_matches : e_cargo=e_match.span() Using the spans 1) s_cargo and 2) e_cargo, I want to find the text within the string filter_lines. I am relatively new to Python; any kind of help is much appreciated.
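Since span() gives (start, end) character offsets into filter_lines, plain string slicing between the end of a start match and the start of an end match pulls out the text; a sketch reusing the pattern and variable names above:

for s_match in start_pattern.finditer(filter_lines):
    s_start, s_end = s_match.span()
    for e_match in end_pattern.finditer(filter_lines):
        e_start, e_end = e_match.span()
        if e_start >= s_end:                              # only end markers after this start
            between = filter_lines[s_end:e_start]         # text strictly between the two matches
            print(between)
            break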
1
1
0
0
0
0
Edit due to off-topic: I want to use regex in spaCy to find any combination of (Accrued or accrued or Annual or annual) leave with this code: from spacy.matcher import Matcher nlp = spacy.load('en_core_web_sm') matcher = Matcher(nlp.vocab) # Add the pattern to the matcher matcher.add('LEAVE', None, [{'TEXT': {"REGEX": "(Accrued|accrued|Annual|annual)"}}, {'LOWER': 'leave'}]) # Call the matcher on the doc doc= nlp('Annual leave shall be paid at the time . An employee is to receive their annual leave payment in the normal pay cycle. Where an employee has accrued annual leave in') matches = matcher(doc) # Iterate over the matches for match_id, start, end in matches: # Get the matched span matched_span = doc[start:end] print('- ', matched_span.sent.text) # returned: - Annual leave shall be paid at the time . - An employee is to receive their annual leave payment in the normal pay cycle. - Where an employee has accrued annual leave in However, I think my regex is not abstract/generalized enough to be applied to other situations. I would very much appreciate your advice on how to improve my regex expression with spaCy.
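One way to make the pattern less dependent on explicit casing is to match on the LOWER attribute, optionally with an IN list of accepted variants; a sketch using the same Matcher setup as above:

pattern = [
    {"LOWER": {"IN": ["accrued", "annual"]}},   # LOWER makes the match case-insensitive
    {"LOWER": "leave"},
]
matcher.add('LEAVE', None, pattern)

The IN list can be extended without having to enumerate every capitalisation by hand.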
1
1
0
0
0
0
I have a fairly simple NLTK and sklearn classifier (I'm a complete noob at this). I do the usual imports import pandas as pd import matplotlib.pyplot as plt from sklearn.feature_extraction.text import CountVectorizer from nltk.tokenize import RegexpTokenizer from sklearn.model_selection import train_test_split from sklearn.naive_bayes import MultinomialNB from sklearn import metrics from sklearn.feature_extraction.text import TfidfVectorizer I load the data (I already cleaned it). It is a very simple dataframe with two columns. The first is 'post_clean' which contains the cleaned text, the second is 'uk' which is either True or False data = pd.read_pickle('us_uk_posts.pkl') Then I Vectorize with tfidf and split the dataset, followed by creating the model tf = TfidfVectorizer() text_tf = tf.fit_transform(data['post_clean']) X_train, X_test, y_train, y_test = train_test_split(text_tf, data['uk'], test_size=0.3, random_state=123) clf = MultinomialNB().fit(X_train, y_train) predicted = clf.predict(X_test) print("MultinomialNB Accuracy:" , metrics.accuracy_score(y_test,predicted)) Apparently, unless I'm completely missing something here, I have Accuracy of 93% My two questions are: 1) How do I now use this model to actually classify some items that don't have a known UK value? 2) How do I test this model using a completely separate test set (that I haven't split)? I have tried new_data = pd.read_pickle('new_posts.pkl') Where the new_posts data is in the same format new_text_tf = tf.fit_transform(new_data['post_clean']) predicted = clf.predict(new_X_train) predicted and new_text_tf = tf.fit_transform(new_data['post_clean']) new_X_train, new_X_test, new_y_train, new_y_test = train_test_split(new_text_tf, new_data['uk'], test_size=1) predicted = clf.predict(new_text_tf) predicted but both return "ValueError: dimension mismatch"
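The dimension mismatch comes from calling fit_transform on the new posts, which builds a fresh vocabulary with a different number of columns than the one the classifier was trained on; a sketch of the intended usage, reusing the tf, clf, metrics and new_data objects from above (the example posts are hypothetical):

# 1) classify new, unlabeled posts: only transform with the already-fitted vectorizer
new_posts = ["some new post about the uk", "another brand new post"]
print(clf.predict(tf.transform(new_posts)))

# 2) evaluate on a completely separate labeled set, again without re-fitting
new_text_tf = tf.transform(new_data['post_clean'])
predicted = clf.predict(new_text_tf)
print(metrics.accuracy_score(new_data['uk'], predicted))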
1
1
0
0
0
0
For training new custom entities we can train a model using the steps mentioned here: https://spacy.io/usage/training#ner But I want to know how to decide no of iterations, drop and batch size to overfit or underfit the model? One example of loss is: Starting training.... Losses: {'ner': 3875.2103796127717} Losses: {'ner': 3091.347521599567} Losses: {'ner': 2811.074334355512} Losses: {'ner': 2235.2944185569686} Losses: {'ner': 2015.7072019365773} Losses: {'ner': 1647.0052678292357} Losses: {'ner': 1746.1746172501762} Losses: {'ner': 1350.2094295662862} Losses: {'ner': 1302.3405612718204} Losses: {'ner': 1322.3590930188122} Losses: {'ner': 1070.3760899125737} Losses: {'ner': 990.9221824283309} Losses: {'ner': 961.2431416302175} Losses: {'ner': 885.3743390914278} Losses: {'ner': 838.3100930655886} Losses: {'ner': 733.5780730531789} Losses: {'ner': 915.0732067395388} Losses: {'ner': 734.7598118888878} Losses: {'ner': 645.5447305966479} Losses: {'ner': 615.6987186405088} Losses: {'ner': 624.112212173154} Losses: {'ner': 590.4118676242763} Losses: {'ner': 411.8125225993247} Losses: {'ner': 482.4468110898493} Losses: {'ner': 479.08534166022685} Training completed... In the above output, the loss is decreasing and increasing. So at what point should I stop training? Basically how to decide all the parameters for training?
1
1
0
0
0
0
I want to know if there is an elegant way to get the index of an Entity with respect to a Sentence. I know I can get the index of an Entity in a string using ent.start_char and ent.end_char, but that value is with respect to the entire string. import spacy nlp = spacy.load("en_core_web_sm") doc = nlp(u"Apple is looking at buying U.K. startup for $1 billion. Apple just launched a new Credit Card.") for ent in doc.ents: print(ent.text, ent.start_char, ent.end_char, ent.label_) I want the Entity Apple in both the sentences to point to start and end indexes 0 and 5 respectively. How can I do that?
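A sketch using the sentence span each entity belongs to: subtracting the sentence's own start offset gives character positions relative to that sentence (this relies on sentence boundaries, which the parser in en_core_web_sm provides).

for ent in doc.ents:
    sent = ent.sent                                  # sentence containing the entity
    start = ent.start_char - sent.start_char         # offset within that sentence
    end = ent.end_char - sent.start_char
    print(ent.text, start, end, ent.label_)          # both 'Apple' spans print 0 5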
1
1
0
0
0
0
Python sklearn CountVectorizer has an "analyzer" parameter which has a "char_wb" option. According to the definition, "Option ‘char_wb’ creates character n-grams only from text inside word boundaries; n-grams at the edges of words are padded with space.". My question here is, how does CountVectorizer identify a "word" from a string? More specifically, are "words" simply space-separated strings from a sentence, or are they identified by more complex techniques like word_tokenize from nltk? The reason I ask this is that I am analyzing social media data which has a whole lot of @mentions and #hashtags. Now, nltk's word_tokenize breaks up a "@mention" into ["@", "mention], and a "#hashtag" into ["#", "hashtag"]. If I feed these into CountVectorizer with ngram_range > 1, the "#" and "@" will never be captured as features. Moreover, I want character n-grams (with char_wb) to capture "@m" and "#h" as features, which won't ever happen if CountVectorizer breaks up @mentions and #hashtags into ["@","mentions"] and ["#","hashtags"]. What do I do?
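As far as I can tell, char_wb simply whitespace-splits the preprocessed text, so '@mention' and '#hashtag' stay intact for character n-grams; for word n-grams the boundaries come from token_pattern (which is only used when no custom tokenizer is passed), and it can be widened to keep the symbols. A small sketch illustrating both:

from sklearn.feature_extraction.text import CountVectorizer

texts = ["loving the new phone @support #happy"]

# char_wb: words are whitespace-separated chunks, so '@support' is padded and n-grammed whole
char_vec = CountVectorizer(analyzer='char_wb', ngram_range=(2, 2)).fit(texts)
print([f for f in char_vec.get_feature_names() if '@' in f or '#' in f])   # e.g. ' @', '@s', ' #', '#h'

# word analyzer: widen token_pattern so a leading '@' or '#' stays part of the token
word_vec = CountVectorizer(token_pattern=r'(?u)[@#]?\b\w\w+\b', ngram_range=(1, 2)).fit(texts)
print(word_vec.get_feature_names())                                        # includes '@support', '#happy'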
1
1
0
0
0
0
this question is about classification of texts based on common words, I don't know if I am approaching the problem right I have an excel with texts in the "Description" column and a unique ID in the "ID" column, I want to iterate through Descriptions and compare them based on percentage or frequency of common words in the text I would like to classify descriptions and give them another ID. Please see example below .... #importing pandas as pd import pandas as pd # creating a dataframe df = pd.DataFrame({'ID': ['12 ', '54', '88','9'], 'Description': ['Staphylococcus aureus is a Gram-positive, round-shaped bacterium that is a member of the Firmicutes', 'Streptococcus pneumoniae, or pneumococcus, is a Gram-positive, alpha-hemolytic or beta-hemolytic', 'Dicyemida, also known as Rhombozoa, is a phylum of tiny parasites ','A television set or television receiver, more commonly called a television, TV, TV set, or telly']}) ID Description 12 Staphylococcus aureus is a Gram-positive, round-shaped bacterium that is a member of the Firmicutes 54 Streptococcus pneumoniae, or pneumococcus, is a Gram-positive, round-shaped bacterium that is a member beta-hemolytic 88 Dicyemida, also known as Rhombozoa, is a phylum of tiny parasites 9 A television set or television receiver, more commonly called a television, TV, TV set, or telly for example 12 and 54 Descriptions have more than 75% common words they will have same ID. output would be like : ID Description 12 Staphylococcus aureus is a Gram-positive, round-shaped bacterium that is a member of the Firmicutes 12 Streptococcus pneumoniae, or pneumococcus, is a Gram-positive, round- shaped bacterium that is a member beta-hemolytic 88 Dicyemida, also known as Rhombozoa, is a phylum of tiny parasites 9 A television set or television receiver, more commonly called a television, TV, TV set, or telly Here what I tried,I worked with two different dataframes Risk1 & Risk2, I'm not iterating throught rows which I need to do too : import codecs import re import copy import collections import pandas as pd import numpy as np import nltk from nltk.stem import PorterStemmer from nltk.tokenize import WordPunctTokenizer import matplotlib.pyplot as plt %matplotlib inline nltk.download('stopwords') from nltk.corpus import stopwords # creating a dataframe 1 df = pd.DataFrame({'ID': ['12 '], 'Description': ['Staphylococcus aureus is a Gram-positive, round-shaped bacterium that is a member of the Firmicutes']}) # creating a dataframe 2 df = pd.DataFrame({'ID': ['54'], 'Description': ['Streptococcus pneumoniae, or pneumococcus, is a Gram-positive, alpha-hemolytic or beta-hemolytic']}) esw = stopwords.words('english') esw.append('would') word_pattern = re.compile("^\w+$") def get_text_counter(text): tokens = WordPunctTokenizer().tokenize(PorterStemmer().stem(text)) tokens = list(map(lambda x: x.lower(), tokens)) tokens = [token for token in tokens if re.match(word_pattern, token) and token not in esw] return collections.Counter(tokens), len(tokens) def make_df(counter, size): abs_freq = np.array([el[1] for el in counter]) rel_freq = abs_freq / size index = [el[0] for el in counter] df = pd.DataFrame(data = np.array([abs_freq, rel_freq]).T, index=index, columns=['Absolute Frequency', 'Relative Frequency']) df.index.name = 'Most_Common_Words' return df Risk1_counter, Risk1_size = get_text_counter(Risk1) make_df(Risk1_counter.most_common(500), Risk1_size) Risk2_counter, Risk2_size = get_text_counter(Risk2) make_df(Risk2_counter.most_common(500), Risk2_size) all_counter = 
Risk1_counter + Risk2_counter all_df = make_df(Risk2_counter.most_common(1000), 1) most_common_words = all_df.index.values df_data = [] for word in most_common_words: Risk1_c = Risk1_counter.get(word, 0) / Risk1_size Risk2_c = Risk2_counter.get(word, 0) / Risk2_size d = abs(Risk1_c - Risk2_c) df_data.append([Risk1_c, Risk2_c, d]) dist_df= pd.DataFrame(data = df_data, index=most_common_words, columns=['Risk1 Relative Freq', 'Risk2 Hight Relative Freq','Relative Freq Difference']) dist_df.index.name = 'Most Common Words' dist_df.sort_values('Relative Freq Difference', ascending = False, inplace=True) dist_df.head(500)
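A sketch of a simpler route than hand-rolled frequency tables: vectorise the descriptions with tf-idf, compute pairwise cosine similarity, and give each row the ID of the first earlier row it is similar enough to. The 0.5 threshold below is only a stand-in for the "75% common words" rule and would need tuning on the real data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import pandas as pd

df = pd.DataFrame({'ID': ['12', '54', '88', '9'],
                   'Description': ['Staphylococcus aureus is a Gram-positive round-shaped bacterium that is a member of the Firmicutes',
                                   'Streptococcus pneumoniae is a Gram-positive round-shaped bacterium that is a member beta-hemolytic',
                                   'Dicyemida also known as Rhombozoa is a phylum of tiny parasites',
                                   'A television set or television receiver more commonly called a television TV TV set or telly']})

sim = cosine_similarity(TfidfVectorizer(stop_words='english').fit_transform(df['Description']))

threshold = 0.5                      # stands in for "75% common words"
new_ids = df['ID'].tolist()
for i in range(len(df)):
    for j in range(i):
        if sim[i, j] >= threshold:   # reuse the ID of the first sufficiently similar earlier row
            new_ids[i] = new_ids[j]
            break
df['ID'] = new_ids
print(df[['ID', 'Description']])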
1
1
0
0
0
0
I have this text: data = ['Hi, this is XYZ and XYZABC is $$running'] I am using the following TfidfVectorizer: vectorizer = TfidfVectorizer( stop_words='english', use_idf=False, norm=None, min_df=1, tokenizer = tokenize, ngram_range=(1, 1), token_pattern=u'\w{4,}') I am fitting the data as follows: tdm = vectorizer.fit_transform(data) Now, when I print vectorizer.get_feature_names() I get this: [u'hi', u'run', u'thi', u'xyz', u'xyzabc'] My question is why am I getting 'hi' and 'xyz' even though I mentioned that I want it to capture only words that have at least 4 characters? - token_pattern=u'\w{4,}'
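As far as I understand scikit-learn's behaviour, token_pattern is silently ignored whenever a custom tokenizer is supplied, so the custom tokenize function (which apparently also stems, hence 'thi' and 'run') decides the tokens. A sketch of the two usual ways around it:

from sklearn.feature_extraction.text import TfidfVectorizer

data = ['Hi, this is XYZ and XYZABC is $$running']

# option 1: drop the custom tokenizer so token_pattern applies again
vectorizer = TfidfVectorizer(stop_words='english', use_idf=False, norm=None,
                             min_df=1, ngram_range=(1, 1),
                             token_pattern=r'(?u)\b\w{4,}\b')
vectorizer.fit_transform(data)
print(vectorizer.get_feature_names())     # only tokens with at least 4 characters

# option 2: keep the custom tokenizer and enforce the length inside it instead
# def tokenize(text):
#     return [t for t in my_existing_tokenize(text) if len(t) >= 4]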
1
1
0
1
0
0
The final call of pyterresect is not returning an string instead its printing values of every pixel of that image only. import numpy as np import cv2 import imutils from PIL import Image from pytesseract import image_to_string count = 0 for c in cnts: peri = cv2.arcLength(c, True) approx = cv2.approxPolyDP(c, 0.02 * peri, True) if len(approx) == 4: # Select the contour with 4 corners NumberPlateCnt = approx #This is our approx Number Plate Contour pl=NumberPlateCnt print(NumberPlateCnt) if(pl[0][0][1]+10>pl[2][0][1] or pl[0][0][0]+40>pl[2][0][0]): continue filter_img = image[pl[0][0][1]:pl[2][0][1],pl[0][0][0]:pl[2][0][0]] print("Number Plate Detected") cv2_imshow(filter_img) Number=pytesseract.image_to_string(filter_img,lang='eng') print("Number is :",Number) cv2.waitKey(0) cv2.drawContours(image, [NumberPlateCnt], -1, (0, 255, 0), 3) print("Final Image With Number Plate Detected") cv2_imshow(image) cv2.waitKey(0) #Wait for user input before closing the images displayed the number i am getting here should be some string but its printing like some sort of matrix as we get when we print an image using print.

1
1
0
1
0
0
How can I stop word_tokenize from splitting strings like "pass_word", "https://www.gmail.com" and "tempemail@mail.com"? The quotes should prevent it, but they don't. I have tried with different regex options. from nltk import word_tokenize s = 'open "https://www.gmail.com" url. Enter "tempemail@mail.com" in email. Enter "pass_word" in password.' for phrase in re.findall('"([^"]*)"', s): s = s.replace('"{}"'.format(phrase), phrase.replace(' ', '*')) tokens = word_tokenize(s) print(tokens) Actual response: ['open', 'https', ':', '//www.gmail.com', 'url', '.', 'Enter', 'tempemail', '@', 'mail.com', 'in', 'email', '.', 'Enter', 'pass_word', 'in', 'password', '.'] Expected response: ['open', 'https://www.gmail.com', 'url', '.', 'Enter', 'tempemail@mail.com', 'in', 'email', '.', 'Enter', 'pass_word', 'in', 'password', '.']
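The replace(' ', '*') step does nothing here because the quoted strings contain no spaces, and word_tokenize still splits on the URL/email punctuation; one sketch is to mask the quoted substrings with a placeholder token (QUOTEDTOKEN below is an arbitrary name) and restore them afterwards:

import re
from nltk import word_tokenize

s = 'open "https://www.gmail.com" url. Enter "tempemail@mail.com" in email. Enter "pass_word" in password.'

protected = re.findall(r'"([^"]*)"', s)                 # the strings to keep intact
masked = re.sub(r'"[^"]*"', ' QUOTEDTOKEN ', s)         # hide them from the tokenizer

restore = iter(protected)
tokens = [next(restore) if tok == 'QUOTEDTOKEN' else tok for tok in word_tokenize(masked)]
print(tokens)
# ['open', 'https://www.gmail.com', 'url', '.', 'Enter', 'tempemail@mail.com', 'in', 'email',
#  '.', 'Enter', 'pass_word', 'in', 'password', '.']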
1
1
0
0
0
0
Can't figure out why is this problem appearing. from mosestokenizer import MosesDetokenizer with MosesDetokenizer('en') as detokenize: print(detokenize(["hi", 'my', 'name', 'is', 'artem'])) This is what I get: stdbuf was not found; communication with perl may hang due to stdio buffering. Traceback (most recent call last): File "C:\Users\ArtemLaptiev\Documents\GitHub\temp\foo.py", line 3, in <module> with MosesDetokenizer('en') as detokenize: File "C:\ProgramFiles\Anaconda\lib\site-packages\mosestokenizer\detokenizer.py", line 47, in __init__ super().__init__(argv) File "C:\ProgramFiles\Anaconda\lib\site-packages\toolwrapper.py", line 52, in __init__ self.start() File "C:\ProgramFiles\Anaconda\lib\site-packages\toolwrapper.py", line 92, in start cwd=self.cwd File "C:\ProgramFiles\Anaconda\lib\subprocess.py", line 709, in __init__ restore_signals, start_new_session) File "C:\ProgramFiles\Anaconda\lib\subprocess.py", line 997, in _execute_child startupinfo) FileNotFoundError: [WinError 2] The system cannot find the file specified Thank you for help!
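mosestokenizer shells out to an external Perl process (hence the stdbuf warning and the missing-file error on Windows); if installing Perl is not an option, one workaround is to switch to the pure-Python sacremoses port, which exposes a similar detokenizer:

from sacremoses import MosesDetokenizer

detok = MosesDetokenizer(lang='en')
print(detok.detokenize(['hi', 'my', 'name', 'is', 'artem']))   # 'hi my name is artem'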
1
1
0
0
0
0
I'm trying to build a predictive model (random forest, sgd, etc.) using scikit-learn and it seems like every model only allows you to fit text data such as classifier.fit(X,Y) ...where Y is the target and X is a text feature vector (count_vec -> tf_idf). Is there any way to have a model which in addition to the text feature matrix also contains several categorical variables? Can I simply append them as new columns on the right side of X?
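Yes: the usual sketch is to one-hot encode the categorical columns and horizontally stack them next to the sparse text matrix, so the model sees both kinds of features at once (the column names below are hypothetical):

import pandas as pd
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier

df = pd.DataFrame({'text': ['cheap pills now', 'meeting at noon', 'win a big prize'],
                   'source': ['ad', 'mail', 'ad'],
                   'y': [1, 0, 1]})

text_features = TfidfVectorizer().fit_transform(df['text'])      # sparse matrix from the text
cat_features = csr_matrix(pd.get_dummies(df['source']).values)   # one-hot encoded categoricals
X = hstack([text_features, cat_features])                        # appended as extra columns

clf = SGDClassifier(max_iter=1000).fit(X, df['y'])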
1
1
0
0
0
0
I have two data frames (df1 and df2), each with the columns "Words" and "Frequency". For each word in df1, I want to see if it exists in df2 and then return the "Frequency" value so that it can be appended to include the new instances from df1. And if the word does not exist in df2, then add it. I have found ways of appending dataframes, but I haven't been able to create a functional loop to do what I have described. I was trying to use Pandas and df.query but had no luck. In the example below I want it to add the words "This", "is", "test", and "dataframe" along with their frequency, and I want to append "a" in df2 to be the sum of both frequency values (4 + 222 = 226) [in] df1 = pd.DataFrame({'Words': ["this","is","a","test","dataframe"], 'Frequency': [20,18,4,12,6]}) [out] Words Frequency 0 this 20 1 is 18 2 a 4 3 test 12 4 dataframe 6 [in] df2 = pd.read_csv("Words.csv") [out] Word Frequency 0 the 562 1 to 246 2 a 222 3 of 204 4 and 200
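Rather than looping, one sketch is to stack the two frames and let groupby sum the frequencies of words that appear in both (rename df2's column first if it is really called 'Word'):

import pandas as pd

df1 = pd.DataFrame({'Words': ['this', 'is', 'a', 'test', 'dataframe'],
                    'Frequency': [20, 18, 4, 12, 6]})
df2 = pd.DataFrame({'Words': ['the', 'to', 'a', 'of', 'and'],
                    'Frequency': [562, 246, 222, 204, 200]})
# df2 = df2.rename(columns={'Word': 'Words'})   # if the column name differs

combined = (pd.concat([df1, df2], ignore_index=True)
              .groupby('Words', as_index=False)['Frequency']
              .sum())
print(combined)          # 'a' ends up with 4 + 222 = 226; every other word is kept once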
1
1
0
0
0
0
I got two descriptions, one in a dataframe and other that is a list of words and I need to compute the levensthein distance of each word in the description against each word in the list and return the count of the result of the levensthein distance that is equal to 0 import pandas as pd definitions=['very','similarity','seem','scott','hello','names'] # initialize list of lists data = [['hello my name is Scott'], ['I went to the mall yesterday'], ['This seems very similar']] # Create the pandas DataFrame df = pd.DataFrame(data, columns = ['Descriptions']) # print dataframe. df Column counting the number of all words in each row that computing the Lev distances against each word in the dictionary returns 0 df['lev_count_0']= Column counting the number of all words in each row that computing the Lev distances against each word in the dictionary returns 0 So for example, the first case will be edit_distance("hello","very") # This will be equal to 4 edit_distance("hello","similarity") # this will be equal to 9 edit_distance("hello","seem") # This will be equal to 4 edit_distance("hello","scott") # This will be equal to 5 edit_distance("hello","hello")# This will be equal to 0 edit_distance("hello","names") # this will be equal to 5 So for the first row in df['lev_count_0'] the result should be 1, since there is just one 0 comparing all words in the Descriptions against the list of Definitions Description | lev_count_0 hello my name is Scott | 1
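A sketch with NLTK's edit_distance applied word by word, kept case-sensitive so that 'Scott' vs 'scott' gives 1 and the first row counts only the exact 'hello' match, as expected above:

import pandas as pd
from nltk.metrics.distance import edit_distance

definitions = ['very', 'similarity', 'seem', 'scott', 'hello', 'names']
data = [['hello my name is Scott'], ['I went to the mall yesterday'], ['This seems very similar']]
df = pd.DataFrame(data, columns=['Descriptions'])

def count_zero_distances(description):
    # count words whose Levenshtein distance to at least one definition is 0
    return sum(any(edit_distance(word, d) == 0 for d in definitions)
               for word in description.split())

df['lev_count_0'] = df['Descriptions'].apply(count_zero_distances)
print(df)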
1
1
0
0
0
0
I've built a text classifier using fastai on Kaggle. While trying to export the trained model I get the following error - TypeError: unsupported operand type(s) for /: 'str' and 'str' I've tried setting the learner's model directory and path to the working directory. learn_clas.path='/kaggle/working/' learn_clas.model_dir='/kaggle/working/' learn_clas.export() The error I am getting is - /opt/conda/lib/python3.6/site-packages/fastai/torch_core.py in try_save(state, path, file) 410 def try_save(state:Dict, path:Path=None, file:PathLikeOrBinaryStream=None): --> 411 target = open(path/file, 'wb') if is_pathlike(file) else file 412 try: torch.save(state, target) 413 except OSError as e: TypeError: unsupported operand type(s) for /: 'str' and 'str'
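The '/' in fastai's try_save is pathlib's join operator, so the failure suggests path was assigned a plain string; a sketch of the usual fix, wrapping the directories in Path objects:

from pathlib import Path

learn_clas.path = Path('/kaggle/working')        # Path, not str, so path/file works
learn_clas.model_dir = Path('/kaggle/working')   # keep this one a Path as well, to be safe
learn_clas.export()                              # writes export.pkl under /kaggle/working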
1
1
0
0
0
0
I am training a LSTM in-order to classify the time-series data into 2 classes(0 and 1).I have huge data-set on the drive where where the 0-class and the 1-class data are located in different folders.I am trying to train the LSTM batch-wise using by creating a Dataset class and wrapping the DataLoader around it. I have to do pre-processing such as reshaping.Here's my code which does that ` class LoadingDataset(Dataset): def __init__(self,data_root1,data_root2,file_name): self.data_root1=data_root1#Has the path for class1 data self.data_root2=data_root2#Has the path for class0 data self.fileap1= pd.DataFrame()#Stores class 1 data self.fileap0 = pd.DataFrame()#Stores class 0 data self.file_name=file_name#List of all the files at data_root1 and data_root2 self.labs1=None #Will store the class 1 labels self.labs0=None #Will store the class 0 labels def __len__(self): return len(self.fileap1) def __getitem__(self, index): self.fileap1 = pd.read_csv(self.data_root1+self.file_name[index],header=None)#read the csv file for class 1 self.fileap1=self.fileap1.iloc[1:,1:].values.reshape(-1,WINDOW+1,1)#reshape the file for lstm self.fileap0 = pd.read_csv(self.data_root2+self.file_name[index],header=None)#read the csv file for class 0 self.fileap0=self.fileap0.iloc[1:,1:].values.reshape(-1,WINDOW+1,1)#reshape the file for lstm self.labs1=np.array([1]*len(self.fileap1)).reshape(-1,1)#create the labels 1 for the csv file self.labs0=np.array([0]*len(self.fileap0)).reshape(-1,1)#create the labels 0 for the csv file # print(self.fileap1.shape,' ',self.fileap0.shape) # print(self.labs1.shape,' ',self.labs0.shape) self.fileap1=np.append(self.fileap1,self.fileap0,axis=0)#combine the class 0 and class one data self.fileap1 = torch.from_numpy(self.fileap1).float() self.labs1=np.append(self.labs1,self.labs0,axis=0)#combine the label0 and label 1 data self.labs1 = torch.from_numpy(self.labs1).int() # print(self.fileap1.shape,' ',self.fileap0.shape) # print(self.labs1.shape,' ',self.labs0.shape) return self.fileap1,self.labs1 data_root1 = '/content/gdrive/My Drive/Data/Processed_Data/Folder1/One_'#location of class 1 data data_root2 = '/content/gdrive/My Drive/Data/Processed_Data/Folder0/Zero_'#location of class 0 data training_set=LoadingDataset(data_root1,data_root2,train_ind)#train_ind is a list of file names that have to be read from data_root1 and data_root2 training_generator = DataLoader(training_set,batch_size =2,num_workers=4) for epoch in range(num_epochs): model.train()#Setting the model to train mode after eval mode to train for next epoch once the testing for that epoch is finished for i, (inputs, targets) in enumerate(train_loader): . . . . ` I get this error when the run this code RuntimeError: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop samples = collate_fn([dataset[i] for i in batch_indices]) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py", line 68, in default_collate return [default_collate(samples) for samples in transposed] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py", line 68, in return [default_collate(samples) for samples in transposed] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py", line 43, in default_collate return torch.stack(batch, 0, out=out) RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. 
Got 96596 and 25060 in dimension 1 at /pytorch/aten/src/TH/generic/THTensor.cpp:711 My Questions are 1.Have I Implemented this correctly, is this how you pre-process and then train a dataset batch-wise? 2.The batch_size of DataLoader and batch_size of the LSTM are different since the batch_size of DataLoader refers to the no. of files, whereas batch_size of the LSTM model refers to the no. of instances, so will I get another error here? 3.I have no idea how to scale this data-set since the MinMaxScaler has to be applied to the dataset in its entirety. Responses are appreciated.Please let me know if I have to create separate posts for each question. Thank You.
1
1
0
1
0
0
I have a corpus of free text medical narratives, for which I am going to use for a classification task, right now for about 4200 records. To begin, I wish to create word embeddings using w2v, but I have a question about a train-test split for this task. When I train the w2v model, is it appropriate to use all of the data for the model creation? Or should I only use the train data for creating the model? Really, my question sort of comes down to: do I take the whole dataset, create the w2v model, transform the narratives with the model, and then split, or should I split, create w2v, and then transform the two sets independently? Thanks! EDIT I found an internal project at my place of work which was built by a vendor; they create the split, and create the the w2v model on ONLY the train data, then transform the two sets independently in different jobs; so it's the latter of the two options that I specified above. This is what I thought would be the case, as I wouldn't want to contaminate the w2v model on any of the test data.
1
1
0
1
0
0
I am having trouble with performance when doing NLP tasks. I want to use this module for word embeddings and it produces output, but its runtime increases with each iterative call. I have already read about different solutions, but I can't get them to work. I suspect using tf.placeholders would be a good solution, but I don't know how to use them in this instance. Example code for my problem: embedder = hub.Module("https://tfhub.dev/google/nnlm-en-dim128-with-normalization/1") session = tf.Session() session.run(tf.global_variables_initializer()) session.run(tf.tables_initializer()) doc = [["Example1", "Example2", "Example3", "Example4", ...], [...], ...] for paragraph in doc: vectors = session.run(embedder(paragraph)) #do something with vectors Note that doc can't be fed to the embedder all at once. Thank you in advance.
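The slowdown typically comes from calling embedder(paragraph) inside the loop, which adds new nodes to the TF1 graph on every iteration; a sketch that builds the graph once and feeds each paragraph through a string placeholder:

import tensorflow as tf
import tensorflow_hub as hub

embedder = hub.Module("https://tfhub.dev/google/nnlm-en-dim128-with-normalization/1")

sentences = tf.placeholder(dtype=tf.string, shape=[None])   # graph built once, outside the loop
embeddings = embedder(sentences)

with tf.Session() as session:
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    doc = [["Example1", "Example2"], ["Example3", "Example4"]]
    for paragraph in doc:
        vectors = session.run(embeddings, feed_dict={sentences: paragraph})
        # do something with vectors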
1
1
0
0
0
0
I would like to use a pre-trained word2vec model in Spacy to encode titles by (1) mapping words to their vector embeddings and (2) perform the mean of word embeddings. To do this I use the following code: import spacy nlp = spacy.load('myspacy.bioword2vec.model') sentence = "I love Stack Overflow butitsalsodistractive" avg_vector = nlp(sentence).vector Where nlp(sentence).vector (1) tokenizes my sentence with white-space splitting, (2) vectorizes each word according to the dictionary provided and (3) averages the word vectors within a sentence to provide a single output vector. That's fast and cool. However, in this process, out-of-vocabulary (OOV) terms are mapped to n-dimensional 0 vectors, which affects the resulting mean. Instead, I would like OOV terms to be ignored when performing the average. In my example, 'butitsalsodistractive' is the only term not present in my dictionary, so I would like nlp("I love Stack Overflow butitsalsodistractive").vector = nlp("I love Stack Overflow").vector. I have been able to do this with a post-processing step (see code below), but this becomes too slow for my purposes, so I was wondering if there is a way to tell the nlp pipeline to ignore OOV terms beforehand? So when calling nlp(sentence).vector it does not include OOV-term vectors when computing the mean import numpy as np avg_vector = np.asarray([word.vector for word in nlp(sentence) if word.has_vector]).mean(axis=0) Approaches tried In both cases documents is a list with 200 string elements with ≈ 400 words each. Without dealing with OOV terms: import spacy import time nlp = spacy.load('myspacy.bioword2vec.model') times = [] for i in range(0, 100): init = time.time() documents_vec = [document.vector for document in list(nlp.pipe(documents))] fin = time.time() times.append(fin-init) print("Mean time after 100 rounds:", sum(times)/len(times), "s") # Mean time after 100 rounds: 2.0850741124153136 s Ignoring OOV terms in output vector. Note that in this case we need to add an extra 'if' statment for those cases in which all words are OOV (if this happens the output vector is r_vec): r_vec = np.random.rand(200) # Random vector for empty text # Define function to obtain average vector given a document def get_vector(text): vectors = np.asarray([word.vector for word in nlp(text) if word.has_vector]) if vectors.size == 0: # Case in which none of the words in text were in vocabulary avg_vector = r_vec else: avg_vector = vectors.mean(axis=0) return avg_vector times = [] for i in range(0, 100): init = time.time() documents_vec = [get_vector(document) for document in documents] fin = time.time() times.append(fin-init) print("Mean time after 100 rounds:", sum(times)/len(times), "s") # Mean time after 100 rounds: 2.4214172649383543 s In this example the mean difference time in vectorizing 200 documents was 0.34s. However, when processing 200M documents this becomes critical. I am aware that the second approach needs an extra 'if' condition to deal with documents full of OOV terms, which might slightly increase computational time. In addition, in the first case I am able to use nlp.pipe(documents) to process all documents in one go, which I guess must optimize the process. I could always look for extra computational resources to apply the second piece of code, but I was wondering if there is any way of applying the nlp.pipe(documents) ignoring the OOV terms in the output. Any suggestion will be very much welcome.
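One possible shortcut, assuming doc.vector is the plain average over token vectors (the default when no document-level vectors are registered): OOV tokens contribute zero vectors to that average, so rescaling by the share of in-vocabulary tokens recovers the OOV-free mean without leaving the fast nlp.pipe path.

import numpy as np

def mean_without_oov(doc):
    n_in_vocab = sum(token.has_vector for token in doc)
    if n_in_vocab == 0:
        return r_vec                                   # fall back to the random vector as above
    # doc.vector = sum(token vectors) / len(doc); OOV tokens add zeros,
    # so multiplying by len(doc) / n_in_vocab gives the mean over in-vocabulary tokens only
    return doc.vector * (len(doc) / n_in_vocab)

documents_vec = [mean_without_oov(doc) for doc in nlp.pipe(documents)]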
1
1
0
0
0
0
Is there a way to use spaCy's rule-based pattern matcher (or a similar library) on dependency sequences such as the list of tokens returned by token.ancestors? For example, I have pluralized a noun and now I need to check for dependent verbs to fix any errors in verb agreement. So one pattern (of many) would be to match an 'auxpass' verb belonging to a parent verb which is a relative clause of the noun.
1
1
0
0
0
0
I have an array reshape and size issue. I haven't tried anything yet because I am still new to this and I don't want to mess up things that are unrelated to the issue. import tensorflow as tf import numpy as np mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train = tf.keras.utils.normalize(x_train, axis=1) # scales data between 0 and 1 x_test = tf.keras.utils.normalize(x_test, axis=1) model = tf.keras.models.Sequential() model.add(tf.keras.layers.Flatten(input_shape=(32,))) model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu)) model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu)) model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax)) x_train = np.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1])) x_test = np.reshape(x_test, (x_test.shape[0], 1, x_test.shape[1])) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=3) val_loss, val_acc = model.evaluate(x_test, y_test) print(val_loss) print(val_acc) File "t1.py", line 17, in <module> x_train = np.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1])) File "<__array_function__ internals>", line 6, in reshape File "H:\Program Files\Python36\lib\site-packages\numpy\core\fromnumeric.py", line 301, in reshape return _wrapfunc(a, 'reshape', newshape, order=order) File "H:\Program Files\Python36\lib\site-packages\numpy\core\fromnumeric.py", line 61, in _wrapfunc return bound(*args, **kwds) ValueError: cannot reshape array of size 47040000 into shape (60000,1,28)
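The MNIST images come back as (60000, 28, 28), which is why 47040000 values cannot be squeezed into (60000, 1, 28); a sketch that skips the manual reshape entirely and lets Flatten handle the 28x28 input directly:

import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()      # shapes: (60000, 28, 28) / (10000, 28, 28)
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),            # matches the image shape, no np.reshape needed
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3)
print(model.evaluate(x_test, y_test))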
1
1
0
0
0
0
I have two data frames. Each contains 1 word per row. They are pretty close, but there are misspellings and sometimes one df has one or two words the other doesn't. As a rule, I want to combine df2.word with df1.metadata. If df2.word and df1.word match, are close in spelling, or are close enough and within 1 row from each other, I want to join df2.word with df1.metadata. If there is no close match directly or within 1 row, I want to drop this row. I have: df1 word metadata metadata2 okay 1 A I 1 A win 1 A tree 1 A apples 1 A also 0 B would 0 B like 0 B for 0 B oranges 0 B df2 word OK. I want three apples. Also, I would like four oranges. What I want is: word metadata metadata2 OK. 1 B I 1 B want 1 B three 1 B apples. 1 B Also, 0 B would 0 B like 0 B four 0 B oranges. 0 B
1
1
0
0
0
0
Using gensim I was able to extract topics from a set of documents in LSA but how do I access the topics generated from the LDA models? When printing the lda.print_topics(10) the code gave the following error because print_topics() return a NoneType: Traceback (most recent call last): File "/home/alvas/workspace/XLINGTOP/xlingtop.py", line 93, in <module> for top in lda.print_topics(2): TypeError: 'NoneType' object is not iterable The code: from gensim import corpora, models, similarities from gensim.models import hdpmodel, ldamodel from itertools import izip documents = ["Human machine interface for lab abc computer applications", "A survey of user opinion of computer system response time", "The EPS user interface management system", "System and human system engineering testing of EPS", "Relation of user perceived response time to error measurement", "The generation of random binary unordered trees", "The intersection graph of paths in trees", "Graph minors IV Widths of trees and well quasi ordering", "Graph minors A survey"] # remove common words and tokenize stoplist = set('for a of the and to in'.split()) texts = [[word for word in document.lower().split() if word not in stoplist] for document in documents] # remove words that appear only once all_tokens = sum(texts, []) tokens_once = set(word for word in set(all_tokens) if all_tokens.count(word) == 1) texts = [[word for word in text if word not in tokens_once] for text in texts] dictionary = corpora.Dictionary(texts) corpus = [dictionary.doc2bow(text) for text in texts] # I can print out the topics for LSA lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2) corpus_lsi = lsi[corpus] for l,t in izip(corpus_lsi,corpus): print l,"#",t print for top in lsi.print_topics(2): print top # I can print out the documents and which is the most probable topics for each doc. lda = ldamodel.LdaModel(corpus, id2word=dictionary, num_topics=50) corpus_lda = lda[corpus] for l,t in izip(corpus_lda,corpus): print l,"#",t print # But I am unable to print out the topics, how should i do it? for top in lda.print_topics(10): print top
1
1
0
0
0
0
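For the LDA question above: in the gensim version used there, `print_topics()` logs the topics and returns `None`, so it cannot be iterated. `show_topics()` returns the topics instead (recent gensim releases return a list from `print_topics()` as well); the exact keyword arguments vary slightly between versions, so treat this as a sketch:

```python
# Returns formatted topic strings (or (topic_id, string) pairs, depending on
# the gensim version) instead of printing and returning None.
topics = lda.show_topics(num_topics=10, num_words=10, formatted=True)
for topic in topics:
    print(topic)
```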
I'm using the NLTK WordNet Lemmatizer for a Part-of-Speech tagging project by first modifying each word in the training corpus to its stem (in place modification), and then training only on the new corpus. However, I found that the lemmatizer is not functioning as I expected it to. For example, the word loves is lemmatized to love which is correct, but the word loving remains loving even after lemmatization. Here loving is as in the sentence "I'm loving it". Isn't love the stem of the inflected word loving? Similarly, many other 'ing' forms remain as they are after lemmatization. Is this the correct behavior? What are some other lemmatizers that are accurate? (need not be in NLTK) Are there morphology analyzers or lemmatizers that also take into account a word's Part Of Speech tag, in deciding the word stem? For example, the word killing should have kill as the stem if killing is used as a verb, but it should have killing as the stem if it is used as a noun (as in the killing was done by xyz).
1
1
0
0
0
0
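For the lemmatizer question above, the behaviour is expected: NLTK's WordNetLemmatizer treats every word as a noun by default, and it does accept a POS argument, which is exactly what resolves the loving/killing cases. A small sketch (requires the WordNet data, e.g. via nltk.download('wordnet')):

```python
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

print(lemmatizer.lemmatize("loving"))                      # 'loving'  (default pos='n')
print(lemmatizer.lemmatize("loving", pos=wordnet.VERB))    # 'love'
print(lemmatizer.lemmatize("killing", pos=wordnet.VERB))   # 'kill'
print(lemmatizer.lemmatize("killing", pos=wordnet.NOUN))   # 'killing'
```

In practice this means running a POS tagger first and mapping its tags (VB*, NN*, JJ*, RB*) to the WordNet constants before lemmatizing.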
I'm currently trying to build a sentence parser that extracts unknown parts of speech. Its a bit abstract but my methodology is basically creating a set of grammatical rules that the function can use to parse the text. I'm using Spacy's PoS tagger right now just to extract the pos tags from an example sentence. I know Spacy also has a dependency parser but from what I've read on the documentation its used for matching a known phrase. So my question is this: By creating a set of grammatical rules, whats the best way to extract an unknown target word from a string based off of those rules? For example: import spacy nlp = spacy.load('en_core_web_sm') Example = "I really hate all people who are green, I wish they would go back home" ex_string = Example.split() doc = nlp(Example) pos_tagged_context = [token.tag_ for token in doc] Word_Dict = {} The first rule in this case would be the PoS tag list of pos_tagged_context which matches the sentence structure of ex_string ['PRP', 'RB', 'VBP', 'DT', 'NNS', 'WP', 'VBP', 'JJ', ',', 'PRP', 'VBP', 'PRP', 'MD', 'VB', 'RB', 'RB'] Two problems arise from this though, the easier one being that when printing Word_Dict several PoS tags are lost: {'I': ',', 'really': 'RB', 'hate': 'VBP', 'all': 'DT', 'people': 'NNS', 'who': 'WP', 'are': 'VBP', 'green,': 'JJ', 'wish': 'PRP', 'they': 'VBP', 'would': 'PRP', 'go': 'MD', 'back': 'VB', 'home': 'RB'} The second problem is more abstract, since the structure of a "negative" sentence is inherently relative is there a good "general form" when creating these rules? An ideal output would use the structure of the sentence and identify the target word within it, in this case "green". Let me know if the question is too abstract or needs more clarification!
1
1
0
0
0
0
I have some sample images. How can I extract tabular data from these images and store it in JSON format?
1
1
0
1
0
0
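For the image-table question above, one commonly used route is OCR with Tesseract followed by grouping the recognised words by their reported line positions. This is only a rough sketch: it assumes the `pytesseract` package and the Tesseract binary are installed, that the images are reasonably clean, and the file name is a placeholder.

```python
import json

import pytesseract
from PIL import Image

data = pytesseract.image_to_data(Image.open("table.png"),
                                 output_type=pytesseract.Output.DICT)

# Approximate table rows by grouping words that Tesseract places on the same line.
rows = {}
for text, block, line in zip(data["text"], data["block_num"], data["line_num"]):
    if text.strip():
        rows.setdefault((block, line), []).append(text)

table = [{"row": i, "cells": cells} for i, cells in enumerate(rows.values())]
print(json.dumps(table, indent=2, ensure_ascii=False))
```

For tables with ruled borders, detecting the cell boxes first (e.g. with OpenCV) and running OCR per cell usually gives cleaner columns.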
Given that I have a string like: 'velvet evening purse bags' how can I get all word pairs of this? In other words, all 2-word combinations of this: 'velvet evening' 'velvet purse' 'velvet bags' 'evening purse' 'evening bags' 'purse bags' I know python's nltk package can give the bigrams but I'm looking for something beyond that functionality. Or do I have to write my own custom function in Python?
1
1
0
0
0
0
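For the word-pair question above, the standard library already covers this: `itertools.combinations` yields every unordered 2-word combination while preserving the original word order, so no custom function is strictly needed.

```python
from itertools import combinations

text = "velvet evening purse bags"
pairs = [" ".join(pair) for pair in combinations(text.split(), 2)]
print(pairs)
# ['velvet evening', 'velvet purse', 'velvet bags',
#  'evening purse', 'evening bags', 'purse bags']
```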
I tried to load pre-trained model by using BertModel class in pytorch. I have _six.py under torch, but it still shows module 'torch' has no attribute '_six' import torch from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM # Load pre-trained model (weights) model = BertModel.from_pretrained('bert-base-uncased') model.eval() ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __setattr__(self, name, value) 551 .format(torch.typename(value), name)) 552 modules[name] = value --> 553 else: 554 buffers = self.__dict__.get('_buffers') 555 if buffers is not None and name in buffers: ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in register_parameter(self, name, param) 140 raise KeyError("parameter name can't be empty string \"\"") 141 elif hasattr(self, name) and name not in self._parameters: --> 142 raise KeyError("attribute '{}' already exists".format(name)) 143 144 if param is None: AttributeError: module 'torch' has no attribute '_six'
1
1
0
0
0
0
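For the `torch._six` error above, the usual diagnosis is a version mismatch between the installed torch and the long-deprecated `pytorch_pretrained_bert` package. One commonly suggested option, sketched here as an assumption rather than the only fix (pinning an older torch version also works), is to switch to the maintained `transformers` package:

```python
# pip install transformers
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()
```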
I have a list of tuples that are generated from a string using NLTK's PoS tagger. I'm trying to find the "intent" of a specific string in order to append it to a dataframe, so I need a way to generate a syntax/grammar rule. string = "RED WHITE AND BLUE" string_list = nltk.pos_tag(string.split()) string_list = [('RED', 'JJ'), ('WHITE', 'NNP'), ('AND', 'NNP'), ('BLUE', 'NNP')] The strings vary in size, from 2-3 elements all the way to full-on paragraphs (40-50+), so I'm wondering if there is a general form or rule that I can create in order to parse a sentence. So if I want to find a pattern in a list, an example pseudocode output would be: string_pattern = "I want to kill all the bad guys in the Halo Game" pattern = ('I', 'PRP') + ('want', 'VBP') + ('to', 'TO') + ('kill:', 'JJ') + ('all', 'DT') + ('bad', 'JJ') + ('guys', 'NNS') + ('in', 'IN') + ('Halo', 'NN') + ('Game', 'NN') Ideally I would be able to match part of the pattern in a tagged string, so it finds: ('I', 'PRP') + ('want', 'VBP') + ('to', 'TO') + ('kill:', 'JJ') but it doesn't need the rest; or, vice versa, it can find multiple examples of the pattern in the same string, in the event that the string is a paragraph. If anyone knows the best way to do this or knows a better alternative, it would be really helpful!
1
1
0
0
0
0
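For the tag-pattern question above, one simple way to express such rules is to match a tuple of POS tags as a sub-sequence of the tagged sentence; it finds partial patterns anywhere in a paragraph and can return every occurrence. The "intent" pattern below is hypothetical, and the tagger output may differ slightly from the one shown in the question.

```python
import nltk  # requires the averaged_perceptron_tagger data

def find_tag_pattern(tagged, pattern):
    """Yield (start, end) spans where the POS-tag sequence `pattern` occurs."""
    tags = [tag for _, tag in tagged]
    n = len(pattern)
    for i in range(len(tags) - n + 1):
        if tags[i:i + n] == list(pattern):
            yield i, i + n

sentence = "I want to kill all the bad guys in the Halo Game"
tagged = nltk.pos_tag(sentence.split())

pattern = ("PRP", "VBP", "TO", "VB")   # hypothetical rule: pronoun + verb + 'to' + verb
for start, end in find_tag_pattern(tagged, pattern):
    print(tagged[start:end])
```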
I have dataframe with description column, under one row of description there are multiple lines of texts, basically those are set of information for each record. Example: Regarding information no 1 at 07-01-2019 we got update as the sky is blue and at 05-22-2019 we again got update as Apples are red, that are arranged between two dates. Firstly, I would like to extract text between the date and split the respective details in new columns as date, name, description. The raw description looks like info no| Description -------------------------------------------------------------------------- 1 |07-01-2019 12:59:41 - XYZ (Work notes) The sky is blue in color. | Clouds are looking lovely. | 05-22-2019 12:00:49 - MNX (Work notes) Apples are red in color. -------------------------------------------------------------------------- | 02-26-2019 12:53:18 - ABC (Work notes) Task is to separate balls. 2 | 02-25-2019 16:57:57 - lMN (Work notes) He came by train. | That train was 15 min late. | He missed the concert. | 02-25-2019 11:08:01 - sbc (Work notes) She is my grandmother. Desired output is info No |DATE | NAME | DESCRIPTION --------|------------------------------------------------------ 1 |07-01-2019 12:59:41 | xyz | The sky is blue in color. | | | Clouds are looking lovely. --------|--------------------------------------------------------- 1 |05-22-2019 12:00:49 | MNX | Apples are red in color --------|--------------------------------------------------------- 2 | 02-26-2019 12:53:18 | ABC | Task is to separate blue balls. --------|--------------------------------------------------------- 2 | 02-25-2019 16:57:57 | IMN | He came by train | | | That train was 15 min late. | | | He missed the concert. --------|--------------------------------------------------------- | 02-25-2019 11:08:01 | sbc | She is my grandmother. I tried: myDf = pd.DataFrame(re.split('(\d{2}-\d{2}-\d{4} \d{2}:\d{2}:\d{2} -.*)',Description),columns = ['date']) myDf['date'] = myDf['date'].replace('(Work notes)','-', regex=True) newQueue = newQueue.date.str.split(-,n=3)
1
1
0
0
0
0
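For the work-notes question above, a single regular expression that captures the timestamp, the name, and everything up to the next timestamp gets most of the way there. This sketch makes a few assumptions: the input frame below is hypothetical, names contain no spaces, and every entry starts with a "MM-DD-YYYY HH:MM:SS - NAME (Work notes)" header.

```python
import re

import pandas as pd

df = pd.DataFrame({
    "info_no": [1],
    "Description": [
        "07-01-2019 12:59:41 - XYZ (Work notes) The sky is blue in color. "
        "Clouds are looking lovely. "
        "05-22-2019 12:00:49 - MNX (Work notes) Apples are red in color."
    ],
})

entry = re.compile(
    r"(\d{2}-\d{2}-\d{4} \d{2}:\d{2}:\d{2}) - (\S+) \(Work notes\)\s*"
    r"(.*?)(?=\d{2}-\d{2}-\d{4} \d{2}:\d{2}:\d{2}|$)",
    re.S,
)

rows = []
for _, record in df.iterrows():
    for date, name, description in entry.findall(record["Description"]):
        rows.append({"info no": record["info_no"], "DATE": date,
                     "NAME": name, "DESCRIPTION": description.strip()})

result = pd.DataFrame(rows, columns=["info no", "DATE", "NAME", "DESCRIPTION"])
print(result)
```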
I work in customer support, and I'm using scikit-learn to predict tags for our tickets, given a training set of tickets (approx. 40,000 tickets in the training set). I'm using the classification model based on this one. It's predicting just "()" as the tags for many of my test set of tickets, even though none of the tickets in the training set are without tags. My training data for tags is a list of lists, like: tags_train = [['international_solved'], ['from_build_guidelines my_new_idea eligibility'], ['dropbox other submitted_faq submitted_help'], ['my_new_idea_solved'], ['decline macro_backer_paypal macro_prob_errored_pledge_check_credit_card_us loading_problems'], ['dropbox macro__turnaround_time other plq__turnaround_time submitted_help'], ['dropbox macro_creator__logo_style_guide outreach press submitted_help']] While my training data for ticket descriptions is just a list of strings, e.g.: descs_train = ['description of ticket one', 'description of ticket two', etc] Here's the relevant part of my code to build the model: import numpy as np import scipy from sklearn.pipeline import Pipeline from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfTransformer from sklearn.multiclass import OneVsRestClassifier from sklearn.svm import LinearSVC # We have lists called tags_train, descs_train, tags_test, descs_test with the test and train data X_train = np.array(descs_train) y_train = tags_train X_test = np.array(descs_test) classifier = Pipeline([ ('vectorizer', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', OneVsRestClassifier(LinearSVC(class_weight='auto')))]) classifier.fit(X_train, y_train) predicted = classifier.predict(X_test) However, "predicted" gives a list that looks like: predicted = [(), ('account_solved',), (), ('images_videos_solved',), ('my_new_idea_solved',), (), (), (), (), (), ('images_videos_solved', 'account_solved', 'macro_launched__edit_update other tips'), ('from_guidelines my_new_idea', 'from_guidelines my_new_idea macro__eligibility'), ()] I don't understand why it's predicting blank () when there are none in the training set. Shouldn't it predict the closest tag? Can anyone recommend any improvements to the model I'm using? Thank you so much for your help in advance!
1
1
0
0
0
0
I am trying to clean up tweets to analyze their sentiment. I want to turn emojis into what they mean. For instance, I want my code to convert 'I ❤ New York' 'Python is ' to 'I love New York' 'Python is cool' I have seen packages such as emoji, but they turn the emojis into what they represent, not what they mean. For instance, they turn my tweets into: print(emoji.demojize('Python is ')) 'Python is :thumbs_up:' print(emoji.demojize('I ❤ New York')) 'I :heart: New York' Since "heart" or "thumbs_up" does not carry a positive or negative meaning in TextBlob, this kind of conversion is useless. But if "❤" is converted to "love", the results of the sentiment analysis will improve drastically.
1
1
0
0
0
0
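For the emoji question above, one workable pattern is to let `emoji.demojize` produce the aliases and then map those aliases onto sentiment-bearing words with a small hand-made dictionary. The mapping below is an assumption (and the alias spelling differs between emoji package versions), so it needs extending for the emojis that actually occur in the tweets.

```python
import emoji

# Hand-made alias -> meaning table; several spellings are listed because the
# alias names changed across emoji package versions.
MEANINGS = {
    ":heart:": "love",
    ":red_heart:": "love",
    ":heavy_black_heart:": "love",
    ":thumbs_up:": "cool",
    ":thumbsup:": "cool",
}

def emojis_to_meaning(text):
    demojized = emoji.demojize(text)
    for alias, meaning in MEANINGS.items():
        demojized = demojized.replace(alias, meaning)
    return demojized

print(emojis_to_meaning("I ❤ New York"))   # I love New York
```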
I am using NLTK's PunktSentenceTokenizer to split paragraphs into sentences. I have paragraphs as follows: paragraphs = "1. Candidate is very poor in mathematics. 2. Interpersonal skills are good. 3. Very enthusiastic about social work" Output: ['1.', 'Candidate is very poor in mathematics.', '2.', 'Interpersonal skills are good.', '3.', 'Very enthusiastic about social work'] I tried to add sentence starters using the code below, but that didn't work out. from nltk.tokenize.punkt import PunktSentenceTokenizer tokenizer = PunktSentenceTokenizer() tokenizer._params.sent_starters.add('1.') I would really appreciate it if anybody could point me in the right direction. Thanks in advance :)
1
1
0
0
0
0
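For the Punkt question above, the tokenizer treats '1.' as a sentence of its own, so a simple post-processing pass that re-attaches bare list markers to the sentence that follows is often enough. A small sketch:

```python
import re

from nltk.tokenize.punkt import PunktSentenceTokenizer

paragraphs = ("1. Candidate is very poor in mathematics. "
              "2. Interpersonal skills are good. "
              "3. Very enthusiastic about social work")

sentences = PunktSentenceTokenizer().tokenize(paragraphs)

merged = []
for sent in sentences:
    if merged and re.fullmatch(r"\d+\.", merged[-1]):
        # Previous piece was just a list marker like '1.': glue this sentence to it.
        merged[-1] = merged[-1] + " " + sent
    else:
        merged.append(sent)

print(merged)
# ['1. Candidate is very poor in mathematics.',
#  '2. Interpersonal skills are good.',
#  '3. Very enthusiastic about social work']
```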
I am new to the NLP domain and was going through this blog: https://blog.goodaudience.com/learn-natural-language-processing-from-scratch-7893314725ff London is the capital of and largest city in England and the United Kingdom. Standing on the River Thames in the south-east of England, at the head of its 50-mile (80 km) estuary leading to the North Sea, London has been a major settlement for two millennia. It was founded by the Romans. I have experience with NER and POS tagging using spaCy. I would like to know how I can link "London" with the pronoun "It", like: London is the capital ..... It has been a major settlement.. It was founded by the Romans.... I have tried the dependency parser but was not able to produce this result. https://explosion.ai/demos/displacy I am open to using any other library; please suggest the right approach to achieve this.
1
1
0
0
0
0
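For the question above, linking "It" back to "London" is coreference resolution rather than dependency parsing, which is why the displaCy parse alone cannot produce it. One library that plugs into spaCy is neuralcoref; it targets spaCy 2.x models, so version compatibility is an assumption in this sketch.

```python
import spacy
import neuralcoref

nlp = spacy.load("en_core_web_sm")
neuralcoref.add_to_pipe(nlp)

doc = nlp("London is the capital of England. It has been a major settlement "
          "for two millennia. It was founded by the Romans.")

print(doc._.coref_clusters)    # e.g. [London: [London, It, It]]
print(doc._.coref_resolved)    # text with pronouns replaced by their antecedent
```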
I am trying to implement a tf-idf vectorizer from scratch in Python. I computed my TDF values but the values do not match with the TDF values computed using sklearn's TfidfVectorizer(). What am I doing wrong? corpus = [ 'this is the first document', 'this document is the second document', 'and this is the third one', 'is this the first document', ] from collections import Counter from tqdm import tqdm from scipy.sparse import csr_matrix import math import operator from sklearn.preprocessing import normalize import numpy sentence = [] for i in range(len(corpus)): sentence.append(corpus[i].split()) word_freq = {} #calculate document frequency of a word for i in range(len(sentence)): tokens = sentence[i] for w in tokens: try: word_freq[w].add(i) #add the word as key except: word_freq[w] = {i} #if it exists already, do not add. for i in word_freq: word_freq[i] = len(word_freq[i]) #Counting the number of times a word(key)is in the whole corpus thus giving us the frequency of that word. def idf(): idfDict = {} for word in word_freq: idfDict[word] = math.log(len(sentence) / word_freq[word]) return idfDict idfDict = idf() expected output: (output obtained using vectorizer.idf_) [1.91629073 1.22314355 1.51082562 1. 1.91629073 1.91629073 1.22314355 1.91629073 1. ] actual output: (the values are the idf values of corresponding keys. {'and': 1.3862943611198906, 'document': 0.28768207245178085, 'first': 0.6931471805599453, 'is': 0.0, 'one': 1.3862943611198906, 'second': 1.3862943611198906, 'the': 0.0, 'third': 1.3862943611198906, 'this': 0.0 }
1
1
0
1
0
0
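The mismatch above is not in the document-frequency counting: with its default smooth_idf=True, sklearn's TfidfVectorizer computes idf(t) = ln((1 + n) / (1 + df(t))) + 1 rather than the plain ln(n / df(t)). Reusing the `word_freq` and `sentence` variables from the question, a sketch of the matching computation:

```python
import math

def sklearn_style_idf(n_documents, document_frequency):
    # TfidfVectorizer default: smooth_idf=True -> ln((1 + n) / (1 + df)) + 1
    return math.log((1 + n_documents) / (1 + document_frequency)) + 1

idfDict = {word: sklearn_style_idf(len(sentence), df) for word, df in word_freq.items()}
print(idfDict)
# 'and'/'one'/'second'/'third': ln(5/2) + 1 = 1.916...,  'document': ln(5/4) + 1 = 1.223...,
# 'first': ln(5/3) + 1 = 1.510...,  'is'/'the'/'this': ln(5/5) + 1 = 1.0
```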
Is it possible to load a packaged spacy model (i.e. foo.tar.gz) directly from the tar file instead of installing it beforehand? I would imagine something like: import spacy nlp = spacy.load(/some/path/foo.tar.gz)
1
1
0
0
0
0
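For the packaged-model question above, spaCy does not read a `.tar.gz` directly; the archive is a regular pip package. One pragmatic route, sketched here with a placeholder path and an assumed package name, is to install the archive at runtime and then load the model by name:

```python
import subprocess
import sys

import spacy

subprocess.check_call([sys.executable, "-m", "pip", "install", "/some/path/foo.tar.gz"])
nlp = spacy.load("foo")   # the package name inside the archive is an assumption
```

Alternatively, the archive can be extracted and `spacy.load()` pointed at the inner model directory (the one containing `meta.json`).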
Is it possible to use Stanford Parser in NLTK? (I am not talking about Stanford POS.)
1
1
0
0
0
0
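For the question above: recent NLTK versions talk to Stanford's parser through the CoreNLP server. A minimal sketch, assuming a CoreNLP server has already been started locally on port 9000:

```python
from nltk.parse.corenlp import CoreNLPParser

# Start the server separately first, e.g.:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
parser = CoreNLPParser(url="http://localhost:9000")

tree = next(parser.raw_parse("Is it possible to use Stanford Parser in NLTK?"))
tree.pretty_print()
```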
I have a large forum about dog with tagged posts. Index scores from document frequency * text frequency gives me a perfect measure of what a topic should be about. For example print (getscores('dog food')) # keyword scores range between 1 and 2 # {'dog':2,'food':1.8,'bowl':1.7,'consumption':1.5, ..... 'like':1.00001} From there it seems easy to score sentences and find the sentence that best represents the topic, or so I thought. In this example the second sentence has a great fit. def method1 (sen): score = 1 for word in sen.split(): score=score*scores.get(word,1) return score def method2 (sen): score = 1 for word in sen.split(): score=score*scores.get(word,1) return score / len(sen.split()) scores = {'dog':2,'food':1.8,'bowl':1.7,'consumption':1.5,'intended':1.4} sens = ['dog food','dog food is food intended for consumption by dogs','like this one time at band camp there was all this food and and a dog this dog who ate all the food and then my bowl was empty'] for sen in sens: print (sen) print (method1(sen)) print (method2(sen)) #dog food #3.6 #1.8 (winner method 2) #dog food is food intended for consumption by dogs #13.607999999999999 #1.5119999999999998 #like this one time at band camp there was all this food and and a dog this dog who ate all the food and then my bowl was empty #22.032220320000004 (winner method 1) #0.7868650114285716 Averaging scores will favor short sentences while adding scores will favor long sentences. Compensating for sentence length (each word is multiplied by .92 or so) will work for one topic but will need another factor for the next topic. So that approach will get me nowhere. Is there any known method of scoring sentences that will give me the sentence with the highest keyword weights but also takes into account keyword density and sentence length?
1
1
0
0
0
0
I have installed the latest version of nlpnet library (http://nilc.icmc.usp.br/nlpnet/). Then, when I try to use nlpnet POSTagger according to the follwoing example, I get an error: import nlpnet tagger = nlpnet.POSTagger('/path/to/pos-model/', language='pt') Error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/r/env2/lib/python3.6/site-packages/nlpnet/taggers.py", line 205, in __init__ self._load_data() File "/home/r/env2/lib/python3.6/site-packages/nlpnet/taggers.py", line 423, in _load_data self.nn = load_network(md) File "/home/r/env2/lib/python3.6/site-packages/nlpnet/taggers.py", line 38, in load_network nn = net_class.load_from_file(md.paths[md.network]) File "nlpnet/network.pyx", line 860, in nlpnet.network.Network.load_from_file (nlpnet/network.c:14631) File "/home/r/env2/lib/python3.6/site-packages/numpy/lib/npyio.py", line 262, in __getitem__ pickle_kwargs=self.pickle_kwargs) File "/home/r/env2/lib/python3.6/site-packages/numpy/lib/format.py", line 722, in read_array raise ValueError("Object arrays cannot be loaded when " ValueError: Object arrays cannot be loaded when allow_pickle=False I also tried to install nlpnet again in a different virtual environment, but the error persists. I'm not sure if this is a incompatibility problem, a bug in the lib or an installation issue. Any suggestions?
1
1
0
0
0
0
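The nlpnet traceback above is usually a NumPy version issue: NumPy 1.16.3 changed `np.load` to default to `allow_pickle=False`, and nlpnet's saved models contain object arrays. Pinning NumPy below 1.16.3 is one fix; the widely circulated workaround below temporarily patches `np.load` and is offered only as a sketch.

```python
import numpy as np

_original_load = np.load

def _load_with_pickle(*args, **kwargs):
    kwargs.setdefault("allow_pickle", True)
    return _original_load(*args, **kwargs)

np.load = _load_with_pickle

import nlpnet
tagger = nlpnet.POSTagger('/path/to/pos-model/', language='pt')

np.load = _original_load   # restore the default behaviour afterwards
```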
I'm working on a linear regression algorithm with multiple variables, using NumPy's matrix class. My problem is that matrix.item((i,j)) does not seem to work properly. Here is the Python shell: >>> a=h(Data,0,Theta) >>> a matrix([[3.78]]) >>> a.item((0,0)) 3.7800000000000002 As you can see, the output value is 0.0000000000000002 bigger than the real answer.
1
1
0
0
0
0
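For the matrix.item question above, nothing is wrong with `item()`: 3.78 has no exact binary float64 representation, so the value actually stored (and returned) is the nearest representable double, and the `matrix([[3.78]])` repr simply rounds it for display. A short check:

```python
import numpy as np

a = np.matrix([[3.78]])
value = a.item((0, 0))

print(repr(value))                # 3.7800000000000002
print(value == np.float64(3.78))  # True: it is the same stored double
print(np.isclose(value, 3.78))    # True
print(round(value, 2))            # 3.78 (round only when formatting output)
```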
This is an abstract idea, I dont know the correct pipeline for implementing; I have used a RestNet50 architecture for training a model to classify image into 3 categories; one of the ways i was thinking of exploring was using the textual data of the image; train_gen = image.ImageDataGenerator().flow_from_directory(dataset_path_train, target_size=input_shape[:2], batch_size=batch_size, class_mode='categorical', shuffle=True, seed=seed) test_gen = image.ImageDataGenerator().flow_from_directory(dataset_path_valid, target_size=input_shape[:2], batch_size=batch_size, class_mode='categorical', shuffle=True, seed=seed) Data prep for model; now for each image i also have {text},{label} as key value pair for individual image; if i have to pass 1. WordtoVec 2. TFIDF I have read about embedding layer in Keras; I am not sure how to embed the text-data along with test_gen and train_gen in the model( in any intermediate layer or after Flatten(). base_model = ResNet50(weights='imagenet', include_top=False, input_shape=input_shape) from keras.models import Model, load_model x = base_model.output x = Flatten(name='flatten')(x) predictions = Dense(3, activation='softmax', name='predictions')(x) model = Model(inputs=base_model.input, outputs=predictions) for layer in model.layers[0:141]: layer.trainable = True model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) model.fit_generator(train_gen,steps_per_epoch=1000 , epochs=2,validation_steps=100, validation_data=test_gen,verbose=1)
1
1
0
0
0
0
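For the image-plus-text question above, the usual pattern is a two-input model built with the Keras functional API: keep the ResNet50 branch, feed the per-image text representation (a TF-IDF row or an averaged word2vec vector) through a second input, and concatenate the two after `Flatten()`. The sketch below only covers the model definition; the text dimensionality is an assumption, and the existing `flow_from_directory` generators would have to be wrapped so each batch yields both the images and the matching text vectors.

```python
from keras.applications.resnet50 import ResNet50
from keras.layers import Concatenate, Dense, Flatten, Input
from keras.models import Model

input_shape = (224, 224, 3)      # assumed image size
text_dim = 300                   # assumed length of the TF-IDF / word2vec vector

base_model = ResNet50(weights='imagenet', include_top=False, input_shape=input_shape)
x = Flatten(name='flatten')(base_model.output)

text_input = Input(shape=(text_dim,), name='text_features')
t = Dense(128, activation='relu')(text_input)

merged = Concatenate(name='image_text_fusion')([x, t])
predictions = Dense(3, activation='softmax', name='predictions')(merged)

model = Model(inputs=[base_model.input, text_input], outputs=predictions)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
# Training then looks like: model.fit([image_batch, text_batch], label_batch, ...)
```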
I've trained a Doc2Vec model in order to do a simple binary classification task, but I would also love to see which words or sentences weigh more in terms of contributing to the meaning of a given text. So far I had no luck finding anything relevant or helpful. Any ideas how could I implement this feature? Should I switch from Doc2Vec to more conventional methods like tf-idf?
1
1
0
0
0
0
I'm trying to categorize customer feedback and I ran an LDA in python and got the following output for 10 topics: (0, u'0.559*"delivery" + 0.124*"area" + 0.018*"mile" + 0.016*"option" + 0.012*"partner" + 0.011*"traffic" + 0.011*"hub" + 0.011*"thanks" + 0.010*"city" + 0.009*"way"') (1, u'0.397*"package" + 0.073*"address" + 0.055*"time" + 0.047*"customer" + 0.045*"apartment" + 0.037*"delivery" + 0.031*"number" + 0.026*"item" + 0.021*"support" + 0.018*"door"') (2, u'0.190*"time" + 0.127*"order" + 0.113*"minute" + 0.075*"pickup" + 0.074*"restaurant" + 0.031*"food" + 0.027*"support" + 0.027*"delivery" + 0.026*"pick" + 0.018*"min"') (3, u'0.072*"code" + 0.067*"gps" + 0.053*"map" + 0.050*"street" + 0.047*"building" + 0.043*"address" + 0.042*"navigation" + 0.039*"access" + 0.035*"point" + 0.028*"gate"') (4, u'0.434*"hour" + 0.068*"time" + 0.034*"min" + 0.032*"amount" + 0.024*"pay" + 0.019*"gas" + 0.018*"road" + 0.017*"today" + 0.016*"traffic" + 0.014*"load"') (5, u'0.245*"route" + 0.154*"warehouse" + 0.043*"minute" + 0.039*"need" + 0.039*"today" + 0.026*"box" + 0.025*"facility" + 0.025*"bag" + 0.022*"end" + 0.020*"manager"') (6, u'0.371*"location" + 0.110*"pick" + 0.097*"system" + 0.040*"im" + 0.038*"employee" + 0.022*"evening" + 0.018*"issue" + 0.015*"request" + 0.014*"while" + 0.013*"delivers"') (7, u'0.182*"schedule" + 0.181*"please" + 0.059*"morning" + 0.050*"application" + 0.040*"payment" + 0.026*"change" + 0.025*"advance" + 0.025*"slot" + 0.020*"date" + 0.020*"tomorrow"') (8, u'0.138*"stop" + 0.110*"work" + 0.062*"name" + 0.055*"account" + 0.046*"home" + 0.043*"guy" + 0.030*"address" + 0.026*"city" + 0.025*"everything" + 0.025*"feature"') Is there a way to automatically label them? I do have a csv file which has feedbacks manually labeled, but I do not want to supply these labels myself. I want the model to create labels. Is it possible?
1
1
0
0
0
0
I want to write topic lists to check whether a review talks about one of the defined topics. It's important for me to write the topic lists myself and not use topic modeling to find possible topics. I thought this is called dictionary analysis, but I can't find anything. I have a data frame with reviews from amazon: df = pd.DataFrame({'User': ['UserA', 'UserB','UserC'], 'text': ['Example text where he talks about a phone and his charging cable', 'Example text where he talks about a car with some wheels', 'Example text where he talks about a plane']}) Now I want to define topic lists: phone = ['phone', 'cable', 'charge', 'charging', 'call', 'telephone'] car = ['car', 'wheel','steering', 'seat','roof','other car related words'] plane = ['plane', 'wings', 'turbine', 'fly'] The result of the method should be 3/12 for the "phone" topic of the first review (3 words of the topic list where in the review which has 12 words) and 0 for the other two topics. The second review would result in 2/11 for the "car" topic and 0 for the other topics and for the third review 1/8 for the "plane" topic and 0 for the others. Results as a list: phone_results = [0.25, 0, 0] car_results = [0, 0.18181818182, 0] plane_results = [0, 0, 0.125] Of course I would only use lowercase wordstems of the reviews which makes defining topics easier, but this should not be of concern now. Is there a method for this or do I have to write one? Thank you in advance!
1
1
0
0
0
0
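For the topic-list question above (often called dictionary-based or lexicon-based analysis), the score is simply the fraction of a review's tokens that appear in each hand-written list. A minimal sketch on the example data; with the stemming and lowercasing the question already plans, 'wheels' would also match 'wheel'.

```python
import pandas as pd

df = pd.DataFrame({'User': ['UserA', 'UserB', 'UserC'],
                   'text': ['Example text where he talks about a phone and his charging cable',
                            'Example text where he talks about a car with some wheels',
                            'Example text where he talks about a plane']})

topics = {
    'phone': {'phone', 'cable', 'charge', 'charging', 'call', 'telephone'},
    'car': {'car', 'wheel', 'steering', 'seat', 'roof'},
    'plane': {'plane', 'wings', 'turbine', 'fly'},
}

def topic_share(text, vocabulary):
    tokens = text.lower().split()
    return sum(token in vocabulary for token in tokens) / len(tokens) if tokens else 0.0

for name, vocabulary in topics.items():
    df[name] = df['text'].apply(lambda t: topic_share(t, vocabulary))

print(df[['phone', 'car', 'plane']])
# phone: [0.25, 0.0, 0.0]   car: [0.0, 0.0909..., 0.0]   plane: [0.0, 0.0, 0.125]
```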
I am working on Pre trained word vectors using GloVe method. Data contains vectors on Wikipedia data. While embedding data i am getting error stating that could not convert string to float: 'ng' I tried going through data but there i was not able to find symbol 'ng' # load embedding as a dict def load_embedding(filename): # load embedding into memory, skip first line file = open(filename,'r', errors = 'ignore') # create a map of words to vectors embedding = dict() for line in file: parts = line.split() # key is string word, value is numpy array for vector embedding[parts[0]] = np.array(parts[1:], dtype='float32') file.close() return embedding Here is the error report. Please guide me further. runfile('C:/Users/AKSHAY/Desktop/NLP/Pre-trained GloVe.py', wdir='C:/Users/AKSHAY/Desktop/NLP') C:\Users\AKSHAY\AppData\Local\conda\conda\envs\py355\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters Using TensorFlow backend. Traceback (most recent call last): File "<ipython-input-1-d91aa5ebf9f8>", line 1, in <module> runfile('C:/Users/AKSHAY/Desktop/NLP/Pre-trained GloVe.py', wdir='C:/Users/AKSHAY/Desktop/NLP') File "C:\Users\AKSHAY\AppData\Local\conda\conda\envs\py355\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile execfile(filename, namespace) File "C:\Users\AKSHAY\AppData\Local\conda\conda\envs\py355\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/AKSHAY/Desktop/NLP/Pre-trained GloVe.py", line 123, in <module> raw_embedding = load_embedding('glove.6B.50d.txt') File "C:/Users/AKSHAY/Desktop/NLP/Pre-trained GloVe.py", line 67, in load_embedding embedding[parts[0]] = np.array(parts[1:], dtype='float32') ValueError: could not convert string to float: 'ng'
1
1
0
1
0
0
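For the GloVe loading error above: opening the file with errors='ignore' and the platform default encoding can silently mangle multi-byte characters (and some GloVe releases even contain tokens with spaces), so a split line may end up with a stray non-numeric field such as 'ng'. A more defensive loader, sketched below, opens the file as UTF-8 and anchors the split on the last `dim` fields:

```python
import numpy as np

def load_embedding(filename, dim=50):
    """Load GloVe vectors; the vector is always the last `dim` fields of a line."""
    embedding = dict()
    with open(filename, 'r', encoding='utf-8') as file:
        for line in file:
            parts = line.rstrip().split(' ')
            word = ' '.join(parts[:-dim])
            try:
                embedding[word] = np.asarray(parts[-dim:], dtype='float32')
            except ValueError:
                # Malformed line: skip it instead of crashing.
                continue
    return embedding

raw_embedding = load_embedding('glove.6B.50d.txt', dim=50)
```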