| text (string, 0-27.6k chars) | python (int64, 0-1) | DeepLearning or NLP (int64, 0-1) | Other (int64, 0-1) | Machine Learning (int64, 0-1) | Mathematics (int64, 0-1) | Trash (int64, 0-1) |
|---|---|---|---|---|---|---|
I have tried various means to correctly tag a group of words that form a phrase (especially a noun phrase) but could not succeed.
Example: 'the', 'first', 'early', 'morning', 'sunbeams'
'early' and 'morning' are wrongly being tagged as nouns, whereas the expected outcome should be: ('first', 'adverb'), ('early', 'adverb'), ('morning', 'adjective'), ('sunbeams', 'noun').
Could you please suggest a procedure to tag these words correctly?
Thanks in advance.
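For reference, a minimal sketch of tagging the words in full-sentence context with spaCy, which tends to behave better than tagging the tokens in isolation (the carrier sentence here is only an assumption for illustration):
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("The first early morning sunbeams came through the window.")
for token in doc:
    print(token.text, token.pos_, token.tag_)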
| 1
| 1
| 0
| 0
| 0
| 0
|
I have seen this same topic elsewhere but found no real answer to my question. I have a numpy array and I need to find the index of a number.
a=np.argsort(cosine_similarity(tfidf_matrix[11:12], tfidf_matrix)) #numbers are from 0 to 11
b=np.equal(a,10)
# b values are [[False False False False False False False False True False False False]]
How do I get it to return index 8? (The index for the true value in the array)
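A minimal sketch of pulling that index out directly with NumPy instead of reading the boolean array by eye:
import numpy as np
b = np.equal(a, 10)
print(np.flatnonzero(b))      # positions where the comparison is True, e.g. [8]
print(int(np.argmax(b)))      # first matching position only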
| 1
| 1
| 0
| 1
| 0
| 0
|
Novice alert
I have been learning ML in python for the last few months and have had some great results. Currently, however, I am stuck with a project and require the guidance of someone with more experience (Google can only take you so far it appears ).
What I am trying to achieve
I have a dummy data set full of clients and their transactions. I want to cluster or segment them into much smaller 'tribes' based on their demographic data, spending score and shopping behaviour. For example, one tribe's description could be something as granular as this: (men aged 35 who primarily purchase music-based products on a Saturday afternoon within the first half of each month and have a high spending score).
I want to find the sweet spot between granular segmentation and general segmentation, e.g. segmentation by income and spending score.
What I have tried
Firstly, I have allocated an int value representing the frequency of each categorical occurrence across each client's transactions. For example :
Client | Home | Movies | Games
1 | 3 | 1 | 0
This indicates that Client 1 has purchased Home related items 3 times, Movie related items 1 time and they have never purchased any item in the Games category.
I have done the same for days (i.e. Sunday - Saturday), week number (i.e. 1-5, the week number in any given month), and hour (i.e. hour_one - hour_twenty_four).
This approach allows me to create a clean vector of purely numerical data.
This is an example of my raw input data in JSON format (before processing):
[
{
"id": 1,
"customer_id": 1,
"age": 47,
"gender": "Female",
"first_name": "Lea",
"last_name": "Calafato",
"email": "lcalafato0@cafepress.com",
"phone_number": "612-170-5956",
"income_k": 24,
"location": "Nottingham",
"sign_up_date": "2/16/2019",
"transactions": [
{
"customer_id": "1",
"product_id": 42,
"product_cat": "Home",
"price": 106.92,
"time": "8:15 PM",
"date": "04/15/2019",
"day": "Monday",
"week_num": 3
},
{
"customer_id": "1",
"product_id": 30,
"product_cat": "Movies",
"price": 26.63,
"time": "10:12 AM",
"date": "09/17/2019",
"day": "Tuesday",
"week_num": 4
}
],
"number_of_purchases": 2,
"last_purchase": "09/17/2019",
"total_spent": 133.55
}
]
This is my dataframe after being processed and standardized :
age 750 non-null int64
income_k 750 non-null int64
spending_score 750 non-null int64
gender__Female 750 non-null uint8
gender__Male 750 non-null uint8
Home 750 non-null float64
Movies 750 non-null float64
Games 750 non-null float64
Grocery 750 non-null float64
Music 750 non-null float64
Health 750 non-null float64
Beauty 750 non-null float64
Sports 750 non-null float64
Toys 750 non-null float64
Garden 750 non-null float64
Computers 750 non-null float64
Clothing 750 non-null float64
Books 750 non-null float64
Outdoors 750 non-null float64
Industrial 750 non-null float64
Kids 750 non-null float64
Tools 750 non-null float64
Automotive 750 non-null float64
Electronics 750 non-null float64
Jewelery 750 non-null float64
Baby 750 non-null float64
Shoes 750 non-null float64
week_one 750 non-null float64
week_two 750 non-null float64
week_three 750 non-null float64
week_four 750 non-null float64
week_five 750 non-null float64
Sunday 750 non-null float64
Monday 750 non-null float64
Tuesday 750 non-null float64
Wednesday 750 non-null float64
Thursday 750 non-null float64
Friday 750 non-null float64
Saturday 750 non-null float64
hour_one 750 non-null float64
hour_two 750 non-null float64
hour_three 750 non-null float64
hour_four 750 non-null float64
hour_five 750 non-null float64
hour_six 750 non-null float64
hour_seven 750 non-null float64
hour_eight 750 non-null float64
hour_nine 750 non-null float64
hour_ten 750 non-null float64
hour_eleven 750 non-null float64
hour_twelve 750 non-null float64
hour_thirteen 750 non-null float64
hour_fourteen 750 non-null float64
hour_fithteen 750 non-null float64
hour_sixteen 750 non-null float64
hour_seventeen 750 non-null float64
hour_eighteen 750 non-null float64
hour_nineteen 750 non-null float64
hour_twenty 750 non-null float64
hour_twenty_one 750 non-null float64
hour_twenty_two 750 non-null float64
hour_twenty_three 750 non-null float64
hour_twenty_four 750 non-null float64
I have run this data through both k-means and DBSCAN algorithms to no avail. k-means gives me 4 clusters which are far too general for my requirements and DBSCAN gives me zero clusters with each data point being treated as noise.
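For what it's worth, a minimal sketch of sweeping the number of clusters and scoring each run with the silhouette coefficient, which can help justify a finer-grained k than the 4 found so far (X is assumed to be the standardized feature matrix above):
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

for k in range(4, 21):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))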
My apologies if anything is unclear, please feel free to ask me to clarify anything. Thanks in advance.
| 1
| 1
| 0
| 1
| 0
| 0
|
I am trying to use an SVD model for word embeddings on the Brown corpus. For this, I want to first generate a word-word co-occurrence matrix and then convert it to a PPMI matrix for the SVD matrix multiplication step.
I have tried to create a co-occurrence matrix using scikit-learn's CountVectorizer:
from sklearn.feature_extraction.text import CountVectorizer

count_model = CountVectorizer(ngram_range=(1, 1))
X = count_model.fit_transform(corpus)   # document-term counts
X[X > 0] = 1                            # binarize: word present in document or not
Xc = (X.T * X)                          # word-word co-occurrence (document-level window)
Xc.setdiag(0)                           # drop self co-occurrence
print(Xc.todense())
But:
(1) I am not sure how I can control the context window with this method. I want to experiment with various context sizes and see how they impact the process.
(2) How do I then compute the PPMI properly, assuming that
PMI(a, b) = log( p(a, b) / (p(a) p(b)) )
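For (2), a minimal sketch of turning a raw co-occurrence matrix into PPMI under that definition (assuming Xc has been densified into a NumPy array of counts). For (1), note that the CountVectorizer product above uses the whole document as the context window; to vary the window size you would slide a fixed-width window over each tokenized sentence and count pairs yourself.
import numpy as np

def ppmi(counts):
    total = counts.sum()
    p_ab = counts / total                             # joint probabilities
    p_a = counts.sum(axis=1, keepdims=True) / total   # row marginals
    p_b = counts.sum(axis=0, keepdims=True) / total   # column marginals
    with np.errstate(divide='ignore', invalid='ignore'):
        pmi = np.log(p_ab / (p_a * p_b))
    pmi[~np.isfinite(pmi)] = 0.0                      # zero counts -> 0 instead of -inf/NaN
    return np.maximum(pmi, 0.0)                       # keep only positive PMI

ppmi_matrix = ppmi(np.asarray(Xc.todense(), dtype=float))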
Any help on the thought process and implementation would be greatly appreciated!
Thanks (-:
| 1
| 1
| 0
| 0
| 0
| 0
|
I used gensim for LSA as per this tutorial:
https://www.datacamp.com/community/tutorials/discovering-hidden-topics-python
and I got the following output after running it on a list of texts:
[(1, '-0.708*"London" + 0.296*"like" + 0.294*"go" + 0.287*"dislike" + 0.268*"great" + 0.200*"romantic" + 0.174*"stress" + 0.099*"lovely" + 0.082*"good" + -0.075*"Tower" + 0.072*"see" + 0.063*"nice" + 0.061*"amazing" + -0.053*"Palace" + 0.053*"walk" + -0.050*"Eye" + 0.046*"eat" + -0.042*"Bridge" + 0.041*"Garden" + 0.040*"Covent" + -0.040*"old" + -0.039*"visit" + 0.039*"really" + 0.035*"spend" + 0.034*"watch" + 0.034*"get" + -0.032*"Buckingham" + 0.032*"Weather" + -0.032*"Museum" + -0.032*"Westminster"')]
What does -0.708 London indicate?
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to search a given text for a specified wordlist. The code is pretty straightforward.
# put the words you want to match into a list
word_list = ["eat", "car", "house", "pick up", "child"]
# get input text from the user
user_prompt = input("Please enter some text: ")
# loop over each word in word_list and check if it is a substring of user_prompt
for word in word_list:
    if word in user_prompt:
        print("{} is in the user string".format(word))
The problem is, when I enter the following text: "I picked up my children in the car and they ate some pears." it doesn't match the words "pick up" or "eat". I imagine this is because in the text they are in the past tense, while the word list has them in the infinitive form. So it only finds exact matches and won't take inflection into account (verb forms, irregular verbs, etc.).
Is there a way to search a text to match words from a wordlist regardless of inflection?
Thanks!
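A minimal sketch of matching on lemmas instead of surface forms, using spaCy's lemmatizer (assuming the en_core_web_sm model is installed); lemmatizing the input turns "picked up" into "pick up", "ate" into "eat" and "children" into "child", so the substring check then succeeds:
import spacy
nlp = spacy.load("en_core_web_sm")

word_list = ["eat", "car", "house", "pick up", "child"]
text = "I picked up my children in the car and they ate some pears."

lemmatized = " ".join(token.lemma_ for token in nlp(text))   # text rebuilt from lemmas
for word in word_list:
    if word in lemmatized:
        print("{} is in the user string".format(word))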
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a text file where I need to extract the first five lines once a specified keyword occurs in the paragraph.
I am able to find the keyword but not able to write the next five lines from that keyword.
mylines = []
with open('D:\\Tasks\\Task_20\\txt\\CV (4).txt', 'rt') as myfile:
    for line in myfile:
        mylines.append(line)
for element in mylines:
    print(element, end='')
print(mylines[0].find("P"))
Please help if anybody has any idea on how to do this.
Input Text File Example:-
Philippine Partner Agency: ALL POWER STAFFING SOLUTIONS, INC.
Training Objectives: : To have international cultural exposure and hands-on experience in the field
of hospitality management as a gateway to a meaningful hospitality career. To develop my hospitality
management skills and become globally competitive.
Education
Institution Name: SOUTHVILLE FOREIGN UNIVERSITY - PHILIPPINES
Location Hom as Pinas City, Philippine Institution start date: (June 2007
Required Output:-
Training Objectives: : To have international cultural exposure and hands-on experience in the field
of hospitality management as a gateway to a meaningful hospitality career. To develop my hospitality
management skills and become globally competitive.
#
I have to search for the 'Training Objectives' keyword in the text file and once it is found, only the next 5 lines should be written.
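A minimal sketch of grabbing the keyword line together with the lines that follow it (the path and the number of lines are assumptions to adapt):
def lines_after_keyword(path, keyword, n=5):
    with open(path, 'rt', encoding='utf-8') as f:
        lines = f.readlines()
    for i, line in enumerate(lines):
        if keyword in line:
            return lines[i:i + n]      # the keyword line plus the lines after it
    return []

print(''.join(lines_after_keyword('D:\\Tasks\\Task_20\\txt\\CV (4).txt', 'Training Objectives')))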
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to create a text classifier to determine whether an abstract indicates an access to care research project. I am importing from a dataset that has two fields: Abstract and Accessclass. Abstract is a 500-word description of the project and Accessclass is 0 for not access-related and 1 for access-related. I'm still in the development stage; however, when I looked at the unigrams and bigrams for the 0 and 1 labels, they were the same, despite very distinctly different tones of text. Is there something I'm missing in my code? For example, am I accidentally double-adding negative or positive? Any help is appreciated.
import pandas as pd
import numpy as np
import nltk
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn import naive_bayes
df = pd.read_excel("accessclasses.xlsx")
df.head()
from io import StringIO
col = ['accessclass', 'abstract']
df = df[col]
df = df[pd.notnull(df['abstract'])]
df.columns = ['accessclass', 'abstract']
df['category_id'] = df['accessclass'].factorize()[0]
category_id_df = df[['accessclass', 'category_id']].drop_duplicates().sort_values('category_id')
category_to_id = dict(category_id_df.values)
id_to_category = dict(category_id_df[['category_id', 'accessclass']].values)
df.head()
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(sublinear_tf=True, min_df=4, norm='l2', encoding='latin-1', ngram_range=(1, 2), stop_words='english')
features = tfidf.fit_transform(df.abstract).toarray()
labels = df.category_id
print(features.shape)
from sklearn.feature_selection import chi2
import numpy as np
N = 2
for accessclass, category_id in sorted(category_to_id.items()):
    features_chi2 = chi2(features, labels == category_id)
    indices = np.argsort(features_chi2[0])
    feature_names = np.array(tfidf.get_feature_names())[indices]
    unigrams = [v for v in feature_names if len(v.split(' ')) == 1]
    bigrams = [v for v in feature_names if len(v.split(' ')) == 2]
    print("# '{}':".format(accessclass))
    print("  . Most correlated unigrams:\n. {}".format('\n. '.join(unigrams[-N:])))
    print("  . Most correlated bigrams:\n. {}".format('\n. '.join(bigrams[-N:])))
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to do sentiment analysis on comments. The dataset contains two main columns: the first one is "Review", which holds the users' reviews, and the second one says whether the review is positive or negative. I got a template from a source to preprocess the data, and training and testing work fine. However, I want to input a text and have the model predict whether it is positive or negative. I tried many forms of input: a string only, a list of strings, a numpy array, etc. However, I always get errors. Any ideas how to pass in the data to be predicted?
here's my code:
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Restaurant_Reviews.tsv', delimiter='\t',quoting=3)
import re
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
corpus = []
for i in range(0, 1000):
    review = re.sub('[^a-zA-Z]', ' ', dataset['Review'][i])
    review = review.lower()
    review = review.split()
    ps = PorterStemmer()
    review = [ps.stem(word) for word in review if not word in set(stopwords.words('english'))]
    review = ' '.join(review)
    corpus.append(review)
#the bag of word
from sklearn.feature_extraction.text import CountVectorizer
cv=CountVectorizer(max_features=1500)
X=cv.fit_transform(corpus).toarray()
y=dataset.iloc[:,1].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Fitting Naive Bayes to the Training set
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
# Predicting the Test set results
xeval=["I like it okay"]
prediction=classifier.predict(xeval)
the error in this case is:
Expected 2D array, got 1D array instead:
array=['I like it okay'].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
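One thing worth trying, as a sketch: the classifier was fitted on count vectors, so any new text has to go through the same fitted CountVectorizer (and ideally the same cleaning) before predict, rather than being passed in as a raw string:
xeval = ["I like it okay"]
xeval_counts = cv.transform(xeval).toarray()   # reuse the cv fitted on the corpus; same feature count as training
prediction = classifier.predict(xeval_counts)
print(prediction)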
| 1
| 1
| 0
| 1
| 0
| 0
|
There are some methods that can measure similarity between texts, such as wup_similarity(), cosine_similarity(), etc. My purpose is to make an essay-answering system, meaning I want to compare the answer sheet against the marking scheme. So far I did the following, without using any training or modelling approach:
1. Pre-processed both documents (removed punctuation, did lemmatization, etc.).
2. Next I got the similar words by using WordNet synsets and made two large arrays (marking scheme with its synonyms and answer sheet with its synonyms) -- possibly not the correct way.
3. Then I need to compare these two large arrays and get a similarity value.
Can you please help me with this by giving some suggestions or answers? I know that WordNet synsets are not the best, because they will give unrelated answers.
e.g. animal and vehicle will return 1 as the similarity value.
However I need to find solutions for that.
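As one baseline to compare against, a minimal sketch of scoring the two documents with TF-IDF cosine similarity (answer_text and scheme_text are assumed to be the pre-processed strings):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

vectors = TfidfVectorizer().fit_transform([answer_text, scheme_text])
score = cosine_similarity(vectors[0:1], vectors[1:2])[0, 0]   # value between 0 and 1
print(score)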
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a paragraph with some spaces, special characters, and runs of dots ("....").
I would like to know if there is any function in Python that helps split the paragraph into lines on specified delimiters like "...."
Thanks in advance
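A minimal sketch with re.split, treating a run of four or more dots as the delimiter:
import re

paragraph = 'first sentence .... second sentence ..... third sentence'
parts = [p.strip() for p in re.split(r'\.{4,}', paragraph) if p.strip()]
print(parts)   # ['first sentence', 'second sentence', 'third sentence']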
| 1
| 1
| 0
| 0
| 0
| 0
|
I would like to correct the misspelled words of a text in French. It seems that spaCy is the most accurate and fastest package to do it, but it's too complex, so I tried textblob; however, I didn't manage to make it work for French.
It works perfectly in English, but when I try to do the same in French I get the same misspelled words back:
#english words
from textblob import TextBlob
misspelled=["hapenning", "mornin", "windoow", "jaket"]
[str(TextBlob(word).correct()) for word in misspelled]
#french words
misspelled2=["resaissir", "matinnée", "plonbier", "tecnicien"]
[str(TextBlob(word).correct()) for word in misspelled2]
I get this:
#english:
['happening', 'morning', 'window', 'jacket']
#french:
['resaissir', 'matinnée', 'plonbier', 'tecnicien']
| 1
| 1
| 0
| 0
| 0
| 0
|
As described, I load a trained word2vec model through pyspark.
word2vec_model = Word2VecModel.load("saving path")
After using it, I want to delete it since it takes a lot of memory on a single node (I used the findSynonyms function, and the docs say it should be used locally only).
I tried to use
del word2vec_model
gc.collect()
but it seems that doesn't work. And since it's not an RDD, I can't use .unpersist(). I didn't find anything like an unload() function in the docs.
Could anyone help me or give me some advice?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am developing a chatbot that asks the user the information that is not there in the database.
Consider the database has 40 details for every person: Name, Age, Fav food, Fav Restaurant, Fav city, Reason for Fav City, Four the most liked things in the city,etc.
So, the questions can be
"What is our name?"
"Why do you like Paris?"
"Name four places in Paris that you like the most?"
etc.
I want these questions to be generated by the bot on the fly but have no idea how to formulate these questions in English.
Any help or direction (research papers/libraries/codes, etc) would be appreciated.
| 1
| 1
| 0
| 0
| 0
| 0
|
I've combined two different datasets so that one column has text and another column has the sentiment score (binary 0, 1).
I'm trying to make a linear regression model that predicts sentiment based on the words used in the text. So far, to preprocess the text, I changed all texts to lowercase.
I'm wondering what the next step is after this? I've read up a bit, but I'm thinking I may not have the steps in the correct order. The two orderings I'm considering are:
Option A: 1. lowercase, 2. remove punctuation, 3. tokenize
Option B: 1. lowercase, 2. tokenize, 3. remove punctuation
Which way is more correct? If I remove the punctuation first, I might lose details such as "don't" and "can't".
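A small illustration of why the order matters: tokenizing first keeps the contraction pieces, so dropping punctuation afterwards does not destroy "don't" or "can't" (a sketch with NLTK's word_tokenize):
from nltk.tokenize import word_tokenize
import string

text = "I don't like it, can't you tell?"
tokens = word_tokenize(text.lower())                          # contractions split into "do" + "n't", "ca" + "n't"
tokens = [t for t in tokens if t not in string.punctuation]   # only pure punctuation tokens are dropped
print(tokens)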
| 1
| 1
| 0
| 0
| 0
| 0
|
I've already built a server which contains several Spring Boot microservices, and we've also written a Python script to train AI models.
Now we want to build a service into this server to check our data at a specific time every day and run the Python script to train the model.
Is there a good way to design this service? Do I need to call the Python script from Java, or are there better ways? Are there any libraries you recommend?
Thanks a lot!
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to perform a word frequency count on a relatively large dataframe and don't know what approach would be the best.
Currently my dataframe looks like this -
Comment 'I' 'it' 'is' 'up'
'I was here' NaN NaN NaN NaN
'I like soup' NaN NaN NaN NaN
'whats up' NaN NaN NaN NaN
'This is it' NaN NaN NaN NaN
My goal is to perform a frequency count for each of the words in the column headers ('I', 'it', 'is', 'up') for each comment. E.g. after the counting process the result should look something like this -
Comment 'I' 'it' 'is' 'up'
'I was here' 1 0 0 0
'I like soup' 1 0 0 0
'whats up' 0 0 0 1
'This is it' 0 1 1 0
What would be the best approach to this? The real dataset contains about 50k comments and over 10k columns with different words.
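A minimal sketch that fills all the word columns at once with CountVectorizer, using the existing column headers as a fixed vocabulary (df is assumed to be the frame above; token_pattern is loosened so one-letter words like 'I' are counted, and for 50k x 10k you may want to keep the result sparse instead of calling toarray()):
from sklearn.feature_extraction.text import CountVectorizer

words = [c for c in df.columns if c != 'Comment']
vec = CountVectorizer(vocabulary=[w.lower() for w in words], token_pattern=r"(?u)\b\w+\b")
df[words] = vec.transform(df['Comment']).toarray()   # output columns follow the vocabulary order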
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm doing POS tagging and the algorithm is the Baum-Welch algorithm.
I want to write the types and tags to the .csv file, but after running the code an error shows up.
untagged = pd.read_csv('test.csv', 'UTF-8', 'r')
print('Tagging...')
#taggedOutput = doTagging(sent, untagged)
[w for w in sent if w in untagged]
tagged = pd.read_csv("Tagged_bangla_hmm.csv", 'a', encoding="utf-8",
                     header=None, delimiter=r'\s+', skip_blank_lines=False, engine='python')
for sentence in tagged:
    a = zip('types', 'tags')
    for word, tag in a:
        tagged.to_csv(types + '/' + tags + ' ')
        print(tagged)
        print('\n')
tagged.close()
print('Finished Tagging')
i = 0
| 1
| 1
| 0
| 0
| 0
| 0
|
This example is for finding bigrams:
Given:
import pandas as pd
data = [['tom', 10], ['jobs', 15], ['phone', 14],['pop', 16], ['they_said', 11], ['this_example', 22],['lights', 14]]
test = pd.DataFrame(data, columns = ['Words', 'Freqeuncy'])
test
I'd like to write a query to only find words that are separated by a "_" such that the returning df would look like this:
data2 = [['they_said', 11], ['this_example', 22]]
test2 = pd.DataFrame(data2, columns = ['Words', 'Freqeuncy'])
test2
I'm also wondering why something like this doesn't work: data[data['Words'] == (len > 3)]
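For the filter itself, a minimal sketch with a vectorized string test, keeping only rows whose Words value contains an underscore:
test2 = test[test['Words'].str.contains('_')].reset_index(drop=True)
print(test2)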
| 1
| 1
| 0
| 0
| 0
| 0
|
how do I build a knowledge graph in python from structured texts? Do I need to know any graph databases? Any resources would be of great help.
| 1
| 1
| 0
| 0
| 0
| 0
|
I tried to execute
pip install spacy
and it finally worked with Python 3.7 64 bit (not with 32 bit version) but after installation no other package imports like pandas are working. It seems that the installation is the root cause but after removing spacy the import error of pandas and many other packages is still the same.
After reinstalling python (I always install it directly in folder C:\Python),
I can successfully install pandas and all the other packages without the error below, but I still cannot use spaCy, as I would get the import error:
OSError: [WinError 193] %1 is not a valid Win32-Application
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-1-7dd3504c366f> in <module>
----> 1 import pandas as pd
c:\python\lib\site-packages\pandas\__init__.py in <module>
9 for dependency in hard_dependencies:
10 try:
---> 11 __import__(dependency)
12 except ImportError as e:
13 missing_dependencies.append("{0}: {1}".format(dependency, str(e)))
~\AppData\Roaming\Python\Python37\site-packages\numpy\__init__.py in <module>
138
139 # Allow distributors to run custom init code
--> 140 from . import _distributor_init
141
142 from . import core
~\AppData\Roaming\Python\Python37\site-packages\numpy\_distributor_init.py in <module>
24 # NOTE: would it change behavior to load ALL
25 # DLLs at this path vs. the name restriction?
---> 26 WinDLL(os.path.abspath(filename))
27 DLL_filenames.append(filename)
28 if len(DLL_filenames) > 1:
c:\python\lib\ctypes\__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error)
362
363 if handle is None:
--> 364 self._handle = _dlopen(self._name, mode)
365 else:
366 self._handle = handle
OSError: [WinError 193] %1 ist keine zulässige Win32-Anwendung
| 1
| 1
| 0
| 0
| 0
| 0
|
I am a bit confused on what it means to set trainable = True when loading the Universal Sentence Encoder 3. I have a small corpus (3000 different sentences), given a sentence I want to find the 10 most similar sentences.
My current method is:
1) Load the module
embed = hub.Module("path", trainable =False)
2) Encode all sentences:
session.run(embed(sentences))
3) Find the closest sentences using cosine similarity.
It performs decently, but I want the model to be fine-tuned to my own dictionary, because certain keywords are more important than others. This is thus not a classification problem. The existing example for retraining the module (https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub) is for classification.
Is it possible to make the Universal Sentence Encoder retrain on my keywords and output different embeddings (for instance by setting trainable = True)?
| 1
| 1
| 0
| 0
| 0
| 0
|
How do you identify the correct definition of a word in a sentence using NLP in Python?
For example, you have two sentences that use the verb 'get' with two different definitions:
He got a bike for his birthday. (get = to obtain, receive, or be given something)
He got a taxi from the station. (get = to use a particular vehicle to travel somewhere)
Can anyone point me in the right direction with an example of code in Python, or even an app/software that can already do this?
Thanks in advance!
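One classical baseline for this is word sense disambiguation with the Lesk algorithm, which ships with NLTK; a minimal sketch (it picks a WordNet sense for the verb from the surrounding words, though Lesk is only a rough baseline):
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

for sent in ["He got a bike for his birthday.",
             "He got a taxi from the station."]:
    sense = lesk(word_tokenize(sent), 'get', 'v')   # restrict to verb senses of "get"
    print(sent, '->', sense, '-', sense.definition() if sense else None)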
| 1
| 1
| 0
| 0
| 0
| 0
|
I am using spacy to understand phrases and I am trying to differentiate between Nouns like food, beer, wine etc. and other nouns like yesterday and today.
I am not able to come up with an idea as to how to differentiate them.
query = input()
doc = nlp(query)
displacy.serve(doc,style="dep")
What can I do to differentiate between the first three nouns and yesterday?
The displacy rendering is as shown in the image:
image link => https://imgur.com/a/cX7uQ3Z
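One thing worth inspecting alongside the dependency parse is the per-token attributes: temporal nouns such as 'yesterday' usually come back with a DATE entity type and a time-related fine-grained tag, which 'food', 'beer' and 'wine' do not (a minimal sketch, assuming en_core_web_sm):
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("I want food and beer today, not like yesterday")
for token in doc:
    print(token.text, token.pos_, token.tag_, token.ent_type_)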
| 1
| 1
| 0
| 0
| 0
| 0
|
I read a lot of articles that deal with different NLP classification tasks and I saw that most of them specify in the pre-processing section that they use replacement tokens:
e.g. We removed and replaced the URLs, emojis and punctuation with replacement tokens: <URL>, <EMOJI>, <PUNCT>.
I am quite new to this domain and I was wondering if there is some special way to deal with this kind of token/tag. Is it necessary to use < >, or is this just a way to signal the replacement and help the classifier find a pattern?
Any help would be greatly appreciated.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to build a model that, as part of its implementation, takes two text inputs and produces a one-hot vector based on one of the indices of the input.
I created the following custom functions:
def get_index(text, word):
    # get index
    index = get_expression_indices(text, word)
    id_seq = []
    for i in range(70):  # length of the text
        if i == index:
            id_seq.insert(i, 1)
        else:
            id_seq.insert(i, 0)
    return np.array(id_seq)

def get_index_tensor(input):
    return tf.py_function(get_index, [input[0], input[1]], tf.string)
And here is a dummy model
# input layers
input_text_1 = Input(shape=(1,), dtype='string')
input_text_2 = Input(shape=(1,), dtype='string')
context = Lambda(emb_utils.get_index_tensor, output_shape=(None,))([input_text_1, input_text_2])
model = Model(inputs=[input_text_1, input_text_2], outputs=context)
I get an error: ValueError: Cannot iterate over a shape with unknown rank.
The output shape should be (batch_size, 70, 1)
When I remove output_shape=(None,) I get TypeError: object of type 'NoneType' has no len()
Any ideas on what the problem could be?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am performing some NER on Arabic language. The code is as follows:
from polyglot.text import Text
blob = "مرحبا اسمي rahul agnihotri أنا عمري 41 سنة و الهندية"
text = Text(blob)
text = Text(blob, hint_language_code='ar') #ar stands for arabic
print(text.entities)
After executing the above code on Ubuntu I get the error below:
SyntaxError: Non-ASCII character '\xd9' in file ./ner.py on line 4,
but no encoding declared; see http://python.org/dev/peps/pep-0263/ for
details
However, if I include # -*- coding: utf-8 -*- it works, and here is the output:
[I-LOC([u'\u0627\u0644\u0647\u0646\u062f\u064a\u0629'])]
This is not the desired output I am looking for. The desired output should be in Arabic script, not this way.
FYI: All required libraries are installed.
| 1
| 1
| 0
| 0
| 0
| 0
|
Is there a good Python library that specifically contains some kind of dictionary of common English "throw-away words" such as "um" and "uh" that I could use to clean up text for NLP?
Similarly, my colleague started making a list of slang/off words. I'd love a Python library that finds all of these. His JS code below does things like turning "nope" and "naw" into "no":
txt = txt.replace(
/\b(yeah|ya|yep|yup|yes)\b/g, "yes"
).replace(
/\b(no|naw|nope)\b/g, "no"
).replace(
/\b([ah]+|uh-huh|uh+|um+|mhm+|huh+|oh)\b/g, ""
).replace(
/\b(im|i'm|i am)\b/g, "im"
).replace(
/\b(gotta|gonna|got to|going to|wanna|want to)\b/g, "yyxxa"
).replace(
/\b(ok|okay|k)\b/g, "okay"
);
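For reference, a rough Python port of that normalization with re.sub; this is just a sketch of the same substitutions, not an existing library:
import re

REPLACEMENTS = [
    (r"\b(yeah|ya|yep|yup|yes)\b", "yes"),
    (r"\b(no|naw|nope)\b", "no"),
    (r"\b([ah]+|uh-huh|uh+|um+|mhm+|huh+|oh)\b", ""),
    (r"\b(im|i'm|i am)\b", "im"),
    (r"\b(gotta|gonna|got to|going to|wanna|want to)\b", "yyxxa"),
    (r"\b(ok|okay|k)\b", "okay"),
]

def normalize(text):
    for pattern, repl in REPLACEMENTS:
        text = re.sub(pattern, repl, text, flags=re.IGNORECASE)
    return text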
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm getting this error when I try to run this code:
word_embedding_matrix = np.load(open("word_embedding_matrix.npy", 'rb'))
FileNotFoundError
Traceback (most recent call last)
in ()
----> 1 word_embedding_matrix = np.load(open("word_embedding_matrix.npy", 'rb'))
FileNotFoundError: [Errno 2] No such file or directory: 'word_embedding_matrix.npy'
| 1
| 1
| 0
| 0
| 0
| 0
|
For training, I have to feed the model sequences of word vectors. Each sequence has on average 40 words. So, if I use a dictionary of pre-trained word embeddings (like GloVe), each sequence has to hit the embedding dictionary around 40 times, and each batch around batch_size*40 times. The dataset is divided into many batches and the whole dataset has to be iterated over (epochs) several times as well. So you can imagine how many times the dictionary gets hit.
This is the approach I have used already, and it takes a really long time.
To solve this, I tried to make a dictionary from sequence to vectors. This dictionary contains a sequence as the key and a 2d Python list (each row is a word vector) as the value. The hope is that I just have to look up the sequence and get the values. This should decrease the time a lot, but the dictionary would be very big (I estimated the size by saving the data (sequence -> vectors) in MongoDB and exporting it; the file is 23 GB). A dictionary of 23 GB should not be a problem, because I am using a shared server where I can allocate as much as 100 GB of memory. But the program gets killed while loading the dictionary, so this is not working.
Another approach I am thinking about is to copy the word embedding vector into pytorch's nn.Embedding().
input = torch.LongTensor([[1,2,4,5],[4,3,2,9]])
embedding(input)
Here the numbers are indices of the words. Regarding this approach, PyTorch's embedding uses a matrix as a lookup table. So my concern is: to execute the previous code, won't there be one hit on the matrix per index? Or will they be retrieved in parallel? Even if it runs in parallel, there has to be another dictionary to convert words to indices, which also needs one hit per word on the word2index dictionary.
So, what do you think is the fastest and most efficient way to retrieve the word vectors of a sequence and feed them into the model?
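Regarding the third approach, a minimal sketch of building the lookup matrix once and handing it to nn.Embedding.from_pretrained; after that, only integer indices are passed around, and the lookup over a whole batch is a single tensor operation rather than one dictionary hit per word (vocab and glove are assumed names for your word list and pre-trained vectors):
import numpy as np
import torch
import torch.nn as nn

word2idx = {w: i for i, w in enumerate(vocab)}      # word -> integer index, built once
weights = np.stack([glove[w] for w in vocab])       # (vocab_size, dim) lookup matrix
embedding = nn.Embedding.from_pretrained(torch.FloatTensor(weights), freeze=True)

batch = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])
vectors = embedding(batch)                          # shape (2, 4, dim), one batched lookup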
| 1
| 1
| 0
| 0
| 0
| 0
|
I am going through this link to understand Multi-channel CNN Model for Text Classification.
The code is based on this tutorial.
I have understood most of it; however, I can't understand how Keras computes the output shapes of certain layers.
Here is the code:
define a model with three input channels for processing 4-grams, 6-grams, and 8-grams of movie review text.
#Skipped keras imports
# load a clean dataset
def load_dataset(filename):
return load(open(filename, 'rb'))
# fit a tokenizer
def create_tokenizer(lines):
tokenizer = Tokenizer()
tokenizer.fit_on_texts(lines)
return tokenizer
# calculate the maximum document length
def max_length(lines):
return max([len(s.split()) for s in lines])
# encode a list of lines
def encode_text(tokenizer, lines, length):
# integer encode
encoded = tokenizer.texts_to_sequences(lines)
# pad encoded sequences
padded = pad_sequences(encoded, maxlen=length, padding='post')
return padded
# define the model
def define_model(length, vocab_size):
# channel 1
inputs1 = Input(shape=(length,))
embedding1 = Embedding(vocab_size, 100)(inputs1)
conv1 = Conv1D(filters=32, kernel_size=4, activation='relu')(embedding1)
drop1 = Dropout(0.5)(conv1)
pool1 = MaxPooling1D(pool_size=2)(drop1)
flat1 = Flatten()(pool1)
# channel 2
inputs2 = Input(shape=(length,))
embedding2 = Embedding(vocab_size, 100)(inputs2)
conv2 = Conv1D(filters=32, kernel_size=6, activation='relu')(embedding2)
drop2 = Dropout(0.5)(conv2)
pool2 = MaxPooling1D(pool_size=2)(drop2)
flat2 = Flatten()(pool2)
# channel 3
inputs3 = Input(shape=(length,))
embedding3 = Embedding(vocab_size, 100)(inputs3)
conv3 = Conv1D(filters=32, kernel_size=8, activation='relu')(embedding3)
drop3 = Dropout(0.5)(conv3)
pool3 = MaxPooling1D(pool_size=2)(drop3)
flat3 = Flatten()(pool3)
# merge
merged = concatenate([flat1, flat2, flat3])
# interpretation
dense1 = Dense(10, activation='relu')(merged)
outputs = Dense(1, activation='sigmoid')(dense1)
model = Model(inputs=[inputs1, inputs2, inputs3], outputs=outputs)
# compile
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# summarize
print(model.summary())
plot_model(model, show_shapes=True, to_file='multichannel.png')
return model
# load training dataset
trainLines, trainLabels = load_dataset('train.pkl')
# create tokenizer
tokenizer = create_tokenizer(trainLines)
# calculate max document length
length = max_length(trainLines)
# calculate vocabulary size
vocab_size = len(tokenizer.word_index) + 1
print('Max document length: %d' % length)
print('Vocabulary size: %d' % vocab_size)
# encode data
trainX = encode_text(tokenizer, trainLines, length)
print(trainX.shape)
# define model
model = define_model(length, vocab_size)
# fit model
model.fit([trainX,trainX,trainX], array(trainLabels), epochs=10, batch_size=16)
# save the model
model.save('model.h5')
Running the code:
Running the example first prints a summary of the prepared training dataset.
Max document length: 1380
Vocabulary size: 44277
(1800, 1380)
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 1380) 0
____________________________________________________________________________________________________
input_2 (InputLayer) (None, 1380) 0
____________________________________________________________________________________________________
input_3 (InputLayer) (None, 1380) 0
____________________________________________________________________________________________________
embedding_1 (Embedding) (None, 1380, 100) 4427700 input_1[0][0]
____________________________________________________________________________________________________
embedding_2 (Embedding) (None, 1380, 100) 4427700 input_2[0][0]
____________________________________________________________________________________________________
embedding_3 (Embedding) (None, 1380, 100) 4427700 input_3[0][0]
____________________________________________________________________________________________________
conv1d_1 (Conv1D) (None, 1377, 32) 12832 embedding_1[0][0]
____________________________________________________________________________________________________
conv1d_2 (Conv1D) (None, 1375, 32) 19232 embedding_2[0][0]
____________________________________________________________________________________________________
conv1d_3 (Conv1D) (None, 1373, 32) 25632 embedding_3[0][0]
____________________________________________________________________________________________________
dropout_1 (Dropout) (None, 1377, 32) 0 conv1d_1[0][0]
____________________________________________________________________________________________________
dropout_2 (Dropout) (None, 1375, 32) 0 conv1d_2[0][0]
____________________________________________________________________________________________________
dropout_3 (Dropout) (None, 1373, 32) 0 conv1d_3[0][0]
____________________________________________________________________________________________________
max_pooling1d_1 (MaxPooling1D) (None, 688, 32) 0 dropout_1[0][0]
____________________________________________________________________________________________________
max_pooling1d_2 (MaxPooling1D) (None, 687, 32) 0 dropout_2[0][0]
____________________________________________________________________________________________________
max_pooling1d_3 (MaxPooling1D) (None, 686, 32) 0 dropout_3[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten) (None, 22016) 0 max_pooling1d_1[0][0]
____________________________________________________________________________________________________
flatten_2 (Flatten) (None, 21984) 0 max_pooling1d_2[0][0]
____________________________________________________________________________________________________
flatten_3 (Flatten) (None, 21952) 0 max_pooling1d_3[0][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 65952) 0 flatten_1[0][0]
flatten_2[0][0]
flatten_3[0][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 10) 659530 concatenate_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense) (None, 1) 11 dense_1[0][0]
====================================================================================================
Total params: 14,000,337
Trainable params: 14,000,337
Non-trainable params: 0
____________________________________________________________________________________________________
And
Epoch 6/10
1800/1800 [==============================] - 30s - loss: 9.9093e-04 - acc: 1.0000
Epoch 7/10
1800/1800 [==============================] - 29s - loss: 5.1899e-04 - acc: 1.0000
Epoch 8/10
1800/1800 [==============================] - 28s - loss: 3.7958e-04 - acc: 1.0000
Epoch 9/10
1800/1800 [==============================] - 29s - loss: 3.0534e-04 - acc: 1.0000
Epoch 10/10
1800/1800 [==============================] - 29s - loss: 2.6234e-04 - acc: 1.0000
My interpretation of the layers and output shapes is as follows; please help me check whether it is correct, as I am lost in the multiple dimensions.
input_1 (InputLayer) (None, 1380) ---> 1380 is the total number of features (that is, 1380 input neurons) per data point. 1800 is the total number of documents or data points.
embedding_1 (Embedding) (None, 1380, 100) 4427700 ----> the embedding layer has 1380 features (words), and each feature is a vector of dimension 100. How is the number of parameters here 4427700?
conv1d_1 (Conv1D) (None, 1377, 32) 12832 ------> Conv1D has kernel_size=4. Is it a 1x4 filter which is used 32 times? Then how did the dimension become (None, 1377, 32) with 12832 parameters?
max_pooling1d_1 (MaxPooling1D) (None, 688, 32): with MaxPooling1D(pool_size=2), how did the dimension become (None, 688, 32)?
flatten_1 (Flatten) (None, 22016): is this just the multiplication of 688 and 32?
Does every epoch train on all 1800 data points at once?
Please let me know how the output dimensions are calculated. Any reference or help would be appreciated.
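One way to check the summary's numbers by hand (channel 1; the other channels follow the same arithmetic with kernel sizes 6 and 8). Also, an epoch passes over all 1800 samples, but in mini-batches of 16 (batch_size=16), not all at once:
vocab_size, emb_dim, length, filters = 44277, 100, 1380, 32

emb_params  = vocab_size * emb_dim               # one 100-d vector per vocabulary entry = 4,427,700
conv_params = (4 * emb_dim + 1) * filters        # kernel spans 4 positions x 100 dims, +1 bias, x32 filters = 12,832
conv_len    = length - 4 + 1                     # 'valid' convolution: 1377 output positions
pool_len    = conv_len // 2                      # pool_size=2 halves it (remainder dropped): 688
flat        = pool_len * filters                 # 688 * 32 = 22,016
dense1      = (22016 + 21984 + 21952 + 1) * 10   # concatenated 65,952 features + bias, 10 units = 659,530
print(emb_params, conv_params, conv_len, pool_len, flat, dense1)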
| 1
| 1
| 0
| 1
| 0
| 0
|
I have trained a model for handwritten digit multiclass classification using a CNN in Keras. I am trying to evaluate the model on the same training images to get an estimate of the accuracy of the algorithm; however, when I compute the confusion matrix for the CNN, it has only one non-zero column, of the form:
[[4132 0 0 0 0 0 0 0 0 0]
[4684 0 0 0 0 0 0 0 0 0]
[4177 0 0 0 0 0 0 0 0 0]
[4351 0 0 0 0 0 0 0 0 0]
[4072 0 0 0 0 0 0 0 0 0]
[3795 0 0 0 0 0 0 0 0 0]
[4137 0 0 0 0 0 0 0 0 0]
[4401 0 0 0 0 0 0 0 0 0]
[4063 0 0 0 0 0 0 0 0 0]
[4188 0 0 0 0 0 0 0 0 0]]
I guess the algorithm is giving the correct result since those are the total numbers of each digit in the database; however, the confusion matrix should give something like this:
[[4132 0 0 0 0 0 0 0 0 0]
[ 0 4684 0 0 0 0 0 0 0 0]
[ 0 0 4177 0 0 0 0 0 0 0]
[ 0 0 0 4351 0 0 0 0 0 0]
[ 0 0 0 0 4072 0 0 0 0 0]
[ 0 0 0 0 0 3795 0 0 0 0]
[ 0 0 0 0 0 0 4137 0 0 0]
[ 0 0 0 0 0 0 0 4401 0 0]
[ 0 0 0 0 0 0 0 0 4063 0]
[ 0 0 0 0 0 0 0 0 0 4188]]
The code is in this link
The data can be taken from the "train.csv" file in this Kaggle project.
I would like to ask what I am doing wrong in the code such that I obtain this weird result.
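Without the linked code it is hard to be sure, but one common cause of this pattern is feeding one-hot label matrices (or raw probability rows) to confusion_matrix, which expects 1-D vectors of class labels on both sides. A minimal sketch, assuming y_train is one-hot encoded and model is the fitted Keras CNN:
import numpy as np
from sklearn.metrics import confusion_matrix

y_pred = np.argmax(model.predict(X_train), axis=1)   # predicted class index per sample
y_true = np.argmax(y_train, axis=1)                  # one-hot labels -> class indices
print(confusion_matrix(y_true, y_pred))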
| 1
| 1
| 0
| 0
| 0
| 0
|
I need a wheel for SpaCy to fix my build issue. Where can I find it? The file name is supposed to be spacy-1.10.1-cp27-cp27mu-linux_x86_64.whl
They did have wheels before. I was using a 1.9.0 wheel but I need to upgrade it to 1.10.1 and I was not able to find one.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to fit my model in a Streamlit.io app, but I am getting the ValueError below. It doesn't give the same error in a Jupyter Notebook. Any better approach will help a lot.
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
File "c:\users\8470p\anaconda3\lib\site-packages\streamlit\ScriptRunner.py", line 311, in _run_script exec(code, module.__dict__)
File "C:\Users\8470p\app2.py", line 122, in bow_transformer = CountVectorizer(analyzer=text_process).fit(messages['message'])
File "c:\users\8470p\anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 1024, in fit self.fit_transform(raw_documents)
File "c:\users\8470p\anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 1058, in fit_transform self.fixed_vocabulary_)
File "c:\users\8470p\anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 962, in _count_vocab analyze = self.build_analyzer()
File "c:\users\8470p\anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 339, in build_analyzer if self.analyzer == 'char':
File "c:\users\8470p\anaconda3\lib\site-packages\pandas\core\generic.py", line 1555, in __nonzero__ self.__class__.__name__
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
bow_transformer = CountVectorizer(analyzer=text_process).fit(messages['message'])
msg_train, msg_test, label_train, label_test = train_test_split(messages['message'], messages['label'], test_size=0.2)
pipeline = Pipeline([
    ('bow', CountVectorizer(analyzer=text_process)),  # strings to token integer counts
    ('tfidf', TfidfTransformer()),  # integer counts to weighted TF-IDF scores
    ('classifier', MultinomialNB()),  # train on TF-IDF vectors w/ Naive Bayes classifier
])
NB_Clasifier = pipeline.fit(msg_train, label_train)
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to write a Keras model that will learn to create recipes, but I'm having trouble passing the strings through the model.
My df consists of 3 columns, one of the name, ingredients, and instructions (showing the first 2 lines of the df):
title
0 [Grammie Hamblet's Deviled Crab]
1 [Infineon Raceway Baked Beans]
ingredients
0 [['1/2 cup celery, finely chopped', '1 small
1 [['2 pounds skirt steak, cut into 1/2-inch dic...
instructions
0 [Toss ingredients lightly and spoon into a but...
1 [Watch how to make this recipe., Sprinkle the ...
All columns of the df are strings, so I'm not sure what other preprocessing methods I can apply that will allow me to pass them through the model.
Thank you!
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a pandas dataframe that looks like the following:
0 1 2
# A B C
1 D E F
2 G H I
# J K L
1 M N O
2 P Q R
3 S T U
The index has a repeating 'delimiter', namely #. I am seeking an efficient way to transform this to the following:
0 1 2 3
# A B C 1
1 D E F 1
2 G H I 1
# J K L 2
1 M N O 2
2 P Q R 2
3 S T U 2
I would like a new column (3) that splits on the # symbol in the rows and enumerates the chunks. This is for an NLP application, and the dataset I am working with can be found here for context: https://sites.google.com/site/germeval2014ner/data.
By the way, I know I can do this with a simple iteration, but I am wondering if there is a vectorized approach or a split capability I am not aware of.
Thanks for your help!
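A vectorized way to do the numbering: each '#' in the index starts a new chunk, so a cumulative sum over that marker gives the chunk id (a sketch, assuming the '#' markers live in the index as shown):
df[3] = (df.index == '#').cumsum()
print(df)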
| 1
| 1
| 0
| 0
| 0
| 0
|
I am using spaCy's nlp.pipe() to get Doc objects for the text data in a pandas DataFrame column, but the parsed text returned as "text" in the code has a length of only 32. However, the shape of the DataFrame is (14640, 16).
Here is the data link if someone wants to read the data.
nlp = spacy.load("en_core_web_sm")
for text in nlp.pipe(iter(df['text']), batch_size=1000, n_threads=-1):
    print(text)
    len(text)
Result:
32
Can someone help me understand what is going on? What am I doing wrong?
| 1
| 1
| 0
| 0
| 0
| 0
|
I've amended the code found here, but I'm getting a dimension error on my input, like below:
ValueError: Error when checking input: expected InputLayer to have 4
dimensions, but got array with shape (None, None)
This is my modified code (i'm running this on Colab):
#Power data classification/regression with CNN
import numpy as np
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import csv as csv
import keras.backend as K
from sklearn.preprocessing import MinMaxScaler # For normalizing data
print("TensorFlow version:",tf.__version__)
!wget https://raw.githubusercontent.com/sibyjackgrove/CNN-on-Wind-Power-Data/master/MISO_power_data_classification_labels.csv
!wget https://raw.githubusercontent.com/sibyjackgrove/CNN-on-Wind-Power-Data/master/MISO_power_data_input.csv
#Read total rows in csv file without loading into memory
def data_set_size(csv_file):
with open(csv_file) as csvfile:
csv_rows = 0
for _ in csvfile:
csv_rows += 1
return csv_rows-1 #Remove header from count and return
csv_file = "./MISO_power_data_classification_labels.csv"
n_train = data_set_size(csv_file)
print("Training data set size:",n_train)
#Python generator to supply batches of traning data during training with loading full data set to memory
def power_data_generator(batch_size,gen_type=''):
valid_size = max(1,np.int(0.2*batch_size))
while 1:
df_input=pd.read_csv('./MISO_power_data_input.csv',usecols =['Wind_MWh','Actual_Load_MWh'],chunksize =24*(batch_size+valid_size), iterator=True)
df_target=pd.read_csv('./MISO_power_data_classification_labels.csv',usecols =['Mean Wind Power','Standard Deviation','WindShare'],chunksize =batch_size+valid_size, iterator=True)
for chunk, chunk2 in zip(df_input,df_target):
scaler = MinMaxScaler() # Define limits for normalize data
InputX = chunk.values
InputX = scaler.fit_transform(InputX) # Normalize input data
InputY = chunk2.values
InputY = scaler.fit_transform(InputY) # Normalize output data
if gen_type =='training':
yield (InputX[0:batch_size],InputY[0:batch_size])
elif gen_type =='validation':
yield (InputX[batch_size:batch_size+valid_size],InputY[batch_size:batch_size+valid_size])
#Define model using Keras
Yclasses = 3 #Number of output classes
def nossa_metrica(y_true, y_pred):
diff = y_true - y_pred
count = K.sum(K.cast(K.equal(diff, K.zeros_like(diff)), 'int8')) # Count how many times y_true = y_pred
return count/n_train
model = keras.Sequential([
tf.keras.layers.Input(shape=(2,24,1),name='InputLayer'),
tf.keras.layers.Conv2D(filters=4,kernel_size=(2,6),strides=(1,1),activation='relu',name='ConvLayer1'),
tf.keras.layers.Conv2D(filters=4,kernel_size=(1,6),strides=(1,1),activation='relu',name='ConvLayer2'),
tf.keras.layers.Flatten(name="Flatten"),
tf.keras.layers.Dense(units = 8,activation='relu',name='FeedForward1'),
tf.keras.layers.Dense(units = Yclasses,name='OutputLayer'),
])
model.compile(loss='mse',optimizer='adam',verbose = 2,metrics = [nossa_metrica])
model.summary()
samples_per_batch = 5
train_generator= power_data_generator(batch_size=samples_per_batch,gen_type='training')
valid_generator= power_data_generator(batch_size=samples_per_batch,gen_type='validation')
number_of_batches = np.int32(n_train/(samples_per_batch+max(1,np.int32(0.2*samples_per_batch))))
#Training starts
history = model.fit(train_generator, steps_per_epoch= number_of_batches,epochs=200,validation_data=valid_generator, validation_steps=number_of_batches,verbose=2)
If anyone can shed some light here, I would be really grateful!
| 1
| 1
| 0
| 0
| 0
| 0
|
The Transformer model has the following params. I saved and reloaded the model using h5py. I get this error only for a few datasets.
h5f = h5py.File(path + '.model.weights.h5', 'w')
# Weights reloaded
variables = []
h5f = h5py.File(path + '.model.weights.h5', 'r')
for idx in sorted([int(i) for i in h5f]):
    variables.append(np.array(h5f[str(idx)]))
h5f.close()
for idx, t in enumerate(this.model.trainable_variables):
    t.assign(variables[idx])
The hyperparameters to train the model are:
BUFFER_SIZE = 20000
BATCH_SIZE = 64
MAX_LENGTH = 40
num_layers = 4
d_model = 128
dff = 512
num_heads = 8
input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1
transformer = Transformer(num_layers, d_model, num_heads, dff,
input_vocab_size, target_vocab_size,
pe_input=input_vocab_size,
pe_target=target_vocab_size,
rate=dropout_rate)
Once I reload the model, I get the following error. I could save all the parameters, but load fails with incompatibility error. What do those tensor shapes indicate?
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (40759, 128) and (40765, 128) are incompatible
Traceback:
File "/Users/Models/Model.py", line 400, in load
t.assign(modelTrainables[idx])
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py", line 600, in assign
self._shape.assert_is_compatible_with(value_tensor.shape)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_shape.py", line 700, in assert_is_compatible_with
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (40759, 128) and (40765, 128) are incompatible
| 1
| 1
| 0
| 0
| 0
| 0
|
I have my own corpus and I train several Word2Vec models on it.
What is the best way to evaluate them against each other and choose the best one? (Not manually, obviously - I am looking for various measures.)
It is worth noting that the embedding is for items and not words, so I can't use any existing benchmarks.
Thanks!
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to find out the similarity between 2 documents i.e 'document_1' and 'document_2'.
I'm using Doc2Vec Gensim's keyedvectors.py for finding similarity score.
score = model.docvecs.similarity_unseen_docs(trainedModel, document_1, document_2)
print(score)
Where score is negative.
Here document_1 and document_2 are result of NLTK's word_tokenize()
What does a negative score mean when we try to find the similarity between two "tokenized" documents?
P.S.: I trained the model on 10 MS Word documents (2 pages each) = 20 pages.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am introducing myself to Natural Language Processing and artificial neural networks, and I have followed this wonderful tutorial.
Having finished it, I would like to know if there is any way to test the model with phrases that I invent, for example "That film entertained me a lot".
It is very good to know the percentage of success on the test set, but I want to know how to test the model on new phrases.
| 1
| 1
| 0
| 1
| 0
| 0
|
I have a saved model I trained on a small text (messaging) data corpus, and I'm trying to use that same model to predict either positive or negative sentiment (i.e. binary classification) on another corpus. I based the NLP model on a GOOGLE dev ML guide, which you can review here (if you think it useful - I used option A for all).
I keep getting an input shape error, I know that the error means I have to reshape the input to fit the expected shape. However, the data I want to predict on is not of this size. The error statement is:
ValueError: Error when checking input: expected dropout_8_input to have shape (519,) but got array with shape (184,)
The reason why the model expects the shape (519,) is because during training the corpus fed into the first dropout layer (in TfidfVectorized form) is print(x_train.shape) #(454, 519).
I'm new to ML, but it doesn't make sense to me that all the data I try to predict on after optimizing a model should have to be the same shape as the data that was used to train the model.
Has anyone experienced an issue similar to this? Is there something that I'm missing, in how to train the model so that a different sized input can be predicted on? Or, am I misunderstanding on how models are to be used for class prediction?
I am basing myself on the following functions for model training:
from tensorflow.python.keras import models
from tensorflow.python.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.python.keras.layers import Convolution2D, MaxPooling2D
def mlp_model(layers, units, dropout_rate, input_shape, num_classes):
"""Creates an instance of a multi-layer perceptron model.
# Arguments
layers: int, number of `Dense` layers in the model.
units: int, output dimension of the layers.
dropout_rate: float, percentage of input to drop at Dropout layers.
input_shape: tuple, shape of input to the model.
num_classes: int, number of output classes.
# Returns
An MLP model instance.
"""
op_units, op_activation = _get_last_layer_units_and_activation(num_classes)
model = models.Sequential()
model.add(Dropout(rate=dropout_rate, input_shape=input_shape))
# print(input_shape)
for _ in range(layers-1):
model.add(Dense(units=units, activation='relu'))
model.add(Dropout(rate=dropout_rate))
model.add(Dense(units=op_units, activation=op_activation))
return model
def train_ngram_model(data,
learning_rate=1e-3,
epochs=1000,
batch_size=128,
layers=2,
units=64,
dropout_rate=0.2):
"""Trains n-gram model on the given dataset.
# Arguments
data: tuples of training and test texts and labels.
learning_rate: float, learning rate for training model.
epochs: int, number of epochs.
batch_size: int, number of samples per batch.
layers: int, number of `Dense` layers in the model.
units: int, output dimension of Dense layers in the model.
dropout_rate: float: percentage of input to drop at Dropout layers.
# Raises
ValueError: If validation data has label values which were not seen
in the training data.
# Reference
For tuning hyperparameters, please visit the following page for
further explanation of each argument:
https://developers.google.com/machine-learning/guides/text-classification/step-5
"""
# Get the data.
(train_texts, train_labels), (val_texts, val_labels) = data
# Verify that validation labels are in the same range as training labels.
num_classes = get_num_classes(train_labels)
unexpected_labels = [v for v in val_labels if v not in range(num_classes)]
if len(unexpected_labels):
raise ValueError('Unexpected label values found in the validation set:'
' {unexpected_labels}. Please make sure that the '
'labels in the validation set are in the same range '
'as training labels.'.format(
unexpected_labels=unexpected_labels))
# Vectorize texts.
x_train, x_val = ngram_vectorize(
train_texts, train_labels, val_texts)
# Create model instance.
model = mlp_model(layers=layers,
units=units,
dropout_rate=dropout_rate,
input_shape=x_train.shape[1:],
num_classes=num_classes)
# num_classes determine which activation fn to use
# Compile model with learning parameters.
if num_classes == 2:
loss = 'binary_crossentropy'
else:
loss = 'sparse_categorical_crossentropy'
optimizer = tf.keras.optimizers.Adam(lr=learning_rate)
model.compile(optimizer=optimizer, loss=loss, metrics=['acc'])
# Create callback for early stopping on validation loss. If the loss does
# not decrease in two consecutive tries, stop training.
callbacks = [tf.keras.callbacks.EarlyStopping(
monitor='val_loss', patience=2)]
# Train and validate model.
history = model.fit(
x_train,
train_labels,
epochs=epochs,
callbacks=callbacks,
validation_data=(x_val, val_labels),
verbose=2, # Logs once per epoch.
batch_size=batch_size)
# Print results.
history = history.history
print('Validation accuracy: {acc}, loss: {loss}'.format(
acc=history['val_acc'][-1], loss=history['val_loss'][-1]))
# Save model.
model.save('MCTR2.h5')
return history['val_acc'][-1], history['val_loss'][-1]
From this I get the architecture of the model to be:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dropout (Dropout) (None, 519) 0
_________________________________________________________________
dense (Dense) (None, 64) 33280
_________________________________________________________________
dropout_1 (Dropout) (None, 64) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 65
=================================================================
Total params: 33,345
Trainable params: 33,345
Non-trainable params: 0
_________________________________________________________________
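For what it's worth, a minimal sketch of the usual pattern: persist the vectorizer fitted on the training texts and reuse it (transform only, never fit_transform) on anything you later want to predict, so every input comes out with the same 519 columns. The names vectorizer and new_texts are assumptions, and if ngram_vectorize also fits a feature selector, persist and reuse that object the same way:
import pickle
import tensorflow as tf

# at training time, right after fitting the vectorizer inside ngram_vectorize
with open('vectorizer.pkl', 'wb') as f:
    pickle.dump(vectorizer, f)

# at prediction time
with open('vectorizer.pkl', 'rb') as f:
    vectorizer = pickle.load(f)
model = tf.keras.models.load_model('MCTR2.h5')
x_new = vectorizer.transform(new_texts)       # same feature count as training
preds = model.predict(x_new.toarray())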
| 1
| 1
| 0
| 1
| 0
| 0
|
I'm new to nltk and I noticed that to create a lemmatizer object (after importing the nltk package), both
WNlemma = nltk.WordNetLemmatizer
with no explicit importing of class WordNetLemmatizer and
from nltk.stem import WordNetLemmatizer
WNlemma = WordNetLemmatizer
where we explicitly import the class WordNetLemmatizer, would work.
I'm aware that both refer to the class WordNetLemmatizer from the nltk.stem.wordnet module, but why is it even "legal" to import it without stating the full module path, as in the first and second instance? Is it some nltk convention or a general Python thing? How can the class WordNetLemmatizer in the wordnet module be found? Based on my shallow understanding of Python imports, only
from nltk.stem.wordnet import WordNetLemmatizer
looks "legit" for me...
I've searched around but I couldn't find any documentation explaining this, maybe I've been searching the wrong keywords.
This might be a noob question and please point out if it's not clear, thanks for anyone who would like to help me!
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm looking to check the caption (the text below each image) on a Wikipedia article. I want to parse those strings (mostly using regex) and, if a caption matches, save the link of that image.
I've been importing wikipedia directly to parse text, but after looking around the net I saw I'd need a different kind of parser for that. I tried using mwparserfromhell and pywikibot, but I couldn't resolve the pywikibot errors, and mwparserfromhell on its own gives me empty results.
Any help in doing the above, without using DBpedia?
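To make the goal more concrete, a rough sketch of the kind of parsing I am after, assuming I already have the article's raw wikitext (the sample markup and the regex are placeholders):
import re
import mwparserfromhell

wikitext = "Intro text [[File:Example.jpg|thumb|A cat sitting on a mat]] more text"  # placeholder

code = mwparserfromhell.parse(wikitext)
for link in code.filter_wikilinks():
    title = str(link.title)
    if title.startswith(("File:", "Image:")):
        # by convention the caption is the last pipe-separated field of the link
        parts = str(link.text).split("|") if link.text else []
        caption = parts[-1] if parts else ""
        if re.search(r"cat", caption):   # placeholder pattern
            print(title, "->", caption)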
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to export the fasttext model created by gensim to a binary file. But the docs are unclear about how to achieve this.
What I've done so far:
model.wv.save_word2vec_format('model.bin')
But this does not seem like the best solution, because later, when I try to load the model using:
fasttext.load_facebook_model('model.bin')
I get into what looks like an infinite loop, while loading the fasttext.model created by the model.save('fasttext.model') call completes in around 30 seconds.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to run one git repo on Google cloud. But the system could not find the library path.
myname@cloudshell:~/text-to-text-transfer-transformer (lastproject-258210)$ python3 -c "import t5; print(t5.data.MixtureRegistry.names())"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/myname/text-to-text-transfer-transformer/t5/__init__.py", line 17, in <module>
import t5.data
File "/home/name/text-to-text-transfer-transformer/t5/data/__init__.py", line 17, in <module>
import t5.data.mixtures
File "/home/myname/text-to-text-transfer-transformer/t5/data/mixtures.py", line 26, in <module>
import t5.data.tasks # pylint: disable=unused-import
File "/home/myname/text-to-text-transfer-transformer/t5/data/tasks.py", line 25, in <module>
from t5.data.utils import set_global_cache_dirs
File "/home/myname/text-to-text-transfer-transformer/t5/data/utils.py", line 32, in <module>
from t5.data import sentencepiece_vocabulary
File "/home/myname/text-to-text-transfer-transformer/t5/data/sentencepiece_vocabulary.py", line 23, in <module>
import tensorflow_text as tf_text
File "/usr/local/lib/python3.7/site-packages/tensorflow_text-1.15.0rc0-py3.7-linux-x86_64.egg/tensorflow_text/__init__.py", line 21, in <module>
from tensorflow_text.python import metrics
File "/usr/local/lib/python3.7/site-packages/tensorflow_text-1.15.0rc0-py3.7-linux-x86_64.egg/tensorflow_text/python/metrics/__init__.py", line 20, in <module>
from tensorflow_text.python.metrics.text_similarity_metric_ops import *
File "/usr/local/lib/python3.7/site-packages/tensorflow_text-1.15.0rc0-py3.7-linux-x86_64.egg/tensorflow_text/python/metrics/text_similarity_metric_ops.py", line 28, in <module>
gen_text_similarity_metric_ops = load_library.load_op_library(resource_loader.get_path_to_datafile('_text_similarity_metric_ops.so'))
File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/load_library.py", line 61, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: libtensorflow_framework.so.1: cannot open shared object file: No such file or directory
I tried to print out the location of the libtensorflow:
myname@cloudshell:~/text-to-text-transfer-transformer (lastproject-258210)$ python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())'
/usr/local/lib/python3.7/site-packages/tensorflow_core
The question is: how can I change the path so that the system will find the library? Thanks for your help in advance!
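In case it helps frame the question, this is the kind of workaround I have been considering (pointing the dynamic linker at the directory that tf.sysconfig reports); I am not sure it is the right fix:
import tensorflow as tf

lib_dir = tf.sysconfig.get_lib()
print(lib_dir)  # directory that should contain libtensorflow_framework.so.*

# Possible workaround (assumption on my part): expose that directory to the
# dynamic linker before starting Python, e.g. in the shell:
#   export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib/python3.7/site-packages/tensorflow_core"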
| 1
| 1
| 0
| 0
| 0
| 0
|
I would like to know what is the difference between token and span in spaCy.
Also, what is the main reason we have to work with spans? Why can't we simply use tokens for any NLP task, especially when we use the spaCy matcher?
Brief Background:
My problem came up when I wanted to get the index of a span (its exact character index in the string, not its ordered index in the spaCy doc) after using the spaCy matcher, which returns 'match_id', 'start' and 'end', so from this information I could get a span, not a token.
Then I needed to create training data, which requires the exact index of a word in a sentence. If I had access to a token, I could simply use token.idx, but a span does not have that! So I have to write extra code to find the index of the word (which is the same as the span) in its sentence!
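For concreteness, a minimal sketch of what I am doing with the matcher output (spaCy 2.x-style API; the text and pattern are placeholders):
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
doc = nlp("Machine learning is fun")

matcher = Matcher(nlp.vocab)
matcher.add("ML", None, [{"LOWER": "machine"}, {"LOWER": "learning"}])

for match_id, start, end in matcher(doc):
    span = doc[start:end]      # Span built from token indices
    first_token = span[0]      # a Token, which exposes .idx
    print(span.text, first_token.idx, span.start_char, span.end_char)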
| 1
| 1
| 0
| 0
| 0
| 0
|
Good morning all,
Does anyone know of a tool or an API that takes a sentence as input and outputs the topics or keywords of that sentence?
I tried TextRazor; in the online demo it works well, as you can see in the screenshot,
but when I use it as a library in my Python code it always gives me a blank list, even for the sentence used in the demo.
This is my code in Python:
import textrazor
import ssl
textrazor.api_key ="bdd69bdc3f91045cdb6d4261d39df34d887278602cb8f60401b7eb0b"
client = textrazor.TextRazor(extractors=["entities", "topics"])
client.set_cleanup_mode("cleanHTML")
client.set_classifiers(["textrazor_newscodes"])
sentence = "Adam Hill,b It's Super Bowl Sunday pastors. Get your Jesus Jukes ready! Guilt is an awesome motivator! #sarcasm"
response = client.analyze(sentence)
print(sentence)
print(len(response.topics()))
entities = list(response.entities())
print(len(entities))
for topic in response.topics():
if topic.score > 0.3:
print (topic.label)
It gives me zero for both the entities and topics lengths.
Someone suggested I use OpenNLP, but I didn't understand how to extract topics and keywords with it. If anyone has a tutorial or clarification, please help me.
And thank you in advance.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am having a problem with an implementation of an LSTM. I am not sure if I have the right implementation or whether this is just an overfitting problem. I am doing essay grading using an LSTM, scoring text on a scale from 0 to 10 (or another score range). I am using the ASAP Kaggle competition data as one of the training sets.
However, the main goal is to achieve good performance on a private dataset with around 500 samples. The 500 samples include the validation and training sets. I have previously done some experiments and got the model to work, but after fiddling with something the model doesn't fit anymore. I have also re-implemented the code in a cleaner, much more object-oriented manner and still can't reproduce my previous result.
However, I am getting the model to fit to my data, just there is tremendous overfitting. I am not sure if this is an implementation problem of some sort or just overfitting, but I cannot get the model to work. The maximum I can get it to is 0.35 kappa using LSTM on the ASAP data essay set 1. For some bizarre reason, I can get a single layer fully connected model to have 0.75 kappa. I think this is an implementation problem but I am not sure.
Here is my old code:
train.py
import gensim
import numpy as np
import pandas as pd
import torch
from sklearn.metrics import cohen_kappa_score
from torch import nn
import torch.utils.data as data_utils
from torch.optim import Adam
from dataset import AESDataset
from network import Network
from optimizer import Ranger
from qwk import quadratic_weighted_kappa, kappa
batch_size = 32
device = "cuda:0"
torch.manual_seed(1000)
# Load data from csv
file_name = "data/data_new.csv"
data = pd.read_csv(file_name)
arr = data.to_numpy()
text = arr[:, :2]
text = [str(line[0]) + str(line[1]) for line in text]
text = [gensim.utils.simple_preprocess(line) for line in text]
score = arr[:,2]
score = [sco*6 for sco in score]
score = np.asarray(score, dtype=int)
train_dataset = AESDataset(text_arr=text[:400], scores=score[:400])
test_dataset = AESDataset(text_arr=text[400:], scores=score[400:])
score = torch.tensor(score).view(-1,1).long().to(device)
train_loader = data_utils.DataLoader(train_dataset,shuffle=True, batch_size=batch_size, drop_last=True)
test_loader = data_utils.DataLoader(test_dataset,shuffle=True,batch_size=batch_size, drop_last=True)
out_class = 61
epochs = 1000
model = Network(out_class).to(device)
model.load_state_dict(torch.load("model/best_model"))
y_onehot = torch.FloatTensor(batch_size, out_class).to(device)
optimizer = Adam(model.parameters())
criti = torch.nn.CrossEntropyLoss()
# model, optimizer = amp.initialize(model, optimizer, opt_level="O2")
step = 0
for i in range(epochs):
#Testing
if i % 1 == 0:
total_loss = 0
total_kappa = 0
total_batches = 0
model.eval()
for (text, score) in test_loader:
out = model(text)
out_score = torch.argmax(out, 1)
y_onehot.zero_()
y_onehot.scatter_(1, score, 1)
kappa_l = cohen_kappa_score(score.view(batch_size).tolist(), out_score.view(batch_size).tolist())
score = score.view(-1)
loss = criti(out, score.view(-1))
total_loss += loss
total_kappa += kappa_l
total_batches += 1
print(f"Epoch {i} Testing kappa {total_kappa/total_batches} loss {total_loss/total_batches}")
with open(f"model/epoch_{i}", "wb") as f:
torch.save(model.state_dict(),f)
model.train()
#Training
for (text, score) in train_loader:
optimizer.zero_grad()
step += 1
out = model(text)
out_score = torch.argmax(out,1)
y_onehot.zero_()
y_onehot.scatter_(1, score, 1)
kappa_l = cohen_kappa_score(score.view(batch_size).tolist(),out_score.view(batch_size).tolist())
loss = criti(out, score.view(-1))
print(f"Epoch {i} step {step} kappa {kappa_l} loss {loss}")
loss.backward()
optimizer.step()
dataset.py
import gensim
import torch
import numpy as np
class AESDataset(torch.utils.data.Dataset):
def __init__(self, text_arr, scores):
self.data = text_arr
self.scores = scores
self.w2v_model = ("w2vec_model_all")
self.max_len = 500
def __getitem__(self, item):
vector = []
essay = self.data[item]
pad_vec = [1 for i in range(300)]
for i in range(self.max_len - len(essay)):
vector.append(pad_vec)
for word in essay:
word_vec = pad_vec
try:
word_vec = self.w2v_model[word]
except:
#print(f"Skipping word as word {word} not in dictionary")
word_vec = pad_vec
vector.append(word_vec)
#print(len(vector))
vector = np.stack(vector)
tensor = torch.tensor(vector[:self.max_len]).float().to("cuda")
score = self.scores[item]
score = torch.tensor(score).long().to("cuda").view(1)
return tensor, score
def __len__(self):
return len(self.scores)
network.py
import torch.nn as nn
import torch
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self, output_size):
super(Network, self).__init__()
self.lstm = nn.LSTM(300,500,1, batch_first=True)
self.dropout = nn.Dropout(p=0.5)
#self.l2 = nn.L2
self.linear = nn.Linear(500,output_size)
def forward(self,x):
x, _ = self.lstm(x)
x = x[:,-1,:]
x = self.dropout(x)
x = self.linear(x)
return x
My new code: https://github.com/Clement-Hui/EssayGrading
| 1
| 1
| 0
| 0
| 0
| 0
|
I currently have around 400K+ documents, each with an associated group and id number. They average around 24K characters and 350 lines each. In total, there is about 25 GB worth of data. Currently, they are split up by the group, reducing the number of documents need to process to around 15K at one time. I have run into the problem of both memory usage and segmentation faults (I believe the latter is a result of the former) when running on a machine with 128GB of memory. I have changed how I process the documents by using batching to handle them at one time.
Batch Code
def batchGetDoc(raw_documents):
out = []
reports = []
infos = []
# Each item in raw_documents is a tuple of 2 items, where the first item is all
# information (report number, tags) that correlate with said document. The second
# item is the raw text of the document itself
for info, report in raw_documents:
reports.append(report)
infos.append(info)
# Using en_core_web_sm as the model
docs = list(SPACY_PARSER.pipe(reports))
for i in range(len(infos)):
out.append([infos[i],docs[i]])
return out
I use a batch size of 500, and even then, it still takes a while. Are these issues in both speed and memory due to using .pipe() on full documents rather than sentences? Would it be better to go through and run SPACY_PARSER(report) individually?
I am using spaCy to get the named entities, their linked entities, the dependency graphs, and knowledge bases from each document. Will doing it this way risk losing information that will be important for spaCy later on when it comes to getting said data?
Edit: I should mention that I do need the document info for later use in predicting the accuracy based on the document's text
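For reference, this is the streaming variant I have been considering instead of materializing the whole batch with list() (the batch size and the use of as_tuples are assumptions on my part):
def stream_docs(raw_documents, nlp):
    # raw_documents yields (info, report) pairs; as_tuples=True carries the info
    # through the pipeline instead of building intermediate lists
    texts_with_context = ((report, info) for info, report in raw_documents)
    for doc, info in nlp.pipe(texts_with_context, as_tuples=True, batch_size=50):
        yield info, doc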
| 1
| 1
| 0
| 0
| 0
| 0
|
I am looking for algorithms that can tell me the language of a text (e.g. Hello - English, Bonjour - French, Servicio - Spanish) and also correct typos of words in English. I have already explored Google's TextBlob; it is very relevant, but I get a "Too many requests" error as soon as my code starts executing. I also started exploring Polyglot, but I am facing a lot of issues installing the library on Windows.
Code for TextBlob
import pandas as pd
from tkinter import filedialog
from textblob import TextBlob
import time
from time import sleep
colnames = ['Word']
x=filedialog.askopenfilename(title='Select the word list')
print("Data to be checked: " + x)
df = pd.read_excel(x,sheet_name='Sheet1',header=0,names=colnames,na_values='?',dtype=str)
words = df['Word']
i=0
Language_detector=pd.DataFrame(columns=['Word','Language','corrected_word','translated_word'])
for word in words:
b = TextBlob(word)
language_word=b.detect_language()
time.sleep(0.5)
if language_word in ['en','EN']:
corrected_word=b.correct()
time.sleep(0.5)
Language_detector.loc[i, ['corrected_word']]=corrected_word
else:
translated_word=b.translate(to='en')
time.sleep(0.5)
Language_detector.loc[i, ['Word']]=word
Language_detector.loc[i, ['Language']]=language_word
Language_detector.loc[i, ['translated_word']]=translated_word
i=i+1
filename="Language detector test v 1.xlsx"
Language_detector.to_excel(filename,sheet_name='Sheet1')
print("Languages identified for the word list")**
| 1
| 1
| 0
| 0
| 0
| 0
|
Sentence:
'I understood that that morning did not work out for her but I would still like to to make an appointment with her. I mean if she does great lashes and it's just this one little hiccup in the beginning it's well worth it as far as I'm concerned.'
How do I remove escape characters to clean the data?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am building an object detection model with YOLOv3 to detect my custom objects, which are streetlights and the labels on the streetlights.
So here's my question: I want my model to detect the label on the streetlight by drawing a bounding box around it. After drawing the bounding boxes, I want to capture the image inside the bounding box and store it for OCR, which is a future step.
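To be concrete about the capture step, this is roughly what I mean by saving the image inside a bounding box (the path and coordinates are placeholders standing in for the detector's output):
import cv2

image = cv2.imread("streetlight.jpg")       # placeholder path
x, y, w, h = 120, 40, 80, 30                # placeholder box from the detector
label_crop = image[y:y + h, x:x + w]        # numpy slicing: rows are y, columns are x
cv2.imwrite("label_crop.png", label_crop)   # saved for the later OCR step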
| 1
| 1
| 0
| 0
| 0
| 0
|
I need assistance reshaping my input to match my output. I believe my issue is with my target variable. I am getting the error as stated in the title. I have tried .reshape and .flatten(). Please help, and thanks in advance
NEnews_train = []
for line in open('/Users/db/Desktop/NE1.txt', 'r'):
NEnews_train.append(line.strip())
REPLACE_NO_SPACE = re.compile("[.;:!'?,\"()\[\]]")
REPLACE_WITH_SPACE = re.compile("(<br\s*/><br\s*/>)|(\-)|(\/)")
def preprocess_reviews(reviews):
reviews = [REPLACE_NO_SPACE.sub("", line.lower()) for line in reviews]
reviews = [REPLACE_WITH_SPACE.sub(" ", line) for line in reviews]
return reviews
NE_train_clean = preprocess_reviews(NEnews_train)
from nltk.corpus import stopwords
english_stop_words = stopwords.words('english')
def remove_stop_words(corpus):
removed_stop_words = []
for review in corpus:
removed_stop_words.append(
' '.join([word for word in review.split()
if word not in english_stop_words])
)
return removed_stop_words
no_stop_words = remove_stop_words(NE_train_clean)
ngram_vectorizer = CountVectorizer(binary=True, ngram_range=(1, 2))
ngram_vectorizer.fit(no_stop_words)
X = ngram_vectorizer.transform(no_stop_words)
X_test = ngram_vectorizer.transform(no_stop_words)
target = [1 if i < 12 else 0 for i in range(25)]
X_train, X_val, y_train, y_val = train_test_split(
X, target, train_size = 0.75
)
Here's the error
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-14-281ec07b46bb> in <module>
2
3 X_train, X_val, y_train, y_val = train_test_split(
----> 4 X, target, train_size = 0.75
5 )
~/opt/anaconda3/lib/python3.7/site-packages/sklearn/model_selection/_split.py in train_test_split(*arrays, **options)
2094 raise TypeError("Invalid parameters passed: %s" % str(options))
2095
-> 2096 arrays = indexable(*arrays)
2097
2098 n_samples = _num_samples(arrays[0])
~/opt/anaconda3/lib/python3.7/site-packages/sklearn/utils/validation.py in indexable(*iterables)
228 else:
229 result.append(np.array(X))
--> 230 check_consistent_length(*result)
231 return result
232
~/opt/anaconda3/lib/python3.7/site-packages/sklearn/utils/validation.py in check_consistent_length(*arrays)
203 if len(uniques) > 1:
204 raise ValueError("Found input variables with inconsistent numbers of"
--> 205 " samples: %r" % [int(l) for l in lengths])
206
207
ValueError: Found input variables with inconsistent numbers of samples: [24, 25]
I saw people with similar errors, but their code is a bit different from mine, so I got a bit confused trying to solve it.
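For reference, the sanity check I have been considering (a guess on my part): the traceback suggests X ended up with 24 rows while target has 25, so building the target from the actual number of rows might avoid the mismatch.
n_samples = X.shape[0]                                  # number of documents actually vectorized
target = [1 if i < 12 else 0 for i in range(n_samples)]
print(X.shape[0], len(target))                          # these must match for train_test_split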
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to see the available problems() but it is giving Error.
Can you please let me know if I am missing anything
>>> from tensor2tensor import problems
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\\Anaconda3\lib\site-packages\tensor2tensor\problems.py", line 22, in <module>
from tensor2tensor.utils import registry
File "C:\Users\\Anaconda3\lib\site-packages\tensor2tensor\utils\registry.py", line 551, in <module>
attacks = tf.contrib.framework.deprecated(None, "Use registry.attack")(attack)
AttributeError: module 'tensorflow' has no attribute 'contrib'
>>> tf.__version__
'2.0.0-beta1'
>>>
I am working on windows
| 1
| 1
| 0
| 0
| 0
| 0
|
Currently, I have a nested for-loop that appends to a list. I'm trying to create the same output while using multiprocessing.
My current code is:
for test in test_data:
    output.append([(ngram[-1], ngram[:-1], model.score(ngram[-1], ngram[:-1])) for ngram in test])
Where test_data is a generator object, and model.score is from the NLTK package.
All the solutions I have found and tried, don't work (at least in my case).
Is there a way to get the same output with multiprocessing?
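For reference, a minimal sketch of the Pool-based variant I have been trying (assuming model is defined at module level so worker processes can see it, and that each item of test_data is picklable):
from multiprocessing import Pool

def score_test_item(test):
    # model must be a module-level global (or otherwise importable) in the workers
    return [(ngram[-1], ngram[:-1], model.score(ngram[-1], ngram[:-1])) for ngram in test]

if __name__ == "__main__":
    with Pool() as pool:
        # imap consumes the generator lazily; each test item is scored in a worker
        output = list(pool.imap(score_test_item, test_data))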
| 1
| 1
| 0
| 0
| 0
| 0
|
I am trying to use Python's string.strip([chars]) function with a character argument. I have used it before for trimming whitespace, but with a character argument it behaves a little oddly, and I am not able to understand the logic behind how it works.
string = ' xoxo love xoxo '
# Leading and trailing whitespace is removed
print(string.strip())
#Result: xoxo love xoxo
print(string.strip(' xoxoe'))
#Result: lov
print(string.strip(' dove '))
#Result: lov
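After some more experimenting, the behaviour looks consistent with strip() treating its argument as a set of characters (not a substring) and peeling characters off both ends until it hits one not in the set; a small sketch of what I mean:
s = ' xoxo love xoxo '

# The argument is a set of characters; order and repetition do not matter.
print(repr(s.strip(' xoe')))   # 'lov'   -- stops at 'l' on the left and 'v' on the right
print(repr(s.strip(' ox')))    # 'love'  -- stops at 'l' and 'e'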
| 1
| 1
| 0
| 0
| 0
| 0
|
My neural network is not giving the expected output after training in Python. Is there any error in the code? Is there any way to reduce the mean squared error (MSE)?
I tried to train (Run the program) the network repeatedly but it is not learning, instead it is giving the same MSE and output.
Here is the Data I used:
https://drive.google.com/open?id=1GLm87-5E_6YhUIPZ_CtQLV9F9wcGaTj2
Here is my code:
#load and evaluate a saved model
from numpy import loadtxt
from tensorflow.keras.models import load_model
# load model
model = load_model('ANNnew.h5')
# summarize model.
model.summary()
#Model starts
import numpy as np
import pandas as pd
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
# Importing the dataset
X = pd.read_excel(r"C:\filelocation\Data.xlsx","Sheet1").values
y = pd.read_excel(r"C:\filelocation\Data.xlsx","Sheet2").values
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.08, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Initialising the ANN
model = Sequential()
# Adding the input layer and the first hidden layer
model.add(Dense(32, activation = 'tanh', input_dim = 4))
# Adding the second hidden layer
model.add(Dense(units = 18, activation = 'tanh'))
# Adding the third hidden layer
model.add(Dense(units = 32, activation = 'tanh'))
#model.add(Dense(1))
model.add(Dense(units = 1))
# Compiling the ANN
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the ANN to the Training set
model.fit(X_train, y_train, batch_size = 100, epochs = 1000)
y_pred = model.predict(X_test)
for i in range(5):
print('%s => %d (expected %s)' % (X[i].tolist(), y_pred[i], y[i].tolist()))
plt.plot(y_test, color = 'red', label = 'Test data')
plt.plot(y_pred, color = 'blue', label = 'Predicted data')
plt.title('Prediction')
plt.legend()
plt.show()
# save model and architecture to single file
model.save("ANNnew.h5")
print("Saved model to disk")
| 1
| 1
| 0
| 1
| 0
| 0
|
I am not able to download 'stopwords' from the nltk library.
nltk.download('stopwords')
The folder nltk_data doesn't have any sub-folder called 'corpora'; is that causing the issue? If so, how do I fix it?
[nltk_data] Downloading package stopwords to
[nltk_data] /Users/prasadkamath/nltk_data...
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/prasadkamath/anaconda2/envs/Pk/lib/python3.7/site-packages/nltk/downloader.py", line 787, in download
for msg in self.incr_download(info_or_id, download_dir, force):
File "/Users/prasadkamath/anaconda2/envs/Pk/lib/python3.7/site-packages/nltk/downloader.py", line 650, in incr_download
for msg in self._download_package(info, download_dir, force):
File "/Users/prasadkamath/anaconda2/envs/Pk/lib/python3.7/site-packages/nltk/downloader.py", line 710, in _download_package
os.mkdir(os.path.join(download_dir, info.subdir))
PermissionError: [Errno 13] Permission denied: '/Users/prasadkamath/nltk_data/corpora'
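The workaround I have been considering (not sure it is the right one) is to download into a directory I definitely have write access to and tell nltk where to look:
import nltk

target_dir = '/Users/prasadkamath/nltk_data_local'   # placeholder: any writable directory
nltk.download('stopwords', download_dir=target_dir)
nltk.data.path.append(target_dir)                    # so later corpus lookups find it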
| 1
| 1
| 0
| 0
| 0
| 0
|
So I've been learning TensorFlow with this Computer Vision project and I'm not sure if I understand it well enough. I think I got the session part right, although graph seems to be the issue here. Here is my code:
def model_train(placeholder_dimensions, filter_dimensions, strides, learning_rate, num_epochs, minibatch_size, print_cost = True):
# for training purposes
tf.reset_default_graph()
# create datasets
train_set, test_set = load_dataset()  # custom function over a custom-made dataset
X_train = np.array([ex[0] for ex in train_set])
Y_train = np.array([ex[1] for ex in train_set])
X_test = np.array([ex[0] for ex in test_set])
Y_test = np.array([ex[1] for ex in test_set])
#convert to one-hot encodings
Y_train = tf.one_hot(Y_train, depth = 10)
Y_test = tf.one_hot(Y_test, depth = 10)
m = len(train_set)
costs = []
tf.reset_default_graph()
graph = tf.get_default_graph()
with graph.as_default():
# create placeholders
X, Y = create_placeholders(*placeholder_dimensions)
# initialize parameters
parameters = initialize_parameters(filter_dimensions)
# forward propagate
Z4 = forward_propagation(X, parameters, strides)
# compute cost
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z4, labels = Y))
# define optimizer for backpropagation that minimizes the cost function
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
# initialize variables
init = tf.global_variables_initializer()
# start session
with tf.Session() as sess:
sess.run(init)
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size)
# get random minibatch
minibatches = random_minibatches(np.array([X_train, Y_train]), minibatch_size)
for minibatch in minibatches:
minibatch_X, minibatch_Y = minibatch
_ , temp_cost = sess.run([optimizer, cost], {X: minibatch_X, Y: minibatch_Y})
minibatch_cost += temp_cost / num_minibatches
if print_cost == True and epoch % 5 == 0:
print('Cost after epoch %i: %f' %(epoch, minibatch_cost))
if print_cost == True:
costs.append(minibatch_cost)
# plot the costs
plot_cost(costs, learning_rate)
# calculate correct predictions
prediction = tf.argmax(Z4, 1)
correct_prediction = tf.equal(prediction, tf.argmax(Y, 1))
# calculate accuracy on test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print('Training set accuracy:', train_accuracy)
print('Test set accuracy:', test_accuracy)
return parameters
where create_placeholder and initialize_parameters function are as follows:
def initialize_parameters(filter_dimensions):
# initialize weight parameters for convolution layers
W1 = tf.get_variable('W1', shape = filter_dimensions['W1'])
W2 = tf.get_variable('W2', shape = filter_dimensions['W2'])
parameters = {'W1': W1, 'W2': W2}
return parameters
def forward_propagation(X, parameters, strides):
with tf.variable_scope('model1'):
# first block
Z1 = tf.nn.conv2d(X, parameters['W1'], strides['conv1'], padding = 'VALID')
A1 = tf.nn.relu(Z1)
P1 = tf.nn.max_pool(A1, ksize = strides['pool1'], strides = strides['pool1'], padding = 'VALID')
# second block
Z2 = tf.nn.conv2d(P1, parameters['W2'], strides['conv2'], padding = 'VALID')
A2 = tf.nn.relu(Z2)
P2 = tf.nn.max_pool(A2, ksize = strides['pool2'], strides = strides['pool2'], padding = 'VALID')
# flatten
F = tf.contrib.layers.flatten(P2)
# dense block
Z3 = tf.contrib.layers.fully_connected(F, 50)
A3 = tf.nn.relu(Z3)
# output
Z4 = tf.contrib.layers.fully_connected(A3, 10, activation_fn = None)
return Z4
I have previous experience with Keras, yet I can't find what the problem is here.
| 1
| 1
| 0
| 0
| 0
| 0
|
I have the challenge of finding & replacing patterns in order to normalize a paragraph. It's easier to understand with an example: I have a lot of words like:
nm5638238.tmp, nm23345.tmp, nm56382334.tmp, etc
myfile0x233454, myfile0x233124, myfile0x23AW54, etc
and so on. The thing is that I don't like the regex approach, in the sense that it is so custom (I mean, I need one regex for each case). I need an "unattended" approach that discovers on its own that one pattern is, for example, myfileSOMETHING and another is nmSOMETHING.tmp, etc. Is there any NLP technique you can suggest?
Thanks!
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to get words that are distinctive of certain documents using the TfIDFVectorizer class in scikit-learn. It creates a tfidf matrix with all the words and their scores in all the documents, but then it seems to count common words, as well. This is some of the code I'm running:
vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(contents)
feature_names = vectorizer.get_feature_names()
dense = tfidf_matrix.todense()
denselist = dense.tolist()
df = pd.DataFrame(denselist, columns=feature_names, index=characters)
s = pd.Series(df.loc['Adam'])
s[s > 0].sort_values(ascending=False)[:10]
I expected this to return a list of distinctive words for the document 'Adam', but what it does it return a list of common words:
and 0.497077
to 0.387147
the 0.316648
of 0.298724
in 0.186404
with 0.144583
his 0.140998
I might not understand it perfectly, but as I understand it, tf-idf is supposed to find words that are distinctive of one document in a corpus, finding words that appear frequently in one document, but not in other documents. Here, and appears frequently in other documents, so I don't know why it's returning a high value here.
The complete code I'm using to generate this is in this Jupyter notebook.
When I compute tf/idfs semi-manually, using the NLTK and computing scores for each word, I get the appropriate results. For the 'Adam' document:
fresh 0.000813
prime 0.000813
bone 0.000677
relate 0.000677
blame 0.000677
enough 0.000677
That looks about right, since these are words that appear in the 'Adam' document, but not as much in other documents in the corpus. The complete code used to generate this is in this Jupyter notebook.
Am I doing something wrong with the scikit code? Is there another way to initialize this class where it returns the right results? Of course, I can ignore stopwords by passing stop_words = 'english', but that doesn't really solve the problem, since common words of any sort shouldn't have high scores here.
| 1
| 1
| 0
| 0
| 0
| 0
|
If I have some documents like this:
doc1 = "hello hello this is a document"
doc2 = "this text is very interesting"
documents = [doc1, doc2]
And I compute a TF-IDF matrix for this in Gensim like this:
# create dictionary
dictionary = corpora.Dictionary([simple_preprocess(line) for line in documents])
# create bow corpus
corpus = [dictionary.doc2bow(simple_preprocess(line)) for line in documents]
# create the tf.idf matrix
tfidf = models.TfidfModel(corpus, smartirs='ntc')
Then for each document, I get a TF-IDF like this:
Doc1: [("hello", 0.5), ("a", 0.25), ("document", 0.25)]
Doc2: [("text", 0.333), ("very", 0.333), ("interesting", 0.333)]
But I want the TF-IDF vector for each document to include words with 0 TF-IDF values (i.e. include every word mentioned in the corpus):
Doc1: [("hello", 0.5), ("this", 0), ("is", 0), ("a", 0.25), ("document", 0.25), ("text", 0), ("very", 0), ("interesting", 0)]
Doc2: [("hello", 0), ("this", 0), ("is", 0), ("a", 0), ("document", 0), ("text", 0.333), ("very", 0.333), ("interesting", 0.333)]
How can I do this in Gensim? Or maybe there is some other library that can compute a TF-IDF matrix in this fashion (although like Gensim, it needs to be able to handle very large data sets, e.g. I achieved this result in Sci-kit on a small data set, but Sci-kit has memory problems on a large data set).
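One thing I have looked at (not sure it is the intended way) is densifying the sparse TF-IDF corpus so every dictionary term shows up, zeros included:
from gensim import matutils

corpus_tfidf = tfidf[corpus]
# corpus2dense gives a (num_terms x num_docs) array; transpose for one row per document
dense = matutils.corpus2dense(corpus_tfidf, num_terms=len(dictionary)).T
doc1_full = list(zip([dictionary[i] for i in range(len(dictionary))], dense[0]))
print(doc1_full)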
| 1
| 1
| 0
| 0
| 0
| 0
|
When training an empty NER model from scratch, should I include only labeled data (data that necessarily contains at least one entity), or should I also include data that does not contain any label at all (in this case, teaching the model that in some circumstances these words do not have any label)?
| 1
| 1
| 0
| 0
| 0
| 0
|
I am new to Python and machine learning. I want to plot a Zipf's distribution graph for a text file, but my code gives an error.
Following is my Python code:
import re
import numpy as np
import matplotlib.pyplot as plt
from scipy import special
from itertools import islice
#Get our corpus of medical words
frequency = {}
list(frequency)
open_file = open("abp.csv", 'r')
file_to_string = open_file.read()
words = re.findall(r'(\b[A-Za-z][a-z]{2,9}\b)', file_to_string)
#build dict of words based on frequency
for word in words:
count = frequency.get(word,0)
frequency[word] = count + 1
#limit words to 1000
n = 1000
frequency = {key:value for key,value in islice(frequency.items(), 0, n)}
#convert value of frequency to numpy array
s = frequency.values()
s = np.array(s)
#Calculate zipf and plot the data
a = 2. # distribution parameter
count, bins, ignored = plt.hist(s[s<50], 50, normed=True)
x = np.arange(1., 50.)
y = x**(-a) / special.zetac(a)
plt.plot(x, y/max(y), linewidth=2, color='r')
plt.show()
And the above code gives the following error:
count, bins, ignored = plt.hist(s[s<50], 50, normed=True)
TypeError: '<' not supported between instances of 'dict_values' and 'int'
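From the traceback it looks like s is still a dict_values view when the comparison runs; the variant I have been trying (not certain it is the only issue) materializes it into a real array first:
s = np.array(list(frequency.values()))                         # list() first, so s < 50 works
count, bins, ignored = plt.hist(s[s < 50], 50, normed=True)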
| 1
| 1
| 0
| 1
| 0
| 0
|
Shall I replace NaN with zero, the average, or the minimum year "1900" in the case below?
I am trying to clean the example dataframe below; the second item has no garage, with a 0 value in both the GarageArea and GarageCars columns.
Edit: to be clearer, I am not asking how to do it; I am asking for the best value for the missing date, i.e. min, average, or zero,
without dropping the row, because it is a test dataset, not a training one.
I am trying to clean this test dataframe for a scikit-learn RandomForest using pandas. Since this is a date, I think using zero will not be appropriate, and I am also not sure about the average or minimum values!
# Year GarageArea GarageCars
1 1900 10 1
2 NaN 0 0
3 2001 50 2
4 1950 70 2
5 2019 100 4
| 1
| 1
| 0
| 0
| 0
| 0
|
LSTM(
(embed): Embedding(139948, 12, padding_idx=0)
(lstm): LSTM(12, 12, num_layers=2, batch_first=True, bidirectional=True)
(lin): Linear(in_features=240, out_features=6, bias=True)
)
Train epoch : 1, loss : 771.319284286499, accuracy :0.590
=================================================================================================
Traceback (most recent call last):
File "C:/Users/Administrator/PycharmProjects/untitled/example.py", line 297, in <module>
scores = model(x_test, x_test_seq_length)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch
n\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "C:/Users/Administrator/PycharmProjects/untitled/example.py", line 141, in forward
x = self.embed(x) # sequence_length(max_len), batch_size, embed_size
File "C:\ProgramData\Anaconda3\lib\site-packages\torch
n\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch
n\modules\sparse.py", line 117, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch
n\functional.py", line 1506, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'
It works fine on the training set, but I keep getting that error on the test set. I've been thinking about it for 10 hours.
What is the problem?
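For what it is worth, the pattern I have been trying on the evaluation side (assuming x_test and x_test_seq_length are the CPU tensors built from the test set) is to move them to the same device as the model before calling it:
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)

x_test = x_test.to(device)                        # indices must live on the same device as the embedding weights
x_test_seq_length = x_test_seq_length.to(device)  # if this is a tensor as well
scores = model(x_test, x_test_seq_length)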
| 1
| 1
| 0
| 0
| 0
| 0
|
I have the following problem:
In English language my code generates successful word embeddings with Gensim, and similar phrases are close to each other considering cosine distance:
The angle between "Response time and error measurement" and "Relation of user perceived response time to error measurement" is very small, thus they are the most similar phrases in the set.
However, when I use the same phrases in Portuguese, it doesn't work:
My code as follows:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
import matplotlib.pyplot as plt
from gensim import corpora
documents = ["Interface máquina humana para aplicações computacionais de laboratório abc",
"Um levantamento da opinião do usuário sobre o tempo de resposta do sistema informático",
"O sistema de gerenciamento de interface do usuário EPS",
"Sistema e testes de engenharia de sistemas humanos de EPS",
"Relação do tempo de resposta percebido pelo usuário para a medição de erro",
"A geração de árvores não ordenadas binárias aleatórias",
"O gráfico de interseção dos caminhos nas árvores",
"Gráfico de menores IV Largura de árvores e bem quase encomendado",
"Gráficos menores Uma pesquisa"]
stoplist = set('for a of the and to in on'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in documents]
texts
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
frequency
from nltk import tokenize
texts=[tokenize.word_tokenize(documents[i], language='portuguese') for i in range(0,len(documents))]
from pprint import pprint
pprint(texts)
dictionary = corpora.Dictionary(texts)
dictionary.save('/tmp/deerwester.dict')
print(dictionary)
print(dictionary.token2id)
# VECTOR
new_doc = "Tempo de resposta e medição de erro"
new_vec = dictionary.doc2bow(new_doc.lower().split())
print(new_vec)
## VETOR OF PHRASES
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize('/tmp/deerwester.mm', corpus)
print(corpus)
from gensim import corpora, models, similarities
tfidf = models.TfidfModel(corpus) # step 1 -- initialize a model
### PHRASE COORDINATES
frase=tfidf[new_vec]
print(frase)
corpus_tfidf = tfidf[corpus]
for doc in corpus_tfidf:
print(doc)
lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2)
corpus_lsi = lsi[corpus_tfidf]
lsi.print_topics(2)
## TEXT COORDINATES
todas=[]
for doc in corpus_lsi:
todas.append(doc)
todas
from gensim import corpora, models, similarities
dictionary = corpora.Dictionary.load('/tmp/deerwester.dict')
corpus = corpora.MmCorpus('/tmp/deerwester.mm')
print(corpus)
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
doc = new_doc
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lsi = lsi[vec_bow]
print(vec_lsi)
p=[]
for i in range(0,len(documents)):
doc1 = documents[i]
vec_bow2 = dictionary.doc2bow(doc1.lower().split())
vec_lsi2 = lsi[vec_bow2]
p.append(vec_lsi2)
p
index = similarities.MatrixSimilarity(lsi[corpus])
index.save('/tmp/deerwester.index')
index = similarities.MatrixSimilarity.load('/tmp/deerwester.index')
sims = index[vec_lsi]
print(list(enumerate(sims)))
sims = sorted(enumerate(sims), key=lambda item: -item[1])
print(sims)
#################
import gensim
import numpy as np
import matplotlib.colors as colors
import matplotlib.cm as cmx
import matplotlib as mpl
matrix1 = gensim.matutils.corpus2dense(p, num_terms=2)
matrix3=matrix1.T
matrix3[0]
ss=[]
for i in range(0,9):
ss.append(np.insert(matrix3[i],0,[0,0]))
matrix4=ss
matrix4
matrix2 = gensim.matutils.corpus2dense([vec_lsi], num_terms=2)
matrix2=np.insert(matrix2,0,[0,0])
matrix2
DATA=np.insert(matrix4,0,matrix2)
DATA=DATA.reshape(10,4)
DATA
names=np.array(documents)
names=np.insert(names,0,new_doc)
new_doc
cmap = plt.cm.jet
cNorm = colors.Normalize(vmin=np.min(DATA[:,3])+.2, vmax=np.max(DATA[:,3]))
scalarMap = cmx.ScalarMappable(norm=cNorm,cmap=cmap)
len(DATA[:,1])
plt.subplots()
plt.figure(figsize=(12,9))
plt.scatter(matrix1[0],matrix1[1],s=60)
plt.scatter(matrix2[2],matrix2[3],color='r',s=95)
for idx in range(0,len(DATA[:,1])):
colorVal = scalarMap.to_rgba(DATA[idx,3])
plt.arrow(DATA[idx,0],
DATA[idx,1],
DATA[idx,2],
DATA[idx,3],
color=colorVal,head_width=0.002, head_length=0.001)
for i,names in enumerate (names):
plt.annotate(names, (DATA[i][2],DATA[i][3]),va='top')
plt.title("PHRASE SIMILARITY - WORD2VEC with GENSIM library")
plt.xlim(min(DATA[:,2]-.2),max(DATA[:,2]+1))
plt.ylim(min(DATA[:,3]-.2),max(DATA[:,3]+.3))
plt.show()
My question is: is there any additional setup needed for Gensim to generate proper word embeddings for Portuguese, or does Gensim not support this language?
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm new to PyTorch and have been following the many tutorials available.
But when I ran the chatbot tutorial, it did not work,
as shown in the figure below.
What should I do, and what is causing this?
| 1
| 1
| 0
| 0
| 0
| 0
|
I have taken the code from the tutorial and attempted to modify it to include bi-directionality and any arbitrary numbers of layers for GRU.
Link to the tutorial which uses uni-directional, single layer GRU:
https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html
The model works fine, but when I set bidirectional=True, I get a dimension mismatch error (shown below). Any thoughts on why this happens?
Encoder:
import torch.nn.init as init
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size, n_layers=1, bidirectional=False):
super(EncoderRNN, self).__init__()
self.hidden_size = hidden_size
self.hidden_var = hidden_size//2 if bidirectional else hidden_size
self.n_layers = n_layers
self.bidirectional = bidirectional
self.n_directions = 2 if bidirectional else 1
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size,
self.hidden_var,
num_layers=self.n_layers,
bidirectional=self.bidirectional)
def forward(self, input, hidden):
embedded = self.embedding(input).view(1, 1, -1)
output = embedded
output, hidden = self.gru(output, hidden)
#output = (output[:, :, :self.hidden_size] +
# output[:, :, self.hidden_size:])
return output, hidden
def initHidden(self):
return torch.zeros(self.n_layers*self.n_directions, 1, self.hidden_var, device=device)
AttnDecoder:
class AttnDecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size, n_layers=1, dropout_p=0.1, max_length=MAX_LENGTH):
super(AttnDecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.output_size = output_size
self.dropout_p = dropout_p
self.max_length = max_length
self.n_layers = n_layers
self.embedding = nn.Embedding(self.output_size, self.hidden_size)
self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)
self.dropout = nn.Dropout(self.dropout_p)
self.gru = nn.GRU(self.hidden_size,
self.hidden_size,
num_layers = self.n_layers)
self.out = nn.Linear(self.hidden_size, self.output_size)
def forward(self, input, hidden, encoder_outputs):
embedded = self.embedding(input).view(1, 1, -1)
embedded = self.dropout(embedded)
attn_weights = F.softmax(
self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1)
attn_applied = torch.bmm(attn_weights.unsqueeze(0),
encoder_outputs.unsqueeze(0))
output = torch.cat((embedded[0], attn_applied[0]), 1)
output = self.attn_combine(output).unsqueeze(0)
output = F.relu(output)
output, hidden = self.gru(output, hidden)
output = F.log_softmax(self.out(output[0]), dim=1)
return output, hidden, attn_weights
def initHidden(self):
return torch.zeros(1*self.n_layers, 1, self.hidden_size, device=device)
Everything else from the tutorial is kept exactly the same apart from this code block ( to account for the new parameters):
n_layers=1
bidirectional = True
hidden_size = 256
encoder1 = EncoderRNN(input_lang.n_words, hidden_size, n_layers=n_layers, bidirectional=bidirectional).to(device)
attn_decoder1 = AttnDecoderRNN(hidden_size, output_lang.n_words, dropout_p=0.1, n_layers=n_layers).to(device)
trainIters(encoder1, attn_decoder1, 75000, print_every=5000)
Error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-133-37084c93a197> in <module>
5 attn_decoder1 = AttnDecoderRNN(hidden_size, output_lang.n_words, dropout_p=0.1, n_layers=n_layers).to(device)
6
----> 7 trainIters(encoder1, attn_decoder1, 75000, print_every=5000)
<ipython-input-131-774ce8edefa6> in trainIters(encoder, decoder, n_iters, print_every, plot_every, learning_rate)
16
17 loss = train(input_tensor, target_tensor, encoder,
---> 18 decoder, encoder_optimizer, decoder_optimizer, criterion)
19 print_loss_total += loss
20 plot_loss_total += loss
<ipython-input-130-67be7e8c2a58> in train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length)
39 for di in range(target_length):
40 decoder_output, decoder_hidden, decoder_attention = decoder(
---> 41 decoder_input, decoder_hidden, encoder_outputs)
42 topv, topi = decoder_output.topk(1)
43 decoder_input = topi.squeeze().detach() # detach from history as input
~/miniconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
<ipython-input-129-6dd1d30fe28f> in forward(self, input, hidden, encoder_outputs)
24
25 attn_weights = F.softmax(
---> 26 self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1)
27 attn_applied = torch.bmm(attn_weights.unsqueeze(0),
28 encoder_outputs.unsqueeze(0))
~/miniconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
~/miniconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/linear.py in forward(self, input)
85
86 def forward(self, input):
---> 87 return F.linear(input, self.weight, self.bias)
88
89 def extra_repr(self):
~/miniconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1367 if input.dim() == 2 and bias is not None:
1368 # fused op is marginally faster
-> 1369 ret = torch.addmm(bias, input, weight.t())
1370 else:
1371 output = input.matmul(weight.t())
RuntimeError: size mismatch, m1: [1 x 384], m2: [512 x 10] at /tmp/pip-req-build-58y_cjjl/aten/src/TH/generic/THTensorMath.cpp:752
Any help would be appreciated!
Update based on user3923920 comment (encoder-decoder also includes LSTM option & now works with bidirectionality)
New working and adapted Encoder
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size, n_layers=1, bidirectional=False, method='GRU'):
super(EncoderRNN, self).__init__()
self.hidden_size = hidden_size
self.hidden_var = hidden_size // 2 if bidirectional else hidden_size
self.n_layers = n_layers
self.bidirectional = bidirectional
self.n_directions = 2 if bidirectional else 1
self.method = method
self.embedding = nn.Embedding(input_size, hidden_size)
if self.method == 'GRU':
self.net = nn.GRU(hidden_size,
self.hidden_var,
num_layers=self.n_layers,
bidirectional=self.bidirectional)
elif self.method == 'LSTM':
self.net = nn.LSTM(hidden_size,
self.hidden_var,
num_layers=self.n_layers,
bidirectional=self.bidirectional)
def forward(self, input, hidden):
embedded = self.embedding(input).view(1, 1, -1)
output = embedded
output, hidden = self.net(output, hidden)
# output = (output[:, :, :self.hidden_size] +
# output[:, :, self.hidden_size:])
return output, hidden, embedded
def initHidden(self):
if self.method == 'GRU':
return torch.zeros(self.n_layers * self.n_directions, 1, self.hidden_var, device=device)
elif self.method == 'LSTM':
h_state = torch.zeros(self.n_layers * self.n_directions, 1, self.hidden_var)
c_state = torch.zeros(self.n_layers * self.n_directions, 1, self.hidden_var)
hidden = (h_state, c_state)
return hidden
New working and adapted Decoder
class AttnDecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size, n_layers=1, dropout_p=0.1,
max_length=MAX_LENGTH, method='GRU', bidirectional=False):
super(AttnDecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.output_size = output_size
self.dropout_p = dropout_p
self.max_length = max_length
self.n_layers = n_layers
self.method = method
self.bidirectional = bidirectional
self.embedding = nn.Embedding(self.output_size, self.hidden_size)
self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)
self.dropout = nn.Dropout(self.dropout_p)
if self.method == 'GRU':
self.net = nn.GRU(self.hidden_size,
self.hidden_size,
num_layers=self.n_layers)
elif self.method == 'LSTM':
self.net = nn.LSTM(self.hidden_size,
self.hidden_size,
num_layers=self.n_layers)
self.out = nn.Linear(self.hidden_size, self.output_size)
def forward(self, input, hidden, encoder_outputs):
# Embed
embedded = self.embedding(input).view(1, 1, -1)
embedded = self.dropout(embedded)
self.hidden = hidden
# Concatenate all of the layers
hidden_h_rows = ()
hidden_c_rows = ()
if self.method == 'LSTM':
# hidden is a tuple of h_state and c_state
decoder_h, decoder_c = hidden
print(decoder_h.shape)
hidden_shape = decoder_h.shape[0]
# h_state
for x in range(0, hidden_shape):
hidden_h_rows += (decoder_h[x],)
# c_state
for x in range(0, hidden_shape):
hidden_c_rows += (decoder_c[x],)
elif self.method == "GRU":
# hidden is not a tuple (GRU)
decoder_h = hidden
hidden_shape = decoder_h.shape[0]
# h_state
for x in range(0, hidden_shape):
hidden_h_rows += (decoder_h[x],)
if self.bidirectional:
decoder_h_cat = torch.cat(hidden_h_rows, 1)
# Make sure the h_dim size is compatible with num_layers with concatenation.
decoder_h = decoder_h_cat.view((self.n_layers, 1, self.hidden_size)) # hidden_size=256
if self.method == "LSTM":
decoder_c_cat = torch.cat(hidden_c_rows, 1)
decoder_c = decoder_c_cat.view((self.n_layers, 1, self.hidden_size)) # hidden_size=256
hidden_lstm = (decoder_h, decoder_c)
elif self.method == "GRU":
hidden_gru = decoder_h
# Attention Block
attn_weights = F.softmax(
self.attn(torch.cat((embedded[0], hidden_lstm[0][0] if self.method == "LSTM" else \
hidden_gru[0]), 1)), dim=1)
attn_applied = torch.bmm(attn_weights.unsqueeze(0), encoder_outputs.unsqueeze(0))
output = torch.cat((embedded[0], attn_applied[0]), 1)
output = self.attn_combine(output).unsqueeze(0)
output = F.relu(output)
output, hidden = self.net(output,
hidden_lstm if self.method == "LSTM" else hidden_gru) # I am not sure about this!
output = F.log_softmax(self.out(output[0]), dim=1)
return output, hidden, attn_weights
def initHidden(self):
if self.method == 'GRU':
return torch.zeros(self.n_layers * 1, 1, self.hidden_var, device=device)
elif self.method == 'LSTM':
h_state = torch.zeros(self.n_layers * 1, 1, self.hidden_var)
c_state = torch.zeros(self.n_layers * 1, 1, self.hidden_var)
hidden = (h_state, c_state)
return hidden
| 1
| 1
| 0
| 0
| 0
| 0
|
I am using Spacy for NLP in Python. I am trying to use nlp.pipe() to generate a list of Spacy doc objects, which I can then analyze. Oddly enough, nlp.pipe() returns an object of the class <generator object pipe at 0x7f28640fefa0>. How can I get it to return a list of docs, as intended?
import spacy
nlp = spacy.load('en_depent_web_md', disable=['tagging', 'parser'])
matches = ['one', 'two', 'three']
docs = nlp.pipe(matches)
docs
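A minimal illustration of the workaround I have found so far (wrapping the generator in list(); I am not sure whether this is the intended usage):
docs = list(nlp.pipe(matches))   # nlp.pipe is lazy; list() forces evaluation
print(docs)                      # Doc objects for 'one', 'two', 'three'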
| 1
| 1
| 0
| 0
| 0
| 0
|
In spacy, I'd like characters like '€', '$', or '¥' to be always considered a token. However it seems sometimes they are made part of a bigger token.
For example, this is good (two tokens)
>>> len(nlp("100€"))
2
But the following is not what I want (I'd like to obtain two tokens in this case also):
>>> len(nlp("N€"))
1
How could I achieve that with spacy?
By the way, don't get too focused on the currency example. I've had this kind of problem with other kinds of characters that have nothing to do with numbers or currencies. The problem is how to make sure a character is always treated as a full token and not glued to some other string in the sentence.
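The closest I have come so far (not sure this is the recommended approach) is adding the characters to the tokenizer's infix patterns so they are always split off:
import spacy
from spacy.util import compile_infix_regex

nlp = spacy.load("en_core_web_sm")
infixes = list(nlp.Defaults.infixes) + [r"[€$¥]"]            # treat these as split points
nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer

print([t.text for t in nlp("N€")])     # expected: ['N', '€']
print([t.text for t in nlp("100€")])   # expected: ['100', '€']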
| 1
| 1
| 0
| 0
| 0
| 0
|
I am unable to find where my pattern went wrong to cause this outcome.
The Sentence I want to find:"#1 – January 31, 2015" and any date that follows this format.
The pattern pattern1=[{'ORTH':'#'},{'is_digital':True},{'is_space':True},{'ORTH':'-'},{'is_space':True},{'is_alpha':True},{'is_space':True},{'is_digital':True},{'is_punct':True},{'is_space':True},{'is_digital':True}]
The print code:print("Matches1:", [doc[start:end].text for match_id, start, end in matches1])
The result: ['#', '#', '#']
Expected result: ['#1 – January 31, 2015','#5 – March 15, 2017','#177 – Novenmber 22, 2019']
| 1
| 1
| 0
| 0
| 0
| 0
|
I am a beginner in NLP and it's my first time to do Topic Modeling. I was able to generate my model however I cannot produce the coherence metric.
Converting the term-document matrix into a new gensim format, from df --> sparse matrix --> gensim corpus
sparse_counts = scipy.sparse.csr_matrix(data_dtm)
corpus = matutils.Sparse2Corpus(sparse_counts)
corpus
df_lemmatized.head()
# Gensim also requires dictionary of the all terms and their respective location in the term-document matrix
tfidfv = pickle.load(open("tfidf.pkl", "rb"))
id2word = dict((v, k) for k, v in tfidfv.vocabulary_.items())
id2word
This is my model:
lda = models.LdaModel(corpus=corpus, id2word=id2word, num_topics=15, passes=10, random_state=43)
lda.print_topics()
And finally, here is where I attempted to get Coherence Score Using Coherence Model:
# Compute Perplexity
print('\nPerplexity: ', lda.log_perplexity(corpus))
# Compute Coherence Score
coherence_model_lda = CoherenceModel(model=lda, texts=df_lemmatized.long_title, dictionary=id2word, coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print('\nCoherence Score: ', coherence_lda)
This is the error:
---> 57 if not dictionary.id2token: # may not be initialized in the standard gensim.corpora.Dictionary
58 setattr(dictionary, 'id2token', {v: k for k, v in dictionary.token2id.items()})
59
AttributeError: 'dict' object has no attribute 'id2token'
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm just starting to use NLTK and I don't quite understand how to get a list of words from text. If I use nltk.word_tokenize(), I get a list of words and punctuation. I need only the words instead. How can I get rid of punctuation? Also word_tokenize doesn't work with multiple sentences: dots are added to the last word.
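A minimal sketch of the two options I have been weighing (a regexp-based tokenizer versus filtering the word_tokenize output):
import nltk
from nltk.tokenize import RegexpTokenizer

text = "This is a sentence. And another one!"

# Option 1: tokenize on word characters only, so punctuation never appears
tokenizer = RegexpTokenizer(r"\w+")
print(tokenizer.tokenize(text))

# Option 2: keep word_tokenize, then drop tokens that contain no letters or digits
words = [t for t in nltk.word_tokenize(text) if any(c.isalnum() for c in t)]
print(words)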
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm searching for a way to make sure that any time the sequence "#*" appears in the text, spaCy gives me the token "#*". I tried every possible way of adding special cases with add_special_case, and building a custom Tokenizer using prefix_search, suffix_search, infix_finditer and token_match, but there are still cases where, if a "#*" appears in a sentence, even when it's surrounded by tokens that are not weird (tokens that should be recognized without a problem), the "#*" is split into [#, *].
What can I do?
Thanks.
| 1
| 1
| 0
| 0
| 0
| 0
|
I am wondering if it is possible to write a Telegram bot that will answer questions similarly to the FAQ section of any chosen website. Since I couldn't find any examples similar to my idea, I've decided to post this question here.
Probably it is worth using the Dialogflow framework here, but, again, there are no examples on the web.
| 1
| 1
| 0
| 0
| 0
| 0
|
So I have texts that look like the one below:
He also may have
recurrent seizures which should be treated with ativan IV or IM
and do not neccessarily indicate patient needs to return to
hospital unless they continue for greater than 5 minutes or he
has multiple recurrent seizures or complications such as
aspiration.
and also annotation files which are like:
T1 Reason 16 33 recurrent seizures
The above annotation gives the ID of the entity, the span (character positions) and the entity itself. My goal is to do NER (Named Entity Recognition) on the above data. From my research I know that I have to do BIO (Beginning, Inside and Outside) tagging on the data, which will make my data look as follows:
O - also
O - may
O - have
B - recurrent
I - seizures
After the BIO tagging I want to use the data to get some word embeddings and feed them to a classifier, which will let me get the entity types on the test data.
Is the process outline I gave right, or can anyone please explain how I can go about this problem?
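To make the BIO step concrete, here is a rough sketch of how I picture converting a character-span annotation into token tags (whitespace tokenization and a toy sentence with recomputed offsets, just for illustration):
import re

text = "He also may have recurrent seizures which should be treated"
annotations = [(17, 35, "Reason")]   # (start_char, end_char, label), recomputed for this toy text

tokens = [(m.group(), m.start(), m.end()) for m in re.finditer(r"\S+", text)]

tags = []
for tok, start, end in tokens:
    tag = "O"
    for a_start, a_end, label in annotations:
        if start >= a_start and end <= a_end:
            tag = ("B-" if start == a_start else "I-") + label
    tags.append((tok, tag))

print(tags)   # ... ('recurrent', 'B-Reason'), ('seizures', 'I-Reason') ...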
| 1
| 1
| 0
| 1
| 0
| 0
|
I have seen others have posted similar questions. But the difference is I'm running a Keras Functional API instead of a sequential model.
from keras.models import Model
from keras import layers
from keras import Input
text_vocabulary_size = 10000
question_vocabulary_size = 10000
answer_vocabulary_size = 500
text_input = Input(shape=(None,), dtype='int32', name='text')
embedded_text = layers.Embedding(64, text_vocabulary_size)(text_input)
encoded_text = layers.LSTM(32)(embedded_text)
question_input = Input(shape=(None,), dtype='int32', name='question')
embedded_question = layers.Embedding( 32, question_vocabulary_size)(question_input)
encoded_question = layers.LSTM(16)(embedded_question)
concatenated = layers.concatenate([encoded_text, encoded_question],axis=-1)
## Concatenates the encoded question and encoded text
answer = layers.Dense(answer_vocabulary_size, activation='softmax')(concatenated)
model = Model([text_input, question_input], answer)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])
Feeding data to a multi-input model
import numpy as np
num_samples = 1000
max_length = 100
text = np.random.randint(1, text_vocabulary_size, size=(num_samples, max_length))
question = np.random.randint(1, question_vocabulary_size, size=(num_samples, max_length))
answers = np.random.randint(0, 1, size=(num_samples, answer_vocabulary_size))
Fitting using a list of inputs
model.fit([text, question], answers, epochs=10, batch_size=128)
The error I get while trying to fit the model is as follows.
InvalidArgumentError: indices[120,0] = 3080 is not in [0, 32)
[[{{node embedding_6/embedding_lookup}}]]
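For comparison, my understanding is that the layer expects Embedding(input_dim=vocab_size, output_dim=embedding_size), so this is how I thought the layers should look in case I have simply swapped the arguments (an assumption on my part):
embedded_text = layers.Embedding(text_vocabulary_size, 64)(text_input)            # vocab size first, then embedding dim
embedded_question = layers.Embedding(question_vocabulary_size, 32)(question_input)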
| 1
| 1
| 0
| 0
| 0
| 0
|
I know how word2vec works, but I am having trouble with finding out how to implement word sense disambiguation using word2vec. Can you help with the process?
| 1
| 1
| 0
| 0
| 0
| 0
|
I use LibShortText for short-text classification.
I trained a model and use it to get class predictions on my test set by running:
python text-train.py -L 0 -f ./demo/train_file
python text-predict.py ./demo/train_file train_file.model output
The output file contains the score of each class for each test sample. Here is the beginning of the output file:
version: 1
analyzable: 1
text-src: ./demo/train_file
extra-files:
model-id: 22d9e6defd38ed92e45662d576262915d10c3374
Tickets Tickets 1.045974012515694 -0.1533289000025808 -0.142460215262256 -0.1530588765291932 -0.1249182478102407 -0.1190708362082807 -0.06841237067728836 0.04587568197139553 -0.2283616562229066 -0.102238591774343
Stamps Stamps -0.1187719176481736 1.118188003417143 -0.08034439513604429 -0.1973997029054026 -0.06355109135595602 -0.1786639939826796 -0.1169254102259164 -0.01967861752032143 -0.06964465109882922 -0.2732082235438185
Music Music -0.1315596826953709 -0.2641082947449856 1.008713836384851 -0.04068831625284784 -0.1545790157496564 -0.1010212095804389 -0.02069378431571431 -0.02404317930606417 0.008960552873498827 -0.2809809066132714
Jewelry & Watches Jewelry & Watches -0.0749032450936907 -0.1369122108940684 -0.2159355702219642 0.9582440549577076 -0.141187218792264 -0.1290355317490395 -0.04287756450848382 -0.0919782002284954 -0.04312539181047169 -0.0822891216592294
Tickets Tickets 0.9291396425612148 -0.1597595507175184 -0.07086077554348413 -0.07087036006347401 -0.1111802245732816 -0.2329161314957608 -0.07080154336497513 -0.07093153970747144 -0.07096098431125453 -0.07085853278399512
Books Books -0.03482279197164031 -0.02622229736755784 -0.08576360644172253 -0.1209545478269265 0.9735039690597804 -0.02640896142537765 -0.1511226188239169 -0.1785299152500055 -0.1569282110333412 -0.1927510189192921
Tickets Tickets 1.165624491239117 -0.1643444003616841 -0.279795018266336 -0.05911033737681937 -0.1496733471948844 -0.1774767469424229 -0.1806900189575362 -0.05711408596057094 0.06427848575613292 -0.1616990219349959
Art Art -0.07563152438778584 -0.1926345255861422 -0.1379519287608234 -0.1728869014895525 -0.2081235484009353 0.9764371359082827 -0.06097998223834129 -0.06082239643658216 -0.0434090642865785 -0.0239972643215402
Art Art -0.21374038053991 0.0146962630542977 -0.02279914632208601 -0.001108284295731699 -0.2621058759589903 1.016592310148241 0.01436347343617804 -0.04476369315079338 -0.1246095742882179 -0.3765250920829869
Books Books -0.08063364674726788 -0.08053738921453879 -0.08032365427931695 -0.1496633152184083 0.9195583554164264 -0.08011940998873018 -0.08053175336913043 -0.16302082274963 -0.1105339242133948 -0.09419443963601073
How can I know which class each score corresponds to?
I know I could infer it by looking at the predicted class and the maximum score for several test samples, but I'm hoping there is some more direct way.
| 1
| 1
| 0
| 0
| 0
| 0
|
def resolveSentences(s1, s2):
"""
given two sentences s1 and s2 return the resolved sentence
"""
clauses = []
for p1 in s1:
for p2 in s2:
if p1.name == p2.name and p1.sign != p2.sign:
s1 = remove(p1,s1)
s2 = remove(p2, s2)
s = None
if s1 and s2:
s = list(set(s1).union(s2))
elif s1 and not s2:
s = list(set(s1))
elif s2 and not s1:
s = list(set(s2))
if s:
for pred in s:
clauses.append(pred)
if len(clauses)> 0:
return clauses
else:
return None
I call the function using:
if __name__ == "__main__":
p1 = convertToPredicate("A(x)")
p2 = convertToPredicate("B(x)")
p3 = convertToPredicate("~A(x)")
p4 = convertToPredicate("~B(x)")
p5 = convertToPredicate("C(x)")
s1 = [p1,p2,p5]
# A(x)| B(x) | C(x)
s2 = [p3,p4]
# ~A(x)| ~B(x)
trial = resolveSentences(s1,s2)
for t in trial:
print(t.sign, t.name, t.arguments, sep="\t")
My expected answer is:
C(x)
My current answer is:
B(x)| ~B(x)| C(x)
Question: Why doesn't it remove B(x)?
My observation:
The first for loop skips the second predicate in the function resolveSentences(). I am not able to figure out why though. Any help would be appreciated.
The remove function is as follows:
def equals( p1, p2):
"""
returns True if predicate p1 is equal to p2
"""
if p1.name == p2.name:
if p1.sign == p2.sign:
return True
return False
def remove( predicate_to_remove, sentence):
"""
removes all instances of predicates from sentence and returns a list of predicates
"""
for predicate in sentence:
if equals(predicate_to_remove, predicate):
sentence.remove(predicate)
return sentence
Predicate is a class that has attributes: name, sign, constants, variables, arguments
The class definition for Predicate is:
class Predicate:
"""
determining the contents of a predicate.
sign: 1 if negative, else positive
"""
name = None
sign = None
constants = None
variables = None
arguments = None
def __init__(self):
self.arguments = []
self.name = ""
self.sign = 0
self.constants = []
self.variables = []
The convertToPredicate function is:
def convertToPredicate(query):
"""
converts the raw input to an object of class predicate
(str) -> (Predicate)
"""
std_predicate = Predicate()
#determining sign of predicate
std_predicate.sign = 1 if query[0] == "~" else 0
query = query.replace("~","")
#determining name of predicate
std_predicate.name = query[:query.find("(")]
#determining arg/var/const of predicate
std_predicate.arguments = query[query.find("(")+1:query.find(")")].replace(" ", "").split(",")
for arg in std_predicate.arguments:
if arg[0].isupper():
std_predicate.constants.append(arg)
elif arg[0].islower():
std_predicate.variables.append(arg)
return std_predicate
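Not a definitive diagnosis, but one behaviour worth ruling out: inside resolveSentences the calls to remove() mutate s1 and s2 while the for loops are still iterating over them, and mutating a list during iteration makes Python skip elements. A standalone sketch of that effect:
items = ["A", "B", "C"]
for item in items:
    print("visiting", item)
    if item == "A":
        items.remove(item)  # mutating the list mid-iteration
# Prints "visiting A" then "visiting C"; "B" is skipped because the removal
# shifted the remaining elements under the running iterator.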
| 1
| 1
| 0
| 0
| 0
| 0
|
I have downloaded the treetaggerwrapper package for Python from pip to use it for POS tagging. I have also downloaded the official TreeTagger application from http://www.smo.uhi.ac.uk/~oduibhin/oideasra/interfaces/winttinterface.htm
I have also downloaded the English language model file, "english-bnc.par", which I later renamed to "english-utf8" as per the encoding support in Python 3.
I have also set the TreeTagger directory path using the TAGDIR argument while creating the TreeTagger object.
Now I get an error saying the binary is invalid.
I am a newbie to Python and natural language processing, so if anyone has come across this issue please let me know. Thanks in advance.
Python 3.7.1 (default, Dec 10 2018, 22:54:23) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
import pprint
import treetaggerwrapper
C:\Users\ranak_viod5a3\Anaconda3\treetaggerwrapper.py:740: FutureWarning: Possible nested set at position 8
re.IGNORECASE | re.VERBOSE)
C:\Users\ranak_viod5a3\Anaconda3\treetaggerwrapper.py:2044: FutureWarning: Possible nested set at position 152
re.VERBOSE | re.IGNORECASE)
C:\Users\ranak_viod5a3\Anaconda3\treetaggerwrapper.py:2067: FutureWarning: Possible nested set at position 409
UrlMatch_re = re.compile(UrlMatch_expression, re.VERBOSE | re.IGNORECASE)
C:\Users\ranak_viod5a3\Anaconda3\treetaggerwrapper.py:2079: FutureWarning: Possible nested set at position 192
EmailMatch_re = re.compile(EmailMatch_expression, re.VERBOSE | re.IGNORECASE)
tagger = treetaggerwrapper.TreeTagger(TAGLANG='en',TAGDIR='C:/TreeTagger/bin')
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\ranak_viod5a3\Anaconda3\treetaggerwrapper.py", line 1006, in init
self._set_tagger(kargs)
File "C:\Users\ranak_viod5a3\Anaconda3\treetaggerwrapper.py", line 1072, in _set_tagger
raise TreeTaggerError("TreeTagger binary invalid: " + self.tagbin)
treetaggerwrapper.TreeTaggerError: TreeTagger binary invalid: C:\TreeTagger\bin\bin\tree-tagger.exe
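Worth noting from the traceback (an observation, not a confirmed fix): the reported path contains bin\bin, which suggests the wrapper already appends bin\tree-tagger.exe to TAGDIR itself. A sketch assuming TAGDIR should point at the TreeTagger installation root rather than its bin subfolder:
import treetaggerwrapper

# Assumption: TAGDIR is the installation root; the wrapper adds bin\tree-tagger.exe itself
tagger = treetaggerwrapper.TreeTagger(TAGLANG='en', TAGDIR='C:/TreeTagger')
tags = tagger.tag_text("This is a test sentence.")
print(tags)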
| 1
| 1
| 0
| 0
| 0
| 0
|
I'm trying to get started with NLP in Python using nltk or spaCy.
My question is: if I have the sentence 'Barack Obama was the former President of united states', how can I retrieve the word 'president' as the class of the entity?
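As a starting point only (built-in NER returns labels such as PERSON and GPE rather than the word 'president' itself), a minimal spaCy sketch showing the entities and the dependency links that could be used to relate 'President' to 'Barack Obama':
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Barack Obama was the former President of united states")

# Built-in named entity labels
for ent in doc.ents:
    print(ent.text, ent.label_)

# The dependency parse can link 'President' back to the subject
for token in doc:
    print(token.text, token.dep_, token.head.text)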
| 1
| 1
| 0
| 0
| 0
| 0
|
I know that spaCy provides the start and end of each entity in a sentence. I want the start of the entity in the whole document (not just the sentence).
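A hedged sketch, assuming the whole document is processed as one spaCy Doc: in that case each entity's start (token index) and start_char (character offset) are already relative to the full document; if you only have sentence-relative offsets, adding the sentence's start_char recovers the document-level position.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("First sentence about Paris. Barack Obama visited Berlin later.")

for ent in doc.ents:
    print(ent.text, ent.label_, ent.start, ent.start_char)  # offsets into the whole Doc

for sent in doc.sents:
    for ent in sent.ents:
        sentence_relative = ent.start_char - sent.start_char
        print(ent.text, "doc offset:", ent.start_char, "sentence offset:", sentence_relative)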
| 1
| 1
| 0
| 0
| 0
| 0
|
I extracted text from an image and got unstructured data. I have to convert this to a structured form, but I'm not able to do so.
The unstructured data extracted from image in python:
EQUITY-LARGE CAP ©@ SBIMUTUAL FUND
A’ A PARTNER FOR LIFE
LSS LAST DIVIDENDS Ct EV a A)
i Option NAV @) Record Date Dividend (in /Unit) NAV (@)
BLUE CH | Pp FU N D Reg-Plan-Growth 34.9294 23-Sep-16 (Reg Plan) 1.00 18.5964
—————— a 23-Sep-16 (Dir Plan) 1.20 21.8569
= Reg-Plan-Dividend 19.8776 9 =
An Open-ended Growth Scheme = -Reg-Plan-Dividend 188776 TT a5 Reg Plan) 2.50 17.6880
Dir-Plan-Dividend 23.5613 17-Jul-15 (Dir Plan) 2.90 20.5395
. . ir a 21- Mar-14 (Reg Plan) 1.80 12.7618
Investment Objective Dir-Plan-Growth 36.2961
a. . a. Pursuant to payment of dividend, the NAV of Dividend Option of
To provide investors with opportunities scheme/plans would fall to the extent of payout and statutory levy, if
for long-term growth in capital through applicable.
anactive management of investments ina
diversified basket of equity stocks of
companies whose market capitalization
is at least equal to or more than the least PORTFOLIO
market capitalized stock of S&P BSE 100
face Stock Name (%) Of Total AUM Stock Name (%) Of Total AUM
. HDFC Bank Ltd. 8.29 Apollo Hospitals Enterprises Ltd. 1.04
Fund Details Larsen & Toubro Ltd. 4.46 Tata Motors Ltd. (Dvr-A-Ordy) 0.85
ITC Ltd. 4.07 Eicher Motors Ltd. 0.84
+ Type of Scheme UPL Ltd. 2.95 Shriram City Union Finance Ltd. 0.79
An Open - Ended Growth Scheme Infosys Ltd. 2.93 Divi's Laboratories Ltd. 0.73
Mahindra & Mahindra Ltd. 2.92 Pidilite Industries Ltd. 0.62
+ Date of Allotment: 14/02/2006 Nestle India Ltd. 2.90 Fag Bearings India Ltd. 0.62
. . Reliance Industries Ltd. 2.86 Sadbhav Engineering Ltd. 0.61
Reno AS ono /OG/2007 Indusind Bank Ltd. 2.68 Grasim Industries Ltd. 0.60
+ AAUM for the Month of June 2017 State Bank Of India 2.63 Petronet LNG Ltd. 0.60
214,204.29¢ Kotak Mahindra Bank Ltd. 2.57 Hudco Ltd. 0.58
, rores HCL Technologies Ltd. 2.50 Torrent Pharmaceuticals Ltd. 0.55
+» AUMas on June 30, 2017 Bharat Electronics Ltd. 2.48 Thermax Ltd. 0.52
% 14,292.59 Crores Cholamandalam Investment And Dr. Lal Path Labs Ltd. 0.49
: — - Finance Company Ltd. 2.36 Coal India Ltd. 0.44
+ Fund Manager: Ms. Sohini Andani Hero Motocorp Ltd. 2.16 Narayana Hrudayalaya Ltd. 0.41
Managing Since: Sep-2010 Hindustan Petroleum Corporation Ltd. 2.11 Britannia Industries Ltd. 0.40
i . Motherson Sumi Systems Ltd. 1.98 Tata Steel Ltd. 0.38
Total Experience: Over 22 years Maruti Suzuki India Ltd. 1.90 Procter & Gamble Hygiene And
+ Benchmark: S&P BSE 100 Index ICICI Bank Ltd. 1.88 Health Care Ltd. 0.38
— Sun Pharmaceuticals Industries Ltd. 1.66 SKF India Ltd. 0.35
+ Exit Load: HDFC Ltd. 1.66 ff Tata Motors Ltd. 0.26
For exit within 1 year from the date of Strides Shasun Ltd. 1.59 Equity Shares Total 90.22
allotment - 1%; For exit after 1 year Titan Company Ltd. 1.58 Motilal Oswal Securities Ltd
fi he d f n il Hindalco Industries Ltd. 1.57 CP Mat 28.07.2017. 0.42
rom the date of allotment - Ni Ultratech Cement Ltd. 1.52 [| Commercial Paper Total 0.42
+ Entry Load: N.A. Voltas Ltd. 1.48 HDFC Bank Ltd. 0.14
- - Mahindra & Mahindra Financial Services Ltd. 1.42 Fixed Deposits Total 0.14
+ Plans Available: Regular, Direct The Ramco Cements Ltd. 1.41 CBLO 8.24
. a ao PI Industries Ltd. 1.40 Cash & Other Receivables (4.29)
Options: Growth, Dividend Aurobindo Pharma Ltd. 1.39 Futures 4.72
+ SIP Indian Oil Corporation Ltd. 1.36 HDFC Ltd. 0.56
Weekly - Minimum & 1000 & in multiples The Federal Bank Ltd. 1.22 Warrants Total 0.56
LIC Housing Finance Ltd. 1.18 Grand Total 100.00
of = 1 thereafter for a minimum of 6 Shriram Transport Finance Company Ltd. 1.10
instalments.
Monthly - Minimum = 1000 & in
Eee ee aC PORTFOLIO CLASSIFICATION BY PORTFOLIO CLASSIFICATION BY
See ee eae Oe INDUSTRY ALLOCATION (%) ASSET ALLOCATION (%)
multiples of = 1 thereafter for minimum
one year. Financial Services 29.34
Quarterly - Minimum % 1500 & in Automobile 10.90 s.o6 172
multiples of = 1 thereafter for minimum ronsumer Goods 03
nergy :
one WEEN Construction 6.54 18.66
+ Minimum Investment Pharma 5.93 *
= 5000 & in multiples of = 1 IT 5.43
resi Fertilisers & Pesticides 4.35
. Additional Investment Industrial Manufacturing 3.97
< HOO © tho coawlittas Gtr Cement & Cement Products 3.53
Metals 2.39 71.55
Quantitative Data Healthcare Services 1.93
Chemicals 0.62
Standard Deviation® 112.21% Cash & Other Recivables -4.29 L c = Mia
mLarge Cap jidcap
Beta* :0.86 Futures 4.72
ae cBLO 8.24
Sharpe Ratio’ 0.76 Fixed Deposits 0.14 m Cash & Other Current Assets Futures
Portfolio Turnover* 11.03
*Source: CRISIL Fund Analyser Riskometor SBI Blue Chip Fund
“Portfolio Turnover = lower of total sale or one] > This product is suitable for investors who are seeking:
total purchase for the last 12 months L\E * Long term capital appreciation,
Fe on C aL a GCM cL OT LT Ss BAA Z*3\ * Investment in equity shares of companies whose market capitalization is at least equal to or more
Risk Free rate: FBIL Overnight Mibor rate Inve EE sical than the least market capitalized stock of S&P BSE 100 index to provide long term capital growth
(6.25% as on 30th June 2017) Basis for will best Moderately Highrisk | OPPOrtunities.
Ratio Calculation: eavcarsiMonthiy{Data ‘Alnvestors should consult their financial advisers if in doubt about whether the product is suitable for them.
The image:
Please help me convert this unstructured data into structured data. Can you suggest any library or function?
| 1
| 1
| 0
| 1
| 0
| 0
|
I trained a text classification (NLP) model using fastai on Google Colab (GPU). I can load the model using load_learner without any error, but when I switch to the CPU, I get the error "RuntimeError: _th_index_select not supported on CPUType for Half".
Is there any way for me to get prediction results on the CPU?
from fastai import *
from fastai.text import *
from sklearn.metrics import f1_score
defaults.device = torch.device('cpu')
@np_func
def f1(inp,targ): return f1_score(targ, np.argmax(inp, axis=-1))
path = Path('/content/drive/My Drive/Test_fast_ai')
learn = load_learner(path)
learn.predict("so sad")
RuntimeError Traceback (most recent call last)
<ipython-input-13-3775eb2bfe91> in <module>()
----> 1 learn.predict("so sad")
11 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1504 # remove once script supports set_grad_enabled
1505 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1506 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1507
1508
RuntimeError: _th_index_select not supported on CPUType for Half
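Not a confirmed fix, but the 'Half' in the error suggests the learner may have been exported while still in half precision (fp16), which CPU embedding lookups don't support. A sketch, assuming fastai v1 and a learner trained with fp16, of converting back to full precision before exporting on the Colab/GPU side:
# On the GPU side, before exporting (assumption: the learner was trained with to_fp16)
learn = learn.to_fp32()   # convert weights back to full precision
learn.export()            # then load_learner(path) should work on the CPU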
| 1
| 1
| 0
| 0
| 0
| 0
|
I am using spaCy to match a particular expression in some text (in Italian). My text can appear in multiple forms and I am trying to learn the best way to write a general rule. I have 4 cases, shown below, and I would like to write a general pattern that works for all of them. Something like:
# case 1
text = 'Superfici principali e secondarie: 90 mq'
# case 2
# text = 'Superfici principali e secondarie di 90 mq'
# case 3
# text = 'Superfici principali e secondarie circa 90 mq'
# case 4
# text = 'Superfici principali e secondarie di circa 90 mq'
nlp = spacy.load('it_core_news_sm')
doc = nlp(text)
matcher = Matcher(nlp.vocab)
pattern = [{"LOWER": "superfici"}, {"LOWER": "principali"}, {"LOWER": "e"}, {"LOWER": "secondarie"}, << "some token here that allows max 3 tokens or a IS_PUNCT or nothing at all" >>, {"IS_DIGIT": True}, {"LOWER": "mq"}]
matcher.add("Superficie", None, pattern)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
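One possible approach (a sketch, not necessarily the only one): the Matcher treats an empty dict {} as a wildcard that matches any single token, and adding "OP": "?" makes it optional, so a couple of optional wildcards between "secondarie" and the number should cover the ':' / 'di' / 'circa' / 'di circa' variants. Expect overlapping matches; keeping the longest span is a common follow-up.
pattern = [
    {"LOWER": "superfici"}, {"LOWER": "principali"},
    {"LOWER": "e"}, {"LOWER": "secondarie"},
    {"OP": "?"}, {"OP": "?"},          # up to two arbitrary tokens (':', 'di', 'circa', ...)
    {"IS_DIGIT": True}, {"LOWER": "mq"},
]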
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a text dataframe like this,
id text
1 Thanks. I appreciate your help. I really like this chat service as it is very convenient. I hope you have a wonderful day! thanks!
2 Got it. Thanks for the help; good nite.
I want to split those text sentences and match them to each id. My expected output is,
id text
1 Thanks.
1 I appreciate your help.
1 I really like this chat service as it is very convenient.
1 I hope you have a wonderful day!
1 thanks!
2 Got it.
2 Thanks for the help;
2 good nite.
Are there any nltk functions that can handle this problem?
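A minimal sketch of one way to do it, assuming the dataframe is named df with columns id and text: nltk.sent_tokenize splits each cell into sentences and pandas' explode spreads them over rows. Note that punkt does not treat ';' as a sentence boundary, so 'Thanks for the help; good nite.' would stay together unless split further.
import nltk
import pandas as pd

nltk.download("punkt")  # sentence tokenizer models

df = pd.DataFrame({
    "id": [1, 2],
    "text": ["Thanks. I appreciate your help. I hope you have a wonderful day!",
             "Got it. Thanks for the help; good nite."],
})

out = (df.assign(text=df["text"].apply(nltk.sent_tokenize))
         .explode("text")
         .reset_index(drop=True))
print(out)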
| 1
| 1
| 0
| 0
| 0
| 0
|
I am new to spaCy and I am trying to match some measurements in text. My problem is that the unit of measure sometimes comes before and sometimes after the value, and in some other cases it has a different name. Here is some code:
nlp = spacy.load('en_core_web_sm')
# case 1:
text = "the surface is 31 sq"
# case 2:
# text = "the surface is sq 31"
# case 3:
# text = "the surface is square meters 31"
# case 4:
# text = "the surface is 31 square meters"
# case 5:
# text = "the surface is about 31 square meters"
# case 6:
# text = "the surface is 31 kilograms"
pattern = [
{"IS_STOP": True},
{"LOWER": "surface"},
{"LEMMA": "be", "OP": "?"},
{"LOWER": "sq", "OP": "?"},
{"LOWER": "square", "OP": "?"},
{"LOWER": "meters", "OP": "?"},
{"IS_DIGIT": True},
{"LOWER": "square", "OP": "?"},
{"LOWER": "meters", "OP": "?"},
{"LOWER": "sq", "OP": "?"}
]
doc = nlp(text)
matcher = Matcher(nlp.vocab)
matcher.add("Surface", None, pattern)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
I have two problems:
1 - the pattern should be able to match all cases 1 to 5, but in case 1 the output is
4898162435462687487 Surface 0 4 the surface is 31
4898162435462687487 Surface 0 5 the surface is 31 sq
which looks to me like a duplicate match.
2 - case 6 should not match, but with my pattern it is matched.
Any suggestion on how to improve this?
EDIT:
Is it possible to build an OR condition within the pattern? Something like
pattern = [
{"POS": "DET", "OP": "?"},
{"LOWER": "surface"},
{"LEMMA": "be", "OP": "?"},
[
[{"LOWER": "sq", "OP": "?"},
{"LOWER": "square", "OP": "?"},
{"LOWER": "meters", "OP": "?"},
{"IS_ALPHA": True, "OP": "?"},
{"LIKE_NUM": True}]
OR
[{"LIKE_NUM": True},
{"LOWER": "square", "OP": "?"},
{"LOWER": "meters", "OP": "?"},
{"LOWER": "sq", "OP": "?"} ]
]
]
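A sketch of the usual workaround (assuming spaCy v2.1+ for the "IN" operator): a single token pattern cannot express OR between whole sub-sequences, but several patterns can be registered under the same match ID, and the Matcher reports a hit when any of them matches. Making the unit tokens required rather than optional should also stop 'kilograms' from matching.
from spacy.matcher import Matcher

# nlp is the already-loaded pipeline from the code above
pattern_value_first = [
    {"POS": "DET", "OP": "?"}, {"LOWER": "surface"}, {"LEMMA": "be", "OP": "?"},
    {"IS_ALPHA": True, "OP": "?"}, {"LIKE_NUM": True},
    {"LOWER": {"IN": ["sq", "square"]}}, {"LOWER": "meters", "OP": "?"},
]
pattern_unit_first = [
    {"POS": "DET", "OP": "?"}, {"LOWER": "surface"}, {"LEMMA": "be", "OP": "?"},
    {"LOWER": {"IN": ["sq", "square"]}}, {"LOWER": "meters", "OP": "?"},
    {"LIKE_NUM": True},
]
matcher = Matcher(nlp.vocab)
matcher.add("Surface", None, pattern_value_first, pattern_unit_first)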
| 1
| 1
| 0
| 0
| 0
| 0
|
I ran a word2vec algorithm on text of about 750k words (before removing some stop words). Using my model, I started looking at the most similar words to particular words of my choosing, and the similarity scores (from the model.wv.most_similar method) are all extremely close to 1. The tenth closest score is still around .998, so I feel like I'm not getting any significant differences between word similarities, which leads to meaningless "similar" words.
My constructor for the model is
model = Word2Vec(all_words, size=75, min_count=30, window=10, sg=1)
I think the problem may lie in how I structure the text to run the neural net on. I store all the words like so:
all_sentences = nltk.sent_tokenize(v)
all_words = [nltk.word_tokenize(sent) for sent in all_sentences]
all_words = [[word for word in all_words[0] if word not in nltk.stopwords('English')]]
...where v is the result of calling read() on a txt file.
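Not asserting this is the cause, but the last line above only keeps all_words[0] (the first tokenized sentence), and nltk.stopwords is not the usual way to reach the stop-word list; a sketch of the filtering applied to every sentence, assuming that is the intent:
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords")
stop_words = set(stopwords.words("english"))

all_sentences = nltk.sent_tokenize(v)  # v: the full text read from the file, as above
all_words = [
    [word for word in nltk.word_tokenize(sent) if word.lower() not in stop_words]
    for sent in all_sentences
]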
| 1
| 1
| 0
| 1
| 0
| 0
|
I am attempting to implement the algorithm from the TD-Gammon article by Gerald Tesauro. The core of the learning algorithm is described in the following paragraph:
I have decided to have a single hidden layer (if that was enough to play world-class backgammon in the early 1990's, then it's enough for me). I am pretty certain that everything except the train() function is correct (they are easier to test), but I have no idea whether I have implemented this final algorithm correctly.
import numpy as np
class TD_network:
"""
Neural network with a single hidden layer and a Temporal Difference (TD) training algorithm
taken from G. Tesauro's 1995 TD-Gammon article.
"""
def __init__(self, num_input, num_hidden, num_output, hnorm, dhnorm, onorm, donorm):
self.w21 = 2*np.random.rand(num_hidden, num_input) - 1
self.w32 = 2*np.random.rand(num_output, num_hidden) - 1
self.b2 = 2*np.random.rand(num_hidden) - 1
self.b3 = 2*np.random.rand(num_output) - 1
self.hnorm = hnorm
self.dhnorm = dhnorm
self.onorm = onorm
self.donorm = donorm
def value(self, input):
"""Evaluates the NN output"""
assert(input.shape == self.w21[1,:].shape)
h = self.w21.dot(input) + self.b2
hn = self.hnorm(h)
o = self.w32.dot(hn) + self.b3
return(self.onorm(o))
def gradient(self, input):
"""
Calculates the gradient of the NN at the given input. Outputs a list of dictionaries
where each dict corresponds to the gradient of an output node, and each element in
a given dict gives the gradient for a subset of the weights.
"""
assert(input.shape == self.w21[1,:].shape)
J = []
h = self.w21.dot(input) + self.b2
hn = self.hnorm(h)
o = self.w32.dot(hn) + self.b3
for i in range(len(self.b3)):
db3 = np.zeros(self.b3.shape)
db3[i] = self.donorm(o[i])
dw32 = np.zeros(self.w32.shape)
dw32[i, :] = self.donorm(o[i])*hn
db2 = np.multiply(self.dhnorm(h), self.w32[i,:])*self.donorm(o[i])
dw21 = np.transpose(np.outer(input, db2))
J.append(dict(db3 = db3, dw32 = dw32, db2 = db2, dw21 = dw21))
return(J)
def train(self, input_states, end_result, a = 0.1, l = 0.7):
"""
Trains the network using a single series of input states representing a game from beginning
to end, and a final (supervised / desired) output for the end state
"""
outputs = [self(input_state) for input_state in input_states]
outputs.append(end_result)
for t in range(len(input_states)):
delta = dict(
db3 = np.zeros(self.b3.shape),
dw32 = np.zeros(self.w32.shape),
db2 = np.zeros(self.b2.shape),
dw21 = np.zeros(self.w21.shape))
grad = self.gradient(input_states[t])
for i in range(len(self.b3)):
for key in delta.keys():
td_sum = sum([l**(t-k)*grad[i][key] for k in range(t + 1)])
delta[key] += a*(outputs[t + 1][i] - outputs[t][i])*td_sum
self.w21 += delta["dw21"]
self.w32 += delta["dw32"]
self.b2 += delta["db2"]
self.b3 += delta["db3"]
The way I use this is I play through a whole game (or rather, the neural net plays against itself), and then I send the states of that game, from start to finish, into train(), along with the final result. It then takes this game log, and applies the above formula to alter weights using the first game state, then the first and second game states, and so on until the final time, when it uses the entire list of game states. Then I repeat that many times and hope that the network learns.
To be clear, I am not after feedback on my code writing. This was never meant to be more than a quick and dirty implementation to see that I have all the nuts and bolts in the right spots.
However, I have no idea whether it is correct, as I have thus far been unable to make it capable of playing tic-tac-toe at any reasonable level. There could be many reasons for that. Maybe I'm not giving it enough hidden nodes (I have used 10 to 12). Maybe it needs more games to train (I have used 200 000). Maybe it would do better with different normalisation functions (I've tried sigmoid and ReLU, leaky and non-leaky, in different variations). Maybe the learning parameters are not tuned right. Maybe tic-tac-toe and its deterministic gameplay means it "locks in" on certain paths in the game tree. Or maybe the training implementation is just wrong. Which is why I'm here.
Have I misunderstood Tesauro's algorithm?
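For reference, and as a check rather than a verdict on the code above, the weight update in Tesauro's 1995 article is usually written as
\Delta w_t = \alpha \, (Y_{t+1} - Y_t) \sum_{k=1}^{t} \lambda^{\, t-k} \, \nabla_w Y_k
where the gradient term \nabla_w Y_k is evaluated at each earlier time step k, not only at the current step t, so comparing the td_sum computation against that term is one concrete thing to verify.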
| 1
| 1
| 0
| 0
| 0
| 0
|
I have a TF-IDF vocabulary that I already get from gensim or TfidfVectorizer. Is there any specific metric or method to drop the tails of a TF-IDF vocabulary? I mean the tails of the Zipf diagram. How can I visualize it?
I would like to see how accuracy changes when I reduce the number of words in the vocabulary. For instance, I have a vocabulary of 175,000 words.
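A sketch of two common options, assuming scikit-learn's TfidfVectorizer and a list of raw documents named corpus (a placeholder here): cap or prune the vocabulary with max_features / min_df / max_df, and plot term frequency against rank on log-log axes to see the Zipf tail.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = ["first example document", "second example document about words"]  # placeholder

# Option 1: trim the tail while vectorizing
# (min_df / max_df can also prune rare or overly common terms)
vec = TfidfVectorizer(max_features=50000)
X = vec.fit_transform(corpus)

# Option 2: visualize the Zipf curve from raw term counts
counts = CountVectorizer().fit_transform(corpus)
freqs = np.sort(np.asarray(counts.sum(axis=0)).ravel())[::-1]
plt.loglog(np.arange(1, len(freqs) + 1), freqs)
plt.xlabel("rank")
plt.ylabel("frequency")
plt.show()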
| 1
| 1
| 0
| 1
| 0
| 0
|
I have custom rule matching in spaCy, and I am able to match some sentences in a document. I would now like to extract the numbers from the matched sentences. However, the matched sentences do not always have the same shape and form. What is the best way to do this?
# case 1:
texts = ["the surface is 31 sq",
"the surface is sq 31"
,"the surface is square meters 31"
,"the surface is 31 square meters"
,"the surface is about 31,2 square"
,"the surface is 31 kilograms"]
pattern = [
{"LOWER": "surface"},
{"LEMMA": "be", "OP": "?"},
{"TEXT" : {"REGEX": "^(?i:sq(?:uare)?|m(?:et(?:er|re)s?)?)$"}, "OP": "+"},
{"IS_ALPHA": True, "OP": "?"},
{"LIKE_NUM": True},
]
pattern_1 = [
{"LOWER": "surface"},
{"LEMMA": "be", "OP": "?"},
{"IS_ALPHA": True, "OP": "?"},
{"LIKE_NUM": True},
{"TEXT" : {"REGEX": "^(?i:sq(?:uare)?|m(?:et(?:er|re)s?)?)$", "OP": "+"}}
]
matcher = Matcher(nlp.vocab)
matcher.add("Surface", None, pattern, pattern_1)
for index, text in enumerate(texts):
print(f"Case {index}")
doc = nlp(text)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
my output will be
Case 0
4898162435462687487 Surface 1 5 surface is 31 sq
Case 1
4898162435462687487 Surface 1 5 surface is sq 31
Case 2
4898162435462687487 Surface 1 6 surface is square meters 31
Case 3
4898162435462687487 Surface 1 5 surface is 31 square
Case 4
4898162435462687487 Surface 1 6 surface is about 31,2 square
Case 5
I would like to return only the number (the square meters). Something like [31, 31, 31, 31, 31.2] rather than the full text. What is the correct way to do this in spaCy?
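One possible sketch (not the only way): each match gives a Span, so inside the existing loop over texts its tokens can be filtered with token.like_num and the comma decimal separator normalized; whether '31,2' comes through as a single numeric token depends on the tokenizer, so that part is an assumption, and overlapping matches would need de-duplicating.
results = []
for match_id, start, end in matches:
    span = doc[start:end]
    for token in span:
        if token.like_num:
            results.append(float(token.text.replace(",", ".")))
print(results)  # roughly [31.0, 31.0, 31.0, 31.0, 31.2] across the matched cases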
| 1
| 1
| 0
| 0
| 0
| 0
|
I would like to know if it's possible for me to use my own tokenized/segmented documents (with my own vocab file as well) as the input file to the create_pretraining_data.py script (git source: https://github.com/google-research/bert).
The main reason for this question is that segmentation/tokenization for the Khmer language is different from that of English.
Original:
វាមានមកជាមួយនូវ
Segmented/Tokenized:
វា មាន មក ជាមួយ នូវ
I tried something on my own and managed to get some results after running the create_pretraining_data.py and run_pretraining.py scripts. However, I'm not sure if what I'm doing can be considered correct.
I would also like to know what method I should use to verify my model.
Any help is highly appreciated!
Script Modifications
The modifications that I did were:
1. Make input file in a list format
Instead of normal plain text, my input file comes from my custom Khmer tokenization output, which I then put into a list format, mimicking the output I get when running the sample English text.
[[['ដំណាំ', 'សាវម៉ាវ', 'ជា', 'ប្រភេទ', 'ឈើ', 'ហូប', 'ផ្លែ'],
['វា', 'ផ្តល់', 'ផប្រយោជន៍', 'យ៉ាង', 'ច្រើន', 'ដល់', 'សុខភាព']],
[['cmt', '$', '270', 'នាំ', 'លាភ', 'នាំ', 'សំណាង', 'ហេង', 'ហេង']]]
* The outer bracket indicates a source file, the first nested bracket indicates a document and the second nested bracket indicates a sentence. This is exactly the same structure as the all_documents variable inside the create_training_instances() function.
2. Vocab file from unique segmented words
This is the part that I'm really having serious doubts about. To create my vocab file, all I did was find the unique tokens across all documents. I then added the required special tokens [CLS], [SEP], [UNK] and [MASK]. I'm not sure if this is the correct way to do it.
Feedback on this part is highly appreciated!
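A hedged sketch of how such a vocab file could be written under the same assumption (unique whitespace-separated tokens plus BERT's special tokens); whether a [PAD] token is also needed depends on the config, so it is included here as an assumption.
special_tokens = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"]

# all_documents: documents -> sentences -> tokens, with the same structure as above
unique_tokens = sorted({tok for doc in all_documents for sent in doc for tok in sent})

with open("vocab.txt", "w", encoding="utf-8") as f:
    for token in special_tokens + unique_tokens:
        f.write(token + "\n")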
3. Skip tokenization step inside the create_training_instances() function
Since my input file already matches the all_documents variable, I skip lines 183 to 207 and replace them with reading my input as-is:
for input_file in input_files:
with tf.gfile.GFile(input_file, "r") as reader:
lines = reader.read()
all_documents = ast.literal_eval(lines)
Results/Output
The raw input file (before custom tokenization) is from random web-scraping.
Some information on the raw and vocab file:
Number of documents/articles: 5
Number of sentences: 78
Number of vocabs: 649 (including [CLS], [SEP] etc.)
Below is the output (the tail end of it) after running create_pretraining_data.py.
And this is what I get after running run_pretraining.py.
As shown above, I'm getting a very low accuracy, hence my concern about whether I'm doing this correctly.
| 1
| 1
| 0
| 0
| 0
| 0
|
The Excel file contains Indian-language data. The file is being read, but when displaying the content it shows \u200d (zero-width joiner) characters in between. I need to remove them to do further processing of the data. Kindly help.
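A minimal sketch, assuming the file is read into a pandas dataframe: strip the zero-width joiner (U+200D) from every string cell. Note that removing it can change how some Indic conjunct characters render, so the output is worth checking.
import pandas as pd

df = pd.read_excel("data.xlsx")  # hypothetical file name

df = df.applymap(lambda x: x.replace("\u200d", "") if isinstance(x, str) else x)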
| 1
| 1
| 0
| 0
| 0
| 0
|
I have 2 node2vec models from different timestamps. I want to calculate the distance between the 2 models. Both models have the same vocabulary, and we update the models over time.
My models are like this
model1:
"1":0.1,0.5,...
"2":0.3,-0.4,...
"3":0.2,0.5,...
.
.
.
model2:
"1":0.15,0.54,...
"2":0.24,-0.35,...
"3":0.24,0.47,...
.
.
.
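One common sketch (an assumption about what 'distance between the models' should mean here): compare each node's vector across the two models with cosine distance and average over the shared vocabulary. If the two models were trained independently rather than updated incrementally, the embedding spaces may first need to be aligned (e.g. with orthogonal Procrustes) before such a comparison is meaningful.
import numpy as np

# Hypothetical toy vectors keyed by node id, mirroring the structure above
model1 = {"1": np.array([0.1, 0.5]), "2": np.array([0.3, -0.4]), "3": np.array([0.2, 0.5])}
model2 = {"1": np.array([0.15, 0.54]), "2": np.array([0.24, -0.35]), "3": np.array([0.24, 0.47])}

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

shared = model1.keys() & model2.keys()
per_node = {node: cosine_distance(model1[node], model2[node]) for node in shared}
print(per_node, np.mean(list(per_node.values())))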
| 1
| 1
| 0
| 0
| 0
| 0
|
As far as I can tell, there is no existing question like this. I'm working on an NLP and sentiment analysis project on Kaggle, and first of all I'm preparing my data.
The dataframe has a text column followed by a number from 0 to 9 indicating which cluster the row (the document) belongs to.
I'm using the TF-IDF vectorizer in sklearn. I want to get rid of anything that's not an English-language word, so I'm using the following:
class LemmaTokenizer(object):
def __init__(self):
self.wnl = WordNetLemmatizer()
def __call__(self, doc):
return [self.wnl.lemmatize(t) for t in word_tokenize(doc)]
s_words = list(nltk.corpus.stopwords.words("english"))
c = TfidfVectorizer(sublinear_tf=False,
stop_words=s_words,
token_pattern =r"(?ui)\\b\\w*[a-z]+\\w*\\b",
tokenizer = LemmaTokenizer(),
analyzer = "word",
strip_accents = "unicode")
#a_df is the original dataframe
X = a_df['Text']
X_text = c.fit_transform(X)
which, as far as I know, should mean that calling c.get_feature_names() returns only tokens that are proper words, without numbers or punctuation symbols.
I found the regex in a Stack Overflow post, but using a simpler one like [a-zA-Z]+ does exactly the same (that is, nothing).
When I call the feature names, I get stuff like
["''abalone",
"#",
"?",
"$",
"'",
"'0",
"'01",
"'accidentally",
...]
Those are just examples, but they are representative of the output I get, instead of just the words.
I've been stuck on this for days, trying different regular expressions and methods. I even hardcoded some of the offending feature outputs into the stop words.
I'm asking this because later I'm using LDA to get the topics of each cluster, and I get punctuation symbols as the "topics".
I hope I'm not duplicating another post. I'll gladly provide any more information needed. Thank you in advance!
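One detail worth verifying against the scikit-learn documentation for your version: token_pattern is only applied when no custom tokenizer is supplied, so with tokenizer=LemmaTokenizer() the regex is effectively ignored. A sketch of doing the filtering inside the tokenizer instead:
import re
from nltk import word_tokenize
from nltk.stem import WordNetLemmatizer

class LemmaTokenizer(object):
    def __init__(self):
        self.wnl = WordNetLemmatizer()
        self.word_re = re.compile(r"^[a-zA-Z]+$")  # keep purely alphabetic tokens
    def __call__(self, doc):
        return [self.wnl.lemmatize(t) for t in word_tokenize(doc)
                if self.word_re.match(t)]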
| 1
| 1
| 0
| 1
| 0
| 0
|
So I have a simple dataframe in pandas where one of the columns consists of tweet messages. Each cell or row contains a tweet. I am trying to do a word frequency count to find the top 10 words in my dataframe, the reason being to remove them from my dataset by adding them to my list of stopwords.
I tried a few code snippets on my dataset, but I'm confused as to why they yield different results when it comes to the frequency counts. Below is a comparison of the code.
Code 1
top_N = 10
a = train_data['tweet'].str.cat(sep='')
words = nltk.tokenize.word_tokenize(a)
word_dist = nltk.FreqDist(words)
Code 2
word_dist = pd.Series(' '.join(train_data['tweet']).lower().split()).value_counts()[:10]
The top 10 most frequent words are the same in both snippets, but the frequency counts differ slightly, i.e. Code 1 had a slightly lower count than Code 2 for the same list of words. They both analyze the same dataset, and the difference is around 100 words. The only difference I see is that Code 1 tokenizes the words whereas Code 2 splits them, but these are essentially the same thing, so what am I missing here? I realized that Code 1 yields an nltk.probability.FreqDist whereas Code 2 yields a pandas.core.series.Series. Can someone kindly break this down and explain the difference, please?
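Not a full explanation, but two differences stand out that a like-for-like sketch can remove before comparing: Code 1 joins tweets with an empty separator (sep=''), which glues the last word of one tweet to the first word of the next, and Code 2 lowercases while Code 1 does not; word_tokenize also splits punctuation off, unlike str.split. A sketch that aligns the separator and casing, leaving only the tokenizer difference:
import nltk
import pandas as pd

text = " ".join(train_data["tweet"]).lower()   # same separator and casing for both counts
counts_nltk = nltk.FreqDist(nltk.word_tokenize(text)).most_common(10)
counts_pandas = pd.Series(text.split()).value_counts()[:10]
print(counts_nltk)
print(counts_pandas)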
| 1
| 1
| 0
| 0
| 0
| 0
|