| text | python | DeepLearning or NLP | Other | Machine Learning | Mathematics | Trash |
|---|---|---|---|---|---|---|
I apologize for the nature of this question, but I'm relatively new to TensorFlow.
I am having trouble understanding the BayesFlow Monte Carlo operations of TensorFlow, as described here.
As far as I understand, it is an op for estimating the expected value of a function.
Additionally, how would I use it?
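As a generic sketch of the underlying idea (not the bayesflow API itself): a Monte Carlo expectation op estimates E[f(X)] by averaging f over samples drawn from X's distribution. The distribution and function below are arbitrary examples.

```python
import numpy as np

# Estimate E[f(X)] by sample averaging: here X ~ N(0, 1) and f(x) = x^2,
# so the true expectation is Var(X) = 1.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)
estimate = np.mean(samples ** 2)  # should be close to 1.0
```

The bayesflow ops do the same thing inside the TensorFlow graph, with the samples produced by a `tf.distributions` object.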
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm looking to take all the relevant text sections of certain web pages and parse them into a structured format, e.g. a CSV file, for later use.
However, the web pages I want to take info from don't strictly follow the same format, for example, the pages:
http://www.cs.bham.ac.uk/research/groupings/machine-learning/
http://www.cs.bham.ac.uk/research/groupings/robotics/
http://www.cs.bham.ac.uk/research/groupings/reasoning/
I have been using BeautifulSoup and this has been fine for the web pages that follow a well-defined format, but these particular websites don't follow a standard format.
How can I write my code to extract the main text from these pages?
Could I extract all the text and strip away the irrelevant/commonly occurring parts?
Or can I somehow select these larger text bodies even though they don't occur uniformly?
The websites are formatted differently, but not in such a convoluted way that I think this is impossible.
Originally I had code like this for dealing with the structured pages:
from urllib.request import urlopen
from bs4 import BeautifulSoup
import sqlite3

conn = sqlite3.connect('/Users/tom/PycharmProjects/tmc765/Parsing/MScProject.db')
c = conn.cursor()

### Specify URLs
programme_list = ["http://www.cs.bham.ac.uk/internal/programmes/2017/0144",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/9502",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/452B",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/4436",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/5914",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/9503",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/9499",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/5571",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/5955",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/4443",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/9509",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/5576",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/9501",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/4754",
                  "http://www.cs.bham.ac.uk/internal/programmes/2017/5196"]

for programme_page in programme_list:
    # Query page, return html to a variable
    page = urlopen(programme_page)
    soupPage = BeautifulSoup(page, 'html.parser')
    name_box = soupPage.find('h1')
    Programme_Identifier = name_box.text.strip()
    Programme_Award = soupPage.find("td", text="Final Award").find_next_sibling("td").text
    Interim_Award = soupPage.find("td", text="Interim Award")
    if Interim_Award is not None:
        Interim_Award = Interim_Award.find_next_sibling("td").text
    Programme_Title = soupPage.find("td", text="Programme Title").find_next_sibling("td").text
    School_Department = soupPage.find("td", text="School/Department").find_next_sibling("td").text
    Banner_Code = soupPage.find("td", text="Banner Code").find_next_sibling("td").text
    Programme_Length = soupPage.find("td", text="Length of Programme").find_next_sibling("td").text
    Total_Credits = soupPage.find("td", text="Total Credits").find_next_sibling("td").text
    UCAS_Code = soupPage.find("td", text="UCAS Code").find_next_sibling("td").text
    Awarding_Institution = soupPage.find("td", text="Awarding Institution").find_next_sibling("td").text
    QAA_Benchmarking_Groups = soupPage.find("td", text="QAA Benchmarking Groups").find_next_sibling("td").text

    # SQL code for inserting into database
    with conn:
        c.execute("INSERT INTO Programme_Pages VALUES (?,?,?,?,?,?,?,?,?,?,?,?)",
                  (Programme_Identifier, Programme_Award, Interim_Award, Programme_Title,
                   School_Department, Banner_Code, Programme_Length, Total_Credits,
                   UCAS_Code, Awarding_Institution, QAA_Benchmarking_Groups, programme_page))

    print("Program Title: ", Programme_Identifier)
    print("Program Award: ", Programme_Award)
    print("Interim Award: ", Interim_Award)
    print("Program Title: ", Programme_Title)
    print("School/Department: ", School_Department)
    print("Banner Code: ", Banner_Code)
    print("Length of Program: ", Programme_Length)
    print("Total Credits: ", Total_Credits)
    print("UCAS Code: ", UCAS_Code)
    print("Awarding Institution: ", Awarding_Institution)
    print("QAA Benchmarking Groups: ", QAA_Benchmarking_Groups)
    print("~~~~~~~~~~~~~~~~~~~~")

    Educational_Aims = soupPage.find('div', {"class": "programme-text-block"})
    Educational_Aims_Title = Educational_Aims.find('h2')
    Educational_Aims_Title = Educational_Aims_Title.text.strip()
    Educational_Aims_List = Educational_Aims.findAll("li")
    print(Educational_Aims_Title)
    for el in Educational_Aims_List:
        text = el.text.strip()
        with conn:
            c.execute("INSERT INTO Programme_Info VALUES (?,?,?,?)",
                      (Programme_Identifier, text, Educational_Aims_Title, programme_page))
        print(text)
However, I haven't yet found a way to write a script that pulls the relevant text from the unstructured pages I've linked above.
I was considering pulling all the tagged sections and then processing them as they come.
I just thought someone might have insight into an easier way.
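One rough heuristic sketch (the sample HTML and the 50-character threshold are assumptions, not taken from the actual pages): pull every paragraph and keep only the longer ones, on the theory that navigation and boilerplate strings tend to be short.

```python
from bs4 import BeautifulSoup

# Stand-in HTML; in practice this would come from urlopen(url).
html = """<html><body>
<div id="nav"><p>Home</p></div>
<div><p>This research grouping studies machine learning methods and their
applications across a range of problem domains.</p></div>
</body></html>"""

soup = BeautifulSoup(html, "html.parser")
paragraphs = [p.get_text(" ", strip=True) for p in soup.find_all("p")]
# Keep paragraphs longer than an arbitrary 50-character cutoff.
main_text = [p for p in paragraphs if len(p) > 50]
```

The cutoff needs tuning per site, and a frequency-based filter (dropping strings that repeat across many pages) can be layered on top.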
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a list of part-of-speech-tagged words (each element is in the format "word|tag") and I am trying to find a way to delete the corresponding "tag" after I delete a certain "word." More specifically, my algorithm can only deal with the "word" portion of each element, so I first split my current "word|tag" list into two separate lists of words and tags. After I remove certain unnecessary words from the words list, though, I want to recombine them with the corresponding tags. How can I effectively delete the corresponding tag from the other list? Or is there a better way to do this? I tried running my cleaning algorithm on the tagged words directly, but couldn't find a way to ignore the tags on each word.
My issue may be more clear by showing my code:
my_list = ['I|PN', 'am|V', 'very|ADV', 'happy|ADJ']
tags = []
words = []
for x in my_list:
    front, mid, end = x.partition('|')
    words.append(front)
    tags.append(end)  # keep just the tag, without the '|' separator
Current Output (after I run the words list through my cleaning algorithm):
words = ['I', 'very', 'happy']
tags = ['PN', 'V', 'ADV', 'ADJ']
Clearly, I can not concatenate these lists element-wise anymore because I did not delete the corresponding tag from the removed word.
Desired Output:
words = ['I', 'very', 'happy']
tags = ['PN', 'ADV', 'ADJ']
How can I achieve the above output?
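One way to get that output is to never separate the lists in the first place: keep each word paired with its tag and filter the pairs together, so removing a word automatically removes its tag. `stop_words` below stands in for whatever the cleaning algorithm drops.

```python
my_list = ['I|PN', 'am|V', 'very|ADV', 'happy|ADJ']
stop_words = {'am'}  # placeholder for the cleaning algorithm's removals

# Split into (word, tag) pairs and filter both halves in one pass.
pairs = [item.split('|') for item in my_list]
kept = [(w, t) for w, t in pairs if w not in stop_words]
words = [w for w, t in kept]
tags = [t for w, t in kept]
# words -> ['I', 'very', 'happy'], tags -> ['PN', 'ADV', 'ADJ']
```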
| 1 | 1 | 0 | 0 | 0 | 0 |
I am currently working on a custom named-entity recognizer to recognize 4 types of entity: car, equipment, date, issue.
To do so, I use rasa_nlu with ner_crf from sklearn-crfsuite. However, before tagging hundreds of sentences, I asked myself two questions and I haven't found the answers:
If you have for example "On 31st Jan., the wheels of AA-075-ZP exhibited an increase in friction". Is it better to tag "On 31st Jan." or "31st Jan." as a date ? Same question for "the wheels" or "wheels" as an equipment.
I took a look at how CRF works. From what I understood, the probability of a word w being classified as entity e1 depends on whether this word has already been tagged e1 in other documents, but also on whether it follows a word w2 tagged e2 and whether words tagged e1 often follow words tagged e2.
Then, the question is: is it better to prefer entity tagging sequences or entity tagging content ?
Is it more interesting to say that a date comes after "on" or that it is composed of "on" so as to detect this date ?
My samples are often a description of the issue such as: "On 31st Jan., the wheels of AA-075-ZP exhibited an increase in friction. This was caused by ... and .... on ... No more impact on the car, the four rubbers have been replaced"
Is it interesting to tag "rubbers" as an equipment considering that it comes at the end of a long description, and that most of the time I just want to get the first entities in the text? Is it worth increasing the number of occurrences of "rubber" (so that it has a better chance of being tagged as equipment) while at the same time giving weight to the pattern "an equipment coming after a lot of words"?
Thank you in advance
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a pandas DataFrame whose rows are articles scraped from websites; there are 100 thousand articles of a similar nature.
Here is a glimpse of my dataset.
text
0 which brings not only warmer weather but also the unsettling realization that the year is more than halfway over. So
1 which brings not only warmer weather but also the unsettling realization that the year is more than halfway over. So
2 which brings not only warmer weather but also the unsettling realization that the year is more than halfway over. So
3 which brings not only warmer weather but also the unsettling realization that the year is more than halfway over. So
4 which brings not only warmer weather but also the unsettling realization that the year is more than halfway over. So
5 which brings not only warmer weather but also the unsettling realization that the year is more than halfway over. So
6 which brings not only warmer weather but also the unsettling realization that the year is more than halfway over. So
7 which brings not only warmer weather but also the unsettling realization that the year is more than halfway over. So
8 which brings not only warmer weather but also the unsettling realization that the year is more than halfway over. So
for those who werent as productive as they would have liked during the first half of 2018
28 for those who werent as productive as they would have liked during the first half of 2018
29 for those who werent as productive as they would have liked during the first half of 2018
30 for those who werent as productive as they would have liked during the first half of 2018
31 for those who werent as productive as they would have liked during the first half of 2018
32 for those who werent as productive as they would have liked during the first half of 2018
Now, these are the initial portions of each text, and they are repetitive. The main text comes after them.
Is there any way, or a function, to identify these repeated prefixes and strip them out in a few lines of code?
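One simple sketch (the 28-character window and the "shared by more than 3 rows" rule are arbitrary assumptions): treat any leading chunk that many rows share as boilerplate and cut it off.

```python
from collections import Counter

import pandas as pd

# Toy stand-in for the scraped articles.
df = pd.DataFrame({"text": [
    "which brings warmer weather. REAL ARTICLE ONE",
    "which brings warmer weather. REAL ARTICLE TWO",
    "which brings warmer weather. REAL ARTICLE THREE",
    "which brings warmer weather. REAL ARTICLE FOUR",
]})

window = 28  # how many leading characters to compare
counts = Counter(df["text"].str[:window])
common = {p for p, c in counts.items() if c > 3}  # prefixes shared by many rows

def strip_common_prefix(s):
    return s[window:].lstrip() if s[:window] in common else s

df["clean"] = df["text"].map(strip_common_prefix)
```

In practice you may need several window sizes, or to apply the pass repeatedly if articles stack more than one boilerplate prefix.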
| 1 | 1 | 0 | 0 | 0 | 0 |
How would I be able to apply this function to just the values within a python dictionary:
def split_sentences(text):
    """
    Utility function to return a list of sentences.
    @param text The text that must be split in to sentences.
    """
    sentence_delimiters = re.compile(u'[\\[\\]\n.!?,;:\t\\-"()\'\u2019\u2013]')
    sentences = sentence_delimiters.split(text)
    return sentences
The code I have used to create the dictionary from a CSV file input:
with open('second_table.csv', mode='r') as infile:
    # Read in the csv file
    reader = csv.reader(infile)
    # Skip the headers
    next(reader, None)
    # Iterate through each row to get the key-value pairs
    mydict = {rows[0]: rows[1] for rows in reader}
The python dictionary looks like so:
{'INC000007581947': '$BREM - CATIAV5 - Catia does not start',
'INC000007581991': '$SPAI - REACT - react',
'INC000007582037': 'access request',
'INC000007582095': '$HAMB - DVOBROWSER - ACCESS RIGHTS',
'INC000007582136': 'SIGLUM issue by opening a REACT request'}
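A dict comprehension applies the function to each value while leaving the keys untouched. The sketch below reconstructs a simplified `split_sentences` (same delimiter set, with empty fragments filtered out for readability) so it is self-contained; the sample entries are taken from the dictionary above.

```python
import re

# Reconstruction of the question's delimiter set.
sentence_delimiters = re.compile("[\\[\\]\n.!?,;:\t\\-\"()'\u2019\u2013]")

def split_sentences(text):
    return [s for s in sentence_delimiters.split(text) if s]

mydict = {'INC000007582037': 'access request',
          'INC000007582095': '$HAMB - DVOBROWSER - ACCESS RIGHTS'}

# Apply the function to the values only.
split_dict = {k: split_sentences(v) for k, v in mydict.items()}
```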
| 1 | 1 | 0 | 0 | 0 | 0 |
I trained a Glove model in python using Maciejkula's implementation (github repo).
For the next step I need a word-to-embedding dictionary.
However I can't seem to find an easy way to extract such a dictionary from the glove model I trained.
I can extract the embeddings by accessing model.word_vectors but this only returns an array containing the vectors without a mapping to the corresponding words.
There is also the model.dictionary attribute containing word-to-index pairs.
I thought that these indexes might correspond to the embedding-indexes in the model.word_vectors array, but I'm not sure that this is correct.
Do the indexes correspond or is there another easy way to get a word-to-embedding dictionary from a glove-python model?
I realize that Sanj asked a similar, although wider, question, but since there is no response yet I thought I'd ask this more specific one.
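For what it's worth, the indices in `model.dictionary` are generally understood to index the rows of `model.word_vectors`, so zipping the two gives the lookup. A sketch, with a stand-in object replacing the trained model (I can't verify the correspondence from the question alone):

```python
import numpy as np

class FakeGlove:
    """Stand-in for a trained glove-python model."""
    dictionary = {"hello": 0, "world": 1}           # word -> row index
    word_vectors = np.array([[1.0, 2.0],            # row 0: "hello"
                             [3.0, 4.0]])           # row 1: "world"

model = FakeGlove()

# Build word -> embedding, assuming dictionary indices are row indices.
word_to_vec = {word: model.word_vectors[idx]
               for word, idx in model.dictionary.items()}
```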
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a word2vec dataframe like this, which I saved from save_word2vec_format in Gensim as a txt file. After using pandas to read this file (picture below), how do I delete the first row and make the words the index?
My txt file: https://drive.google.com/file/d/1O206N93hPSmvMjwc0W5ATyqQMdMwhRlF/view?usp=sharing
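One sketch: the first line of a `save_word2vec_format` text file is a "vocab_size vector_size" header, so it can be skipped at read time while column 0 (the word) becomes the index. A tiny sample file is written here so the snippet is self-contained.

```python
import pandas as pd

# Sample in the save_word2vec_format text layout: header line, then
# one word per row followed by its vector components.
with open("vectors.txt", "w") as f:
    f.write("2 3\nhello 0.1 0.2 0.3\nworld 0.4 0.5 0.6\n")

# skiprows=1 drops the header; index_col=0 makes the word the index.
df = pd.read_csv("vectors.txt", sep=" ", skiprows=1, header=None, index_col=0)
```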
| 1 | 1 | 0 | 0 | 0 | 0 |
I need to generate word2vec array for a dictionary of words. The dictionary looks something like this
test={0: 'tench, Tinca tinca',
1: 'goldfish, Carassius auratus',
2: 'great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias',
3: 'tiger shark, Galeocerdo cuvieri',
4: 'hammerhead, hammerhead shark'}
The loop should go through each line, check if the word exists in the model, if yes then store the vector in an array otherwise check the next word in the line. If none of the words are present in the gensim model, then it should do nothing (array is initialised with zeros)
However, if a word doesn't exist in the pre-trained model, it raises this exception:
KeyError: "word 'Galeocerdo cuvieri' not in vocabulary"
What should the loop look like, including exception handling, to bypass the error raised?
This is my starting code:
import gensim
import numpy as np

model = gensim.models.KeyedVectors.load_word2vec_format('/home/shikhar/Downloads/GoogleNews-vectors-negative300.bin', binary=True)
array = np.zeros((4, 300))
for i in test:
    synonyms = test[i].split(',')
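A sketch completing the loop: try each comma-separated synonym in turn, fall through to the next one on a `KeyError`, and leave the row as zeros if none are in the vocabulary. A plain dict stands in for the loaded `KeyedVectors` here (a real model also raises `KeyError` for out-of-vocabulary words, so the logic is the same).

```python
import numpy as np

test = {0: 'tench, Tinca tinca',
        1: 'goldfish, Carassius auratus'}

# Stand-in for the loaded word2vec model.
model = {'goldfish': np.ones(300)}

array = np.zeros((len(test), 300))
for i in test:
    for synonym in test[i].split(','):
        try:
            array[i] = model[synonym.strip()]
            break          # stop at the first in-vocabulary synonym
        except KeyError:
            continue       # try the next synonym on the line
# Row 0 stays all zeros; row 1 gets the 'goldfish' vector.
```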
| 1 | 1 | 0 | 0 | 0 | 0 |
I would like to add words to the vader_lexicon.txt to specify polarity scores to a word. What is the right way to do so?
I saw this file in AppData\Roaming\nltk_data\sentiment\vader_lexicon. The file consists of the word, its polarity, intensity, and an array of 10 intensity scores given by "10 independent human raters". [1] However, when I edited it, nothing changed in the results of the following code:
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sia = SentimentIntensityAnalyzer()
s = sia.polarity_scores("my string here")
I think that this text file is accessed by my code when I called SentimentIntensityAnalyzer's constructor. [2] Do you have any ideas on how I can edit a pre-made lexicon?
Sources:
[1] https://github.com/cjhutto/vaderSentiment
[2] http://www.nltk.org/api/nltk.sentiment.html
| 1 | 1 | 0 | 0 | 0 | 0 |
I am making a stock market predictor machine learning application that will try to predict the price of a certain stock. For this purpose it will take news articles/tweets regarding that particular company together with the company's historical data.
My issue is that I first need to construct a sentiment analyser for the headlines/tweets about that company. I don't want to train a model to give me the sentiment scores; rather, I want a sentiment lexicon that contains a bag of words related to the stock market and finance.
Is there any such lexicons/dictionaries available that I can use in my project?
Thanks
| 1 | 1 | 0 | 1 | 0 | 0 |
I have a text based dataset where I am looking to apply SpaCy's EntityRecognizer to each row for a specific column.
I can apply the general spaCy pipeline by doing something like this:
df['new_col'] = df['col'].apply(lambda x: nlp(x))
How do I apply just the entity recognizer and get its values?
| 1 | 1 | 0 | 0 | 0 | 0 |
This is my code so far:
from JMSSGraphics import *
import math
import random

class Zombie:
    # attributes: x, y, infected, image
    # we need to define a special function
    # that lets us 'construct' an object from this class
    # this special function is called a constructor
    def __init__(self):
        # creating my attributes
        # and assigning them to initial values
        self.x = 0
        self.y = 0
        self.img = None
        self.speed = 0
        self.rotation = 0
        # Must use self so that variables
        # do not lose values after function returns

class Player:
    def __init__(self):
        self.x = 0
        self.y = 0
        self.img = None
        self.speed = 0
        self.rotation = 0
        self.fireSpeed = 0

class Bullet:
    def __init__(self):
        self.x = 0
        self.y = 0
        self.speed = 0
        self.img = None
        self.rotation = 0
        self.locationVector = []

    def __del__(self):
        pass

class Wall:
    def __init__(self):
        self.x1 = 0
        self.y1 = 0
        self.x2 = 0
        self.y2 = 0

jmss = Graphics(width = 800, height = 600, title = "city", fps = 120)

#Zombie ratio
zombieHeightFactor = 1.205
zombieWidth = 50
zombieHeight = zombieWidth * zombieHeightFactor

#Player ratio
playerHeightFactor = 0.66
playerWidth = 50
playerHeight = playerWidth * playerHeightFactor

#Bullet ratio
bulletHeightFactor = 0.28
bulletWidth = 35
bulletHeight = bulletWidth * bulletHeightFactor

zombiesList = []
n = 0
while n < 7:
    newZombie = Zombie()
    newZombie.img = jmss.loadImage("zombieImage.png")
    newZombie.x = random.randint(10, 790)
    newZombie.y = random.randint(10, 590)
    newZombie.speed = random.uniform(1, 3)
    print(newZombie.speed)
    zombiesList.append(newZombie)
    n += 1

#Creating player object
player = Player()
player.img = jmss.loadImage("PlayerSprite.png")
player.x = 400
player.y = 300
player.speed = 10
player.fireSpeed = 20

bulletList = []
cooldown = 0

@jmss.mainloop
def Game():
    global cooldown
    ####################PLAYER LOOK###################################
    mouseX = jmss.mouseCoordinate()[0]
    mouseY = jmss.mouseCoordinate()[1]
    if mouseX - player.x > 0:
        angle = 360 - math.degrees(math.atan((mouseY-player.y)/(mouseX-player.x))) #Calculates angle between player and mouse in degrees
        player.rotation = angle
    if mouseX - player.x < 0:
        angle = 360 - math.degrees(math.atan((mouseY-player.y)/(mouseX-player.x))) #Calculates angle between player and mouse in degrees
        player.rotation = angle + 180
    ####################PLAYER MOVEMENT#################################
    jmss.clear(1, 1, 1, 1)
    if jmss.isKeyDown(KEY_W):
        player.y += player.speed
    if jmss.isKeyDown(KEY_A):
        player.x -= player.speed
    if jmss.isKeyDown(KEY_D):
        player.x += player.speed
    if jmss.isKeyDown(KEY_S):
        player.y -= player.speed
    if player.x > 800: ##ADDING BORDERS
        player.x = 800
    if player.x < 0:
        player.x = 0
    if player.y > 600:
        player.y = 600
    if player.y < 0:
        player.y = 0
    jmss.drawImage(player.img, player.x, player.y, width = playerWidth, height = playerHeight, rotation = player.rotation)
    ####################PLAYER SHOOT####################################
    if jmss.isKeyDown(KEY_SPACE) and cooldown > player.fireSpeed:
        cooldown = 0
        bullet = Bullet()
        bullet.img = jmss.loadImage("bullet.png")
        bullet.x = player.x
        bullet.y = player.y
        bullet.speed = 20
        bullet.locationx = mouseX
        bullet.locationy = mouseY
        bullet.rotation = player.rotation
        bulletList.append(bullet)
    n = 0
    while n < len(bulletList):
        bullet = bulletList[n]
        bullet.locationVector = [math.cos(math.radians(bullet.rotation)), math.sin(math.radians(bullet.rotation))]
        bullet.x += bullet.locationVector[0] * bullet.speed
        bullet.y += -bullet.locationVector[1] * bullet.speed
        jmss.drawImage(bullet.img, bullet.x, bullet.y, width = bulletWidth, height = bulletHeight, rotation = bullet.rotation)
        if bullet.x > 800:
            del bulletList[n]
        elif bullet.y > 600:
            del bulletList[n]
        n += 1
    cooldown += 1
    ############################ZOMBIE AI#########################################
    n = 0
    while n < len(zombiesList):
        currentZombie = zombiesList[n]
        if player.x - currentZombie.x > 0:
            angle = 360 - math.degrees(math.atan((player.y-currentZombie.y)/(player.x-currentZombie.x))) #Calculates angle between zombie and player in degrees
            currentZombie.rotation = angle
        if player.x - currentZombie.x < 0:
            angle = 360 - math.degrees(math.atan((player.y-currentZombie.y)/(player.x-currentZombie.x))) #Calculates angle between zombie and player in degrees
            currentZombie.rotation = angle + 180
        if currentZombie.x < player.x:
            currentZombie.x += currentZombie.speed
        if currentZombie.x > player.x:
            currentZombie.x -= currentZombie.speed
        if currentZombie.y < player.y:
            currentZombie.y += currentZombie.speed
        if currentZombie.y > player.y:
            currentZombie.y -= currentZombie.speed
        jmss.drawImage(currentZombie.img, currentZombie.x, currentZombie.y, zombieWidth, zombieHeight, currentZombie.rotation)
        currentZombie.speed += 0.001
        n += 1
    ######################POWER UP################################################
    spawnChance = random.randint(0, 10000)
    if spawnChance == 5000:
        print("SPAWN POWERUP")
    ##########################CREATING ENVIRONMENT###############################

jmss.run()
What I wanted to do was improve the zombie AI so that it was far less boring and predictable. Any help would be appreciated :)
So far, all the AI does is adjust the enemy's x and y coordinates depending on whether each is less or greater than the player's. It only works along x or y independently, not both at once.
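One sketch that fixes the "x or y but not both" problem: move each zombie along the normalized direction vector toward the player, so it travels diagonally at its own speed. The helper below is a standalone illustration (the name `step_toward` is mine, not from JMSSGraphics).

```python
import math

def step_toward(zx, zy, px, py, speed):
    """Move (zx, zy) one step of length `speed` toward (px, py)."""
    dx, dy = px - zx, py - zy
    dist = math.hypot(dx, dy)
    if dist < speed:              # close enough: snap onto the target
        return px, py
    return zx + speed * dx / dist, zy + speed * dy / dist
```

Inside the zombie loop this would be `currentZombie.x, currentZombie.y = step_toward(currentZombie.x, currentZombie.y, player.x, player.y, currentZombie.speed)`. Adding a small random offset to the target each frame ("wander") makes the movement noticeably less predictable.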
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to move a trained model into a production environment and have encountered an issue trying to replicate the behavior of the Keras hashing_trick() function in C#. When I go to encode the sentence my output is different in C# than it is in python:
Text: "Information - The configuration processing is completed."
Python: [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 217 142 262 113 319 413]
C#: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 433, 426, 425, 461, 336, 146, 52]
(copied from debugger, both sequences have length 30)
What I've tried:
changing the encoding of the text bytes in C# to match the python string.encode() function default (UTF8)
Changing capitalization of letters to lowercase and upper case
Tried using Convert.ToUInt32 instead of BitConverter (resulted in overflow error)
My code (below) is my implementation of the Keras hashing_trick function. A single input sentence is given and then the function will return the corresponding encoded sequence.
public uint[] HashingTrick(string data)
{
    const int VOCAB_SIZE = 534; //Determined through python debugging of model
    var filters = "!#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n".ToCharArray().ToList();
    filters.ForEach(x =>
    {
        data = data.Replace(x, '\0');
    });
    string[] parts = data.Split(' ');
    var encoded = new List<uint>();
    parts.ToList().ForEach(x =>
    {
        using (System.Security.Cryptography.MD5 md5 = System.Security.Cryptography.MD5.Create())
        {
            byte[] inputBytes = System.Text.Encoding.UTF8.GetBytes(x);
            byte[] hashBytes = md5.ComputeHash(inputBytes);
            uint val = BitConverter.ToUInt32(hashBytes, 0);
            encoded.Add(val % (VOCAB_SIZE - 1) + 1);
        }
    });
    return PadSequence(encoded, 30);
}

private uint[] PadSequence(List<uint> seq, int maxLen)
{
    if (seq.Count < maxLen)
    {
        while (seq.Count < maxLen)
        {
            seq.Insert(0, 0);
        }
        return seq.ToArray();
    }
    else if (seq.Count > maxLen)
    {
        return seq.GetRange(seq.Count - maxLen - 1, maxLen).ToArray();
    }
    else
    {
        return seq.ToArray();
    }
}
The keras implementation of the hashing trick can be found here
If it helps, I am using an ASP.NET Web API as my solution type.
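Based on my reading of the Keras source, with `hash_function='md5'` the hashing trick takes the *entire* 128-bit md5 digest as an integer modulo (n - 1), not just the first four bytes, which is a likely source of the mismatch when `BitConverter` reads only 4 little-endian bytes. A Python sketch of that reference behavior (assuming lowercasing and whitespace splitting, which Keras's tokenizer applies by default):

```python
import hashlib

def md5_hashing_trick(text, n):
    """Sketch of Keras-style md5 hashing: whole digest mod (n - 1), plus 1."""
    words = text.lower().split()  # Keras also strips filter characters first
    return [int(hashlib.md5(w.encode("utf-8")).hexdigest(), 16) % (n - 1) + 1
            for w in words]

print(md5_hashing_trick("information the configuration", 534))
```

Reproducing the same arithmetic in C# would mean big-integer modular reduction over all 16 digest bytes (e.g. with `System.Numerics.BigInteger`) rather than `BitConverter.ToUInt32`.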
| 1 | 1 | 0 | 0 | 0 | 0 |
I want to find the base form for input words in Python,
something like
get_base_form({running, best, eyes, moody})
--> run, good, eye, mood
A solution that just deals with regular forms would be fine, but an answer that also deals with irregular forms would be perfect.
If there is no library that does this, a web-service would be fine, too.
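NLTK's WordNetLemmatizer (with part-of-speech tags) is the usual library answer. A self-contained rough sketch for regular forms plus a tiny irregular lookup is below; the suffix rules and the irregular table are illustrative, not complete, and moody → mood is derivational, which neither these rules nor a lemmatizer will give you.

```python
# Illustrative irregular table; a real one would be much larger.
IRREGULAR = {"best": "good", "better": "good",
             "worst": "bad", "was": "be", "ran": "run"}

def get_base_form(word):
    if word in IRREGULAR:
        return IRREGULAR[word]
    if word.endswith("ies") and len(word) > 4:          # cities -> city
        return word[:-3] + "y"
    if word.endswith("es") and word[:-2].endswith(("s", "x", "z", "ch", "sh")):
        return word[:-2]                                 # boxes -> box
    if word.endswith("s") and not word.endswith("ss"):   # eyes -> eye
        return word[:-1]
    if word.endswith("ing") and len(word) > 5:           # running -> run
        stem = word[:-3]
        return stem[:-1] if len(stem) > 2 and stem[-1] == stem[-2] else stem
    return word
```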
| 1 | 1 | 0 | 0 | 0 | 0 |
I am vectorizing a text blob with tokens that have the following style:
hi__(how are you), 908__(number code), the__(POS)
As you can see the tokens have attached some information with __(info), I am extracting key words using tfidf, as follows:
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(doc)
indices = np.argsort(vectorizer.idf_)[::-1]
features = vectorizer.get_feature_names()
The problem is that when I run the above procedure for extracting keywords, I suspect that the vectorizer object is removing the parentheses from my text blob. Which parameter of the TfidfVectorizer can I use to preserve the information in the parentheses?
UPDATE
I also tried to:
from sklearn.feature_extraction.text import TfidfVectorizer

def dummy_fun(doc):
    return doc

tfidf = TfidfVectorizer(
    analyzer='word',
    tokenizer=dummy_fun,
    preprocessor=dummy_fun,
    token_pattern=None)
and
from sklearn.feature_extraction.text import TfidfVectorizer

def dummy_fun(doc):
    return doc

tfidf = TfidfVectorizer(
    tokenizer=dummy_fun,
    preprocessor=dummy_fun,
    token_pattern=None)
However, this returns a sequence of characters instead of the tokens I have already produced:
['e', 's', '_', 'a', 't', 'o', 'c', 'r', 'i', 'n']
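One sketch that sidesteps the problem: since the tokens already exist (and contain spaces and parentheses), pass each document as a pre-tokenized *list* and make the whole `analyzer` a pass-through, which bypasses preprocessing, tokenization, and the lowercasing step entirely. Feeding a plain string through a pass-through tokenizer is likely why characters came back: a string iterates character by character. The documents below are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Each document is already a list of composite tokens.
docs = [["hi__(how are you)", "908__(number code)", "the__(POS)"],
        ["hi__(how are you)", "the__(POS)"]]

tfidf = TfidfVectorizer(analyzer=lambda doc: doc)  # use tokens exactly as given
X = tfidf.fit_transform(docs)
```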
| 1 | 1 | 0 | 1 | 0 | 0 |
I have a dictionary having words and the frequency of each words.
{'cxampphtdocsemployeesphp': 1,
'emptiness': 1,
'encodingundefinedconversionerror': 1,
'msbuildexe': 2,
'e5': 1,
'lnk4049': 1,
'specifierqualifierlist': 2, .... }
Now I want to create a bag-of-words model using this dictionary. (I don't want to use a standard library function; I want to implement the algorithm myself.)
Find N most popular words in the dictionary and numerate them. Now we have a dictionary of the most popular words.
For each title in the dictionary create a zero vector with the dimension equals to N.
For each text in the corpora iterate over words which are in the dictionary and increase by 1 the corresponding coordinate.
I have my text which I will use to create the vector using a function.
The function would look like this,
def my_bag_of_words(text, words_to_index, dict_size):
"""
text: a string
dict_size: size of the dictionary
return a vector which is a bag-of-words representation of 'text'
"""
Let say we have N = 4 and the list of the most popular words is
['hi', 'you', 'me', 'are']
Then we need to numerate them, for example, like this:
{'hi': 0, 'you': 1, 'me': 2, 'are': 3}
And we have the text, which we want to transform to the vector:
'hi how are you'
For this text we create a corresponding zero vector
[0, 0, 0, 0]
And iterate over all words, and if the word is in the dictionary, we increase the value of the corresponding position in the vector:
'hi': [1, 0, 0, 0]
'how': [1, 0, 0, 0] # word 'how' is not in our dictionary
'are': [1, 0, 0, 1]
'you': [1, 1, 0, 1]
The resulting vector will be
[1, 1, 0, 1]
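The steps described above can be sketched directly in the given function skeleton (splitting on whitespace; anything outside `words_to_index` is ignored):

```python
import numpy as np

def my_bag_of_words(text, words_to_index, dict_size):
    """Return a bag-of-words vector of length dict_size for `text`."""
    result_vector = np.zeros(dict_size)
    for word in text.split():
        if word in words_to_index:          # skip out-of-dictionary words
            result_vector[words_to_index[word]] += 1
    return result_vector
```

With `words_to_index = {'hi': 0, 'you': 1, 'me': 2, 'are': 3}`, the call `my_bag_of_words('hi how are you', words_to_index, 4)` reproduces the worked example above.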
Any help in applying this would be really helpful. I am using python for implementation.
Thanks,
Neel
| 1 | 1 | 0 | 0 | 0 | 0 |
How do I use the Polish language in a Rasa NLU project? spaCy has alpha tokenization support for Polish: https://spacy.io/usage/models#alpha-support
My config.json file looks like this:
{
"pipeline" : [ "nlp_spacy",
"tokenizer_spacy",
"ner_crf",
"ner_spacy",
"intent_featurizer_spacy",
"intent_classifier_sklearn"],
"language" : "en",
"path" : "./models/nlu",
"data" : "./data/training_data.json"
}
but once I change the language to 'pl', a 'language not supported' error occurs.
Should I download different models than these two:
python -m spacy download en_core_web_md
python -m spacy link en_core_web_md en
?
I know I can use it this way:
from spacy.lang.pl import Polish
nlp = Polish()
but I don't know how to implement it to my config file.
Thank you!
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to find categories of a word in Python.
For ex-
age = young,old
I have tried using WordNet from NLTK but couldn't find anything satisfactory.
Any help is much appreciated
Thanks in advance.
| 1 | 1 | 0 | 0 | 0 | 0 |
Just an FYI: I have a pretty limited understanding of the mechanics of machine learning, LSTMs, and time-series modeling. Based on my current understanding, though, I feel that since I have an LSTM time-series model trained on many time-series plots, I should be able to get its "average" time series based on all of the ones it was trained on.
What's the best way to accomplish that?
I have a keras Sequential model, and I don't know if any code would even be helpful in this instance, but if there is any code that would assist, let me know!
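One simple interpretation (an assumption on my part, not a built-in Keras feature): run the trained model over every training series and average the predictions element-wise. The numbers below are stand-ins for `model.predict` output.

```python
import numpy as np

# One row per training series, one column per timestep.
preds = np.array([[32.1, 31.9, 31.7],
                  [32.5, 32.3, 32.0]])

average_series = preds.mean(axis=0)
# average_series -> [32.3, 32.1, 31.85]
```

Note that this averages the model's *outputs*; it is not the same as a series the network itself considers "typical", which is a harder question.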
EDIT: Here is some of the data
32.1576
31.92
31.7
31.85
32.05
32.5
32.3
31.975
31.7
32.15
32.6
32.55
32.4
32.4835
32.25
32.15
32.25
32.45
32.4
32.5002
32.45
32.5
32.5752
33.1748
33
33.35
33.45
33.45
33.425
Thanks!
| 1 | 1 | 0 | 1 | 0 | 0 |
I am trying to write a custom AI API for a game which uses the Unreal engine 4. While I can read the process memory using Python just fine, I have confronted a bigger issue - reading the process memory only when relevant and sending in inputs only when possible - thus only once a frame is rendered. If I want to send inputs, they need to be sent on frames specifically (the game being a fighting game).
Therefore, I need to update my own AI API at the same framerate as the game itself. My first idea was to look into the process memory and find a value that is updated each frame. While there are values updated all the time, they seem to be updated in memory only after 8 frames have occurred. Unfortunately, 8 frames don't allow the AI to perform the inputs properly, as the update loop would not update fast enough.
I will be looking through the memory more but I was wondering if it's possible to attach a program to the running process to look at the window itself - and in case it has been updated (something new has been rendered), update the gamestate in the AI itself. Is there a way this can be achieved?
| 1 | 1 | 0 | 0 | 0 | 0 |
I would like to extract key terms from documents with chi-squared test, thus I tried the following:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
Texts=["should schools have uniform","schools discipline","legalize marriage","marriage culture"]
vectorizer = TfidfVectorizer()
term_doc=vectorizer.fit_transform(Texts)
ch2 = SelectKBest(chi2, "all")
X_train = ch2.fit_transform(term_doc)
print (ch2.scores_)
vectorizer.get_feature_names()
However, I do not have labels, and when I run the above code I get:
TypeError: fit() missing 1 required positional argument: 'y'
Is there any way of using chi-squared test to extract most important words without having any labels?
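Chi-squared is a supervised test, so without labels it has nothing to fit against. An unsupervised alternative sketch: rank terms by their summed tf-idf weight across the corpus (the sample texts are from the question; the top-3 cutoff is arbitrary).

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

texts = ["should schools have uniform", "schools discipline",
         "legalize marriage", "marriage culture"]

vec = TfidfVectorizer()
X = vec.fit_transform(texts)

# Sum each term's tf-idf weight over all documents.
scores = np.asarray(X.sum(axis=0)).ravel()
terms = sorted(vec.vocabulary_, key=vec.vocabulary_.get)  # column order
ranked = [terms[i] for i in np.argsort(scores)[::-1]]
top_terms = ranked[:3]
```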
| 1 | 1 | 0 | 1 | 0 | 0 |
Hi everyone,
I have a CNN-LSTM model trained in Keras. As input, I loaded sets of 15 frames per video at 30x30, with just one channel (15, 30, 30, 1).
I extracted them from a total of 279 videos, and stored them in a big tensor with dimensions (279, 15, 30, 30, 1).
X_data.shape = (279, 15, 30, 30, 1)
y_data.shape = (279,)
I'm working with two classes of videos (so targets are 0 and 1).
The input layer of my time distributed CNN (before my LSTM layer) is:
input_layer = Input(shape=(None, 30, 30, 1))
Ok, they were fed into my network and everything worked well, but now I need to predict on these videos and I want to display the output on the video I'm classifying.
I wrote this to read the video and display the text:
vid = cv2.VideoCapture(video_path)
while(vid.isOpened()):
    ret, frame = vid.read()
    if ret == True:
        texto = predict_video(frame)
        frame = cv2.resize(frame, (750, 500), interpolation=cv2.INTER_AREA)
        frame = cv2.putText(frame, str(texto), (0, 130), cv2.FONT_HERSHEY_SIMPLEX, 2.5, (255, 0, 0), 2, cv2.LINE_AA)
        cv2.imshow('Video', frame)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            break
    else:
        break
vid.release()
cv2.destroyAllWindows()
The predict_video() is used to generate the predicted output as a text, as you can see:
def predict_video(frame):
    count_frames = 0
    frame_list = []
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frame = cv2.resize(frame, (30, 30), interpolation=cv2.INTER_AREA)
    while count_frames < 15:
        frame_list.append(frame)
        count_frames = 0
    frame_set = np.array(frame_list)
    frame_set = frame_set.reshape(1, 15, 30, 30, 1)
    pred = model.predict(frame_set)
    pred_ = np.argmax(pred, axis=1)  # i'm using the Model object from Keras
    if pred_ == 1:
        return 'Archery'
    elif pred_ == 0:
        return 'Basketball'
Since the input dimension of the CNN-LSTM is (None, 30, 30, 1), I need model.predict(sample) to receive a sample with dimensions (1, 15, 30, 30, 1).
How can I predict on a video in real time, given that I want to predict not frame by frame but on sets of 15 frames?
The current predict_video() function freezes my computer (its while loop never increments count_frames, so it never terminates).
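One way to avoid rebuilding the frame list inside a blocking loop would be a rolling buffer that only emits a prediction once 15 preprocessed frames have accumulated. A rough sketch (predict_window and the buffer handling are my own naming, and the model is assumed to expose a Keras-style predict()):

```python
import collections
import numpy as np

WINDOW = 15  # frames per prediction, matching the model input

frame_buffer = collections.deque(maxlen=WINDOW)

def predict_window(model, frame):
    """Accumulate preprocessed 30x30 frames; predict once 15 are buffered.

    Returns None until the buffer holds a full window; the deque drops the
    oldest frame automatically, giving a sliding 15-frame window.
    """
    frame_buffer.append(frame)
    if len(frame_buffer) < WINDOW:
        return None
    batch = np.array(frame_buffer).reshape(1, WINDOW, 30, 30, 1)
    pred = model.predict(batch)
    return 'Archery' if np.argmax(pred, axis=1)[0] == 1 else 'Basketball'
```

In the display loop this would be called once per frame, showing the previous label until a new window completes.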
Thanks for the attention!
| 1 | 1 | 0 | 0 | 0 | 0 |
I am working on some Latin texts that contain dates and have been using various regex patterns and rule-based statements to extract the dates. I was wondering if I could instead train an algorithm to extract them, rather than use my current method. Thanks.
This is an extract of my algorithm:
def checkLatinDates(i, record, no):
    if i == 0 and isNumber(record[i]):  # get deed no
        df.loc[no, 'DeedNo'] = record[i]
    rec = record[i].lower()
    split = rec.split()
    if split[0] == 'die':
        items = deque(split)
        items.popleft()
        split = list(items)
    if 'eodem' in rec:
        n = no - 1
        if no > 1:
            while pd.isnull(df.ix[n]['LatinDate']):
                n = n - 1
                print n
            df['LatinDate'][no] = df.ix[n]['LatinDate']
    if words_in_string(latinMonths, rec.lower()) and len(split) < 10:
        if not dates.loc[dates['Latin'] == split[0], 'Number'].empty:
            day = dates.loc[dates['Latin'] == split[0], 'Number'].iloc[0]
            split[0] = day
            nd = ' '.join(map(str, split))
            df['LatinDate'][no] = nd
        elif convertArabic(split[0]) != '':
            day = convertArabic(split[0])
            split[0] = day
            nd = ' '.join(map(str, split))
            df['LatinDate'][no] = nd
| 1 | 1 | 0 | 0 | 0 | 0 |
I am using vader in nltk to find sentiments of each line in a file. I have 2 questions:
I need to add words in vader_lexicon.txt however the syntax of which looks like :
assaults -2.5 0.92195 [-1, -3, -3, -3, -4, -3, -1, -2, -2, -3]
What does -2.5 and 0.92195 [-1, -3, -3, -3, -4, -3, -1, -2, -2, -3] represent?
How should I code it for a new word? Say I have to add something like '100%' or 'A1'.
I can also see positive and negative words txt in nltk_data\corpora\opinion_lexicon folder. How are these getting utilised? Can I add my words in these txt files too?
| 1 | 1 | 0 | 0 | 0 | 0 |
I am working with social media data. I am getting an almost neutral score even for positive sentences; the code is not understanding the statement, just classifying using the corpus.
Is there any way to improve this sentiment score? People have suggested using the compound score, but it is not helping much.
Is there any other workaround to add our own corpus and use it in VADER? I mean, I don't want to add words manually; is there any social media corpus with predefined sentiments?
Is there any other model/approach altogether to use for data without labels?
| 1 | 1 | 0 | 0 | 0 | 0 |
I have lots of PDF, DOC[X], TIFF and other files (scans from a shared folder). Each file is converted into a pack of text files: one text file per page.
Each pack of files could contain multiple documents (for example, three contracts), and the document kind is not limited to contracts.
While processing a pack of files, I don't know in advance what kinds of documents the pack contains, and it's possible that one pack contains multiple document kinds (contracts, invoices, etc.).
I'm looking for some possible approaches to solve this programmatically.
I tried to search for something like this but without any success.
UPD: I tried creating a binary classifier with scikit-learn and am now looking for another solution.
| 1 | 1 | 0 | 1 | 0 | 0 |
I am trying to install TensorFlow for their image recognition example, but I get the error in the title. It does work when I go to C:\Users\Diederik\AppData\Local\Programs\Python\Python36\Scripts and type pip3 install --upgrade tensorflow there, but I am not sure if that is how I am supposed to do it, since I tried that before and got errors while running classify_image.py. So I thought I should try it the way TensorFlow told me to, but that didn't work either. Please help; I am happy to provide any extra information you need. I am on Windows 10.
| 1 | 1 | 0 | 0 | 0 | 0 |
I am comparing the content of CVs (.txt files with stop-words already removed) with really compact job descriptions (JDs), like this:
project management,
leadership,
sales,
SAP,
marketing
The CVs have around 600 words and the JDs only the words highlighted above.
The problem that I am currently experiencing, and I am sure this is due to my lack of knowledge, is that when I apply similarity measures I get confusing results. For example, I have CV number 1, which contains all the words from the JD, sometimes repeated more than once. I also have CV 2, which contains only the word 'project' in comparison to the JD. Even so, when I apply cosine similarity, diff, Jaccard distance, and edit distance, all these measures return a higher degree of similarity between CV 2 and the JD, which for me is strange, because only one word is shared between them, while CV 1 possesses all the words from the JD.
Am I applying the wrong measures to assess similarity? I am sorry if this is a naive question; I am a beginner with programming.
Codes follow
Diff
from difflib import SequenceMatcher
def similar(a, b):
    return SequenceMatcher(None, a, b).ratio()
similar('job.txt','LucasQuadros.txt')
0.43478260869565216
similar('job.txt','BrunaA.Fernandes.txt')
0.2962962962962963
Cosine
from sklearn.feature_extraction.text import TfidfVectorizer
document= ('job.txt','LucasQuadros.txt','BrunaA.Fernandes')
tfidf = TfidfVectorizer().fit_transform(document)
matrix= tfidf * tfidf.T
matrix.todense()
matrix([[1. , 0.36644682, 0. ],
[0.36644682, 1. , 0. ],
[0. , 0. , 1. ]])
Edit distance
import nltk
w1= ('job.txt')
w2= ('LucasQuadros.txt')
w3= ('BrunaA.Fernandes.txt')
nltk.edit_distance(w1,w2)
11
nltk.edit_distance(w1,w3)
16
Jaccard distance
import nltk
a1= set('job.txt')
a2= set('LucasQuadros.txt')
a3= set('BrunaA.Fernandes.txt')
nltk.jaccard_distance(a1,a2)
0.7142857142857143
nltk.jaccard_distance(a1,a3)
0.8125
As you can see, 'LucasQuadros.txt' (CV 1) has a higher similarity with 'job.txt' (the job description), even though it contains only one word from the job description.
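One thing worth noting: in the snippets above, the similarity functions receive the filename strings themselves ('job.txt', 'LucasQuadros.txt'), not the file contents, so the measures are comparing the characters of the names. A minimal sketch of comparing word sets of the actual texts (inline strings stand in for the real files, and the contents are made up):

```python
def jaccard_on_words(text_a, text_b):
    """Jaccard similarity over word sets, not over raw characters."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / len(a | b)

# Inline stand-ins for job.txt and the two CVs (hypothetical content).
jd = "project management leadership sales sap marketing"
cv1 = "experienced in project management leadership sales sap marketing teams"
cv2 = "worked on a project in a factory"

print(jaccard_on_words(jd, cv1))  # higher: CV 1 shares all JD terms
print(jaccard_on_words(jd, cv2))  # lower: only 'project' overlaps
```

In practice the texts would be read with open(path).read() before being passed in.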
| 1 | 1 | 0 | 0 | 0 | 0 |
Is there a way in Python (using NLTK, spaCy, or any other library) to predict the POS tag of the word that is likely to follow the words entered so far?
Eg- If i input
I am going to
It shows the POS tag of the next most likely word
e.g. NN, because 'college' can come after this
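One simple way to sketch this is a tag-bigram model: count which POS tag tends to follow each tag in a tagged corpus, then look up the most frequent successor. A toy version with a tiny hand-tagged corpus (in practice you would train on a real tagged corpus and tag the input with nltk.pos_tag):

```python
import collections

# Tiny hand-tagged corpus standing in for real training data (hypothetical).
tagged = [
    [("I", "PRP"), ("am", "VBP"), ("going", "VBG"), ("to", "TO"), ("college", "NN")],
    [("She", "PRP"), ("is", "VBZ"), ("walking", "VBG"), ("to", "TO"), ("school", "NN")],
    [("We", "PRP"), ("went", "VBD"), ("to", "TO"), ("work", "NN")],
]

# Count tag -> next-tag transitions.
transitions = collections.defaultdict(collections.Counter)
for sent in tagged:
    tags = [t for _, t in sent]
    for prev, nxt in zip(tags, tags[1:]):
        transitions[prev][nxt] += 1

def most_likely_next_tag(tag):
    """Return the tag most frequently observed after `tag`."""
    return transitions[tag].most_common(1)[0][0]

print(most_likely_next_tag("TO"))  # all three sentences continue TO -> NN
```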
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a data frame in which one of the variables is a fairly long paragraph containing many sentences. Sometimes the sentences are separated by a full stop, sometimes by a comma. I'm trying to create a new variable by extracting only selected parts of the text using selected words. Please see below a short sample of the data frame and the result I have at the moment, followed by the code I'm using. Note: the text in the first variable is pretty large.
PhysicalMentalDemands Physical_driving Physical_telephones
[driving may be necessary [driving......] [telephones...]
occasionally.
as well as telephones will also
be occasional to frequent.]
Code used:
searched_words = ['driving', 'telephones']

for i in searched_words:
    Test['Physical' + '_' + str(i)] = Test['PhysicalMentalDemands'].apply(
        lambda text: [sent for sent in sent_tokenize(text)
                      if any(True for w in word_tokenize(sent)
                             if w.lower() in searched_words)])
Issue:
At the moment my code extracts the sentences, but it matches both of the words, because the inner check tests membership in the whole searched_words list rather than the current word i. I've seen other similar posts but none solved my issue.
Fixed
searched_words = ['driving', 'physical']

for i in searched_words:
    df['Physical' + '_' + i] = result['PhysicalMentalDemands'].str.lower().apply(
        lambda text: [sent for sent in sent_tokenize(text)
                      if i in word_tokenize(sent)])
| 1 | 1 | 0 | 0 | 0 | 0 |
I have trained a model in word2vec and want to use Google's analogy test set to test its accuracy. I want to use COSADD, COSMUL and hopefully Euclidean distance.
To use COSADD I simply use the code:
model.wv.accuracy('questions-words.txt')
I’m not sure how to use the others. The accuracy method has the following optional parameters
accuracy(.txt file, restrict_vocab=..., most_similar=...)
where I feel like I should be able to write most_similar=COSMUL, but this does not work. :(
Does anyone know how to do the accuracy test with COSMUL or euclidean distance (or both)?
| 1 | 1 | 0 | 1 | 0 | 0 |
I want to check my loss values (MSE) during the training process. How can I fetch the loss value at each iteration? Thank you.
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error
dataset = open_dataset("forex.csv")
dataset_vector = [float(i[-1]) for i in dataset]
normalized_dataset_vector = normalize_vector(dataset_vector)
training_vector, validation_vector, testing_vector = split_dataset(training_size, validation_size, testing_size, normalized_dataset_vector)
training_features = get_features(training_vector)
training_fact = get_fact(training_vector)
validation_features = get_features(validation_vector)
validation_fact = get_fact(validation_vector)
model = MLPRegressor(activation=activation, alpha=alpha, hidden_layer_sizes=(neural_net_structure[1],), max_iter=number_of_iteration, random_state=seed)
model.fit(training_features, training_fact)
pred = model.predict(training_features)
err = mean_absolute_error(pred, validation_fact)
print(err)
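For what it's worth, scikit-learn's MLPRegressor records its training loss at every iteration in the loss_curve_ attribute after fit() (for MLPRegressor this loss is squared error, i.e. closely related to MSE). A minimal sketch on synthetic stand-in data (the real features and targets come from the CSV):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data (hypothetical, replaces the forex features).
rng = np.random.RandomState(0)
X = rng.rand(200, 4)
y = X.sum(axis=1) + 0.01 * rng.randn(200)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=50, random_state=0)
model.fit(X, y)

# loss_curve_ holds the training loss recorded at every iteration.
for i, loss in enumerate(model.loss_curve_[:5], start=1):
    print("iteration %d: loss=%.5f" % (i, loss))
```

If per-iteration *validation* MSE is needed instead, an alternative is looping over partial_fit (or warm_start with max_iter=1) and calling mean_squared_error after each step.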
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm really new to chatbots and am starting to learn this stuff using frameworks. I'm starting with the open-source framework RASA and learning about it. Then I found that the entity extraction tool spaCy is used by RASA.
Can anybody explain the actual relation between these two? What is the role of spaCy within RASA?
| 1 | 1 | 0 | 0 | 0 | 0 |
I would like to grab a piece of a string that may not be matched exactly.
for example:
str1 = 'invoice#'
str2 = 'sold to wal-mart corp invoice no 91058780'
expected output
invoice no 91058780
The valid cases here for str1
Invoice number
Invoice Num
Invoice no
Invoice#
Invoice:
inv number
I have used regex expressions, but more substrings can exist in between.
Regex I have been using is INV_regex = re.escape(str1) + r"\.?:?\s?\w+"
Some cases will need a more complicated regex to capture, and it would be impossible to cover all of them.
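As a sketch, the listed variants can be folded into one alternation; the separator set and word list below are assumptions based only on the examples above, not a complete solution:

```python
import re

# One pattern covering the listed variants ("invoice number", "inv no",
# "Invoice#", "Invoice:", ...); separators and word list are assumptions.
INV_RE = re.compile(
    r"\binv(?:oice)?\s*(?:number|num|no)?\s*[#:]?\s*(\d+)",
    re.IGNORECASE,
)

samples = [
    "sold to wal-mart corp invoice no 91058780",
    "see Invoice# 12345 attached",
    "ref: inv number 777",
]
for s in samples:
    m = INV_RE.search(s)
    print(m.group(0) if m else None)
```

Group 1 isolates the invoice number itself; group 0 returns the whole matched phrase, e.g. "invoice no 91058780".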
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm relatively new to machine learning and the TensorFlow framework. I was trying to take my trained model, heavily influenced by the code presented here and using the MNIST handwritten digit dataset, and perform inference on testing examples that I have created. However, I am doing the training on a remote machine with a GPU, so I am trying to save the model to a directory in order to transfer it and run inference on a local machine.
It seems that I was able to save some of the model with tf.saved_model.simple_save; however, I'm unsure how to use the saved data to run inference and make a prediction given a new image. There seem to be multiple ways to save a model, but I am unsure what the convention or the "correct way" is with the TensorFlow framework.
So far, this is the line that I think I would need, but am unsure if it is correct.
tf.saved_model.simple_save(sess, 'mnist_model',
inputs={'x': self.x},
outputs={'y_': self.y_, 'y_conv':self.y_conv})
If someone could point me in the direction of how to properly save a trained model and which variables to use to run inference with the saved model, I'd really appreciate it.
| 1 | 1 | 0 | 1 | 0 | 0 |
I bumped into some code which uses the modulo operator in a way that I haven't seen before. The line in question is data_index = (data_index + 1) % len(data).
I have no idea what this code is trying to do when it updates data_index:
def generate_batch(batch_size, num_skips, skip_window):
    global data_index
    assert batch_size % num_skips == 0
    assert num_skips <= 2 * skip_window
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    span = 2 * skip_window + 1  # [ skip_window target skip_window ]
    buffer = collections.deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index])
        # What is this doing?
        data_index = (data_index + 1) % len(data)
    # ... More stuff ...
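As far as I can tell, that line turns data_index into a circular cursor over data: it advances by one each step and wraps back to 0 when it reaches the end, so batch generation can keep streaming past the end of the corpus. A tiny standalone demonstration:

```python
data = ['a', 'b', 'c', 'd']

data_index = 0
visited = []
for _ in range(6):                                # read more items than len(data)
    visited.append(data[data_index])
    data_index = (data_index + 1) % len(data)     # wraps back to 0 at the end

print(visited)  # ['a', 'b', 'c', 'd', 'a', 'b']
```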
| 1 | 1 | 0 | 0 | 0 | 0 |
Trying to follow the simple Doc initialization in the docs in Python 2 doesn't work:
>>> import textacy
>>> content = '''
... The apparent symmetry between the quark and lepton families of
... the Standard Model (SM) are, at the very least, suggestive of
... a more fundamental relationship between them. In some Beyond the
... Standard Model theories, such interactions are mediated by
... leptoquarks (LQs): hypothetical color-triplet bosons with both
... lepton and baryon number and fractional electric charge.'''
>>> metadata = {
... 'title': 'A Search for 2nd-generation Leptoquarks at √s = 7 TeV',
... 'author': 'Burton DeWilde',
... 'pub_date': '2012-08-01'}
>>> doc = textacy.Doc(content, metadata=metadata)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/a/anaconda/envs/env1/lib/python2.7/site-packages/textacy/doc.py", line 120, in __init__
{compat.unicode_, SpacyDoc}, type(content)))
ValueError: `Doc` must be initialized with set([<type 'unicode'>, <type 'spacy.tokens.doc.Doc'>]) content, not "<type 'str'>"
What should that simple initialization look like for a string or a sequence of strings?
UPDATE:
Passing unicode(content) to textacy.Doc() spits out
ImportError: 'cld2-cffi' must be installed to use textacy's automatic language detection; you may do so via 'pip install cld2-cffi' or 'pip install textacy[lang]'.
which would've been nice to know from the moment textacy was installed, imo.
Even after installing cld2-cffi, attempting the code above throws:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/a/anaconda/envs/env1/lib/python2.7/site-packages/textacy/doc.py", line 114, in __init__
self._init_from_text(content, metadata, lang)
File "/Users/a/anaconda/envs/env1/lib/python2.7/site-packages/textacy/doc.py", line 136, in _init_from_text
spacy_lang = cache.load_spacy(langstr)
File "/Users/a/anaconda/envs/env1/lib/python2.7/site-packages/cachetools/__init__.py", line 46, in wrapper
v = func(*args, **kwargs)
File "/Users/a/anaconda/envs/env1/lib/python2.7/site-packages/textacy/cache.py", line 99, in load_spacy
return spacy.load(name, disable=disable)
File "/Users/a/anaconda/envs/env1/lib/python2.7/site-packages/spacy/__init__.py", line 21, in load
return util.load_model(name, **overrides)
File "/Users/a/anaconda/envs/env1/lib/python2.7/site-packages/spacy/util.py", line 120, in load_model
raise IOError("Can't find model '%s'" % name)
IOError: Can't find model 'en'
| 1 | 1 | 0 | 0 | 0 | 0 |
There is ONE word not being recognized as a stopword, despite being on the list.
I'm working with spacy 2.0.11, python 3.7, conda env, Debian 9.5
import spacy
from spacy.lang.es.stop_words import STOP_WORDS
nlp = spacy.load('es', disable=['tagger', 'parser', 'ner'])
STOP_WORDS.add('y')
Doing some tests:
>>> word = 'y'
>>> word in STOP_WORDS
True
>>> nlp(word)[0].is_stop
False
>>> len(STOP_WORDS)
305
>>> [word for word in STOP_WORDS if not nlp(word)[0].is_stop]
['y']
So, of the 305 words listed in STOP_WORDS, one is not flagged as such. I don't know what I'm doing wrong... Maybe it's a bug?
| 1 | 1 | 0 | 0 | 0 | 0 |
I have email messages in a pandas data frame. Before applying sent_tokenize, I could remove the punctuation like this.
def removePunctuation(fullCorpus):
    punctuationRemoved = fullCorpus['text'].str.replace(r'[^\w\s]+', '')
    return punctuationRemoved
After applying sent_tokenize the data frame looks like below. How can I remove the punctuation while keeping the sentences as tokenized in the lists?
sent_tokenize
def tokenizeSentences(fullCorpus):
    sent_tokenized = fullCorpus['body_text'].apply(sent_tokenize)
    return sent_tokenized
Sample of data frame after tokenizing into sentences
[Nah I don't think he goes to usf, he lives around here though]
[Even my brother is not like to speak with me., They treat me like aids patent.]
[I HAVE A DATE ON SUNDAY WITH WILL!, !]
[As per your request 'Melle Melle (Oru Minnaminunginte Nurungu Vettam)' has been set as your callertune for all Callers., Press *9 to copy your friends Callertune]
[WINNER!!, As a valued network customer you have been selected to receivea £900 prize reward!, To claim call 09061701461., Claim code KL341., Valid 12 hours only.]
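For illustration, one approach I am considering is applying the same regex inside each sentence list and dropping sentences that become empty; a sketch on plain lists (with the DataFrame this would go through .apply), using rows copied from the sample above:

```python
import re

# Stand-in for the sent_tokenized column: each row is a list of sentences.
rows = [
    ["I HAVE A DATE ON SUNDAY WITH WILL!", "!"],
    ["WINNER!!", "Valid 12 hours only."],
]

def strip_punct(sentences):
    """Remove punctuation inside each sentence; drop sentences left empty."""
    cleaned = [re.sub(r"[^\w\s]+", "", s).strip() for s in sentences]
    return [s for s in cleaned if s]

print([strip_punct(r) for r in rows])
```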
| 1 | 1 | 0 | 0 | 0 | 0 |
After tokenizing, I have a pandas data frame as shown below. I want to apply the nltk lemmatizer to this data frame. What I tried is given here. I am getting an error saying 'if form in exceptions: TypeError: unhashable type: 'list''. How can I properly implement the lemmatizer here?
Also please note that the 5th data frame cell contains an empty list. How can I remove such empty lists from this data frame?
[[ive, searching, right, words, thank, breather], [i, promise, wont, take, help, granted, fulfil, promise], [you, wonderful, blessing, times]]
[[free, entry, 2, wkly, comp, win, fa, cup, final, tkts, 21st, may, 2005], [text, fa, 87121, receive, entry, questionstd, txt, ratetcs, apply, 08452810075over18s]]
[[nah, dont, think, goes, usf, lives, around, though]]
[[even, brother, like, speak, me], [they, treat, like, aids, patent]]
[[i, date, sunday, will], []]
The lemmatizer function I tried
def lemmatize(fullCorpus):
    lemmatizer = nltk.stem.WordNetLemmatizer()
    lemmatized = fullCorpus['tokenized'].apply(
        lambda row: list(map([lemmatizer.lemmatize(y) for y in row])))
    return lemmatized
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm having an issue with v2.0.12 that I've traced into thinc. pip list shows me:
msgpack (0.5.6)
msgpack-numpy (0.4.3.1)
murmurhash (0.28.0)
regex (2017.4.5)
scikit-learn (0.19.2)
scipy (1.1.0)
spacy (2.0.12)
thinc (6.10.3)
I have code that works fine on my Mac, but fails in production. The stack trace goes into spacy and then into thinc -- and then django literally crashes. This all worked when I used an earlier version of spacy -- this has only come about since I'm attempting to upgrade to v2.0.12.
My requirements.txt file has these lines:
regex==2017.4.5
spacy==2.0.12
scikit-learn==0.19.2
scipy==1.1.0
https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz
The last line pulls the en_core_web_sm down during deployment. I'm doing this so I can get those models loaded on Heroku during deployment.
I then load the parser like this:
import en_core_web_sm
en_core_web_sm.load()
Then the stack trace shows the problem here in thinc:
File "spacy/language.py", line 352, in __call__
doc = proc(doc)
File "pipeline.pyx", line 426, in spacy.pipeline.Tagger.__call__
File "pipeline.pyx", line 438, in spacy.pipeline.Tagger.predict
File "thinc/neural/_classes/model.py", line 161, in __call__
return self.predict(x)
File "thinc/api.py", line 55, in predict
X = layer(X)
File "thinc/neural/_classes/model.py", line 161, in __call__
return self.predict(x)
File "thinc/api.py", line 293, in predict
X = layer(layer.ops.flatten(seqs_in, pad=pad))
File "thinc/neural/_classes/model.py", line 161, in __call__
return self.predict(x)
File "thinc/api.py", line 55, in predict
X = layer(X)
File "thinc/neural/_classes/model.py", line 161, in __call__
return self.predict(x)
File "thinc/neural/_classes/model.py", line 125, in predict
y, _ = self.begin_update(X)
File "thinc/api.py", line 374, in uniqued_fwd
Y_uniq, bp_Y_uniq = layer.begin_update(X_uniq, drop=drop)
File "thinc/api.py", line 61, in begin_update
X, inc_layer_grad = layer.begin_update(X, drop=drop)
File "thinc/neural/_classes/layernorm.py", line 51, in begin_update
X, backprop_child = self.child.begin_update(X, drop=0.)
File "thinc/neural/_classes/maxout.py", line 69, in begin_update
output__boc = self.ops.batch_dot(X__bi, W)
File "gunicorn/workers/base.py", line 192, in handle_abort
sys.exit(1)
Again -- this all works on my laptop.
Is there something wrong with how I'm loading? Or is my version of thinc out of date? If so, what should my requirements.txt file look like?
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a dataframe like this after some preprocessing. I want to create bigrams from each list in the dataframe rows. What I tried is given below. I get an error saying
lambda row: list((map(ngrams(2), row))))
TypeError: ngrams() missing 1 required positional argument: 'n'
What should ngrams' first parameter be here? How should I modify this code?
Also, I may be asking questions about my every function, but I am having a hard time understanding the lambda and map functions that I am using. Please explain how I should apply lambda and map functions on this dataframe in the future.
Dataframe
[[ive, searching, right, word, thank, breather], [i, promise, wont, take, help, granted, fulfil, promise], [you, wonderful, blessing, time]]
[[free, entry, 2, wkly, comp, win, fa, cup, final, tkts, 21st, may, 2005], [text, fa, 87121, receive, entry, questionstd, txt, ratetcs, apply, 08452810075over18s]]
[[nah, dont, think, go, usf, life, around, though]]
[[even, brother, like, speak, me], [they, treat, like, aid, patent]]
[[i, date, sunday, will], []]
What I need
[(even, brother), (brother,like), (like,speak), (speak,me), (they, treat), (treat,like), (like,aid), (aid,patent)]
What I tried
def toBigram(fullCorpus):
    bigram = fullCorpus['lemmatized'].apply(
        lambda row: list((map(ngrams(2), row))))
    return bigram
return bigram
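For reference, nltk.ngrams takes the sequence as its first argument, i.e. ngrams(row, 2); the same pairing can also be sketched with plain zip, shown here without any nltk dependency:

```python
# Pair each token with its successor: equivalent to list(ngrams(tokens, 2)).
def bigrams(tokens):
    return list(zip(tokens, tokens[1:]))

row = ["even", "brother", "like", "speak", "me"]
print(bigrams(row))  # [('even', 'brother'), ('brother', 'like'), ('like', 'speak'), ('speak', 'me')]
```

With the DataFrame, this would be applied per sentence list, e.g. fullCorpus['lemmatized'].apply(lambda row: [bigrams(sent) for sent in row]).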
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a pandas data frame like this. I want to concatenate the nested lists in its cells. I referred to the question Making a flat list out of list of lists in Python, which has all possible solutions for this, and chose to do it with numpy. However, I am getting this error: lambda row: [np.concatenate(x) for x in row] raises ValueError: need at least one array to concatenate. I don't have enough knowledge of Python to solve this on my own. How should I modify my method to properly concatenate these nested lists?
Data frame
[[(ive, searching), (searching, right), (right, word), (word, thank), (thank, breather)], [(i, promise), (promise, wont), (wont, take), (take, help), (help, granted), (granted, fulfil), (fulfil, promise)], [(you, wonderful), (wonderful, blessing), (blessing, time)]]
[[(free, entry), (entry, 2), (2, wkly), (wkly, comp), (comp, win), (win, fa), (fa, cup), (cup, final), (final, tkts), (tkts, 21st), (21st, may), (may, 2005)], [(text, fa), (fa, 87121), (87121, receive), (receive, entry), (entry, questionstd), (questionstd, txt), (txt, ratetcs), (ratetcs, apply), (apply, 08452810075over18s)]]
[[(nah, dont), (dont, think), (think, go), (go, usf), (usf, life), (life, around), (around, though)]]
[[(even, brother), (brother, like), (like, speak), (speak, me)], [(they, treat), (treat, like), (like, aid), (aid, patent)]]
Concatenation method
def toFlatListBigram(fullCorpus):
    flatListBigram = fullCorpus['bigrams'].apply(
        lambda row: [np.concatenate(x) for x in row])
    return flatListBigram
return flatListBigram
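A dependency-free sketch of the flattening itself with itertools.chain; the sample row, including an empty inner list like the ones that trip np.concatenate, is adapted from the data above:

```python
import itertools

# Stand-in for one cell of the 'bigrams' column: a list of sentence-level
# bigram lists (note the empty inner list).
row = [
    [("even", "brother"), ("brother", "like")],
    [],
    [("they", "treat"), ("treat", "like")],
]

flat = list(itertools.chain.from_iterable(row))
print(flat)
```

With the DataFrame this would be fullCorpus['bigrams'].apply(lambda row: list(itertools.chain.from_iterable(row))); empty inner lists simply contribute nothing.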
| 1 | 1 | 0 | 0 | 0 | 0 |
Stanford NER provides NER jars to detect POS tags and NERs, but I am facing an issue with one of the sentences when trying to parse it. The sentence is as follows:
Joseph E. Seagram & Sons, INC said on Thursday that it is merging its two United States based wine companies
Below is my code
st = StanfordNERTagger('./stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz',
                       './stanford-ner/stanford-ner.jar',
                       encoding='utf-8')

ne_in_sent = []
with open("./CCAT/2551newsML.txt") as fd:
    lines = fd.readlines()
for line in lines:
    print(line)
    tokenized_text = word_tokenize(line)
    classified_text = st.tag(tokenized_text)
    ne_tree = stanfordNE2tree(classified_text)
    for subtree in ne_tree:
        # If subtree is a noun chunk, i.e. NE != "O"
        if type(subtree) == Tree:
            ne_label = subtree.label()
            ne_string = " ".join([token for token, pos in subtree.leaves()])
            ne_in_sent.append((ne_string, ne_label))
print(ne_in_sent)
When I parse it, I get the following entities tagged as organizations:
(Joseph E. Seagram & Sons, Organization) and (Inc, Organization)
Also for some other texts in the file like
TransCo has a very big plane. Transco is moving south.
It differentiates the organizations due to capitalization, hence I get two entities: (TransCo, Organization) and (Transco, Organization).
Is it possible to convert these into one entity?
| 1 | 1 | 0 | 0 | 0 | 0 |
I want to convert the plural words in my column "Phrase" into singular words. How do I iterate over each row and each item?
my_data = [('Audi Cars', 'Vehicles'),
('Two Parrots', 'animals'),
('Tall Buildings', 'Landmark')]
test = pd.DataFrame(my_data)
test.columns = ["Phrase","Connection"]
test
I tried
test["Phrase"] = test["Phrase"].str.lower().str.split()
import inflection as inf
test["Phrase"].apply(lambda x:inf.singularize([item for item in x]))
My desired output is
Phrase: Connection:
Audi Car Vehicles
Two Parrot animals
Tall Building Landmark
Kindly note, I want to singularize only one column, Phrase.
| 1 | 1 | 0 | 0 | 0 | 0 |
I want to use a RegexParser to chunk all consecutive overlapping nouns from a text, for example, I have the following tagged text:
[('APPLE', 'NN'), ('BANANA', 'NN'), ('GRAPE', 'NN'), ('PEAR', 'NN')]
I want to extract:
['APPLE BANANA', 'BANANA GRAPE', 'GRAPE PEAR']
I tried using the following grammar to avoid consuming the matched consecutive noun but it doesn't work:
"CONSEC_NOUNS: {(?=(<NN>{2}))}"
Is there any possible way to do that?
EDIT: code
import nltk

extract = []
grammar = "CONSEC_NOUNS: {(?=(<NN>{2}))}"
cp = nltk.RegexpParser(grammar)
result = cp.parse([('APPLE', 'NN'), ('BANANA', 'NN'), ('GRAPE', 'NN'), ('PEAR', 'NN')])
for elem in result:
    if type(elem) == nltk.tree.Tree:
        extract.append(' '.join([pair[0] for pair in elem.leaves()]))
>>> print(extract)
[]
# but I want to get ['APPLE BANANA', 'BANANA GRAPE', 'GRAPE PEAR']
| 1 | 1 | 0 | 0 | 0 | 0 |
So basically I have created an A.I assistant and was wondering if anyone had suggestions on how to make visuals that react with the sound?
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm trying use del.icio.us API and following the examples from the book Programming Collective Intelligence
When I use these commands in python 3.6.2:
>>> from deliciousrec import *
>>> delusers=initializeUserDict('programming')
I get this error:
<urlopen error [Errno 11001] getaddrinfo failed>, 4 tries left.
<urlopen error [Errno 11001] getaddrinfo failed>, 3 tries left.
<urlopen error [Errno 11001] getaddrinfo failed>, 2 tries left.
<urlopen error [Errno 11001] getaddrinfo failed>, 1 tries left.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File"C:\Users\user\AppData\Local\Programs\Python\Python36\deliciousrec.py", line 10, in initializeUserDict
for p1 in get_popular(tag=tag)[0:count]:
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\pydelicious-0.6-py3.6.egg\pydelicious\__init__.py", line 1042, in get_popular
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\pydelicious-0.6-py3.6.egg\pydelicious\__init__.py", line 1026, in getrss
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\pydelicious-0.6-py3.6.egg\pydelicious\__init__.py", line 455, in dlcs_rss_request
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\pydelicious-0.6-py3.6.egg\pydelicious\__init__.py", line 239, in http_request
UnboundLocalError: local variable 'e' referenced before assignment
I cannot open pydelicious-0.6-py3.6.egg to access the __init__ file that is asked to be modified here.
Has anyone seen this type of error before? How do I solve it?
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a data frame like the one below
lable unigrams
ham [ive, searching, right, word, thank, breather, i, promise, wont]
spam [free, entry, 2, wkly, comp, win, fa, cup, final, tkts, 21st, may]
I want to count the distinct/ unique ham unigrams and distinct spam unigrams.
I can count the distinct values in a column using df.unigrams.nunique().
I can count the number of occurrences of a given unigram in ham using unigramCount = unigramCorpus.loc["ham", "unigrams"].count('ive')
But how can I count the number of distinct values for each label in a given list, e.g. ["ham", "spam"]?
Expected output:
ham = 9
spam = 12
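A sketch of the counting itself with plain Python sets; the inline lists stand in for unigramCorpus.loc[label, 'unigrams'] and are copied from the sample rows above:

```python
# Inline stand-ins for the 'unigrams' column, keyed by label.
unigrams = {
    "ham": ["ive", "searching", "right", "word", "thank", "breather",
            "i", "promise", "wont"],
    "spam": ["free", "entry", "2", "wkly", "comp", "win", "fa", "cup",
             "final", "tkts", "21st", "may"],
}

# len(set(...)) counts distinct tokens per label.
distinct = {label: len(set(tokens)) for label, tokens in unigrams.items()}
print(distinct)  # {'ham': 9, 'spam': 12}
```

With the real DataFrame, the same idea would be unigramCorpus['unigrams'].apply(lambda toks: len(set(toks))).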
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to serialize/deserialize spaCy documents (setup is Windows 7, Anaconda) and am getting errors. I haven't been able to find any explanations. Here is a snippet of code and the error it generates:
import spacy
nlp = spacy.load('en')
text = 'This is a test.'
doc = nlp(text)
fout = 'test.spacy' # <-- according to the API for Doc.to_disk(), this needs to be a directory (but for me, spaCy writes a file)
doc.to_disk(fout)
doc.from_disk(fout)
Traceback (most recent call last):
File "<ipython-input-7-aa22bf1b9689>", line 1, in <module>
doc.from_disk(fout)
File "doc.pyx", line 763, in spacy.tokens.doc.Doc.from_disk
File "doc.pyx", line 806, in spacy.tokens.doc.Doc.from_bytes
ValueError: [E033] Cannot load into non-empty Doc of length 5.
I have also tried creating a new Doc object and loading from that, as shown in the example ("Example: Saving and loading a document") in the spaCy docs, which results in a different error:
from spacy.tokens import Doc
from spacy.vocab import Vocab
new_doc = Doc(Vocab()).from_disk(fout)
Traceback (most recent call last):
File "<ipython-input-16-4d99a1199f43>", line 1, in <module>
Doc(Vocab()).from_disk(fout)
File "doc.pyx", line 763, in spacy.tokens.doc.Doc.from_disk
File "doc.pyx", line 838, in spacy.tokens.doc.Doc.from_bytes
File "stringsource", line 646, in View.MemoryView.memoryview_cwrapper
File "stringsource", line 347, in View.MemoryView.memoryview.__cinit__
ValueError: buffer source array is read-only
EDIT:
As pointed out in the replies, the path provided should be a directory. However, the first code snippet creates a file. Changing this to a non-existing directory path doesn't help as spaCy still creates a file. Attempting to write to an existing directory causes an error too:
fout = 'data'
doc.to_disk(fout)

Traceback (most recent call last):
File "<ipython-input-8-6c30638f4750>", line 1, in <module>
doc.to_disk(fout)
File "doc.pyx", line 749, in spacy.tokens.doc.Doc.to_disk
File "C:\Users\Username\AppData\Local\Continuum\anaconda3\lib\pathlib.py", line 1161, in open
opener=self._opener)
File "C:\Users\Username\AppData\Local\Continuum\anaconda3\lib\pathlib.py", line 1015, in _opener
return self._accessor.open(self, flags, mode)
File "C:\Users\Username\AppData\Local\Continuum\anaconda3\lib\pathlib.py", line 387, in wrapped
return strfunc(str(pathobj), *args)
PermissionError: [Errno 13] Permission denied: 'data'
Python has no problem writing at this location via standard file operations (open/read/write).
Trying with a Path object yields the same results:
from pathlib import Path
import os
fout = Path(os.path.join(os.getcwd(), 'data'))
doc.to_disk(fout)
Traceback (most recent call last):
File "<ipython-input-17-6c30638f4750>", line 1, in <module>
doc.to_disk(fout)
File "doc.pyx", line 749, in spacy.tokens.doc.Doc.to_disk
File "C:\Users\Username\AppData\Local\Continuum\anaconda3\lib\pathlib.py", line 1161, in open
opener=self._opener)
File "C:\Users\Username\AppData\Local\Continuum\anaconda3\lib\pathlib.py", line 1015, in _opener
return self._accessor.open(self, flags, mode)
File "C:\Users\Username\AppData\Local\Continuum\anaconda3\lib\pathlib.py", line 387, in wrapped
return strfunc(str(pathobj), *args)
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\Username\\workspace\\data'
Any ideas why this might be happening?
| 1 | 1 | 0 | 0 | 0 | 0 |
I need to lemmatize text using nltk. In order to do this, I apply nltk.pos_tag to each sentence and then convert the resulting Penn Treebank tags (http://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html) to WordNet tags. I need to do this because WordNetLemmatizer.lemmatize() expects both the word and its correct pos_tag as arguments, otherwise it will just assume everything is a verb.
I just found that there are five different tags defined in WordNet:
wn.VERB
wn.ADV
wn.NOUN
wn.ADJ
wn.ADJ_SAT
However, every example I found on the internet just ignores wn.ADJ_SAT when converting Treebank tags to WordNet tags. They are all just mapping Penn tags to WordNet tags like this:
If Penn tag starts with J: convert to wn.ADJ
If Penn tag starts with V: convert to wn.VERB
If Penn tag starts with N: convert to wn.NOUN
If Penn tag starts with R: convert to wn.ADV
So wn.ADJ_SAT is never used.
My question now is whether there are cases where the lemmatizer returns a different result for ADJ_SAT than for ADJ. What are examples of words that are satellite adjectives (ADJ_SAT) but not normal adjectives (ADJ)?
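For context, the conventional mapping can be sketched without NLTK at all, since WordNet's POS constants are just single letters (ADJ='a', ADJ_SAT='s', ADV='r', NOUN='n', VERB='v'); note that 's' is never produced:

```python
# Conventional Penn-Treebank-to-WordNet mapping; wn.ADJ_SAT ('s') is
# deliberately never returned, which is exactly what the question is about.
def penn_to_wordnet(penn_tag, default='n'):
    mapping = {'J': 'a', 'V': 'v', 'N': 'n', 'R': 'r'}
    return mapping.get(penn_tag[0], default)

print(penn_to_wordnet('JJ'))   # → a  (adjective)
print(penn_to_wordnet('VBD'))  # → v  (verb)
print(penn_to_wordnet('XYZ'))  # → n  (fallback)
```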
| 1 | 1 | 0 | 0 | 0 | 0 |
My model is defined as such:
model = keras.models.Sequential()
model.add(layers.Embedding(max_features, 128, input_length=max_len,
input_shape=(max_len,), name='embed'))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.MaxPooling1D(5))
model.add(layers.Conv1D(32, 7, activation='relu'))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(1))
and when I use the plot_model function to draw it out:
from keras.utils import plot_model
plot_model(model, show_shapes=True, to_file='model.png')
The drawing I get is
Where the input layer is shown as a series of numbers. Does anybody know how to make it show the input shape properly?
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a string:
"2
Our
strategy drives
sustainably higher profits and margins
Strengthening our hubs is a critical foundation to maximize profitability
Driving revenue improvements from all areas of business
Improving efficiency and productivity
Greater accountability and transparency
"
The output should be:
"2 Our strategy drives sustainably higher profits and margins
Strengthening our hubs is a critical foundation to maximize profitability
Driving revenue improvements from all areas of business
Improving efficiency and productivity
Greater accountability and transparency "
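One heuristic sketch that produces this output; the merge rule (a line with fewer than five words is a fragment that continues onto the next line) is an assumption, since the question does not state one:

```python
def merge_fragments(text, min_words=5):
    """Append each line to the previous one while the previous line is
    still a short fragment (fewer than min_words words)."""
    merged = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if merged and len(merged[-1].split()) < min_words:
            merged[-1] += " " + line
        else:
            merged.append(line)
    return "\n".join(merged)

src = ("2\nOur\nstrategy drives\nsustainably higher profits and margins\n"
       "Strengthening our hubs is a critical foundation to maximize profitability")
print(merge_fragments(src))
# → 2 Our strategy drives sustainably higher profits and margins
#   Strengthening our hubs is a critical foundation to maximize profitability
```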
| 1 | 1 | 0 | 0 | 0 | 0 |
I have been struggling with an issue for a few days now and I cannot understand what's going on. I have developed a seq2seq model; in one function I create some TensorFlow operations and variables and then return them to the caller. I would like that function to reuse all the variables, but no matter what I do with scopes I do not seem to get it right. Below is the function:
def create_complete_cell(rnn_size,num_layers,encoder_outputs_tr,batch_size,encoder_state , beam_width ):
with tf.variable_scope("InnerScope" , reuse=tf.AUTO_REUSE):
encoder_outputs_tr =tf.contrib.seq2seq.tile_batch(encoder_outputs_tr, multiplier=beam_width)
encoder_state = tf.contrib.seq2seq.tile_batch(encoder_state, multiplier=beam_width)
batch_size = batch_size * beam_width
dec_cell = tf.contrib.rnn.MultiRNNCell([create_cell(rnn_size) for _ in range(num_layers)])
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(num_units=rnn_size, memory=encoder_outputs_tr )
attn_cell = tf.contrib.seq2seq.AttentionWrapper(dec_cell, attention_mechanism , attention_layer_size=rnn_size , output_attention=False)
attn_zero = attn_cell.zero_state(batch_size , tf.float32 )
attn_zero = attn_zero.clone(cell_state = encoder_state)
return attn_zero , attn_cell
and below is the code calling the above function :
with tf.variable_scope('scope' ):
intial_train_state , train_cell = create_complete_cell(rnn_size,num_layers,encoder_outputs_tr,batch_size,encoder_state , 1 )
with tf.variable_scope('scope' ,reuse=True):
intial_infer_state , infer_cell = create_complete_cell(rnn_size,num_layers,encoder_outputs_tr,batch_size,encoder_state , beam_width )
print("intial_train_state" , intial_train_state)
print("intial_infer_state" , intial_infer_state)
the print outputs the below :
first print command outputs:
('intial_train_state', AttentionWrapperState(cell_state=(LSTMStateTuple(c=<tf.Tensor 'scope/InnerScope/tile_batch_1/Reshape:0' shape=(?, 512) dtype=float32>, h=<tf.Tensor 'scope/InnerScope/tile_batch_1/Reshape_1:0' shape=(?, 512) dtype=float32>), LSTMStateTuple(c=<tf.Tensor 'scope/InnerScope/tile_batch_1/Reshape_2:0' shape=(?, 512) dtype=float32>, h=<tf.Tensor 'scope/InnerScope/tile_batch_1/Reshape_3:0' shape=(?, 512) dtype=float32>), LSTMStateTuple(c=<tf.Tensor 'scope/InnerScope/tile_batch_1/Reshape_4:0' shape=(?, 512) dtype=float32>, h=<tf.Tensor 'scope/InnerScope/tile_batch_1/Reshape_5:0' shape=(?, 512) dtype=float32>), LSTMStateTuple(c=<tf.Tensor 'scope/InnerScope/tile_batch_1/Reshape_6:0' shape=(?, 512) dtype=float32>, h=<tf.Tensor 'scope/InnerScope/tile_batch_1/Reshape_7:0' shape=(?, 512) dtype=float32>)), attention=<tf.Tensor 'scope/InnerScope/AttentionWrapperZeroState/zeros_1:0' shape=(100, 512) dtype=float32>, time=<tf.Tensor 'scope/InnerScope/AttentionWrapperZeroState/zeros:0' shape=() dtype=int32>, alignments=<tf.Tensor 'scope/InnerScope/AttentionWrapperZeroState/zeros_2:0' shape=(100, ?) dtype=float32>, alignment_history=()))
and the second print commands outputs :
('intial_infer_state', AttentionWrapperState(cell_state=(LSTMStateTuple(c=<tf.Tensor 'scope_1/InnerScope/tile_batch_1/Reshape:0' shape=(?, 512) dtype=float32>, h=<tf.Tensor 'scope_1/InnerScope/tile_batch_1/Reshape_1:0' shape=(?, 512) dtype=float32>), LSTMStateTuple(c=<tf.Tensor 'scope_1/InnerScope/tile_batch_1/Reshape_2:0' shape=(?, 512) dtype=float32>, h=<tf.Tensor 'scope_1/InnerScope/tile_batch_1/Reshape_3:0' shape=(?, 512) dtype=float32>), LSTMStateTuple(c=<tf.Tensor 'scope_1/InnerScope/tile_batch_1/Reshape_4:0' shape=(?, 512) dtype=float32>, h=<tf.Tensor 'scope_1/InnerScope/tile_batch_1/Reshape_5:0' shape=(?, 512) dtype=float32>), LSTMStateTuple(c=<tf.Tensor 'scope_1/InnerScope/tile_batch_1/Reshape_6:0' shape=(?, 512) dtype=float32>, h=<tf.Tensor 'scope_1/InnerScope/tile_batch_1/Reshape_7:0' shape=(?, 512) dtype=float32>)), attention=<tf.Tensor 'scope_1/InnerScope/AttentionWrapperZeroState/zeros_1:0' shape=(300, 512) dtype=float32>, time=<tf.Tensor 'scope_1/InnerScope/AttentionWrapperZeroState/zeros:0' shape=() dtype=int32>, alignments=<tf.Tensor 'scope_1/InnerScope/AttentionWrapperZeroState/zeros_2:0' shape=(300, ?) dtype=float32>, alignment_history=()))
I was expecting that both outputs would be the same since I'm reusing the variables, but as you can see, the first variable's output, for example, contains something like this:
scope/InnerScope/tile_batch_1/Reshape_1:0
and in the second variable
scope_1/InnerScope/tile_batch_1/Reshape_1:0
I do not know why _1 is added to the scope in the second call, and I'm a bit confused about whether the variable is being shared or not; and if not, what should I do to return the same (shared) variable?
Thank you.
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm looking for a Python package that will take a list of words and then search for those words inside a text.
I tried using FlashText (http://flashtext.readthedocs.io/en/latest/)
So I built a class that adds keywords from a file with the code: keyword_processor.add_keyword(word)
And then searches for keywords in a text with the code: keyword_processor.extract_keywords(text)
But I'm also getting partial words, for example I have a "keyword" (in Hebrew): גיל
And a sentence: האישה בגילה הלכה לפארק
The word "בגילה" comes up as a found keyword because it contains גיל inside of it, so it is not good for me...
Does anyone here have experience with a different Python package that does what I described here and will not return "partial keywords"?
Ideally it should be as fast as FlashText, which in my tests was very fast.
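If no package fits, plain re may already handle this case, since Python 3's \w matches Hebrew letters; a sketch that forbids a word character on either side of the keyword, so a keyword embedded inside a longer Hebrew word is rejected:

```python
import re

keywords = ["גיל"]
# (?<!\w) and (?!\w) require that no word character borders the match,
# so גיל inside בגילה is not reported.
pattern = re.compile(r"(?<!\w)(?:" + "|".join(map(re.escape, keywords)) + r")(?!\w)")

print(pattern.findall("האישה בגילה הלכה לפארק"))  # → []  (no false hit inside בגילה)
print(pattern.findall("גיל הוא מספר"))            # → ['גיל']
```

Compiling one alternation over the whole keyword list keeps it to a single pass over the text, though FlashText's trie will likely still be faster for very large keyword lists.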
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm starting to program with NLTK in Python for natural Italian language processing. I've seen some simple examples of the WordNet library, which has a nice set of synsets that lets you navigate from a word (for example: "dog") to its synonyms and antonyms, its hyponyms and hypernyms, and so on...
My question is:
If I start with an Italian word (for example: "cane", which means "dog"), is there a way to navigate between synonyms, antonyms, hyponyms... for the Italian word as you do for the English one? Or is there an equivalent to WordNet for the Italian language?
Thanks in advance
| 1 | 1 | 0 | 0 | 0 | 0 |
Given a word2vec model (built with gensim), I want to get the rank similarity between two words.
For example, let's say I have the word "desk" and the most similar words to "desk" are:
table 0.64
chair 0.61
book 0.59
pencil 0.52
I want to create a function such that:
f(desk,book) = 3
Since book is the 3rd most similar word to desk.
Does it exist? What is the most efficient way to do this?
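I have not found a built-in function for this, but a small wrapper over most_similar gives the rank. The sketch below uses a stand-in object instead of a real model, so the only thing assumed from gensim's API is the most_similar(word, topn=N) call returning (word, score) pairs sorted by similarity:

```python
class FakeModel:
    """Stand-in for gensim's KeyedVectors: most_similar returns
    [(word, score), ...] sorted by decreasing similarity."""
    def most_similar(self, word, topn=10):
        return [("table", 0.64), ("chair", 0.61),
                ("book", 0.59), ("pencil", 0.52)][:topn]

def similarity_rank(model, word, target, topn=100):
    for rank, (w, _score) in enumerate(model.most_similar(word, topn=topn), start=1):
        if w == target:
            return rank
    return None  # target not among the topn nearest neighbours

print(similarity_rank(FakeModel(), "desk", "book"))  # → 3
```

Note that most_similar recomputes similarities against the whole vocabulary, so if this is called many times for the same query word it is worth caching the neighbour list.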
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to train and build a tokenizer using Keras, and here is the snippet of code where I am doing this:
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
txt1="""What makes this problem difficult is that the sequences can vary in length,
be comprised of a very large vocabulary of input symbols and may require the model
to learn the long term context or dependencies between symbols in the input sequence."""
#txt1 is used for fitting
tk = Tokenizer(nb_words=2000, lower=True, split=" ",char_level=False)
tk.fit_on_texts(txt1)
#convert text to sequence
t= tk.texts_to_sequences(txt1)
#padding to feed the sequence to keras model
t=pad_sequences(t, maxlen=10)
Upon testing which words the Tokenizer has learned, it appears that it has only learned characters, not words.
print(tk.word_index)
output:
{'e': 1, 't': 2, 'n': 3, 'a': 4, 's': 5, 'o': 6, 'i': 7, 'r': 8, 'l': 9, 'h': 10, 'm': 11, 'c': 12, 'u': 13, 'b': 14, 'd': 15, 'y': 16, 'p': 17, 'f': 18, 'q': 19, 'v': 20, 'g': 21, 'w': 22, 'k': 23, 'x': 24}
Why does it not have any words?
Furthermore, if I print t, it clearly shows that words are ignored and everything is tokenized char by char:
print(t)
Output:
[[ 0 0 0 ... 0 0 22]
[ 0 0 0 ... 0 0 10]
[ 0 0 0 ... 0 0 4]
...
[ 0 0 0 ... 0 0 12]
[ 0 0 0 ... 0 0 1]
[ 0 0 0 ... 0 0 0]]
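The per-character behaviour is likely because fit_on_texts and texts_to_sequences iterate over their argument, and iterating a bare string yields characters. A sketch of the underlying Python behaviour (no Keras needed):

```python
txt1 = "What makes this problem difficult"

# Iterating the string itself yields single characters...
print(list(txt1)[:4])   # → ['W', 'h', 'a', 't']
# ...while iterating a one-element list yields the whole text,
# which is what the Tokenizer expects:
print(list([txt1]))     # → ['What makes this problem difficult']
# So the likely fix is tk.fit_on_texts([txt1]) and tk.texts_to_sequences([txt1]).
```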
| 1 | 1 | 0 | 0 | 0 | 0 |
Any idea if there are any more POS taggers for Latin, apart from CLTK, available for Python or any other language? I have tried the CLTK POS taggers, but they are not giving me very accurate results on my corpus.
| 1 | 1 | 0 | 0 | 0 | 0 |
I have the following code to split each paragraph of a docx file and append it to a list, but I need to identify the page breaks within the XML tree structure and create a list of text for each page. Happy to provide the exact namespaces if it'd be helpful:
xml_content = document.read('word/document.xml')
tree = XML(xml_content)
aggText = []
#tree.getiterator method looks at previously defined word namespaces
for paragraph in tree.getiterator(PARA):
texts = [node.text
for node in paragraph.getiterator(TEXT)
if node.text]
if texts:
aggText.append(''.join(texts))
I'm imagining that the updated loop will look something like the below, but I am unsure about locating the page break within the XML tree structure:
aggText = []
for paragraph in tree.getiterator(PARA):
texts = [node.text
for node in paragraph.getiterator(TEXT)
if node.text]
#page breaks in xml read 'w:lastRenderedPageBreak'
#below doesn't work, need a way to search raw xml for the page break identifier
if texts.count(lastRenderedPageBreak) > 0:
pages = aggText.append(''.join(texts))
texts = []
Any ideas would be greatly appreciated!
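In case it helps make the idea concrete: the marker can be located with a namespaced find on each paragraph element. The namespace URI below is the standard WordprocessingML one, but the whole snippet is a sketch built on a toy document rather than an actual docx file:

```python
from xml.etree.ElementTree import XML

W = '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}'
doc = XML(
    '<w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">'
    '<w:p><w:r><w:t>page one text</w:t></w:r></w:p>'
    '<w:p><w:r><w:lastRenderedPageBreak/><w:t>page two text</w:t></w:r></w:p>'
    '</w:document>'
)

pages, current = [], []
for paragraph in doc.iter(W + 'p'):
    # a paragraph containing the marker starts a new page
    if paragraph.find('.//' + W + 'lastRenderedPageBreak') is not None:
        pages.append(' '.join(current))
        current = []
    texts = [n.text for n in paragraph.iter(W + 't') if n.text]
    if texts:
        current.append(''.join(texts))
pages.append(' '.join(current))  # flush the last page
print(pages)  # → ['page one text', 'page two text']
```

One caveat: lastRenderedPageBreak records where Word last rendered the breaks, so it may be stale if the file was edited without being reopened in Word.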
| 1 | 1 | 0 | 0 | 0 | 0 |
I am working on NLP with Python and my next step is to gather a very large amount of data on specific topics in English.
For example : all words that can define a "Department" say "Accounts".
So can anyone tell me how I can gather such data (if possible, through an API)?
| 1 | 1 | 0 | 1 | 0 | 0 |
I have a script for getting Italian synonyms from WordNet, like this:
from nltk.corpus import wordnet as wn
it_lemmas = wn.lemmas("problema", lang="ita")
hypernyms = it_lemmas[0].synset().hypernyms()
print(hypernyms[0].lemmas(lang="ita"))
When I do the looping, I get the message that list indices must be integers or slices, not Lemma.
How should I do the looping to get not just one value ([0]) but all the values in this list (the amount can differ), and print them all?
| 1 | 1 | 0 | 0 | 0 | 0 |
If I process the sentence
'Return target card to your hand'
with spaCy and the en_core_web_lg model, it recognizes the tokens as below:
Return NOUN target NOUN card NOUN to ADP your ADJ hand NOUN
How can I force 'Return' to be tagged as a VERB? And how can I do it before the parser, so that the parser can better interpret relations between tokens?
There are other situations in which this would be useful. I am dealing with text which contains specific symbols such as {G}. These three characters should be considered a NOUN, as a whole, and {T} should be a VERB. But right now I do not know how to achieve that without developing a new model for tokenizing and for tagging. If I could "force" a token, I could replace these symbols with something that would be recognized as one token and force it to be tagged appropriately. For example, I could replace {G} with SYMBOLG and force tagging SYMBOLG as NOUN.
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm not a data scientist, but I'm on a project where I need to do aspect-based sentiment analysis. I've already built a classifier for sentiment analysis, but now I need to do the "aspect based" part.
I have a list of aspects (4) and I need to find each aspect in a text, get all its dependencies, and analyse the sentiment of that group of words.
The cake had good taste but the tea wasn't good at all
"The cake had good taste" = POS / "the tea wasn't good at all" = NEG
I've already explored the Stanford CoreNLP dependency parser, but in French (because I have to do this in French) it's not so good (maybe I need to keep only nouns and adjectives for the parsing).
If you've any suggestions...
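To make the goal concrete, here is a toy sketch of the clause-splitting idea; the aspect list, the sentiment lexicon, and the single contrastive conjunction are all invented for illustration, and a real system would use a parser and a proper French lexicon instead:

```python
aspects = {"cake", "tea"}                                # known aspect terms (toy)
positive, negative = {"good"}, {"wasn't", "not", "bad"}  # toy sentiment lexicon

sentence = "The cake had good taste but the tea wasn't good at all"
results = []
# split on the contrastive conjunction, then score each clause separately
for clause in sentence.split(" but "):
    words = set(clause.lower().split())
    found = aspects & words
    label = "NEG" if words & negative else ("POS" if words & positive else "NEU")
    results.append((found, label))

print(results)  # → [({'cake'}, 'POS'), ({'tea'}, 'NEG')]
```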
| 1 | 1 | 0 | 0 | 0 | 0 |
I have just started learning about word embeddings and gensim, and I tried this code.
In this article, during the visualisation, it says we need PCA to convert high-dimensional vectors into low dimensions. Now there is a parameter "size" in the Word2Vec method, so why can't we set that size equal to 2 rather than using PCA?
So I tried this and compared both graphs (one with size 100 and the other with size 2) and got very different results. Now I am confused about what this "size" depicts. How does the size of the vectors affect this?
This is what I got when I used 100 as size.
This is what I got when I used 2 as size.
| 1 | 1 | 0 | 0 | 0 | 0 |
I am using NLTK to analyze a nested list of numbers. Each sublist is independent from the others, so I used the from_documents method. However, unlike the from_words method, from_documents does not have a window size input. I want to expand the window size so that it matches each document's size. My code so far:
split_list = [[6, 3, 7, 8, 7, 5, 8, 8, 8, 3, 2, 1, 4],
[5, 7, 8, 1, 8, 10, 3, 5, 5, 6, 8, 8, 5],
[8, 9, 1, 2, 3, 8, 6, 3, 11],...]
bigram_measures = nltk.collocations.BigramAssocMeasures()
finder = BigramCollocationFinder.from_documents(split_list)
finder.score_ngrams(bigram_measures.pmi)
output:
[((10, 4), 2.6544750245287965),
((1, 4), 2.270073203392851),
((2, 1), 1.6606985694144463),
((10, 10), 1.3898880959117932),
((4, 1), 1.2139301253553185),...]
But this only solves for bigrams with a window size of 2, when I want all possible bigrams from a document (e.g. window size = document size). I could go through and calculate everything manually using itertools.combinations to make all the combinations of bigrams, calculate their frequency, and use the non-iterated frequency of the unigrams to eventually get the pmi. However, this seems like a very roundabout way. Is there any way I could get NLTK to expand the window size?
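For reference, the manual computation described above is actually quite small. A sketch with a document-sized window, where every unordered pair inside a document counts as one co-occurrence (the toy data and the probability estimates are my own assumptions):

```python
from collections import Counter
from itertools import combinations
from math import log2

docs = [[6, 3, 7], [3, 6, 8]]  # toy data standing in for split_list

unigrams = Counter(w for doc in docs for w in doc)
pairs = Counter(p for doc in docs for p in combinations(doc, 2))

n_words = sum(unigrams.values())
n_pairs = sum(pairs.values())

def pmi(a, b):
    # PMI with P(a, b) estimated over all within-document ordered pairs
    p_ab = pairs[(a, b)] / n_pairs
    return log2(p_ab / ((unigrams[a] / n_words) * (unigrams[b] / n_words)))

print(round(pmi(6, 3), 3))  # → 0.585
```

Note this keeps pairs in document order, so (6, 3) and (3, 6) are counted separately; summing both counters would give an order-free variant.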
| 1 | 1 | 0 | 0 | 0 | 0 |
For example we have following text:
"Spark is a framework for writing fast, distributed programs. Spark
solves similar problems as Hadoop MapReduce does but with a fast
in-memory approach and a clean functional style API. ..."
I need all possible sections of this text: one word at a time, then two by two, three by three, up to five by five,
like this:
ones : ['Spark', 'is', 'a', 'framework', 'for', 'writing, 'fast',
'distributed', 'programs', ...]
twos : ['Spark is', 'is a', 'a framework', 'framework for', 'for writing'
...]
threes : ['Spark is a', 'is a framework', 'a framework for',
'framework for writing', 'for writing fast', ...]
. . .
fives : ['Spark is a framework for', 'is a framework for writing',
'a framework for writing fast','framework for writing fast distributed', ...]
Please note that the text to be processed is huge (about 100 GB).
I need the best solution for this process. Maybe it should be processed in parallel with multiple threads.
I don't need the whole list at once; it can be streamed.
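One streaming sketch: a sliding window over a token iterator holds only n tokens in memory at a time, so the input can be read lazily (e.g. from a file) and the n-grams consumed as a generator rather than materialised:

```python
from collections import deque

def ngrams_stream(words, n):
    """Yield n-grams one at a time from any iterable of tokens,
    keeping at most n tokens in memory."""
    window = deque(maxlen=n)
    for w in words:
        window.append(w)
        if len(window) == n:
            yield ' '.join(window)

text = "Spark is a framework for writing fast distributed programs"
print(list(ngrams_stream(text.split(), 2))[:3])
# → ['Spark is', 'is a', 'a framework']
```

For a 100 GB input, running this for n = 1..5 in one pass (five deques over the same token stream) avoids re-reading the file, and parallelism can then be added by sharding the input into chunks with n-1 tokens of overlap.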
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to use NLTK to parse Russian text, but it does not work on abbreviations and initials like А. И. Манташева and Я. Вышинский.
Instead, it breaks like below:
организовывал забастовки и демонстрации, поднимал рабочих на бакинских предприятиях А.
И.
Манташева.
It did the same when I used russian.pickle from https://github.com/mhq/train_punkt.
Is this a general NLTK limitation or language-specific?
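One avenue worth trying is Punkt's abbreviation list: the tokenizer can be given known abbreviations up front so it does not treat them as sentence ends. A sketch (the Russian text is just the example from the question; whether this fully fixes a given corpus is untested here):

```python
from nltk.tokenize.punkt import PunktParameters, PunktSentenceTokenizer

# Declare the initials as known abbreviations; Punkt stores them
# lowercased and without the trailing period.
params = PunktParameters()
params.abbrev_types = {'а', 'и', 'я'}
tokenizer = PunktSentenceTokenizer(params)

text = "поднимал рабочих на бакинских предприятиях А. И. Манташева. Дальше новый текст."
print(tokenizer.tokenize(text))
```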
| 1 | 1 | 0 | 0 | 0 | 0 |
I hope to make an electronics project where I connect electronics with TensorFlow, and I decided to use a Raspberry Pi 3 B+. I previously used Arduino. On the Raspberry Pi, GPIO is for electronics; is it possible for me to connect GPIO with TensorFlow by using "import tensorflow as tf"?
| 1 | 1 | 0 | 0 | 0 | 0 |
I am kind of new to deep learning and I have been trying to create a simple sentiment analyzer using deep learning methods for natural language processing and using the Reuters dataset. Here is my code:
import numpy as np
from keras.datasets import reuters
from keras.preprocessing.text import Tokenizer
from keras.models import Sequential
from keras.layers import Dense, Dropout, GRU
from keras.utils import np_utils
max_length=3000
vocab_size=100000
epochs=10
batch_size=32
validation_split=0.2
(x_train, y_train), (x_test, y_test) = reuters.load_data(path="reuters.npz",
num_words=vocab_size,
skip_top=5,
maxlen=None,
test_split=0.2,
seed=113,
start_char=1,
oov_char=2,
index_from=3)
tokenizer = Tokenizer(num_words=max_length)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
y_train = np_utils.to_categorical(y_train, 50)
y_test = np_utils.to_categorical(y_test, 50)
model = Sequential()
model.add(GRU(50, input_shape = (49,1), return_sequences = True))
model.add(Dropout(0.2))
model.add(Dense(256, input_shape=(max_length,), activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(50, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model.summary()
history = model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, validation_split=validation_split)
score = model.evaluate(x_test, y_test)
print('Test Accuracy:', round(score[1]*100,2))
What I do not understand is why, every time I try to use a GRU or LSTM cell instead of a Dense one, I get this error:
ValueError: Error when checking input: expected gru_1_input to have 3
dimensions, but got array with shape (8982, 3000)
I have seen online that adding return_sequences = True could solve the issue, but as you can see, the issue remains in my case.
What should I do in this case?
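The shape in the error hints at the mismatch: sequences_to_matrix returns a 2-D array (samples, features), while recurrent layers want 3-D input (samples, timesteps, features). One hedged fix, sketched below with a tiny stand-in array, is to add a trailing feature axis and make the GRU's input_shape match it, e.g. input_shape=(3000, 1) rather than (49, 1):

```python
import numpy as np

# Tiny stand-in for the real (8982, 3000) matrix from the error message
x_train = np.zeros((8, 3000))
x_train = np.expand_dims(x_train, -1)  # add a feature axis per timestep
print(x_train.shape)  # → (8, 3000, 1)
```

That said, feeding a binary bag-of-words matrix into a GRU treats an unordered feature vector as a sequence; an Embedding layer over padded word-index sequences is the more usual front end for recurrent layers.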
| 1 | 1 | 0 | 0 | 0 | 0 |
I have seen mlpy.dtw_std(x, y, dist_only=True) but that seems to support only 1D-DTW.
I've also tried to use R:
def getDTW(A, B):
""" Calculate the distance of A and B by greedy dynamic time warping.
@param list A list of points
@param list B list of points
@return float Minimal distance you have to move points from A to get B
>>> '%.2f' % getDTW([{'x': 0, 'y': 0}, {'x': 1, 'y': 1}], \
[{'x': 0, 'y': 0}, {'x': 0, 'y': 5}])
'4.12'
>>> '%.2f' % getDTW([{'x': 0, 'y': 0}, {'x':0, 'y': 10}, \
{'x': 1, 'y': 22}, {'x': 2, 'y': 2}], \
[{'x': 0, 'y': 0}, {'x': 0, 'y': 5}])
'30.63'
>>> '%.2f' % getDTW( [{'x': 0, 'y': 0}, {'x': 0, 'y': 5}], \
[{'x': 0, 'y': 0}, {'x':0, 'y': 10}, \
{'x': 1, 'y': 22}, {'x': 2, 'y': 2}])
'30.63'
"""
global logging
import numpy as np
import rpy2.robjects.numpy2ri
from rpy2.robjects.packages import importr
rpy2.robjects.numpy2ri.activate()
# Set up our R namespaces
R = rpy2.robjects.r
DTW = importr('dtw')
An, Bn = [], []
for p in A:
An.append([p['x'], p['y']])
for p in B:
Bn.append([p['x'], p['y']])
alignment = R.dtw(np.array(An), np.array(Bn), keep=True)
dist = alignment.rx('distance')[0][0]
return dist
# I would expect 0 + sqrt(1**2 + (-4)**2) = sqrt(17) = 4.123105625617661
print(getDTW([{'x': 0, 'y': 0}, {'x': 1, 'y': 1}],
[{'x': 0, 'y': 0}, {'x': 0, 'y': 5}]))
# prints 5.53731918799 - why?
But as I noted in the comment at the bottom, R does not give back the expected result.
So: How can I calculate the DTW between two lists of 2D points in Python?
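If no library fits, the classic dynamic-programming DTW is short enough to write directly over 2-D points; a pure-Python sketch (O(len(A)·len(B)), Euclidean point distance, no mlpy or R needed):

```python
from math import hypot, inf

def dtw_2d(A, B):
    """DTW distance between two sequences of (x, y) points."""
    n, m = len(A), len(B)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = hypot(A[i-1][0] - B[j-1][0], A[i-1][1] - B[j-1][1])
            # extend the cheapest of: insertion, deletion, match
            D[i][j] = cost + min(D[i-1][j], D[i][j-1], D[i-1][j-1])
    return D[n][m]

print(round(dtw_2d([(0, 0), (1, 1)], [(0, 0), (0, 5)]), 2))  # → 4.12
```

This reproduces the 4.12 expected in the first doctest, which suggests the R discrepancy comes from dtw's default step pattern weighting the diagonal differently rather than from the distance function.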
| 1 | 1 | 0 | 0 | 0 | 0 |
Gradient Descent and Overflow Error
I am currently implementing vectorized gradient descent in Python. However, I continue to get an overflow error, even though the numbers in my dataset are not extremely large. I am using this formula:
I chose this implementation to avoid using derivatives. Does anyone have any suggestions on how to remedy this problem, or am I implementing it wrong? Thank you in advance!
Dataset Link: https://www.kaggle.com/CooperUnion/anime-recommendations-database/data
## Cleaning Data ##
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
data = pd.read_csv('anime.csv')
# print(data.corr())
# print(data['members'].isnull().values.any()) # Prints False
# print(data['rating'].isnull().values.any()) # Prints True
members = [] # Corresponding fan club size for row
ratings = [] # Corresponding rating for row
for row in data.iterrows():
if not math.isnan(row[1]['rating']): # Checks for Null ratings
members.append(row[1]['members'])
ratings.append(row[1]['rating'])
plt.plot(members, ratings)
plt.savefig('scatterplot.png')
theta0 = 0.3 # Random guess
theta1 = 0.3 # Random guess
error = 0
Formulas
def hypothesis(x, theta0, theta1):
return theta0 + theta1 * x
def costFunction(x, y, theta0, theta1, m):
loss = 0
for i in range(m): # Represents summation
loss += (hypothesis(x[i], theta0, theta1) - y[i])**2
loss *= 1 / (2 * m) # Represents 1/2m
return loss
def gradientDescent(x, y, theta0, theta1, alpha, m, iterations=1500):
for i in range(iterations):
gradient0 = 0
gradient1 = 0
for j in range(m):
gradient0 += hypothesis(x[j], theta0, theta1) - y[j]
gradient1 += (hypothesis(x[j], theta0, theta1) - y[j]) * x[j]
gradient0 *= 1/m
gradient1 *= 1/m
temp0 = theta0 - alpha * gradient0
temp1 = theta1 - alpha * gradient1
theta0 = temp0
theta1 = temp1
error = costFunction(x, y, theta0, theta1, len(y))
print("Error is:", error)
return theta0, theta1
print(gradientDescent(members, ratings, theta0, theta1, 0.01, len(ratings)))
Errors
After several iterations, the costFunction called within my gradientDescent function gives me an OverflowError: (34, 'Result too large'). However, I expected my code to continually print a decreasing error value.
Error is: 1.7515692852199285e+23
Error is: 2.012089675182454e+38
Error is: 2.3113586742689143e+53
Error is: 2.6551395730578252e+68
Error is: 3.05005286756189e+83
Error is: 3.503703756035943e+98
Error is: 4.024828599077087e+113
Error is: 4.623463163528686e+128
Error is: 5.311135890211131e+143
Error is: 6.101089907410428e+158
Error is: 7.008538065634975e+173
Error is: 8.050955905074458e+188
Error is: 9.248418197694096e+203
Error is: 1.0623985545062037e+219
Error is: 1.220414847696018e+234
Error is: 1.4019337603196565e+249
Error is: 1.6104509643047377e+264
Error is: 1.8499820618048921e+279
Error is: 2.1251399172389593e+294
Traceback (most recent call last):
File "tyreeGradientDescent.py", line 54, in <module>
print(gradientDescent(members, ratings, theta0, theta1, 0.01, len(ratings)))
File "tyreeGradientDescent.py", line 50, in gradientDescent
error = costFunction(x, y, theta0, theta1, len(y))
File "tyreeGradientDescent.py", line 33, in costFunction
loss += (hypothesis(x[i], theta0, theta1) - y[i])**2
OverflowError: (34, 'Result too large')
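The exploding error values are the signature of divergence: the 'members' feature is in the hundreds of thousands, so with alpha = 0.01 each update overshoots and the squared error grows without bound. One common remedy (a hedged sketch, not the only fix; a much smaller alpha can also work) is to standardize the feature before running gradient descent:

```python
def standardize(values):
    """Scale a feature to zero mean and unit variance."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / var ** 0.5 for v in values]

scaled = standardize([100, 200, 300, 400])
print([round(s, 3) for s in scaled])  # → [-1.342, -0.447, 0.447, 1.342]
```

After fitting on standardized members, predictions for new inputs must apply the same mean and standard deviation before using the learned thetas.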
| 1 | 1 | 0 | 1 | 0 | 0 |
I am learning NLP and noticed that TextBlob classification based on Naive Bayes (TextBlob is built on top of NLTK) https://textblob.readthedocs.io/en/dev/classifiers.html works fine when the training data is a list of sentences, and does not work at all when the training data consists of individual words (with each word assigned a classification).
Why?
| 1 | 1 | 0 | 0 | 0 | 0 |
So I am using a convolutional layer as the first layer of a neural network for deep reinforcement learning, to extract spatial features from a simulation I built. The simulation gives different maps of different lengths and heights to process. If I understand convolutional networks correctly, this should not matter, since the channel size is kept constant. Between the convolutional network and the fully connected layers there is a spatial pyramid pooling layer, so the varying image sizes do not matter. Also, the spatial data is pretty sparse. Usually it is able to go through a few states, and sometimes a few episodes, before the first convolutional layer spits out all NaNs. This happens even when I fix the map size. Where can the problem lie?
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm trying to build a sequence-to-sequence model in TensorFlow. I have followed several tutorials and all was good, until I reached a point where I decided to remove the teacher forcing from my model.
below is a sample of decoder network that I'm using :
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)
training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder,
impute_finished=True,
maximum_iterations=max_summary_length)[0]
return training_decoder_output
As per my understanding, the TrainingHelper is doing the teacher forcing, especially since it takes the true output as part of its arguments. I tried to use the decoder without the training helper, but it appears to be mandatory. I tried to set the true output to 0, but apparently the output is needed by the TrainingHelper. I have also tried to google a solution, but I did not find anything related.
===================Update=============
I apologize for not mentioning this earlier, but I tried using GreedyEmbeddingHelper as well. The model runs fine for a couple of iterations and then starts throwing a runtime error. It appears that the GreedyEmbeddingHelper starts predicting output with a different shape than expected. Below is my function when using the GreedyEmbeddingHelper:
def decoding_layer_train(encoder_state, dec_cell, dec_embeddings,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
start_tokens = tf.tile(tf.constant([target_vocab_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')
training_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
start_tokens,
target_vocab_to_int['<EOS>'])
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)
training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder,
impute_finished=True,
maximum_iterations=max_summary_length)[0]
return training_decoder_output
This is a sample of the error that gets thrown after a couple of training iterations:
Ok
Epoch 0 Batch 5/91 - Train Accuracy: 0.4347, Validation Accuracy: 0.3557, Loss: 2.8656
++++Epoch 0 Batch 5/91 - Train WER: 1.0000, Validation WER: 1.0000
Epoch 0 Batch 10/91 - Train Accuracy: 0.4050, Validation Accuracy: 0.3864, Loss: 2.6347
++++Epoch 0 Batch 10/91 - Train WER: 1.0000, Validation WER: 1.0000
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-115-1d2a9495ad42> in <module>()
57 target_sequence_length: targets_lengths,
58 source_sequence_length: sources_lengths,
---> 59 keep_prob: keep_probability})
60
61
/Users/alsulaimi/Documents/AI/Tensorflow-make/workspace/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
887 try:
888 result = self._run(None, fetches, feed_dict, options_ptr,
--> 889 run_metadata_ptr)
890 if run_metadata:
891 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/Users/alsulaimi/Documents/AI/Tensorflow-make/workspace/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
1116 if final_fetches or final_targets or (handle and feed_dict_tensor):
1117 results = self._do_run(handle, final_targets, final_fetches,
-> 1118 feed_dict_tensor, options, run_metadata)
1119 else:
1120 results = []
/Users/alsulaimi/Documents/AI/Tensorflow-make/workspace/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1313 if handle is None:
1314 return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1315 options, run_metadata)
1316 else:
1317 return self._do_call(_prun_fn, self._session, handle, feeds, fetches)
/Users/alsulaimi/Documents/AI/Tensorflow-make/workspace/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
1332 except KeyError:
1333 pass
-> 1334 raise type(e)(node_def, op, message)
1335
1336 def _extend_graph(self):
InvalidArgumentError: logits and labels must have the same first dimension, got logits shape [1100,78] and labels shape [1400]
I'm not sure, but I guess the GreedyEmbeddingHelper should not be used for training. I would appreciate your help and thoughts on how to stop the teacher forcing.
Thank you.
| 1 | 1 | 0 | 0 | 0 | 0 |
I work on a text mining problem and need to extract all mentions of certain keywords. For example, given the list:
list_of_keywords = ['citalopram', 'trazodone', 'aspirin']
I need to find all occurrences of the keywords in a text. That could be easily done with Pandas (assuming my text is read in from a csv file):
import pandas as pd
df_text = pd.read_csv('text.csv')
df_text['matches'] = df_text.str.findall('|'.join(list_of_keywords))
However, there are spelling mistakes in the text and some times my keywords will be written as:
'citalopram' as 'cetalopram'
or
'trazodone' as 'trazadon'
Searching the web, I found some suggestions on how to implement a spell checker, but I am not sure where to insert it, and I reckon that it may slow down the search for a very large text.
As another option, it has been suggested to use wildcards with regex, inserted at the potential locations of confusion (conceptually written):
.findall('c*t*l*pr*m')
However I am not convinced that I can capture all possible problematic cases. I tried some out-of-the-box spell checkers, but my texts are some-what specific and I need a spell checker that 'knows' my domain (medical domain).
QUESTION
Is there any efficient way to extract keywords from a text including spelling mistakes?
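As a stdlib-only sketch of one option: scan the unique words of the text once and keep those that closely match some keyword, using difflib as a stand-in for an edit-distance check (the 0.8 cutoff is an assumption to tune for your domain):

```python
import difflib

list_of_keywords = ['citalopram', 'trazodone', 'aspirin']

def fuzzy_matches(text, keywords, cutoff=0.8):
    """Return the unique words of `text` that approximately match a keyword."""
    found = []
    for word in set(text.lower().split()):
        # get_close_matches returns keywords whose similarity ratio >= cutoff
        if difflib.get_close_matches(word, keywords, n=1, cutoff=cutoff):
            found.append(word)
    return sorted(found)

print(fuzzy_matches("patient took cetalopram and trazadon daily", list_of_keywords))
```

Because only the deduplicated vocabulary of the text is compared against the keyword list, this stays tractable even for large texts.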
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a big pandas dataset with job descriptions. I want to tokenize it, but before this I should remove stopwords and punctuation. I have no problems with stopwords.
If I will use regex for removing punctuation, I can lose very important words that describe jobs (e.g. c++ developer, c#, .net, etc.).
List of such important words is very big, because it consists not only programming languages names but also companies names.
For example, I want punctuation removed in the following way:
Before:
Hi! We are looking for smart, young and hard-working c++ developer. Our perfect candidate should know: - c++, c#, .NET in expert level;
After:
Hi We are looking for smart young and hard-working c++ developer Our perfect candidate should know c++ c# .NET in expert level
Can you advise me on advanced tokenizers or methods for removing punctuation?
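One sketch: split on whitespace and strip punctuation only from tokens that are not in a whitelist of protected technical terms (the whitelist here is an assumption; extend it with your own language and company names):

```python
import string

# Assumed whitelist -- extend with your own company names, languages, etc.
PROTECTED = {'c++', 'c#', '.net', 'node.js', 'f#'}

def clean(text):
    out = []
    for tok in text.split():
        candidate = tok.rstrip(',;:!?.')          # drop trailing sentence punctuation
        if candidate.lower() in PROTECTED:
            out.append(candidate)                 # keep protected tokens verbatim
            continue
        core = tok.strip(string.punctuation)      # strip punctuation at both ends
        if core:                                  # drop tokens that were pure punctuation
            out.append(core)
    return ' '.join(out)

before = ("Hi! We are looking for smart, young and hard-working c++ developer. "
          "Our perfect candidate should know: - c++, c#, .NET in expert level;")
print(clean(before))
```

Stripping only the ends of each token also preserves internal characters such as the hyphen in "hard-working".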
| 1 | 1 | 0 | 0 | 0 | 0 |
How do I fetch the loss at every iteration in MLPRegressor? To plot convergence, I need the full loss history.
I want to plot convergence like below.
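MLPRegressor already records the training loss at every iteration in its `loss_curve_` attribute, so no manual bookkeeping is needed; a sketch (the toy data is an assumption) might look like this:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = X.sum(axis=1)

mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=50, random_state=0)
mlp.fit(X, y)                      # may emit a ConvergenceWarning on toy data

losses = mlp.loss_curve_           # one loss value per completed iteration
print(len(losses))
```

From there, `import matplotlib.pyplot as plt; plt.plot(losses)` draws the convergence curve.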
| 1 | 1 | 0 | 0 | 0 | 0 |
Before asking this question I went through these (question_1, question_2); neither is exactly my use case.
I am using the nltk tree.draw() method to get a tree visualisation of a sentence, but I need to do that for all sentences in a paragraph.
So I want to store the output for all sentences of a paragraph in a file, where I can preserve the representation, which will help in analysing those structures.
The output through tree.draw is in this way.
I want tree representations of all sentences of a paragraph in a file (text/image/...) so that it will be easy to analyse.
Is there a way to achieve that?
edit : output with treeview -
https://imgur.com/a/DYgv5qh
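One way to preserve the representation without the GUI window that tree.draw() opens is Tree.pformat(), which returns a readable bracketed pretty-print as a string you can append to a text file (the example parses below are assumptions standing in for your parser's output):

```python
from nltk import Tree

# Assumed parses for illustration; in practice these come from your parser.
parses = ["(S (NP I) (VP (V saw) (NP him)))",
          "(S (NP She) (VP (V left)))"]

rendered = [Tree.fromstring(p).pformat() for p in parses]
with open("paragraph_trees.txt", "w") as f:
    f.write("\n\n".join(rendered))
print(rendered[0])
```

Text output like this is easy to diff and analyse; image output would still require the GUI canvas that draw() uses.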
| 1 | 1 | 0 | 0 | 0 | 0 |
I am new to Wit.ai and have started to use it in my code. I was wondering whether there is an easier way than hardcoding to extract all the confidence levels from a given Wit.ai API output.
For example(API output):
{
"_text": "I believe I am a human",
"entities": {
"statement": [
{
"confidence": 0.97691847787856,
"value": "I",
"type": "value"
},
{
"confidence": 0.91728476663947,
"value": "I",
"type": "value"
}
],
"query": [
{
"confidence": 1,
"value": "am",
"type": "value"
}
]
},
"msg_id": "0YKCUvDvHC2gyydiU"
}
Thank You in advance.
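One way to avoid hardcoding the paths is a small recursive walk over the parsed JSON that collects every "confidence" value, however deeply nested (stdlib-only sketch):

```python
def collect_confidences(obj):
    """Recursively gather every value stored under a 'confidence' key."""
    found = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            if key == "confidence":
                found.append(value)
            else:
                found.extend(collect_confidences(value))
    elif isinstance(obj, list):
        for item in obj:
            found.extend(collect_confidences(item))
    return found

response = {
    "_text": "I believe I am a human",
    "entities": {
        "statement": [
            {"confidence": 0.97691847787856, "value": "I", "type": "value"},
            {"confidence": 0.91728476663947, "value": "I", "type": "value"},
        ],
        "query": [{"confidence": 1, "value": "am", "type": "value"}],
    },
    "msg_id": "0YKCUvDvHC2gyydiU",
}

print(collect_confidences(response))  # → [0.97691847787856, 0.91728476663947, 1]
```

Because the walk is generic, it keeps working when Wit.ai returns new entity types or nesting levels.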
| 1 | 1 | 0 | 0 | 0 | 0 |
I randomly encounter the same error whenever I run an XGBoost model (both a normal run and grid search). The error message says this:
H2OConnectionError: Local server has died unexpectedly. RIP.
I don't know what is happening; I tried changing versions, but that didn't work. I'm currently using version 3.18.0.5. Does anyone have any idea? Thanks in advance.
| 1 | 1 | 0 | 1 | 0 | 0 |
I am new to programming. I have a DataFrame as shown below:
Col-2 Col-3
have a account A
account summary B
Cancel C
Both D
Update credit card E
Block Credit card F
I need my output as:
Col-2 Col-3
have a account A
account summary B
Update credit card E
Block Credit card F
That means I need the rows where Col-2 contains more than one word; rows where Col-2 is a single word should be removed. Both and Cancel are single words, which is why those rows were removed from the output.
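A minimal pandas sketch of that filter, counting whitespace-separated words per row:

```python
import pandas as pd

df = pd.DataFrame({
    "Col-2": ["have a account", "account summary", "Cancel", "Both",
              "Update credit card", "Block Credit card"],
    "Col-3": list("ABCDEF"),
})

# Keep rows whose Col-2 splits into more than one word.
result = df[df["Col-2"].str.split().str.len() > 1]
print(result)
```

`str.split()` with no argument splits on any run of whitespace, so multiple spaces between words are handled too.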
| 1 | 1 | 0 | 0 | 0 | 0 |
So I have made an AnnoyIndexer and am running some most_similar queries to find the nearest neighbours of some vectors in a 300dimensional vector space. This is the code for it:
def most_similar(self, vector, num_neighbors):
    """Find the approximate `num_neighbors` most similar items.

    Parameters
    ----------
    vector : numpy.array
        Vector for word/document.
    num_neighbors : int
        Number of most similar items

    Returns
    -------
    list of (str, float)
        List of most similar items in format [(`item`, `cosine_distance`), ... ]

    """
    ids, distances = self.index.get_nns_by_vector(
        vector, num_neighbors, include_distances=True)
    return [(self.labels[ids[i]], 1 - distances[i] / 2) for i in range(len(ids))]
I am wondering why the returned distances are all divided by 2 and then subtracted from 1? Surely after doing that, the largest/smallest distances are all mixed up?
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a list of tokens and need to find them in a text. I'm using pandas to store my text. However, I noticed that sometimes the tokens I am looking for are misspelled and thus I am thinking about adding the Levenshtein distance to pick those misspelled tokens. At the moment, I implemented a very simple approach:
df_texts['Text'].str.findall('|'.join(list_of_tokens))
That works perfectly fine. My question is how to add edit_distance to account for misspelled tokens. The NLTK package offers a nice function to compute edit distance:
from nltk.metrics import edit_distance
>> edit_distance('trazodone', 'trazadon')
>> 2
In the above example, trazodone is the correct token, while trazadon is misspelled one and should be retrieved from my text.
In theory, I can check every single word in my texts and measure the edit distance to decide whether they are similar or not, but it would be very inefficient. Any Pythonic ideas?
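A common sketch of a more efficient approach: deduplicate the words first, then apply a cheap length filter before computing the (expensive) edit distance; the max_dist=2 threshold is an assumption to tune:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def approximate_hits(text, tokens, max_dist=2):
    hits = set()
    for word in set(text.lower().split()):          # each unique word only once
        for tok in tokens:
            # Length difference is a lower bound on edit distance: cheap pre-filter.
            if abs(len(word) - len(tok)) <= max_dist and levenshtein(word, tok) <= max_dist:
                hits.add(word)
    return sorted(hits)

print(approximate_hits("prescribed trazadon with aspirin",
                       ["citalopram", "trazodone", "aspirin"]))
```

The length pre-filter discards most candidate pairs before any quadratic-time distance computation runs.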
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to create a Faster R-CNN-like model. I get stuck when it comes to the ROI pooling from the feature map. I know bilinear sampling can be used here, but it may not help for end-to-end training. How can I implement this ROI pooling layer in TensorFlow?
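For intuition, here is a minimal NumPy sketch of ROI max pooling (crop a region of the feature map, then max-pool it to a fixed output size). In TensorFlow the closest built-in stand-in is tf.image.crop_and_resize, which is differentiable; this sketch illustrates the operation only, not a Faster R-CNN implementation:

```python
import numpy as np

def roi_max_pool(feature_map, roi, out_size=(2, 2)):
    """feature_map: (H, W) array; roi: (y0, x0, y1, x1) in feature-map coords."""
    y0, x0, y1, x1 = roi
    crop = feature_map[y0:y1, x0:x1]
    # Split the crop into an out_size grid of (possibly uneven) bins.
    row_bins = np.array_split(np.arange(crop.shape[0]), out_size[0])
    col_bins = np.array_split(np.arange(crop.shape[1]), out_size[1])
    pooled = np.empty(out_size, dtype=feature_map.dtype)
    for i, rows in enumerate(row_bins):
        for j, cols in enumerate(col_bins):
            pooled[i, j] = crop[np.ix_(rows, cols)].max()
    return pooled

fm = np.arange(36, dtype=float).reshape(6, 6)
print(roi_max_pool(fm, (0, 0, 4, 4)))   # 2x2 pooled summary of the top-left 4x4 ROI
```

Every ROI, whatever its size, is reduced to the same fixed grid, which is what lets the downstream fully connected layers accept proposals of varying shape.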
| 1 | 1 | 0 | 0 | 0 | 0 |
I am trying to create a simulation of Alexa or Google Home (very basic). I am using the SpeechRecognition module with Google as the recognizer. I have managed to get it working, but I don't know how to run the whole script when I say a word (I want it to be always listening, as Alexa does).
Ex:
'Hey, Robot'
AI = Hi, how may I help you? (runs whole script)
I had thought about looping through a piece of code every 5 seconds and then connecting to Google API but this isn't possible as the API is limited to 50 requests per day.
Any help is appreciated,
Thanks in advance
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm trying to use sklearn's TfidfVectorizer to output tf-idf scores for a list of inputs, comprised of both unigrams and bigrams.
Here's the essence of what I'm doing:
comprehensive_ngrams = comprehensive_unigrams + comprehensive_bigrams # List of unigrams and bigrams(comprehensive_unigrams and comprehensive_bigrams are lists in their own right)
print("Length of input list: ", len(comprehensive_ngrams))
vectorizer = TfidfVectorizer(ngram_range = (1,2), lowercase = True)
vectorizer.fit(comprehensive_ngrams)
vocab = vectorizer.vocabulary_
print("Length of learned vocabulary: ", len(vocab))
term_document_matrix = vectorizer.transform(comprehensive_ngrams).toarray()
print("Term document matrix shape is: ", term_document_matrix.shape)
This snippet outputs the following:
Length of input list: 12333
Length of learned vocabulary: 6196
Term document matrix shape is: (12333, 6196)
The length of the dictionary mapping input elements to positional indices emitted by the TfidfVectorizer is shorter than the number of unique inputs it's fed. This doesn't seem to be a problem for smaller datasets (on the order of ~50 elements) - the size of the dictionary the TfidfVectorizer produces once it has been fitted equals the size of the input.
What am I missing?
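As a small demonstration (with an assumed toy input list) of how a fitted vocabulary can end up smaller than the input list: duplicate inputs collapse into a single term, and the default token_pattern silently drops single-character tokens:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

inputs = ["machine", "machine", "machine learning", "a b"]   # 4 input strings
v = TfidfVectorizer(ngram_range=(1, 2), lowercase=True)
v.fit(inputs)
print(sorted(v.vocabulary_))   # → ['learning', 'machine', 'machine learning']
```

Here 4 inputs yield only 3 vocabulary entries: the duplicate "machine" strings collapse, and "a b" contributes nothing because the default token_pattern requires tokens of two or more word characters.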
| 1 | 1 | 0 | 1 | 0 | 0 |
In the case of multi-input or multi-output models according to https://keras.io/models/model/, one can use
model = Model(inputs=a1, outputs=[b1, b2])
What if b1 and b2 actually have identical target values? I.e., after a few initial layers, the model has two independent "branches", and each should give the same value. Below is a very simplified example:
a = Input(shape=(32,))
b1 = Dense(32)(a)
b2 = Dense(32)(a)
model = Model(inputs=a, outputs=[b1,b2])
Is there a nicer/better way of doing fit than duplicating target values?
model.fit(x_train, [y_train, y_train])
Additionaly, if true labels (y_train) are needed during fit (only), one can use them like this
model.fit([x_train,y_train], [y_train, y_train])
Is there any better solution? Also, what to do with the prediction?
model.predict([x_test, y_test_fake_labels])
| 1 | 1 | 0 | 1 | 0 | 0 |
I have a pandas dataframe in which one column of text strings contains new line separated values.
I want to split each field on the newlines and create a new row per entry.
My Data Frame is like:
Col-1 Col-2
A Notifications
Returning Value
Both
B mine
Why Not?
Expected output is:
Col-1 Col-2
A Notifications
A Returning Value
A Both
B mine
B Why Not?
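One common sketch for this uses str.split plus DataFrame.explode (available since pandas 0.25):

```python
import pandas as pd

df = pd.DataFrame({
    "Col-1": ["A", "B"],
    "Col-2": ["Notifications\nReturning Value\nBoth", "mine\nWhy Not?"],
})

# Split Col-2 on newlines into lists, then explode each list into its own row.
out = (df.assign(**{"Col-2": df["Col-2"].str.split("\n")})
         .explode("Col-2")
         .reset_index(drop=True))
print(out)
```

`explode` repeats the other columns (here Col-1) for every element of the list, which gives exactly the desired one-row-per-entry shape.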
| 1 | 1 | 0 | 0 | 0 | 0 |
I am new to programming. I have a pandas data frame with two string columns.
The data frame looks like this:
Col-1 Col-2
Update have a account
Account account summary
AccountDTH Cancel
Balance Balance Summary
Credit Card Update credit card
Here I need to check the similarity of each Col-2 element against every element of Col-1.
That means I have to compare have a account with all the elements of Col-1,
then find the top 3 most similar ones. Suppose the similarity scores are: Account (85), AccountDTH (80), Balance (60), Update (45), Credit Card (35).
Expected Output is:
Col-2 Output
have a account Account(85),AccountDTH(80),Balance(60)
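A stdlib sketch of the ranking step, using difflib's ratio scaled to 0-100 (roughly comparable to fuzzywuzzy scores; fuzzywuzzy's fuzz.ratio could be dropped in instead):

```python
import difflib

col1 = ["Update", "Account", "AccountDTH", "Balance", "Credit Card"]

def top3(query, candidates):
    """Score each candidate against the query and return the best three."""
    scored = [(c, round(100 * difflib.SequenceMatcher(None, query.lower(), c.lower()).ratio()))
              for c in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:3]

print(top3("have a account", col1))
```

Applying this per row (e.g. with `df['Col-2'].apply(lambda q: top3(q, df['Col-1'].tolist()))`) produces the Output column; note that difflib's scores will not match fuzzywuzzy's numbers exactly.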
| 1 | 1 | 0 | 0 | 0 | 0 |
I have this Python function that works as expected. Is it possible to package the logic as an NLP stemmer?
If yes, what changes need to be made?
import itertools, re
def dropdup(mytuple):
    newtup = list()
    for i in mytuple:
        i = i[:-3] if i.endswith('bai') else i
        for r in (("tha", "ta"), ("i", "e")):
            i = i.replace(*r)
        i = re.sub(r'(\w)\1+', r'\1', i)
        newtup.append(''.join(i for i, _ in itertools.groupby(i)))
    return tuple(newtup)
dropdup(('savithabai', 'samiiir', 'aaaabaa'))
('saveta', 'samer', 'aba')
I would like users to import something like this:
from nltk.stemmer import indianNameStemmer
There are a few more rules to be added to the logic. I just want to know if this is a valid (pythonic) idea.
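There is no nltk.stemmer module to register new stemmers into, but NLTK stemmers are simply classes exposing a stem() method (the StemmerI interface), so one Pythonic option is to package the same rules as such a class in a module of your own (the class and module names below are assumptions):

```python
import itertools
import re

class IndianNameStemmer:
    """The same rules as dropdup, packaged the way NLTK stemmers are shaped."""

    def stem(self, token):
        if token.endswith('bai'):
            token = token[:-3]
        for old, new in (("tha", "ta"), ("i", "e")):
            token = token.replace(old, new)
        token = re.sub(r'(\w)\1+', r'\1', token)
        return ''.join(ch for ch, _ in itertools.groupby(token))

stemmer = IndianNameStemmer()
print(tuple(stemmer.stem(n) for n in ('savithabai', 'samiiir', 'aaaabaa')))
# → ('saveta', 'samer', 'aba')
```

Users would then write `from yourpackage.stemmers import IndianNameStemmer`, mirroring how NLTK's own stemmers are imported and called.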
| 1 | 1 | 0 | 0 | 0 | 0 |
I want to categorize free-text names and then make a categorical variable with these levels:
Only first : Only first letter is capital
Standard usage : First letter every words is capital
All capital : Every letter is in capital letter
All small : Every letter is in lower case
Unidentified : Not in any of 4 category above
Here's my data
Id Name
1 Donald trump
2 Barack Obama
3 Hillary ClintoN
4 BILL GATES
5 jeff bezoz
6 Mark Zuckerberg
What I want
Id Name Category
1 Donald trump Only first
2 Barack Obama Standard usage
3 Hillary ClintoN Unidentified
4 BILL GATES All capital
5 jeff bezoz All small
6 Mark Zuckerberg Standard usage
What I did is
df['Uppercase'] = df['Name'].str.findall(r'[A-Z]').str.len()
df['Lowercase'] = df['Name'].str.findall(r'[a-z]').str.len()
df['WordCount'] = df['Name'].str.count(' ') + 1
Then I apply some logic using the map function, such as:
`df['Lowercase'] == 0` for `All capital`
`df['Uppercase'] == 0` for `All small`
`df['Uppercase'] - df['WordCount'] == 0` for `Standard usage`
`df['Uppercase'] == 1` for `Only first`
If a row doesn't match any of these, it is labelled as Unidentified.
But naBih baWazir would be recorded as Standard usage by the rule above instead of Unidentified, so I think there should be a better way to do this.
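One alternative (an assumed sketch, not the only design) is an explicit per-word check with str.isupper/islower/istitle, which naturally sends mixed-case words like naBih to Unidentified because istitle() rejects a capital letter in the middle of a word:

```python
def categorize(name):
    words = name.split()
    if all(w.isupper() for w in words):
        return "All capital"
    if all(w.islower() for w in words):
        return "All small"
    if all(w.istitle() for w in words):
        return "Standard usage"
    if words and words[0].istitle() and all(w.islower() for w in words[1:]):
        return "Only first"
    return "Unidentified"

for n in ["Donald trump", "Barack Obama", "Hillary ClintoN",
          "BILL GATES", "jeff bezoz", "Mark Zuckerberg", "naBih baWazir"]:
    print(n, "->", categorize(n))
```

With pandas this becomes `df['Category'] = df['Name'].apply(categorize)`, replacing the three helper columns entirely.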
| 1 | 1 | 0 | 0 | 0 | 0 |
I am coding a personal assistant in Python. At the moment, I am planning all the things I am going to do, but I have come up with a problem that I can't solve.
I will be running a main script that will check if user says 'Hello' every 3 seconds. If he does so, then it should start running another script/function and stop the current one. After the task is performed it should start running again the main script (I will be using different scripts for each task to make it cleaner). I had thought about a while loop but I am not sure if this is the best option.
| 1 | 1 | 0 | 0 | 0 | 0 |
I'm on Windows 10 and used pip to install spacy but am now getting an error when running
import spacy
in python shell.
My error message is:
Traceback (most recent call last):
File "C:\Users\Administrator\errbot-root\plugins\utility\model_training_test.py", line 17, in <module>
import spacy
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\spacy\__init__.py", line 4, in <module>
from .cli.info import info as cli_info
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\spacy\cli\__init__.py", line 1, in <module>
from .download import download
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\spacy\cli\download.py", line 5, in <module>
import requests
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\__init__.py", line 43, in <module>
import urllib3
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\urllib3\__init__.py", line 8, in <module>
from .connectionpool import (
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\urllib3\connectionpool.py", line 11, in <module>
from .exceptions import (
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\urllib3\exceptions.py", line 2, in <module>
from .packages.six.moves.http_client import (
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\urllib3\packages\six.py", line 203, in load_module
mod = mod._resolve()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\urllib3\packages\six.py", line 115, in _resolve
return _import_module(self.mod)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\urllib3\packages\six.py", line 82, in _import_module
__import__(name)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\http\client.py", line 71, in <module>
import email.parser
File "C:\Users\Administrator\errbot-root\plugins\utility\email.py", line 1, in <module>
from errbot import BotPlugin, botcmd, arg_botcmd, webhook
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\errbot\__init__.py", line 12, in <module>
from .core_plugins.wsview import bottle_app, WebView
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\errbot\core_plugins\wsview.py", line 5, in <module>
from bottle import Bottle, request
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\bottle.py", line 38, in <module>
import base64, cgi, email.utils, functools, hmac, imp, itertools, mimetypes,\
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\cgi.py", line 39, in <module>
from email.parser import FeedParser
ModuleNotFoundError: No module named 'email.parser'; 'email' is not a package
Edit: When trying to pip install email, I get the following error:
Collecting email
Using cached https://files.pythonhosted.org/packages/71/e7/816030d3b0426c130040bd068be62b9213357ed02896f5d9badcf46d1b5
f/email-4.0.2.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\users\administrator\appdata\local\programs\python\python36\lib\site-packages\setuptools\__init__.py", lin
e 12, in <module>
import setuptools.version
File "c:\users\administrator\appdata\local\programs\python\python36\lib\site-packages\setuptools\version.py", line
1, in <module>
import pkg_resources
File
"c:\users\administrator\appdata\local\programs\python\python36\lib\site-packages\pkg_resources\__init__.py",
line 36, in <module>
import email.parser
File "C:\Users\ADMINI~1\AppData\Local\Temp\2\pip-install-
p378w8he\email\email\parser.py", line 10, in <module>
from cStringIO import StringIO
ModuleNotFoundError: No module named 'cStringIO'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in
C:\Users\ADMINI~1\AppData\Local\Temp\2\pip-install-p378w8
he\email\
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a dataset (~80k rows) that contains a comma-separated list of tags (skills), for example:
python, java, javascript,
marketing, communications, leadership,
web development, node.js, react
...
Some are as short as 1 skill, others can be as long as 50+ skills. I would like to cluster groups of skills together (intuitively, people in the same cluster would have a very similar set of skills).
First, I use CountVectorizer from sklearn to vectorise the list of words and perform a dimension reduction using SVD, reducing it to 50 dimensions (from 500+). Finally, I perform KMeans clustering with n=50, but the results are not optimal: groups of skills clustered together seem to be very unrelated.
How should I go about improving the results? I'm also not sure if SVD is the most appropriate form of dimension reduction for this use case.
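For reference, a minimal sketch of the described pipeline (CountVectorizer → SVD → KMeans); the tiny corpus, the comma tokenizer, and n_components/n_clusters=2 are assumptions purely for illustration (the question uses 50 on the real data):

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Tiny assumed corpus; the real data would be the ~80k comma-separated rows.
rows = ["python, java, javascript",
        "java, python, c++",
        "marketing, communications, leadership",
        "communications, leadership, sales"]

pipe = make_pipeline(
    # Split on commas so multi-word skills like "web development" stay whole.
    CountVectorizer(tokenizer=lambda s: [t.strip() for t in s.split(",")],
                    token_pattern=None),
    TruncatedSVD(n_components=2, random_state=0),     # 50 on the real data
    KMeans(n_clusters=2, n_init=10, random_state=0),  # 50 on the real data
)
labels = pipe.fit_predict(rows)
print(labels)
```

Note the custom tokenizer: the default token_pattern would split "web development" into two tokens and drop "c++" entirely, which by itself can degrade cluster quality.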
| 1 | 1 | 0 | 0 | 0 | 0 |
How do I download the NLTK stopwords in an online Jupyter notebook server?
Locally, we can simply call nltk.download() and the download starts,
but in an online Kaggle server notebook, nltk.download() doesn't work.
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a pandas data frame with two columns containing strings, like below:
Col-1 Col-2
Animal have an apple
Fruit tiger safari
Veg Vegetable Market
Flower Garden
From this I have to create a function which takes a string as an argument.
This function then checks the fuzzywuzzy similarity between the input string and the elements of Col-2, and outputs the elements of Col-1 and Col-2 corresponding to the highest computed similarity.
For instance, suppose the input string is Gardening Hobby; the function checks its similarity against all the elements of df['Col-2'] and finds that Garden has the highest similarity with Gardening Hobby, with a score of 90. The expected output is then:
I/P O/P
Gardening Hobby Garden(60),Flower
| 1 | 1 | 0 | 0 | 0 | 0 |
I am using the spacy library to identify entities in text. When I pass the text to the nlp object, it does not identify the date properly.
text : meet me 9 Oct. - 8am
Identified ->
9 (as Cardinal)
Oct. - 8 (as Date)
Required ->
9 Oct. (as Date)
8am (as Time)
Could you please help me understand how to resolve this issue? I am a beginner in NLP.
Regards,
Aman
| 1 | 1 | 0 | 0 | 0 | 0 |
I have a csv with a single column, each row is a text document. All text has been normalized:
all lowercase
no punctuation
no numbers
no more than one whitespace between words
no tags(xml, html)
I have also this R script which constructs the Document Term Matrix on these documents and does some machine learning analysis. I need to convert this in Spark.
The first step is to produce the Document Term Matrix where, for each term, there is the relative frequency count in the document. The problem is that I get a different vocabulary size using R compared to the Spark API or Python's sklearn (Spark and Python give consistent results).
This is the relevant code for R:
library(RJDBC)
library(Matrix)
library(tm)
library(wordcloud)
library(devtools)
library(lsa)
library(data.table)
library(dplyr)
library(lubridate)
corpus <- read.csv(paste(inputDir, "corpus.csv", sep="/"), stringsAsFactors=FALSE)
DescriptionDocuments<-c(corpus$doc_clean)
DescriptionDocuments <- VCorpus(VectorSource(DescriptionDocuments))
DescriptionDocuments.DTM <- DocumentTermMatrix(DescriptionDocuments, control = list(tolower = FALSE,
stopwords = FALSE,
removeNumbers = FALSE,
removePunctuation = FALSE,
stemming=FALSE))
# VOCABULARY SIZE = 83758
This is the relevant code in Spark (1.6.0, Scala 2.10):
import org.apache.spark.ml.feature.{CountVectorizer, CountVectorizerModel, RegexTokenizer}
var corpus = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").option("inferSchema", "false").load("/path/to/corpus.csv")
// RegexTokenizer splits by default on one or more spaces, which is ok
val rTokenizer = new RegexTokenizer().setInputCol("doc").setOutputCol("words")
val words = rTokenizer.transform(corpus)
val cv = new CountVectorizer().setInputCol("words").setOutputCol("tf")
val cv_model = cv.fit(words)
var dtf = cv_model.transform(words)
// VOCABULARY SIZE = 84290
I've also checked in python sklearn and I got consistent result with spark:
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
corpus = pd.read_csv("/path/to/corpus.csv")
docs = corpus.loc[:, "doc"].values
def tokenizer(text):
    return text.split()
cv = CountVectorizer(tokenizer=tokenizer, stop_words=None)
dtf = cv.fit_transform(docs)
print len(dtf.vocabulary_)
# VOCABULARY SIZE = 84290
I don't know the R tm package very well, but it seems to me that it should tokenize on whitespace by default. Does anyone have a hint as to why I am getting different vocabulary sizes?
| 1 | 1 | 0 | 0 | 0 | 0 |