Columns:
text — string, lengths 0 to 27.6k
python — int64, values 0 or 1
DeepLearning or NLP — int64, values 0 or 1
Other — int64, values 0 or 1
Machine Learning — int64, values 0 or 1
Mathematics — int64, values 0 or 1
Trash — int64, values 0 or 1
I have a dataset which has 3 different columns of relevant text information which I want to convert into doc2vec vectors and subsequently classify using a neural net. My question is how do I convert these three columns into vectors and input into a neural net? How do I input the concatenated vectors into a neural network?
1
1
0
1
0
0
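A minimal sketch of one way to approach the question above, assuming gensim for Doc2Vec and Keras for the network; df, the column names col1/col2/col3 and the label column are placeholders for the real dataframe. Each text column is inferred into its own vector and the three vectors are concatenated into a single input row for the network.

import numpy as np
import pandas as pd
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

text_cols = ['col1', 'col2', 'col3']                      # placeholder column names
docs = [TaggedDocument(str(t).split(), [i])
        for i, t in enumerate(pd.concat([df[c] for c in text_cols]))]
d2v = Doc2Vec(docs, vector_size=100, epochs=20, min_count=2)

def row_vector(row):
    # one inferred vector per column, concatenated -> 300-dimensional input
    return np.concatenate([d2v.infer_vector(str(row[c]).split()) for c in text_cols])

X = np.vstack([row_vector(row) for _, row in df.iterrows()])
y = df['label'].to_numpy()

model = Sequential([Dense(64, activation='relu', input_shape=(X.shape[1],)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=10, batch_size=32)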
I have code that looks like this: data = u"Species:cat color:orange and white with yellow spots number feet: 4" from spacy.matcher import PhraseMatcher import en_core_web_sm nlp = en_core_web_sm.load() data=data.lower() matcher = PhraseMatcher(nlp.vocab) terminology_list = [u"species",u"color", u"number feet"] patterns = list(nlp.tokenizer.pipe(terminology_list)) matcher.add("TerminologyList", None, *patterns) doc = nlp(data) for idd, (match_id, start, end) in enumerate(matcher(doc)): span = doc[start:end] print(span.text) I want to be able to grab everything until the next match. So that the match looks like this: species:cat color:orange and white with yellow spots number feet: 4 I was trying to extend the span but I don't know how to say stop before the next match. I know that I can have it be like span = doc[start:end+4] or something but that is hard-coding how far ahead to go and I won't know how far I should extend the index. Thank you
1
1
0
0
0
0
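One way to handle the question above without hard-coding an offset: keep the matches sorted by start position and slice each one up to the start of the following match (or to the end of the doc for the last match). This relies only on the (match_id, start, end) tuples the matcher already returns.

matches = sorted(matcher(doc), key=lambda m: m[1])        # sort by start token index
for i, (match_id, start, end) in enumerate(matches):
    stop = matches[i + 1][1] if i + 1 < len(matches) else len(doc)
    span = doc[start:stop]                                # runs up to (not including) the next match
    print(span.text)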
Consider a set of n cubes with colored facets (each one with a specific color out of 4 possible ones - red, blue, green and yellow). Form the highest possible tower of k cubes ( k ≤ n ) properly rotated (12 positions of a cube), so the lateral faces of the tower will have the same color, using and evolutionary algorithm. What I did so far: I thought that the following representation would be suitable: an Individual could be an array of n integers, each number having a value between 1 and 12, indicating the current position of the cube (an input file contains n lines, each line shows information about the color of each face of the cube). Then, the Population consists of multiple Individuals. The Crossover method should create a new child(Individual), containing information from its parents (approximately half from each parent). Now, my biggest issue is related to the Mutate and Fitness methods. In Mutate method, if the probability of mutation (say 0.01), I should change the position of a random cube with other random position (for example, the third cube can have its position(rotation) changed from 5 to 12). In Fitness method, I thought that I could compare, two by two, the cubes from an Individual, to see if they have common faces. If they have a common face, a "count" variable will be incremented with the number of common faces and if all the 4 lateral faces will be the same for these 2 cubes, the count will increase with another number of points. After comparing all the adjacent cubes, the count variable is returned. Our goal is to obtain as many adjacent cubes having the same lateral faces as we can, i.e. to maximize the Fitness method. My question is the following: How can be a rotation implemented? I mean, if a cube changes its position(rotation) from 3, to 10, how do we know the new arrangement of the faces? Or, if I perform a mutation on a cube, what is the process of rotating this cube if a random rotation number is selected? I think that I should create a vector of 6 elements (the colors of each face) for each cube, but when the rotation value of a cube is modified, I don't know in what manner the elements of its vector of faces should be rearranged. Shuffling them is not correct, because by doing this, two opposite faces could become adjacent, meaning that the vector doesn't represent that particular cube anymore (obviously, two opposite faces cannot be adjacent).
1
1
0
0
0
0
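For the rotation question above, a common trick is to store each cube as a 6-tuple of face colors in a fixed order and implement every rotation as a permutation of those six positions, so opposite faces can never end up adjacent. A sketch follows; the face order and the two base turns are one possible convention, the 12-position encoding from the question can be mapped onto whichever subset and ordering of the generated orientations you choose, and for the fitness only the four lateral faces matter.

# face order: [top, bottom, front, back, left, right]
YAW  = (0, 1, 4, 5, 3, 2)    # quarter turn around the vertical axis (new front = old left, ...)
ROLL = (4, 5, 2, 3, 1, 0)    # quarter turn around the front-back axis (new top = old left, ...)

def rotate(faces, perm):
    return tuple(faces[i] for i in perm)

def all_orientations(faces):
    # compose the two base turns until no new arrangement appears (at most 24 distinct orientations)
    seen, frontier = {tuple(faces)}, [tuple(faces)]
    while frontier:
        f = frontier.pop()
        for perm in (YAW, ROLL):
            g = rotate(f, perm)
            if g not in seen:
                seen.add(g)
                frontier.append(g)
    return sorted(seen)

cube = ('red', 'blue', 'green', 'yellow', 'green', 'red')
orientations = all_orientations(cube)
lateral = lambda faces: faces[2:6]            # the four faces that must match within the tower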
I am looking for some heads up to train a conventional neural network model with bert embeddings that are generated dynamically (BERT contextualized embeddings which generates different embeddings for the same word which when comes under different context). In normal neural network model, we would initialize the model with glove or fasttext embeddings like, import torch.nn as nn embed = nn.Embedding(vocab_size, vector_size) embed.weight.data.copy_(some_variable_containing_vectors) Instead of copying static vectors like this and use it for training, I want to pass every input to a BERT model and generate embedding for the words on the fly, and feed them to the model for training. So should I work on changing the forward function in the model for incorporating those embeddings? Any help would be appreciated!
1
1
0
1
0
0
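A sketch of the approach asked about above, using the Hugging Face transformers package (recent versions): instead of copying static vectors into nn.Embedding, the model holds a BertModel and the forward pass runs each batch through it, so the embeddings are contextual and computed on the fly. Whether BERT's weights are frozen or fine-tuned is a separate choice.

import torch.nn as nn
from transformers import BertTokenizer, BertModel

class BertClassifier(nn.Module):
    def __init__(self, n_classes, freeze_bert=True):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        if freeze_bert:                                   # optionally train only the head
            for p in self.bert.parameters():
                p.requires_grad = False
        self.head = nn.Linear(self.bert.config.hidden_size, n_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                 # [CLS] embedding, contextual per input
        return self.head(cls)

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
enc = tokenizer(["an example sentence"], padding=True, truncation=True, return_tensors='pt')
logits = BertClassifier(n_classes=2)(enc['input_ids'], enc['attention_mask'])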
I am working on a sentiment analysis problem and found the vaderSentiment package but cannot get it to run. It is giving me an 'encoding' error. I have tried adding 'from io import open' but that did not fix my issue. Please see code below. from io import open from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer analyser = SentimentIntensityAnalyzer() def sentiment_analyzer_scores(sentence): score = analyser.polarity_scores(sentence) print("{:-<40} {}".format(sentence, str(score))) sentiment_analyzer_scores("The phone is super cool.") Here are the results I am wanting: "The phone is super cool----------------- {'neg': 0.0, 'neu': 0.326, 'pos': 0.674, 'compound': 0.7351}" The results I am getting: File "<ipython-input-27-bbb91818db04>", line 6, in <module> analyser = SentimentIntensityAnalyzer() File "C:\Users\mr110e\AppData\Local\Continuum\anaconda2\lib\site packages\vaderSentiment\vaderSentiment.py", line 212, in __init__ with open(lexicon_full_filepath, encoding='utf-8') as f: TypeError: 'encoding' is an invalid keyword argument for this function
1
1
0
0
0
0
I have a directory of roughly 600 CSV files that contain twitter data with multiple fields of various types (ints, floats, and strings). I have a script that can merge the files together, but the string fields can contain commas themselves are not quoted causing the string fields to separate and force text on new lines. Is it possible to quote the strings in each file and then merge them into a single file? Below is the script I use to merge the files and some sample data. Merger script: %%time import csv import glob from tqdm import tqdm with open('C:\Python\Scripts\Test_tweets\Test_output.csv', 'wb') as f_output: csv_output = csv.writer(f_output, quoting=csv.QUOTE_NONNUMERIC) write_header = True for filename in tqdm(glob.glob(r'C:\Python\Scripts\Test_tweets\*.csv')): with open(filename, 'rb') as f_input: csv_input = csv.reader(f_input) header = next(csv_input) if write_header: csv_output.writerow(header) write_header = False for row in tqdm(csv_input): row = row[:7] + [','.join(row[7:])] # Skip rows with insufficient values if len(row) > 7: row[1] = float(row[1]) row[5] = float(row[5]) row[6] = float(row[6]) csv_output.writerow(row) Sample data: 2014-02-07T00:25:40Z,431584511542198272,FalseAlarm_xox,en,-,-81.4994315,35.3268904,is still get hair done,Is Still Getting Hair Done 2014-02-07T00:25:40Z,431584511525003265,enabrkovic,en,-,-85.40364208,40.19369368,i had no class todai why did i wait 630 to start do everyth,I had no classes today why did I wait 630 to start doing EVERYTHING 2014-02-07T00:25:41Z,431584515757457408,_beacl,pt,-,-48.05338676,-16.02483911,passei o dia com o meu amor comemo demai <3 @guugaraujo,passei o dia com o meu amor, comemos demais ❤️ @guugaraujo 2014-02-07T00:25:42Z,431584519930396672,aprihasanah,in,-,106.9224971,-6.2441371,4 hari ngga ada kepsek rasanya nyaman bgt kerjaan juga lebih teratur tp skalinya doi masuk administrasi kacau balau lg yanasib,4 hari ngga ada kepsek rasanya nyaman bgt. kerjaan juga lebih teratur. tp skalinya doi masuk, administrasi kacau balau lg. yanasib &gt;_&lt;" 2014-02-07T00:25:42Z,431584519951749120,MLEFFin_awesome,en,-,-77.20315866,39.08811105,never a dull moment with emma <3 /MLEFFin_awesome/status/431584519951749120/photo/1,Never a dull moment with Emma /0Wfs5VqfVz 2014-02-07T00:25:43Z,431584524120510464,mimiey_natasya,en,-,103.3596089,3.9210196,good morn,Good morning... 2014-02-07T00:25:43Z,431584524124684288,louykins,en,-,-86.06823257,41.74938946,that Oikos commerci with @johnstamos @bobsaget and @davecoulier is better than my whole life #takesmeback #youcankissmeanytimejohn,That Oikos commercial with @JohnStamos, @bobsaget, and @DaveCoulier is better than my whole life. #takesmeback #youcankissmeanytimejohn 2014-02-07T00:25:44Z,431584528306421760,savannachristy4,en,-,-79.99920285,39.65367864,rememb when we would go to club zoo :D,Remember when we would go to club zoo?? 2014-02-07T00:25:44Z,431584528302231553,janiya_monet,en,-,-83.62028684,39.20591822,@itscourtney_365 thei call,@ItsCourtney_365 they. Called. 
2014-02-07T00:25:44Z,431584528302223360,norastanky,en,-,-118.09849064,33.79394737,when you see your hometown in your english book /norastanky/status/431584528302223360/photo/1,When you see your hometown in your english book&gt;&gt; /XHRFymLFp4 2014-02-07T00:25:46Z,431584536703799296,Ericb1980,en,-,-82.32639648,27.92373599,i'm at longhorn steakhouse brandon fl .com/1bzZsrp,I'm at LongHorn Steakhouse (Brandon, FL) /YdCJKXmSmN 2014-02-07T00:25:46Z,431584536695410688,repokempt,en,-,37.40298473,55.96248794,@tonichopchop moron drive me nut,@tonichopchop Morons. Drives me nuts! 2014-02-07T00:25:47Z,431584540889317377,BeeNiabee6,en,-,-82.494139,27.4908062,my god sister got drink,My God sister got drinking 2014-02-08T00:00:01Z,4.3194E+17,NewarkWeather,in,-,-75.68444444,39.695,02 07 @19 00 temp 31.0 f wc 31.0 f wind 0.0 mph gust 0.0 mph bar 30.358 in rise rain 0.00 in hum 68 uv 0.0 solarrad 0,02/07@19:00 - Temp 31.0F, WC 31.0F. Wind 0.0mph ---, Gust 0.0mph. Bar 30.358in, Rising. Rain 0.00in. Hum 68%. UV 0.0. SolarRad 0.,,,,,,,,,,,,,, 2014-02-08T00:00:02Z,4.3194E+17,bastianwr,in,-,106.11073,-2.1198,happi weekend at sman 1 pangkalpinang https://path.com/p/1zjYtB,Happy Weekend! (at SMAN 1 Pangkalpinang) — /9U86N1BmD6,,,,,,,,,,,,,,,,, 2014-02-08T00:00:03Z,4.3194E+17,izaklast,en,-,-109.9176369,31.40244847,dihydrogen monoxid is good for you Watermill express .com/1bxHT81,Dihydrogen monoxide is good for you (@ Watermill Express) /IvfiuNHigM,,,,,,,,,,,,,,,,, 2014-02-08T00:00:03Z,4.3194E+17,blackbestpeople,tr,-,29.21950004,40.91441821,okulda özlediyim sadec kantindeki kakayolu süd,Okulda özlediyim sadece kantindeki kakayolu süd,,,,,,,,,,,,,,,,, 2014-02-08T00:00:03Z,4.3194E+17,Hakooo03,tr,-,3.72651687,51.06650946,gta v oynar katliam cikartirim bend,Gta v oynar katliam cikartirim bende !,,,,,,,,,,,,,,,,, 2014-02-08T00:00:03Z,4.3194E+17,piaras_14,en,-,-6.21720811,54.11456545,@blainmcg17 wee hornbal #taughtyouwell /piaras_14/status/431940452770934784/photo/1,@blainmcg17 wee hornball #taughtyouwell /C6yGymDoyl,,,,,,,,,,,,,,,,, 2014-02-08T00:00:04Z,4.3194E+17,PPompita,es,-,9.3215546,40.315019,@enrique305 esto es perfecto uauh yo y mi hermano v a ny al concierto lo enamorado 15feb desd italia solo para ti /PPompita/status/431940456973619200/photo/1,@enrique305 Esto es Perfecto uauh yo y mi hermano V a NY al concierto Los Enamorados 15Feb desde Italia solo para ti. /OrYYE2zN80,,,,,,,,,,,,,,,,, 2014-02-08T00:00:05Z,4.3194E+17,NickMontesdeoca,und,-,-71.34854858,42.63122899,<3,,,,,,,,,,,,,,,,,, 2014-02-08T00:00:05Z,4.3194E+17,Askin28Furkan,tr,-,28.6281946,41.0166627,birakma beni insanlar kötü bırakma beni korkuyorumm,Birakma beni insanlar kötü, bırakma beni korkuyorumm,,,,,,,,,,,,,,,, 2014-02-08T00:00:05Z,4.3194E+17,mumfy98,en,-,-75.59400911,43.08187836,i just want a horse,I just want a horse!!,,,,,,,,,,,,,,,,, 2014-02-08T00:00:05Z,4.3194E+17,Pitmedden_Weath,en,-,-2.18416667,57.33888889,wind 7.2 mph s Barometer 979.9 hpa fall temperature 2.6 c rain todai 0.0 mm forecast stormi much precipitation,Wind 7.2mph S. Barometer 979.9hPa, Falling. Temperature 2.6°C. Rain today 0.0mm. Forecast Stormy, much precipitation,,,,,,,,,,,,,,, 2014-02-08T00:00:06Z,4.3194E+17,BoeBaFett,en,-,-79.0129325,33.794075,2 whole hour still no repli,2 whole hours... still no reply,,,,,,,,,,,,,,,,,
1
1
0
0
0
0
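For the merging question above: csv.writer with quoting=csv.QUOTE_NONNUMERIC already quotes every non-numeric field on output, so the main fixes are opening the files in text mode (the 'wb'/'rb' modes suggest Python 2, while Python 3's csv module wants newline='') and doing the length check before the float conversions and the comma re-join. A sketch along those lines, keeping the original paths and column logic:

import csv
import glob

with open(r'C:\Python\Scripts\Test_tweets\Test_output.csv', 'w', newline='', encoding='utf-8') as f_output:
    csv_output = csv.writer(f_output, quoting=csv.QUOTE_NONNUMERIC)
    write_header = True
    for filename in glob.glob(r'C:\Python\Scripts\Test_tweets\*.csv'):
        with open(filename, newline='', encoding='utf-8') as f_input:
            csv_input = csv.reader(f_input)
            header = next(csv_input)
            if write_header:
                csv_output.writerow(header)
                write_header = False
            for row in csv_input:
                if len(row) < 8:                            # skip rows with too few fields
                    continue
                row = row[:7] + [','.join(row[7:])]         # re-join text that was split on stray commas
                row[1], row[5], row[6] = float(row[1]), float(row[5]), float(row[6])
                csv_output.writerow(row)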
I am fairly new to NLP, I want to implement a python based clustering algorithm, it will be having : Context/Topic Extraction - From the Title Statement (Will probably contain not more than 6-7 words) Clustering Algorithm So the problem is, that I have a bunch of statements(20 statements * 5-6 words per statement = 100-120 words) all related to a Title Statement. And an Algorithm should be able to cluster them. For the (1) - As an input, first I will have a Title, from that title I want to extract various topics, for ex : TITLE : "Problem in Manufacturing Assembly Line" - From this I want to extract something like 1. Mechanical Problems 2. Electrical Problems 3. Linemen Management 4. Supply Chain Management Problems...... And use these extracted topics to cluster those statements. I can perform the second task of clustering, but how do I extract topics from a single statement that contains not more than 6-7 words? Language : English Any idea how to go about the first problem??
1
1
0
1
0
0
I have a list of approximately 100 keywords and I need to search them in a huge corpus of over 0.1 million documents. I don't want an exact match , for example if keyword is Growth Fund, I am expecting all the matches like growth funds, growth fund of america etc. Any suggestions for this? I have tried using spacy's PhraseMatcher but it gives an ValueError: [T001] Max length currently 10 for phrase matching. import spacy from spacy.matcher import PhraseMatcher full_funds_list_flat = "<list of 100+ Keywords>" nlp = spacy.load('en_core_web_sm') keyword_patterns = [nlp(text) for text in full_funds_list_flat] matcher = PhraseMatcher(nlp.vocab) matcher.add('KEYWORD', None, *keyword_patterns)
1
1
0
0
0
0
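Regarding the [T001] error in the question above: that 10-token cap comes from older spaCy releases and, as far as I know, was lifted in later 2.x versions, so upgrading spaCy is worth trying first; failing that, the over-long patterns can simply be filtered out. For the non-exact matching ("Growth Fund" also hitting "growth funds"), PhraseMatcher accepts an attr argument in spaCy 2.1+, e.g. matching on LEMMA rather than the verbatim text (patterns then need to be processed with nlp(), not just the tokenizer, so that lemmas are set). A sketch using the same add() signature as in the question:

import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load('en_core_web_sm')
keywords = ['Growth Fund', 'Value Fund']                  # stand-in for the real 100+ keyword list

matcher = PhraseMatcher(nlp.vocab, attr='LEMMA')          # lemma match: "growth funds" ~ "growth fund"
patterns = [nlp(kw.lower()) for kw in keywords if len(nlp(kw)) <= 10]   # drop any over-long pattern
matcher.add('KEYWORD', None, *patterns)

doc = nlp("The growth funds of America performed well.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)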
I am getting repeated lines in my summarizer output. I am using genism in python for summarizing text documents. How to remove duplicate lines from the output of the summarizer. The output is coming with repeated content. How can I only keep unique lines in the output from the summarizer .The input file is as follows From: Jos To: Halley, Ibizo /FR Cc: pqr Secretariat; Björnsson Ulrika Subject: [EXTERNAL] pqr Response to Letter of Intent for a Variation WS procedure:SE/H/xxxx/WS/ Date: vendredi 1 juin 2018 13:16:48 Attachments: image001.jpg A07_SE_xxx yy R&D.PDF Dear Ibizo, Thank you for your letter of intent. The pqr agrees, on the basis of the documentation provided, that the above mentioned work- sharing application as specified in the enclosed letter of intent is acceptable for submission under Article 20 of the Commission Regulation (EC) No 1234/2008 of 24 November 2008. The reference authority for the worksharing procedure will be Sweden and the assigned work sharing procedure number will be: A07: SE/H/xxxx/WS/ Please be advised that this confirmation is not to be considered as validation of your application. The validity of the worksharing application will be checked by the reference authority after submission. Please liaise with the assigned reference authority for the further proceedings. Kind regards, Joe Assistant Administrator Parallel Distribution & Certificates Committees & Inspections Department Panthers Medicines Agency 30 ABC St, Michigan lane Fax +44 (0)20 certificate@zz.europa.eu | www.zz.europa.eu This message and any attachment contain information which may be confidential or otherwise protected from disclosure. It is intended for the addressee(s) only and should not be relied upon as legal advice unless it is otherwise stated. If you are not the intended recipient(s) (or authorised by an addressee who received this message), access to this e-mail, or any disclosure or copying of its contents, or any action taken (or not taken) in reliance on it is unauthorised and may be unlawful. If you have received this e-mail in error, please inform the sender immediately. P Please consider the environment and don't print this e-mail unless you really need to From: Jos Sent: 30 April 2018 11:17 To: Ibizo.Halley@xxx.com Cc: pqr Secretariat Subject: RE: Alfuzosin Hydrochloride - Request for Worksharing procedure Dear Ibizo, Thank you for your zzil. The letter of intent will be discussed in the May 2018 pqr meeting and you will receive feedback within two weeks following the meeting. Kind regards, Joe Assistant Administrator Parallel Distribution & Certificates Committees & Inspections Department mailto:eretta.ab@zz.europa.eu mailto:Ibizo.Halley@xxx.com mailto:H-pqrSecretariat@zz.europa.eu mailto:Ulrika.Bjornsson@mpa.se mailto:certificate@zz.europa.eu pqr/162/2010/Rev.2, August 2014 26 April 2018 pqr Secretariat Panthers Medicines Agency 30 Bluegoon Place, ABC Wharf ABC E14 5EU United Kingdom Subject: Letter of intent for the submission of a worksharing procedure to the pqr according to Article 20 of Commission Regulation (EC) No 1234/2008 Worksharing Applicant details: Name : xxx-yy R&D Address : 1, lane Pierre Brossolette 91385 Chilly-Maz Sw Contact person details (i.e. 
name, address, e-mail address, phone number, fax number) : Ibizo Halley 1, lane Pierre Brossolette 91385 Chilly-Maz Sw zzil: Ibizo.halley@xxx.com Tel : + 33 1 60 49 51 61 Application details: This letter of intent for the submission of a Type II following a worksharing procedure according to Article 20 of Commission Regulation (EC) No 1234/2008, concerns the following medicinal products authorised via MRP and national procedures: Products authorized via MRP: Alfuzosin 2.5 mg film-coated tablets Product name Active substance(s) MRP number XATRAL Alfuzosin hydrochloride SE/H/0112/001 mailto:Ibizo.halley@xxx.com Alfuzosin 5 mg prolonged-release tablets Product name Active substance(s) MRP number XATRAL SR 5 MG Alfuzosin hydrochloride SE/H/0112/002 XATRAL Alfuzosin hydrochloride SE/H/0112/002 Alfuzosin 10 mg prolonged-release tablets Product name Active substance(s) MRP number XATRAL UNO 10 MG Alfuzosin hydrochloride SE/H/0112/003 ALFUZOSIN WINTHROP UNO 10 MG Alfuzosin hydrochloride DE/H/2130/001 ALFUZOSIN ZENTIVA 10 MG Alfuzosin hydrochloride DE/H/2131/001/MR UROXATRAL Alfuzosin hydrochloride DE/H/2129/001 Alfuzosin Zentiva 10 mg Retardtabletten Alfuzosin hydrochloride DE/H/2131/001 XATRAL OD 10 MG Alfuzosin hydrochloride SE/H/0112/003 Products authorised via national procedure: Alfuzosin 2.5 mg film-coated tablets Product name Active substance(s) National MA number Member state XATRAL Alfuzosin hydrochloride NO APPLICATION CODE -#10600 Denmark XATRAL 2.5 MG Alfuzosin hydrochloride NL 14785 France ALFUZOSIN WINTHROP 2.5 MG Alfuzosin hydrochloride 32177.00.00 Germany UROXATRAL Alfuzosin hydrochloride 18111.00.00 Germany XATRAL Alfuzosin hydrochloride NO APPLICATION CODE -#10602 Greece XATRAL 2.5 MG Alfuzosin hydrochloride PA 540/162/1 Ireland XATRAL Alfuzosin hydrochloride 027314018 Italy MITTOVAL Alfuzosin hydrochloride 026670024 Italy ALFUZOSINA ZENTIVA Alfuzosin hydrochloride NO APPLICATION CODE -#10163 Italy XATRAL Alfuzosin hydrochloride RVG 13689 Netherlands DALFAZ Alfuzosin hydrochloride R/6812 Poland BENESTAN 2.5 MG Alfuzosin hydrochloride 60031 Spain XATRAL 2.5 MG Alfuzosin hydrochloride PL 04425/0655 United Kingdom ALFUZOSIN HYDROCHLORIDE 2.5MG Alfuzosin hydrochloride PL 17780/0220 United Kingdom Alfuzosin 5 mg prolonged-release tablets Product name Active substance(s) National MA number Member state XATRAL 5 RETARD Alfuzosin hydrochloride NAT-H-4908-01 Belgium XATRAL Alfuzosin hydrochloride 17139 Cyprus XATRAL LP 5 MG Alfuzosin hydrochloride NL 19090 France ALFUZOSIN WINTHROP 5 MG Alfuzosin hydrochloride 34637.00.00 Germany XATRAL Alfuzosin hydrochloride NO APPLICATION CODE -#10812 Greece ALFETIM SR 5 MG Alfuzosin hydrochloride OGYI-T-4374/01 Hungary ALFUZOSINA ZENTIVA Alfuzosin hydrochloride NO APPLICATION CODE -#8994 Italy XATRAL 5 RETARD Alfuzosin hydrochloride 583/98/12/4785 Luxembourg XATRAL SR 5 MG Alfuzosin hydrochloride MA082/05001 Malta DALFAZ SR Alfuzosin hydrochloride 8127 Poland XATRAL LP 5 MG Alfuzosin hydrochloride 1026/2008 Romania XATRAL 5-SR Alfuzosin hydrochloride 77/0275/96-S Slovakia BENESTAN RETARD 5 MG Alfuzosin hydrochloride 60767 Spain Alfuzosin 10 mg prolonged-release tablets Product name Active substance(s) National MA number Member state XATRAL UNO 10 MG Alfuzosin hydrochloride NAT-H-4908-04 Belgium XATRAL XL 10 MG Alfuzosin hydrochloride 19244 Cyprus XATRAL SR 10 MG Alfuzosin hydrochloride 345201 Estonia XATRAL CR 10 MG Alfuzosin hydrochloride 13973 Finland ALFUZOSINE ZENTIVA LP 10 MG Alfuzosin hydrochloride NL 24407 France XATRAL LP 10 MG Alfuzosin 
hydrochloride NL 24386 France XATRAL OD Alfuzosin hydrochloride NO APPLICATION CODE -#9520 Greece ALFETIM UNO 10 MG Alfuzosin hydrochloride OGYI-T-8022/01 Hungary XATRAL 10 MG Alfuzosin hydrochloride PA 540/162/3 Ireland MITTOVAL Alfuzosin hydrochloride 026670048-051 Italy XATRAL 10 MG Alfuzosin hydrochloride 027314044-057 Italy ALFUZOSINA ZENTIVA Alfuzosin hydrochloride NO APPLICATION CODE -#9579 Italy XATRAL SR 10 MG Alfuzosin hydrochloride 99-0702 Latvia XATRAL SR 10 MG Alfuzosin hydrochloride LT-2000/7118/10 Lithuania XATRAL UNO 10 MG Alfuzosin hydrochloride 0005/01/09/0045 Luxembourg XATRAL XL 10 MG Alfuzosin hydrochloride MA082/05002 Malta XATRAL XR 10 MG Alfuzosin hydrochloride RVG 23923 Netherlands DALFAZ UNO Alfuzosin hydrochloride 8378 Poland BENESTAN OD 10 MG Alfuzosin hydrochloride 99/H/0006/01 Portugal ALFUZOSINA ZENTIVA, 10 MG Alfuzosin hydrochloride 99/H/0007/001 Portugal XATRAL SR 10 MG Alfuzosin hydrochloride 7893/2006 Romania UNIBENESTAN 10 MG Alfuzosin hydrochloride 63605 Spain XATRAL XL 10 MG Alfuzosin hydrochloride PL 04425/0657 United Kingdom BESAVAR XL Alfuzosin hydrochloride PL 17780/0221 United Kingdom The following variation is intended to be part of the work-sharing procedure: Number as in the classification guideline: Title of variation as in the classification guideline Type of variation: C.I.4 Changes in the Summary of Product Characteristics, Labelling or package Leaflet due new quality, preclinical, clinical or pharmacovigilance data Type II Justification for worksharing : xxx submitted for alfuzosin hydrochloride separate national and MRP variations for implementation of CCDS V13 including among other topics the addition of a contraindication to strong CYP3A4 inhibitors in the sections 4.3 and 4.5. The MAH received on 04 April 2018 a letter from pqr (zz/pqr/195547/2018) requesting to re-submit the variation for this contraindication as a work-sharing application including all MRP and nationally authorised products to harmonise the assessment of the contraindication in section 4.3 and 4.5 of the SmPC across the EU (provided in Annex I). Justification for grouping : Not applicable Intended submission date : 30 June 2018 Preferred Reference Authority : The Para Medical Products Agency, as RMS of the MRP procedure SE/H/0112/001-003 Explanation that all MAs concerned belong to the same holder : I hereby confirm that all the marketing authorisations, listed in application details (refer above), concerned by the worksharing procedure belong to the same marketing authorisation holder, as they are part of the same mother company xxx, as per the Commission communication 98/C 229/03. Yours sincerely, Ibizo HALLEY xxx-yy R&D, Europe Region Global Logistics Affairs Europe Please send this letter electronically to the pqr Secretariat (H-pqrSecretariat@zz.europa.eu) or RMS as relevant. mailto:H-pqrSecretariat@zz.europa.eu ANNEX 1 30 Bluegoon Place ● ABC Wharf ● ABC E14 5EU ● United Kingdom Telephone +44 (0)20 3660 6000 Facsimile +44 (0)20 3660 5520 Dr.ssa Maty Lecc xxx S.p.A Viale L. 
Bodio 20158 AUGB Italy E-mail: DRA@xxx.com 4 April 2018 zz/pqr/195547/2018 Subject: Request for submission of variation worksharing procedure for Xatral (alfuzosin) and related names Dear Dr Maty Lecchi, During the March meeting, the pqr was informed that separate national and MRP variations have been submitted across EU Member States to request the inclusion of the below contraindication for Xatral (alfuzosin) and related names: Section 4.3 Concomitant intake of strong inhibitors of CYP3A4 (see paragraph 4.5). The parallel submissions in several Member States have led to a disharmonised assessment of the contraindication. In the interest of public health across the Panthers Union, the pqr requests xxx to re-submit the variation as a worksharing application including all MRP, DCP and nationally authorised products to harmonise the assessment of the contraindication in section 4.3 of the SmPC across the EU. Please note that a separate letter on an independent issue to this has been sent to Esther de Bles, xxx-yy Netherlands B.V.. However, there are general concerns by the pqr on the lack of use of variation worksharing by xxx-yy in these cases. Kind Regards, Laura Oliveira Santamaria Chair of pqr mailto:DRA@xxx.com Worksharing Applicant details: Name xxx-yy R&D, Europe Region Global Logistics Affairs Europe Panthers Medicines Agency 30 ABC St, Michigan lane Fax +44 (0)20 3660 5525 certificate@zz.europa.eu | www.zz.europa.eu This message and any attachment contain information which may be confidential or otherwise protected from disclosure. It is intended for the addressee(s) only and should not be relied upon as legal advice unless it is otherwise stated. If you are not the intended recipient(s) (or authorised by an addressee who received this message), access to this e-mail, or any disclosure or copying of its contents, or any action taken (or not taken) in reliance on it is unauthorised and may be unlawful. If you have received this e-mail in error, please inform the sender immediately. P Please consider the environment and don't print this e-mail unless you really need to From: Ibizo.Halley@xxx.com [mailto:Ibizo.Halley@xxx.com] Sent: 27 April 2018 17:40 To: pqr Secretariat Subject: Alfuzosin Hydrochloride - Request for Worksharing procedure Dear Sirs, Madams, We are pleased to send you a request for the submission of a Type II variation following a worksharing procedure according to Article 20 of Commission Regulation (EC) No 1234/2008 for Alfuzosin hydrochloride containing products. The variation concerns the addition of a contraindication with strong CYP 3A4 inhibitors in section 4.3 and 4.5. The worksharing procedure has been requested to xxx by the chair of pqr, Mme Oliveira Santamaria, the letter is attached as Annex of the letter of intent attached. Thank you in advance for your agreement. Kind regards, Ibizo Halley GEM/EP and OTC switch EU Regional Logistics Product manager Global Logistics Affairs xxx R&D Phone: +33 1 60 49 51 61 logoGRA 1 ________________________________________________________________________ This e-mail has been scanned for all known viruses by Panthers Medicines Agency.
1
1
0
0
0
0
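For the duplicate-lines question above (the package is gensim, whose summarization module is available up to the 3.x series): asking summarize() for a list of sentences and de-duplicating while preserving order is usually enough. A sketch, where document_text stands for the loaded e-mail text:

from gensim.summarization import summarize

sentences = summarize(document_text, ratio=0.2, split=True)   # list of sentences instead of one blob
unique_sentences = list(dict.fromkeys(sentences))             # drop exact repeats, keep first occurrence
summary = '\n'.join(unique_sentences)
print(summary)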
I would like to add some basic Natural Language Processing or Natural Language understanding into a bot I have implemented with the errbot library. This is to add in basic conversation to the bot. So that the operator can have some basic chat with the chatbot. Perhaps leveraging NTLK. Is this something anyone has done already or has any god pointers? Much appreciated.
1
1
0
0
0
0
I have a project that consists of utilizing the kNN algorithm in a csv file and show selected metrics. But when I try to present some metrics it throws a few errors. When trying to use: sensitivity, f1_Score and Precision: sensitivity - print(metrics.recall_score(y_test, y_pred_class)) F1_score - print(metrics.f1_score(y_test, y_pred_class)) Presicion - print(metrics.precision_score(y_test, y_pred_class)) Pycharm throws the following error: ValueError: Target is multiclass but average='binary'. Please choose another average setting The error when trying to print the ROC curve's a little different: ValueError: multiclass format is not supported DATASET LINK TO DATASET: https://www.dropbox.com/s/yt3n1eqxlsb816n/Testfile%20-%20kNN.csv?dl=0 Program import matplotlib import pandas as pd import numpy as np import math import matplotlib.pyplot as plt from matplotlib.dviread import Text from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.neighbors import KNeighborsClassifier #Tools para teste from sklearn import metrics from sklearn.metrics import confusion_matrix from sklearn.metrics import f1_score from sklearn.metrics import accuracy_score def main(): dataset = pd.read_csv('filetestKNN.csv') X = dataset.drop(columns=['Label']) y = dataset['Label'].values X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.34) Classifier = KNeighborsClassifier(n_neighbors=2, p=2, metric='euclidean') Classifier.fit(X_train, y_train) y_pred_class = Classifier.predict(X_test) y_pred_prob = Classifier.predict_proba(X_test)[:, 1] accuracy = Classifier.score(X_test, y_test) confusion = metrics.confusion_matrix(y_test, y_pred_class) print() print("Accuracy") print(metrics.accuracy_score(y_test, y_pred_class)) print() print("Classification Error") print(1 - metrics.accuracy_score(y_test, y_pred_class)) print() print("Confusion matrix") print(metrics.confusion_matrix(y_test, y_pred_class)) #error print(metrics.recall_score(y_test, y_pred_class)) #error print(metrics.roc_curve(y_test, y_pred_class)) #error print(metrics.f1_score(y_test, y_pred_class)) #error print(metrics.precision_score(y_test, y_pred_class)) I just wanted to show the algorithm metrics on the screen.
1
1
0
0
0
0
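For the metric errors in the question above: recall_score, f1_score and precision_score default to average='binary', which only works for two classes, so a multiclass target needs an explicit averaging mode; roc_curve is binary-only, so for multiclass the usual route is to binarize the labels and draw one curve per class from predict_proba. A sketch using the variable names from the question:

from sklearn import metrics
from sklearn.preprocessing import label_binarize

print(metrics.recall_score(y_test, y_pred_class, average='macro'))
print(metrics.f1_score(y_test, y_pred_class, average='macro'))
print(metrics.precision_score(y_test, y_pred_class, average='macro'))

# one ROC curve per class
classes = sorted(set(y_test))
y_test_bin = label_binarize(y_test, classes=classes)
y_score = Classifier.predict_proba(X_test)                 # shape (n_samples, n_classes)
for i, label in enumerate(classes):
    fpr, tpr, _ = metrics.roc_curve(y_test_bin[:, i], y_score[:, i])
    print(label, metrics.auc(fpr, tpr))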
I do side work writing/improving a research project web application for some political scientists. This application collects articles pertaining to the U.S. Supreme Court and runs analysis on them, and after nearly a year and half, we have a database of around 10,000 articles (and growing) to work with. One of the primary challenges of the project is being able to determine the "relevancy" of an article - that is, the primary focus is the federal U.S. Supreme Court (and/or its justices), and not a local or foreign supreme court. Since its inception, the way we've addressed it is to primarily parse the title for various explicit references to the federal court, as well as to verify that "supreme" and "court" are keywords collected from the article text. Basic and sloppy, but it actually works fairly well. That being said, irrelevant articles can find their way into the database - usually ones with headlines that don't explicitly mention a state or foreign country (the Indian Supreme Court is the usual offender). I've reached a point in development where I can focus on this aspect of the project more, but I'm not quite sure where to start. All I know is that I'm looking for a method of analyzing article text to determine its relevance to the federal court, and nothing else. I imagine this will entail some machine learning, but I've basically got no experience in the field. I've done a little reading into things like tf-idf weighting, vector space modeling, and word2vec (+ CBOW and Skip-Gram models), but I'm not quite seeing a "big picture" yet that shows me how just how applicable these concepts can be to my problem. Can anyone point me in the right direction?
1
1
0
1
0
0
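A minimal sketch of the supervised route for the question above: label a sample of the already-collected articles as relevant/irrelevant, turn each article into a tf-idf vector, and train a linear classifier on top. The texts and labels below are placeholders for however the articles are stored.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts  = ["Supreme Court hears arguments ...", "India's Supreme Court rules ..."]   # article bodies
labels = [1, 0]                                            # 1 = federal US Supreme Court, 0 = not

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1, stop_words='english'),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict_proba(["Justices of the US Supreme Court ..."]))   # relevance probability per class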
I am extracting proper nouns from a column containing string data. I want to move the extracted nouns into a new column as a list (or, alternatively, as one noun per additional column). There are an arbitrary (and sometimes large) number of nouns per entry that I'm extracting. I've gotten the extraction done and have moved the values I'm interested in to a list, but I can't figure out how to add them on as a column to the case where I extracted them from because of the difference in length between the list I extracted and the fact that it needs to correspond with a single row. from nltk.tokenize import PunktSentenceTokenizer data = [] norm_data['words'] = [] for sent in norm_data['gtd_summary']: sentences = nltk.sent_tokenize(sent) data = data + nltk.pos_tag(nltk.word_tokenize(sent)) for word in data: if 'NNP' in word[1]: nouns = list(word)[0] norm_data['words'].append(nouns) Current Data X Y 1 Joe Montana walks over to the yard 2 Steve Smith joins the Navy 3 Anne Johnson wants to go to a club 4 Billy is interested in Sally What I want X Y Z 1 Joe Montana walks over to the yard [Joe, Montana] 2 Steve Smith joins the Navy [Steve, Smith, Navy] 3 Anne Johnson wants to go to a club [Anne, Johnson] 4 Billy is interested in Sally [Billy, Sally] OR This would be OK too X Y Z L M 1 Joe Montana walks over to the yard Joe Montana NA 2 Steve Smith joins the Navy Steve Smith Navy 3 Anne Johnson wants to go to a club Anne Johnson NA 4 Billy is interested in Sally Billy Sally NA
1
1
0
0
0
0
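For the question above, applying the extraction per row (rather than accumulating one flat list across all rows) keeps each result aligned with its own row, and pandas can store a list per cell directly. A sketch using the same nltk tools and the column names from the question:

import nltk
import pandas as pd

def proper_nouns(text):
    tagged = nltk.pos_tag(nltk.word_tokenize(str(text)))
    return [word for word, tag in tagged if tag.startswith('NNP')]   # NNP and NNPS

norm_data['words'] = norm_data['gtd_summary'].apply(proper_nouns)

# optional: one noun per extra column instead of a list
nouns_wide = norm_data['words'].apply(pd.Series)
norm_data = norm_data.join(nouns_wide.add_prefix('noun_'))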
I am trying to clean my text data in spreadsheet and it has no NAs. I face this error:TypeError: expected string or bytes-like object. import nltk import numpy as np import pandas as pd from nltk.stem import PorterStemmer from nltk.stem import WordNetLemmatizer from nltk.corpus import stopwords paragraph=pd.read_excel("..") paragraph.info() paragraph['Subject'].dropna(inplace=True) sentence = paragraph['Subject'].apply(nltk.sent_tokenize) lemmatizer=WordNetLemmatizer() # lemmatizer for i in range(len(sentence)): words=nltk.word_tokenize(sentence[i]) words=[lemmatizer.lemmatize(word) for word in words if word not in set(stopwords.words('english'))] sentence[i]=' '.join(words) I am getting these errors below. Traceback (most recent call last): File "<ipython-input-20-95ed150df96b>", line 11, in <module> words=nltk.word_tokenize(sentence[i]) File "C:\Users\320055025\AppData\Local\Continuum\anaconda3\lib\site-packages ltk\tokenize\__init__.py", line 143, in word_tokenize sentences = [text] if preserve_line else sent_tokenize(text, language) File "C:\Users\320055025\AppData\Local\Continuum\anaconda3\lib\site-packages ltk\tokenize\__init__.py", line 105, in sent_tokenize return tokenizer.tokenize(text) File "C:\Users\320055025\AppData\Local\Continuum\anaconda3\lib\site-packages ltk\tokenize\punkt.py", line 1269, in tokenize return list(self.sentences_from_text(text, realign_boundaries)) File "C:\Users\320055025\AppData\Local\Continuum\anaconda3\lib\site-packages ltk\tokenize\punkt.py", line 1323, in sentences_from_text return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)] File "C:\Users\320055025\AppData\Local\Continuum\anaconda3\lib\site-packages ltk\tokenize\punkt.py", line 1323, in <listcomp> return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)] File "C:\Users\320055025\AppData\Local\Continuum\anaconda3\lib\site-packages ltk\tokenize\punkt.py", line 1313, in span_tokenize for sl in slices: File "C:\Users\320055025\AppData\Local\Continuum\anaconda3\lib\site-packages ltk\tokenize\punkt.py", line 1354, in _realign_boundaries for sl1, sl2 in _pair_iter(slices): File "C:\Users\320055025\AppData\Local\Continuum\anaconda3\lib\site-packages ltk\tokenize\punkt.py", line 317, in _pair_iter prev = next(it) File "C:\Users\320055025\AppData\Local\Continuum\anaconda3\lib\site-packages ltk\tokenize\punkt.py", line 1327, in _slices_from_text for match in self._lang_vars.period_context_re().finditer(text): TypeError: expected string or bytes-like object
1
1
0
0
0
0
I'm a student write a simple chatbot for an asterisk telephone server. The AGI script just says a hello message, asks if the person is looking for a group ( sales / support etc ) or for a person, than asks which person / group and redirects them to the group / person, if the person isn't available it gives you the options to eigther, redirect to a colleague ( person in the same que ) , to speak a message into his voicemail, or to wait until the person is available. I had to do some 'hard coding' of strings to make the scipt work for example if a person is not available the caller could say('yhea send me to a COLLEAGUE' or 'yhea you can REDIRECT' , or 'I don't mind talking to SOMEONE else' ) for this part i would like to make use of some simple AI that could understand the user better and give propper responses, I however have no expirience with AI at all but love to learn it. I'm making use os a centos server with free asterisk PBX installed, with my AGI script coded in Python. Is there a way to use the google assistant / dialog flow to return a parameter to my script / server if it matched? I was able to put together the badjokegenerator: BadJokeGenerator but i'm wondering if / how i could send something to my server / script so i could for example let my script redirect a caller to the person he asked for.
1
1
0
0
0
0
So basically I have an array, that consists of 14 rows and 426 Columns, every row represents one property of a dog and every column represents one dog, now I want to know how many dogs are ill, this property is represented by the 14. row. 0 = Healthy and 1 = ill, so how do I count the specific row? I tried to use the numpy.count_nonzero but this counts the values of the entire array, is there a way to tell it to only count a specific row?
1
1
0
0
0
0
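For the counting question above: index the single row first, then count within it. In the sketch below, data stands for the 14 x 426 array, so the 14th row (healthy/ill flags) is index 13.

import numpy as np

ill_count = np.count_nonzero(data[13])        # counts the 1s in the 14th row only
# equivalent and a bit more explicit:
ill_count = int((data[13] == 1).sum())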
I want to do spelling for text in Italian language using textblob, but I find just the code for English language. how can do it? this is the code for English from textblob import TextBlob text = "I am gonig to schol" text = TextBlob(text) print(text.correct()) I am going to school
1
1
0
0
0
0
I have a data frame that has a column with URL links in it. Can someone tell me how to handle these links while pre-processing data in NLP? For eg, the df column looks similar to this- likes text 11 https://www.facebook.com 12 https://www.facebook.com 13 https://www.facebook.com 14 Good morning 15 How are.....you? Do we need to remove these URL links completely or is there another way to deal with them?
1
1
0
0
0
0
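For the URL question above: whether to drop links depends on the task — for sentiment or topic work they usually carry little signal and are removed, though sometimes they are replaced with a placeholder token such as 'URL' so the fact that a link was present is kept. A regex over the column handles either option:

df['text'] = (df['text']
              .str.replace(r'http\S+|www\.\S+', '', regex=True)   # or replace with 'URL' instead of ''
              .str.strip())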
I am trying to train an AI program that predicts stock values. Every single time, my cost is 0 and my test is 100%. I can not seem to find what I am doing wrong. placeholder1 = tf.placeholder(tf.float32, shape=[None, 3]) #trainers dates_train = np.array(dates[0:8000]).astype(np.float32) highPrice_train = np.array(highPrice[0:8000]).astype(np.float32) print(dates_train[0][0]) #testers dates_test = np.array(dates[8000:9564]).astype(np.float32) highPrice_test = np.array(highPrice[8000:9564]).astype(np.float32) def get_training_batch(n): n = min(n,7999) idx = np.random.choice(7999,n) return dates_train[idx],highPrice_train[idx] n_hidden_1 = 100 n_hidden_2 = 100 weights = { 'h1' : tf.Variable(tf.random_normal([3, n_hidden_1])), 'h2' : tf.Variable(tf.random_normal([n_hidden_1,n_hidden_2])), 'out' : tf.Variable(tf.random_normal([n_hidden_2,1])) } biases = { 'b1' : tf.Variable(tf.random_normal([n_hidden_1])), 'b2' : tf.Variable(tf.random_normal([n_hidden_2])), 'out' : tf.Variable(tf.random_normal([1])) } layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(placeholder1, weights['h1']), biases['b1'])) layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])) y = tf.matmul(layer_2,weights['out']) + biases['out'] placeholder2 = tf.placeholder(tf.float32,shape=[None,1]) print("Mean") print(sum(highPrice)/len(highPrice)) mean = tf.reduce_mean(highPrice) print(mean) cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=y, labels=placeholder2)) print("Printing cross_entropy") print(cross_entropy) rate = 0.01 optimizer = tf.train.GradientDescentOptimizer(rate).minimize(cross_entropy) print(optimizer) prediction = tf.nn.softmax(y) print(prediction) ##Training correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(placeholder2,1)) accuracy = 100 * tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print(accuracy) epochs = 1000 batch_size = 10 sess = tf.InteractiveSession() sess.run(tf.global_variables_initializer()) cost = [] accu = [] test_accu = [] for ep in range(epochs): x_feed,y_feed = get_training_batch(batch_size) y_feed = np.reshape(y_feed,[10,1]) _,cos,predictions,acc = sess.run([optimizer, cross_entropy, prediction, accuracy], feed_dict={placeholder1:x_feed, placeholder2:y_feed}) highPrice_test = np.reshape(highPrice_test,[1564,1]) test_acc = accuracy.eval(feed_dict={placeholder1:dates_test, placeholder2:highPrice_test}) cost.append(cos) accu.append(acc) test_accu.append(test_acc) if(ep % (epochs // 10) == 0): print('[%d]: Cos: %.4f, Acc: %.1f%%, Test Acc: %.1f%%' % (ep,cos,acc,test_acc)) plt.plot(cost) plt.title('cost') plt.show() plt.plot(accu) plt.title('Train Accuracy') plt.show() plt.plot(test_accu) plt.title('Test Accuracy') plt.show() index = 36 p = sess.run(prediction, feed_dict = {placeholder1:dates_train[index:index +1]})[0] [0]: Cos: 0.0000, Acc: 100.0%, Test Acc: 100.0% [100]: Cos: 0.0000, Acc: 100.0%, Test Acc: 100.0% That is my output for every single test. I expect there to be a cost and accuracy should not be 100%
1
1
0
0
0
0
I have a dataframe column that contains text data. It has few words with repetitive letters. I want to find all such words, then store these words as keys in a dictionary and their correct spellings as values in the dictionary and then replace the word in the dataframe with its value in the dictionary. For example if my dataframe has words like- id text 1 Hiiiiiii 2 Good morninggggggg 3 See you soooonnnn 1) I need to find such words in the dataframe column 2) store these words in dictionary {Hiiiiiii : Hi, morninggggggg : morning, soooonnnn : soon} 3) then replace these words in the dataframe with their values in the dictionary 4) Final output should look like- id text 1 Hi 2 Good morning 3 See you soon
1
1
0
0
0
0
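A sketch for the question above. The collapsing step here is a heuristic (any letter repeated three or more times is reduced to one), so 'soooonnnn' becomes 'son' rather than 'soon'; recovering true dictionary spellings would need a spell-checker pass on top. The dictionary-building step and the replacement step follow the layout asked for:

import re

elongated = re.compile(r'(\w)\1{2,}')                       # a letter repeated 3+ times

corrections = {}
for text in df['text']:
    for word in str(text).split():
        if elongated.search(word):
            corrections[word] = elongated.sub(r'\1', word)  # naive collapse to a single letter

df['text'] = df['text'].apply(
    lambda s: ' '.join(corrections.get(w, w) for w in str(s).split()))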
So I was using GloVe with my model and it worked, but now I changed to Elmo (reference that Keras code available on GitHub Elmo Keras Github, utils.py however, when I print model.summary I get 0 parameters in the ELMo Embedding layer unlike when I was using Glove is that normal ? If not can you please tell me what am I doing wrong Using glove I Got over 20Million parameters ##--------> When I was using Glove Embedding Layer word_embedding_layer = emb.get_keras_embedding(#dropout = emb_dropout, trainable = True, input_length = sent_maxlen, name='word_embedding_layer') ## --------> Deep layers pos_embedding_layer = Embedding(output_dim =pos_tag_embedding_size, #5 input_dim = len(SPACY_POS_TAGS), input_length = sent_maxlen, #20 name='pos_embedding_layer') latent_layers = stack_latent_layers(num_of_latent_layers) ##--------> 6] Dropout dropout = Dropout(0.1) ## --------> 7]Prediction predict_layer = predict_classes() ## --------> 8] Prepare input features, and indicate how to embed them inputs = [Input((sent_maxlen,), dtype='int32', name='word_inputs'), Input((sent_maxlen,), dtype='int32', name='predicate_inputs'), Input((sent_maxlen,), dtype='int32', name='postags_inputs')] ## --------> 9] ELMo Embedding and Concat all inputs and run on deep network from elmo import ELMoEmbedding import utils idx2word = utils.get_idx2word() ELmoembedding1 = ELMoEmbedding(idx2word=idx2word, output_mode="elmo", trainable=True)(inputs[0]) # These two are interchangeable ELmoembedding2 = ELMoEmbedding(idx2word=idx2word, output_mode="elmo", trainable=True)(inputs[1]) # These two are interchangeable embeddings = [ELmoembedding1, ELmoembedding2, pos_embedding_layer(inputs[3])] con1 = keras.layers.concatenate(embeddings) ## --------> 10]Build model outputI = predict_layer(dropout(latent_layers(con1))) model = Model(inputs, outputI) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['categorical_accuracy']) model.summary() Trials: note: I tried using the TF-Hub Elmo with Keras code, but the output was always a 2D tensor [even when I changed it to 'Elmo' setting and 'LSTM' instead of default']so I couldn't Concatenate with POS_embedding_layer. I tried reshaping but eventually I got the same issue total Parameters 0.
1
1
0
0
0
0
The dataframe column contains few words with repetitive letters. I want to remove words that are entirely made up of same letters from the dataframe column and keep the first occurrence of the letter in other cases where the letters repeat more than 2 times consecutively. df- id text 1 aaaa 2 bb 3 wwwwwwww 4 Hellooooo 5 See youuuu Output id text 1 2 3 4 Hello 5 See you
1
1
0
0
0
0
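For the question above, two regex passes reproduce the expected output: the first removes any word made up of a single repeated letter, the second collapses any letter repeated three or more times down to one occurrence.

import re

def clean(text):
    text = re.sub(r'\b(\w)\1+\b', '', str(text))     # aaaa, bb, wwwwwwww -> removed
    text = re.sub(r'(\w)\1{2,}', r'\1', text)        # Hellooooo -> Hello, youuuu -> you
    return re.sub(r'\s+', ' ', text).strip()

df['text'] = df['text'].apply(clean)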
I'm working on a classification task, using a movie reviews dataset from Kaggle. The part with which I'm struggling is a series of functions, in which the output of one becomes the input of the next. Specifically, in the code provided, the function "word_token" takes the input "phraselist", tokenizes it, and returns a tokenized document titled "phrasedocs". The only problem is that it doesn't seem to be working, because when I take that theoretical document "phrasedocs" and enter it into the next function, "process_token", I get: NameError: name 'phrasedocs' is not defined I am completely willing to accept that there is something simple I have overlooked, but I've been on this for hours and I can't figure it out. I would appreciate any help. I have tried proofreading and debugging the code, but my Python expertise is not great. # This function obtains data from train.tsv def processkaggle(dirPath, limitStr): # Convert the limit argument from a string to an int limit = int(limitStr) os.chdir(dirPath) f = open('./train.tsv', 'r') # Loop over lines in the file and use their first limit phrasedata = [] for line in f: # Ignore the first line starting with Phrase, then read all lines if (not line.startswith('Phrase')): # Remove final end of line character line = line.strip() # Each line has four items, separated by tabs # Ignore the phrase and sentence IDs, keep the phrase and sentiment phrasedata.append(line.split('\t')[2:4]) return phrasedata # Randomize and subset data def random_phrase(phrasedata): random.shuffle(phrasedata) # phrasedata initiated in function processkaggle phraselist = phrasedata[:limit] for phrase in phraselist[:10]: print(phrase) return phraselist # Tokenization def word_token(phraselist): phrasedocs=[] for phrase in phraselist: tokens=nltk.word_tokenize(phrase[0]) phrasedocs.append((tokens, int(phrase[1]))) return phrasedocs # Pre-processing # Convert all tokens to lower case def lower_case(doc): return [w.lower() for w in doc] # Clean text, fixing confusion over apostrophes def clean_text(doc): cleantext=[] for review_text in doc: review_text = re.sub(r"it 's", "it is", review_text) review_text = re.sub(r"that 's", "that is", review_text) review_text = re.sub(r"'s", "'s", review_text) review_text = re.sub(r"'ve", "have", review_text) review_text = re.sub(r"wo n't", "will not", review_text) review_text = re.sub(r"do n't", "do not", review_text) review_text = re.sub(r"ca n't", "can not", review_text) review_text = re.sub(r"sha n't", "shall not", review_text) review_text = re.sub(r"n't", "not", review_text) review_text = re.sub(r"'re", "are", review_text) review_text = re.sub(r"'d", "would", review_text) review_text = re.sub(r"'ll", "will", review_text) cleantext.append(review_text) return cleantext # Remove punctuation and numbers def rem_no_punct(doc): remtext = [] for text in doc: punctuation = re.compile(r'[-_.?!/\%@,":;'{}<>~`()|0-9]') word = punctuation.sub("", text) remtext.append(word) return remtext # Remove stopwords def rem_stopword(doc): stopwords = nltk.corpus.stopwords.words('english') updatestopwords = [word for word in stopwords if word not in ['not','no','can','has','have','had','must','shan','do','should','was','were','won','are','cannot','does','ain','could','did','is','might','need','would']] return [w for w in doc if not w in updatestopwords] # Lemmatization def lemmatizer(doc): wnl = nltk.WordNetLemmatizer() lemma = [wnl.lemmatize(t) for t in doc] return lemma # Stemming def stemmer(doc): porter = nltk.PorterStemmer() stem = [porter.stem(t) for t in 
doc] return stem # This function combines all the previous pre-processing functions into one, which is helpful # if I want to alter these settings for experimentation later def process_token(phrasedocs): phrasedocs2 = [] for phrase in phrasedocs: tokens = nltk.word_tokenize(phrase[0]) tokens = lower_case(tokens) tokens = clean_text(tokens) tokens = rem_no_punct(tokens) tokens = rem_stopword(tokens) tokens = lemmatizer(tokens) tokens = stemmer(tokens) phrasedocs2.append((tokens, int(phrase[1]))) # Any words that pass through the processing # steps above are added to phrasedocs2 return phrasedocs2 dirPath = 'C:/Users/J/kagglemoviereviews/corpus' processkaggle(dirPath, 5000) # returns 'phrasedata' random_phrase(phrasedata) # returns 'phraselist' word_token(phraselist) # returns 'phrasedocs' process_token(phrasedocs) # returns phrasedocs2 NameError Traceback (most recent call last) <ipython-input-120-595bc4dcf121> in <module>() 5 random_phrase(phrasedata) # returns 'phraselist' 6 word_token(phraselist) # returns 'phrasedocs' ----> 7 process_token(phrasedocs) # returns phrasedocs2 8 9 NameError: name 'phrasedocs' is not defined
1
1
0
0
0
0
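The NameError in the question above comes from calling the functions without assigning their return values: word_token returns the tokenized list, but nothing named phrasedocs is ever created at the top level. Chaining the calls through variables fixes it (note that random_phrase also uses limit, which only exists inside processkaggle, so that value would need to be passed in as an argument as well):

dirPath = 'C:/Users/J/kagglemoviereviews/corpus'
phrasedata  = processkaggle(dirPath, 5000)    # capture each return value...
phraselist  = random_phrase(phrasedata)       # ...and feed it into the next step
phrasedocs  = word_token(phraselist)
phrasedocs2 = process_token(phrasedocs)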
I have a dataframe column that contains text data. It has few words entirely made up of with repetitive letters and few others having repetitive letters partially. I want to remove words made up of entirely repetitive letters and just keep the first occurrence of the letter in other case (if the count of the repetitive letter is more than 2) in the dataframe column. How to do this? For example if my dataframe has words like- id text 1 aaaa 2 bb 3 wwwwwwww 4 helloooo 5 see youuuu The output should be- id text 1 2 3 4 hello 5 see you
1
1
0
0
0
0
The dataframe column contains sentences having few three and two letter words that have no meaning. I want to find all such words in the dataframe column and then remove them from the dataframe column. df- id text 1 happy birthday syz 2 vz 3 have a good bne weekend I want to 1) find all words with length less than 3. (this shall return syz, vz, bne) 2) remove these words (Note that the stopwords have already been removed so words like "a", "the" aren't existing in the dataframe column now, the above dataframe is just an example) I tried the below code but it doesn't work def word_length(text): words = [] for word in text: if len(word) <= 3: words.append(word) return(words) short_words = df['text'].apply(word_length).sum() the output should be- id text 1 happy birthday 2 3 have good weekend
1
1
0
0
0
0
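The function in the question above iterates over the characters of each string, not its words, which is why it never collects anything. Splitting each sentence into words and keeping only those longer than three characters gives the expected output directly:

short_words = df['text'].apply(lambda s: [w for w in str(s).split() if len(w) <= 3])   # syz, vz, bne
df['text'] = df['text'].apply(lambda s: ' '.join(w for w in str(s).split() if len(w) > 3))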
I am trying to create a dependency parser from a corpus. The corpus is in conll format so I have a function that reads the files and return a list of lists, in which each list is a parsed sentence (the corpus I'm using is already parsed, my job is to find another alternative in this parse). My professor asked to randomly pick only the 5% of sentences in this corpus, as it is too large. I have tried creating an empty list and use the append function, but I don't know how can I specify by indexing that I want 5 out of each 100 sentences of the corpus The function I have made for converting the conll files is the following: import os, nltk, glob def read_files(path): """ Function to load Ancora Dependency corpora (GLICOM style) path = full path to the files returns de corpus in sentences each sentence is a list of tuples each tuple is a token with the follwoing info: index of the token in the sentence token lemma POS /es pot eliminar POS FEAT /es pot eliminar head DepRelation """ corpus = [] for f in glob.glob(path): sents1 = open(f).read()[185:-2].split(' ') sents2 = [] for n in range(len(sents1)): sents2.append(sents1[n].split(' ')) sents3 = [] for s in sents2: sent = [] for t in s: sent.append(tuple(t.split('\t'))) sents3.append(sent) corpus.extend(sents3) return corpus I want a way of selecting 5 sentences of every 100 in the corpus so I can have a list of lists containing only these. Thanks in advance!
1
1
0
0
0
0
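For the sampling question above, once read_files has returned the full list of parsed sentences, the 5% subset can be drawn in one line; which variant to use depends on whether the original sentence order matters.

import random

corpus = read_files(path)                                  # path as already defined for read_files

subset = random.sample(corpus, max(1, len(corpus) // 20))  # exactly ~5%, order shuffled
# or, keeping the corpus order and picking each sentence with 5% probability:
subset = [sent for sent in corpus if random.random() < 0.05]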
I have trained LDA model using gensim on a text_corpus. >lda_model = gensim.models.ldamodel.LdaModel(text_corpus, 10) Now if a new text document text_sparse_vector has to be inferred I have to do >lda_model[text_sparse_vector] [(0, 0.036479568280206563), (3, 0.053828073308160099), (7, 0.021936618544365804), (11, 0.017499953446152686), (15, 0.010153090454090822), (16, 0.35967516223499041), (19, 0.098570351997275749), (26, 0.068550060242800928), (27, 0.08371562828754453), (28, 0.14110945630261607), (29, 0.089938130046832571)] But how do I get the word distribution for each of the corresponding topics. For example, How do I know top 20 words for topic number 16 ? The class gensim.models.ldamodel.LdaModel has method called show_topics(topics=10, topn=10, log=False, formatted=True), but the as the documentation says it shows randomly selected list of topics. Is there a way to link or print I can map the inferred topic numbers to word distributions ?
1
1
0
0
0
0
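For the question above: gensim's LdaModel exposes per-topic word distributions directly, so the topic ids returned by lda_model[text_sparse_vector] can be looked up one by one instead of going through the randomly sampled show_topics listing.

print(lda_model.show_topic(16, topn=20))        # [(word, probability), ...] for topic 16
print(lda_model.print_topic(16, topn=20))       # the same information as a formatted string

# words for every topic the new document was assigned to
for topic_id, weight in lda_model[text_sparse_vector]:
    print(topic_id, weight, lda_model.show_topic(topic_id, topn=10))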
I figured out how to use tfidf schema to capture distribution of the words along the document. However, I want to create vocabulary of top frequent and least frequent words for list of sentences. Here is some part of text preprocessing: print(my.df) -> (17298, 2) print(df.columns) -> Index(['screen_name', 'text'], dtype='object') txt = re.sub(r"[^\w\s]","",txt) txt = re.sub(r"@([A-Z-a-z0-9_]+)", "", txt) tokens = nltk.word_tokenize(txt) token_lemmetized = [lemmatizer.lemmatize(token).lower() for token in tokens] df['text'] = df['text'].apply(lambda x: process(x)) then this is my second attempt: import nltk nltk.download('stopwords') from nltk.corpus import stopwords import string stop = set(stopwords.words('english')) df['text'] = df['text'].apply(lambda x: [item for item in x if item not in stop]) all_words = list(chain.from_iterable(df['text'])) for i in all_words: x=Counter(df['text'][i]) res= [word for word, count in x.items() if count == 1] print(res) in above approach I want to create most frequent and least frequent words from list of sentences, but my attempt didn't produce that outuput? what should I do? any elegant way to make this happen? any idea? can anyone give me possible idea to make this happen? Thanks example data snippets : here is data that I used and file can be found safely here: example data sample input and output: inputList = {"RT @GOPconvention: #Oregon votes today. That means 62 days until the @GOPconvention!", "RT @DWStweets: The choice for 2016 is clear: We need another Democrat in the White House. #DemDebate #WeAreDemocrats ", "Trump's calling for trillion dollar tax cuts for Wall Street.", From Chatham Town Council to Congress, @RepRobertHurt has made a strong mark on his community. Proud of our work together on behalf of VA!} sample output of tokens ['rt', 'gopconvention', 'oregon', 'vote', 'today', 'that', 'mean', '62', 'day', 'until', 'gopconvention', 'http', 't', 'co', 'ooh9fvb7qs'] output: I want to create vocabulary for most frequent words and least frequent words from give data. any idea to get this done? Thanks
1
1
0
0
0
0
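For the vocabulary question above, a single Counter over all tokens (after the stop-word filtering) gives both ends of the frequency range; counting inside the loop with Counter(df['text'][i]) only ever looks at one row at a time, which is why the earlier attempt did not produce a corpus-level vocabulary.

from collections import Counter
from itertools import chain

counts = Counter(chain.from_iterable(df['text']))        # df['text'] holds token lists at this point

most_common  = counts.most_common(50)                     # 50 most frequent (word, count) pairs
least_common = counts.most_common()[-50:]                 # 50 least frequent
hapaxes = [w for w, c in counts.items() if c == 1]        # words that occur exactly once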
I am scraping files from a website, and want to rename those files based on existing directory names on my computer (or if simpler, a list containing those directory names). This is to maintain a consistent naming convention. For example, I already have directories named: Barone Capital Management, Gabagool Alternative Investments, Aprile Asset Management, Webistics Investments The scraped data consists of some exact matches, some "fuzzy" matches, and some new values: Barone, Gabagool LLC, Aprile Asset Management, New Name, Webistics Investments I want the scraped files to adopt the naming convention of the existing directories. For example, Barone would become Barone Capital Management, and Gabagool LLC would be renamed Gabagool Alternative Investments. So what's the best way to accomplish this? I looked at fuzzywuzzy and some other libraries, but not sure what the right path is. This is my existing code which just names the file based on the anchor: import praw import requests from bs4 import BeautifulSoup import urllib.request url = 'https://old.reddit.com/r/test/comments/b71ug1/testpostr23432432/' headers = {'User-Agent': 'Mozilla/5.0'} page = requests.get(url, headers=headers) soup = BeautifulSoup(page.text, 'html.parser') table = soup.find_all('table')[0] #letter_urls = [] for anchor in table.findAll('a'): try: if not anchor: continue fund_name = anchor.text letter_link = anchor['href'] urllib.request.urlretrieve(letter_link, '2018 Q4 ' + fund_name + '.pdf') except: pass Note that the list of directories are already created, and look something like this: - /Users/user/Dropbox/Letters/Barone Capital Management - /Users/user/Dropbox/Letters/Aprile Asset Management - /Users/user/Dropbox/Letters/Webistics Investments - /Users/user/Dropbox/Letters/Gabagool Alternative Investments - /Users/user/Dropbox/Letters/Ro Capital - /Users/user/Dropbox/Letters/Vitoon Capital
1
1
0
0
0
0
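A sketch for the matching question above using fuzzywuzzy's process.extractOne, which scores a scraped name against every existing directory name and returns the best match with its score; below a chosen threshold the scraped name is kept as-is so genuinely new funds are not mis-filed. The threshold of 80 is a starting point to tune against the real names, not a recommendation.

from fuzzywuzzy import process

existing_dirs = ['Barone Capital Management', 'Gabagool Alternative Investments',
                 'Aprile Asset Management', 'Webistics Investments',
                 'Ro Capital', 'Vitoon Capital']

def canonical_name(scraped_name, choices=existing_dirs, threshold=80):
    match, score = process.extractOne(scraped_name, choices)
    return match if score >= threshold else scraped_name

print(canonical_name('Gabagool LLC'))      # expected to resolve to 'Gabagool Alternative Investments'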
EDITED: I've updated my traceback below I know this kind of problems has been asked for many times, but I have been struggling to this issue 2 days and still can't figure a solution. Here the case: I'm using pycrfsuite (a python implementation of CRF), and this snippets raise UnicodeEncodeError. trainer = pycrfsuite.Trainer(verbose=True) for xseq, yseq in zip(X_train, y_train): trainer.append(xseq, yseq) Error ... Traceback (most recent call last): File "/home/enamoria/Dropbox/NLP/POS-tagger/MyTagger/V2_CRF/src/pos-tag/pos-tag.py", line 46, in <module> trainer.append(xseq, yseq) File "pycrfsuite/_pycrfsuite.pyx", line 312, in pycrfsuite._pycrfsuite.BaseTrainer.append File "stringsource", line 48, in vector.from_py.__pyx_convert_vector_from_py_std_3a__3a_string File "stringsource", line 15, in string.from_py.__pyx_convert_string_from_py_std__in_string UnicodeEncodeError: 'ascii' codec can't encode character '\u201d' in position 0: ordinal not in range(128) \u201d is the closing double quote ” in utf8 encoding. This exception was also raised for \u201c (opening double quote) and \u2026 (ellipsis IIRC) FYI, X_train and y_train is a features representation of a text and its corresponding labels, which I read from a file. I've try using encoding='utf8', errors='ignore' but the error still there for file in filelist: with open(self.datapath + "/" + file, "r", encoding='utf8', errors='ignore') as f: raw_text = [(line.strip(" ").strip(" ").replace(" ", " ").replace(" ", " ")).split(" ") for line in f.readlines() if line != ' '] data.extend(raw_text) My question is: Is pycrfsuite only support ascii encoding? If so, is there any workaround available for me? My data is Vietnamese which ascii can't represent, and a new crf library is the last thing I want Thanks in advance.
1
1
0
0
0
0
I am trying to conduct some NLP on subreddit pages. I have a chunk of code that gathers a bunch of data two web pages. It scrapes data until I get to range(40). This would be fine, except I know that the subreddits I have chosen have more posts than my code is allowing me to scrape. Could anyone figure out what is going on here? posts_test = [] url = 'https://www.reddit.com/r/TheOnion/.json?after=' for i in range(40): res = requests.get(url, headers={'User-agent': 'Maithili'}) the_onion = res.json() for i in range(25): post_t = [] post_t.append(the_onion['data']['children'][i]['data']['title']) post_t.append(the_onion['data']['children'][i]['data']['subreddit']) posts_test.append(post_t) after = the_onion['data']['after'] url = 'https://www.reddit.com/r/TheOnion/.json?after=' + after time.sleep(3) # Not the onion url = 'https://www.reddit.com/r/nottheonion/.json?after=' for i in range(40): res3 = requests.get(url, headers=headers2) not_onion_json = res2.json() for i in range(25): post_t = [] post_t.append(not_onion_json['data']['children'][i]['data']['title']) post_t.append(not_onion_json['data']['children'][i]['data']['subreddit']) posts_test.append(post_t) after = not_onion_json['data']['after'] url = "https://www.reddit.com/r/nottheonion/.json?after=" + after time.sleep(3) --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-57-6c1cfdd42421> in <module> 7 for i in range(25): 8 post_t = [] ----> 9 post_t.append(the_onion['data']['children'][i]['data']['title']) 10 post_t.append(the_onion['data']['children'][i]['data']['subreddit']) 11 posts_test.append(post_t) IndexError: list index out of range"```
1
1
0
0
0
0
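The IndexError almost certainly comes from assuming every page holds exactly 25 children; a hedged sketch of the first loop that iterates over whatever the listing actually returns and stops when there is no next page (URL and user agent copied from the question, everything else unchanged in spirit):

```python
import time
import requests

posts_test = []
base_url = 'https://www.reddit.com/r/TheOnion/.json?after='
after = ''

for _ in range(40):
    res = requests.get(base_url + after, headers={'User-agent': 'Maithili'})
    data = res.json()['data']
    # loop over however many posts this page actually contains
    for child in data['children']:
        posts_test.append([child['data']['title'],
                           child['data']['subreddit']])
    after = data['after']
    if not after:          # Reddit returns null when there are no more pages
        break
    time.sleep(3)
```

Also note the second loop assigns res3 but then reads res2, which looks like a typo and would fail on its own once that loop runs.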
Background : Given a corpus I want to train it with an implementation of word2vec (Gensim). I want to understand if the final similarity between 2 tokens is dependent on the frequency of A and B in the corpus (all contexts preserved), or agnostic of it. Example: (May not be ideal, but using it to elaborate the problem statement) Suppose word 'A' is being used in 3 different contexts within the corpus : Context 1 : 1000 times Context 2 : 50000 times Context 3 : 50000 times 'B' is being used in 2 different contexts : Context 1 : 300 times Context 5 : 1000 times Question : If I change the frequency of 'A' in my corpus (ensuring no context is lost, i.e. 'A' is still being used at least once in all the contexts as in the original corpus), is the similarity between A and B going to be the same? New distribution of 'A' across contexts Context 1 : 5 times Context 2 : 10 times Context 3 : 5000 times Any leads appreciated
1
1
0
0
0
0
I'm newbie to spacy and I've read the docs about token-base matching. I've tried spaCy matcher using the REGEX but I don't have any results. When I use the re library to do the match it works though. Am I doing something wrong in the code. I'm trying to match the "accès'd" word Thanks for your help # REGEX import re text = u"accès'd est ferme aujpourd'hui" pattern_re = re.compile("^acc?é?e?è?s?s?'?D" , re.I) pattern_re.match(text) # <re.Match object; span=(0, 7), match="accès'd"> # REGEX SPACY VERSION 1 import spacy from spacy.matcher import Matcher nlp = spacy.load("fr_core_news_sm") pattern = [{'TEXT': {'REGEX' : "^acc?é?e?è?s?s?'?D"}}] matcher = Matcher(nlp.vocab) matcher.add('AccèsD' , None , pattern) doc = nlp(text) matches = matcher(doc) for match_id, start , end in matches: match_string = nlp.vocab.strings[match_id] span = doc[start:end] print(match_id, match_string, start , end , span.text) # NOTHING # REGEX SPACY VERSION 2 import spacy from spacy.matcher import Matcher nlp = spacy.load("fr_core_news_sm") accesd_flag = lambda text : bool(re.compile(r"^acc?é?e?è?s?s?'?D" , re.I).match(text)) IS_ACCESD = nlp.vocab.add_flag(accesd_flag) pattern= [{IS_ACCESD : True}] matcher = Matcher(nlp.vocab) matcher.add('AccèsD' , None , pattern) doc = nlp(text) matches = matcher(doc) for match_id, start , end in matches: match_string = nlp.vocab.strings[match_id] span = doc[start:end] print(match_id, match_string, start , end , span.text) # NOTHING
1
1
0
0
0
0
I am new to deep learning and NLP, and now trying to get started with the pre-trained Google BERT model. Since I intended to build a QA system with BERT, I decided to start from the SQuAD related fine-tuning. I followed the instructions from README.md in the official Google BERT GitHub repository. I typed the code as following: export BERT_BASE_DIR=/home/bert/Dev/venv/uncased_L-12_H-768_A-12/ export SQUAD_DIR=/home/bert/Dev/venv/squad python run_squad.py \ --vocab_file=$BERT_BASE_DIR/vocab.txt \ --bert_config_file=$BERT_BASE_DIR/bert_config.json \ --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \ --do_train=True \ --train_file=$SQUAD_DIR/train-v1.1.json \ --do_predict=True \ --predict_file=$SQUAD_DIR/dev-v1.1.json \ --train_batch_size=12 \ --learning_rate=3e-5 \ --num_train_epochs=2.0 \ --max_seq_length=384 \ --doc_stride=128 \ --output_dir=/tmp/squad_base/ and after minutes(when the training started), I got this: a lot of output omitted INFO:tensorflow:start_position: 53 INFO:tensorflow:end_position: 54 INFO:tensorflow:answer: february 1848 INFO:tensorflow:***** Running training ***** INFO:tensorflow: Num orig examples = 87599 INFO:tensorflow: Num split examples = 88641 INFO:tensorflow: Batch size = 12 INFO:tensorflow: Num steps = 14599 INFO:tensorflow:Calling model_fn. INFO:tensorflow:Running train on CPU INFO:tensorflow:*** Features *** INFO:tensorflow: name = end_positions, shape = (12,) INFO:tensorflow: name = input_ids, shape = (12, 384) INFO:tensorflow: name = input_mask, shape = (12, 384) INFO:tensorflow: name = segment_ids, shape = (12, 384) INFO:tensorflow: name = start_positions, shape = (12,) INFO:tensorflow: name = unique_ids, shape = (12,) INFO:tensorflow:Error recorded from training_loop: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for /home/bert/Dev/venv/uncased_L-12_H-768_A-12//bert_model.ckpt INFO:tensorflow:training_loop marked as finished WARNING:tensorflow:Reraising captured error Traceback (most recent call last): File "run_squad.py", line 1283, in <module> tf.app.run() File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 125, in run _sys.exit(main(argv)) File "run_squad.py", line 1215, in main estimator.train(input_fn=train_input_fn, max_steps=num_train_steps) File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2400, in train rendezvous.raise_errors() File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/contrib/tpu/python/tpu/error_handling.py", line 128, in raise_errors six.reraise(typ, value, traceback) File "/home/bert/Dev/venv/lib/python3.5/site-packages/six.py", line 693, in reraise raise value File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2394, in train saving_listeners=saving_listeners File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/python/estimator/estimator.py", line 356, in train loss = self._train_model(input_fn, hooks, saving_listeners) File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/python/estimator/estimator.py", line 1181, in _train_model return self._train_model_default(input_fn, hooks, saving_listeners) File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/python/estimator/estimator.py", line 1211, in _train_model_default features, labels, model_fn_lib.ModeKeys.TRAIN, self.config) File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 
2186, in _call_model_fn features, labels, mode, config) File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/python/estimator/estimator.py", line 1169, in _call_model_fn model_fn_results = self._model_fn(features=features, **kwargs) File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2470, in _model_fn features, labels, is_export_mode=is_export_mode) File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1250, in call_without_tpu return self._call_model_fn(features, labels, is_export_mode=is_export_mode) File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1524, in _call_model_fn estimator_spec = self._model_fn(features=features, **kwargs) File "run_squad.py", line 623, in model_fn ) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint) File "/home/bert/Dev/venv/bert/modeling.py", line 330, in get_assignment_map_from_checkpoint init_vars = tf.train.list_variables(init_checkpoint) File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/python/training/checkpoint_utils.py", line 95, in list_variables reader = load_checkpoint(ckpt_dir_or_file) File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/python/training/checkpoint_utils.py", line 64, in load_checkpoint return pywrap_tensorflow.NewCheckpointReader(filename) File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 314, in NewCheckpointReader return CheckpointReader(compat.as_bytes(filepattern), status) File "/home/bert/Dev/venv/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 526, in __exit__ c_api.TF_GetCode(self.status.status)) tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for /home/bert/Dev/venv/uncased_L-12_H-768_A-12//bert_model.ckpt It seems that tensorflow failed to find the checkpoint file, but as far as i know about it, a tensorflow checkpoint "file" is actually three files, and this is correct way to call it(with the path and prefix). I am placing files in the right place, I believe: (venv) bert@bert-System-Product-Name:~/Dev/venv/uncased_L-12_H-768_A-12$ pwd /home/bert/Dev/venv/uncased_L-12_H-768_A-12 (venv) bert@bert-System-Product-Name:~/Dev/venv/uncased_L-12_H-768_A-12$ ls bert_config.json bert_model.ckpt.data-00000-of-00001 bert_model.ckpt.index bert_model.ckpt.meta vocab.txt I am running on Ubuntu 16.04 LTS , with NVIDIA GTX 1080 Ti (CUDA 9.0) , with Anaconda python 3.5 distribution , with tensorflow-gpu 1.11.0 in a virtual environment. I am expecting the code to run smoothly and start training(fine-tune) since it is official code and I got the files placed as instructions.
1
1
0
0
0
0
I'm trying to grasp the idea of a Hierarchical Attention Network (HAN), most of the code i find online is more or less similar to the one here: https://medium.com/jatana/report-on-text-classification-using-cnn-rnn-han-f0e887214d5f : embedding_layer=Embedding(len(word_index)+1,EMBEDDING_DIM,weights=[embedding_matrix], input_length=MAX_SENT_LENGTH,trainable=True) sentence_input = Input(shape=(MAX_SENT_LENGTH,), dtype='int32', name='input1') embedded_sequences = embedding_layer(sentence_input) l_lstm = Bidirectional(LSTM(100))(embedded_sequences) sentEncoder = Model(sentence_input, l_lstm) review_input = Input(shape=(MAX_SENTS,MAX_SENT_LENGTH), dtype='int32', name='input2') review_encoder = TimeDistributed(sentEncoder)(review_input) l_lstm_sent = Bidirectional(LSTM(100))(review_encoder) preds = Dense(len(macronum), activation='softmax')(l_lstm_sent) model = Model(review_input, preds) My question is: What do the input layers here represent? I'm guessing that input1 represents the sentences wrapped with the embedding layer, but in that case what is input2? Is it the output of the sentEncoder? In that case it should be a float, or if it's another layer of embedded words, then it should be wrapped with an embedding layer as well.
1
1
0
1
0
0
I have a table with columns: Location, Basic quals, Preferred quals, and Responsibilities. The last three columns have string entries that I tokenized, and I want to group the columns by Location. When I do this my strings truncate, e.g. "we want an individual who knows python and java." turns into "we want an individual..." How do I keep this from happening? grouped_location=pd.DataFrame(df1['Pref'].groupby(df1['Location'])) grouped_location.columns = ['Loaction','Pref'] grouped_location=grouped_location.set_index('Loaction') grouped_location.iat[0,0] I expect to get 17 [Experience, in, design, verification,, includ (full entry)] but what I get is: 17 [Experience, in, design, verification,, includ...
1
1
0
0
0
0
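The '...' is only pandas' display truncation; the underlying values are intact. A small sketch with a stand-in dataframe (column names taken from the question, the data itself is made up) that collects the full text per Location and widens the printed column:

```python
import pandas as pd

# stand-in for the real data
df1 = pd.DataFrame({
    'Location': [17, 17, 23],
    'Pref': ['Experience in design verification, including UVM',
             'knows python and java',
             'strong communication skills'],
})

# collect the complete strings per location explicitly
grouped = df1.groupby('Location')['Pref'].apply(list)
print(grouped.loc[17])          # full entries, nothing is cut off

# if you print a DataFrame and want to see whole cells, widen the display
pd.set_option('display.max_colwidth', None)   # use -1 on older pandas versions
```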
I am using BERT for feature extraction of a word given the text where it appears, but it seems the current implementation in BERT's official GitHub repo (https://github.com/google-research/bert) can only compute the features of all the words in the text, which makes it consume too many resources. Is it possible to adapt it for this purpose? Thanks!!
1
1
0
0
0
0
First let's extract the TF-IDF scores per term per document: from gensim import corpora, models, similarities documents = ["Human machine interface for lab abc computer applications", "A survey of user opinion of computer system response time", "The EPS user interface management system", "System and human system engineering testing of EPS", "Relation of user perceived response time to error measurement", "The generation of random binary unordered trees", "The intersection graph of paths in trees", "Graph minors IV Widths of trees and well quasi ordering", "Graph minors A survey"] stoplist = set('for a of the and to in'.split()) texts = [[word for word in document.lower().split() if word not in stoplist] for document in documents] dictionary = corpora.Dictionary(texts) corpus = [dictionary.doc2bow(text) for text in texts] tfidf = models.TfidfModel(corpus) corpus_tfidf = tfidf[corpus] Printing it out: for doc in corpus_tfidf: print doc [out]: [(0, 0.4301019571350565), (1, 0.4301019571350565), (2, 0.4301019571350565), (3, 0.4301019571350565), (4, 0.2944198962221451), (5, 0.2944198962221451), (6, 0.2944198962221451)] [(4, 0.3726494271826947), (7, 0.27219160459794917), (8, 0.3726494271826947), (9, 0.27219160459794917), (10, 0.3726494271826947), (11, 0.5443832091958983), (12, 0.3726494271826947)] [(6, 0.438482464916089), (7, 0.32027755044706185), (9, 0.32027755044706185), (13, 0.6405551008941237), (14, 0.438482464916089)] [(5, 0.3449874408519962), (7, 0.5039733231394895), (14, 0.3449874408519962), (15, 0.5039733231394895), (16, 0.5039733231394895)] [(9, 0.21953536176370683), (10, 0.30055933182961736), (12, 0.30055933182961736), (17, 0.43907072352741366), (18, 0.43907072352741366), (19, 0.43907072352741366), (20, 0.43907072352741366)] [(21, 0.48507125007266594), (22, 0.48507125007266594), (23, 0.48507125007266594), (24, 0.48507125007266594), (25, 0.24253562503633297)] [(25, 0.31622776601683794), (26, 0.31622776601683794), (27, 0.6324555320336759), (28, 0.6324555320336759)] [(25, 0.20466057569885868), (26, 0.20466057569885868), (29, 0.2801947048062438), (30, 0.40932115139771735), (31, 0.40932115139771735), (32, 0.40932115139771735), (33, 0.40932115139771735), (34, 0.40932115139771735)] [(8, 0.6282580468670046), (26, 0.45889394536615247), (29, 0.6282580468670046)] If we want to find the "saliency" or "importance" of the words within this corpus, can we simple do the sum of the tf-idf scores across all documents and divide it by the number of documents? I.e. >>> tfidf_saliency = Counter() >>> for doc in corpus_tfidf: ... for word, score in doc: ... tfidf_saliency[word] += score / len(corpus_tfidf) ... 
>>> tfidf_saliency Counter({7: 0.12182694202050007, 8: 0.11121194156107769, 26: 0.10886469856464989, 29: 0.10093919463036093, 9: 0.09022272408985754, 14: 0.08705221175200946, 25: 0.08482488519466996, 6: 0.08143359568202602, 10: 0.07480097322359022, 12: 0.07480097322359022, 4: 0.07411881371164887, 13: 0.07117278898823597, 5: 0.07104525967490458, 27: 0.07027283689263066, 28: 0.07027283689263066, 11: 0.060487023243988705, 15: 0.055997035904387725, 16: 0.055997035904387725, 21: 0.05389680556362955, 22: 0.05389680556362955, 23: 0.05389680556362955, 24: 0.05389680556362955, 17: 0.048785635947490406, 18: 0.048785635947490406, 19: 0.048785635947490406, 20: 0.048785635947490406, 0: 0.04778910634833961, 1: 0.04778910634833961, 2: 0.04778910634833961, 3: 0.04778910634833961, 30: 0.045480127933079706, 31: 0.045480127933079706, 32: 0.045480127933079706, 33: 0.045480127933079706, 34: 0.045480127933079706}) Looking at the output, could we assume that the most "prominent" word in the corpus is: >>> dictionary[7] u'system' >>> dictionary[8] u'survey' >>> dictionary[26] u'graph' If so, what is the mathematical interpretation of the sum of TF-IDF scores of words across documents?
1
1
0
0
0
0
I want to compare the similarity in some texts to detect duplicates, but if I use difflib, it returns different ratios depending on the order I give the data. A random example follows. Thanks import difflib a='josephpFRANCES' b='ABswazdfsadSASAASASASAS' seq=difflib.SequenceMatcher(None,a,b) d=seq.ratio()*100 print(d) seq2=difflib.SequenceMatcher(None,b,a) d2=seq2.ratio()*100 print(d2) d = 16.216216216216218 d2 = 10.81081081081081
1
1
0
0
0
0
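SequenceMatcher's block-matching heuristics depend on which string is passed as the second argument, so the score is not guaranteed to be symmetric. One pragmatic workaround, sketched below, is to score both orders and combine them (taking the maximum here is just one possible choice; averaging works too):

```python
import difflib

def symmetric_ratio(a, b):
    """Order-independent similarity score in [0, 100]."""
    forward = difflib.SequenceMatcher(None, a, b).ratio()
    backward = difflib.SequenceMatcher(None, b, a).ratio()
    return max(forward, backward) * 100

a = 'josephpFRANCES'
b = 'ABswazdfsadSASAASASASAS'
print(symmetric_ratio(a, b))
print(symmetric_ratio(b, a))   # same value regardless of order
```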
I'm new to NLP. I'm trying to tokenize sentences using NLTK on Python 3.7, so I used the following code import nltk text4="This is the first sentence.A gallon of milk in the U.S. cost $2.99.Is this the third sentence?Yes,it is!" x=nltk.sent_tokenize(text4) x[0] I was expecting that x[0] would return the first sentence, but I got Out[4]: 'This is the first sentence.A gallon of milk in the U.S. cost $2.99.Is this the third sentence?Yes,it is!' Am I doing anything wrong?
1
1
0
0
0
0
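Nothing is wrong with the call itself: the Punkt sentence tokenizer relies on whitespace after the sentence-final punctuation, and in this text the periods are glued to the next word ("sentence.A", "$2.99.Is"). A hedged pre-processing sketch that inserts the missing space before tokenizing (the regex is an assumption and will not cover every edge case, e.g. quotes after the period):

```python
import re
import nltk

text4 = ("This is the first sentence.A gallon of milk in the U.S. cost $2.99."
         "Is this the third sentence?Yes,it is!")

# add a space when ./?/! follows a lowercase letter or digit and is glued
# to a capital letter, so abbreviations like "U.S." are left alone
spaced = re.sub(r'(?<=[a-z0-9])([.?!])(?=[A-Z])', r'\1 ', text4)

print(nltk.sent_tokenize(spaced))
# likely: ['This is the first sentence.',
#          'A gallon of milk in the U.S. cost $2.99.',
#          'Is this the third sentence?', 'Yes,it is!']
```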
How do I get cumulative unique words from a dataframe column which has more than 500 words per row. The dataframe has ~300,000 rows. I read the csv file into a dataframe with column A having text data. I have tried creating a couple of columns (B & C) by looping through column A, taking the unique words from column A as a set, and appending Column B with the unique words and Column C with the count. Subsequently, I take unique words by taking the union of Column A and Column B from the previous row (as a set). This works for a small number of rows, but once the number of rows exceeds 10,000 performance degrades and the kernel eventually dies. Is there any better way of doing this for a huge dataframe? I tried creating a separate dataframe with just unique words and counts, but still have the issue. Sample code: for index, row in DF.iterrows(): if index == 0: result = set(row['Column A'].lower().split()) DF.at[index, 'Column B'] = result else: result = set(row['Column A'].lower().split()) DF.at[index, 'Column B'] = result.union(DF.loc[index -1, 'Column B']) DF['Column C'] = DF['Column B'].apply(len)
1
1
0
0
0
0
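A sketch of one way to keep this linear and memory-light: maintain a single running set of words and store only the cumulative count per row, instead of storing an ever-growing set in Column B for every row (which is what exhausts memory). The dataframe below is a tiny stand-in; only the column names come from the question:

```python
import pandas as pd

DF = pd.DataFrame({'Column A': ['the quick brown fox',
                                'the lazy dog',
                                'quick dog runs home']})

seen = set()
cumulative = []
for text in DF['Column A']:
    seen.update(str(text).lower().split())   # grow one shared set
    cumulative.append(len(seen))             # keep only the count per row

DF['Column C'] = cumulative
print(DF)
```

If the per-row word sets really are needed, writing them to disk incrementally is likely cheaper than holding 300,000 growing sets in the frame.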
I am a beginner developer who started to study automation by using pywinauto. An overflow error occurs when using application.connect () to connect to an already open program. But application.start() works fine.... Please help me if someone know this part. The source code and error contents are as follows. Source code: import pywinauto app = pywinauto.application.Application() app.connect(title_re='Calculator') Error: OverflowError Traceback (most recent call last) in 1 import pywinauto 2 app = pywinauto.application.Application() ----> 3 app.connect(title_re='Calculator') d:\Anaconda3\lib\site-packages\pywinauto\application.py in connect(self, **kwargs) 972 ).process_id 973 else: --> 974 self.process = findwindows.find_element(**kwargs).process_id 975 connected = True 976 d:\Anaconda3\lib\site-packages\pywinauto\findwindows.py in find_element(**kwargs) 82 so please see :py:func:find_elements for the full parameters description. 83 """ ---> 84 elements = find_elements(**kwargs) 85 86 if not elements: d:\Anaconda3\lib\site-packages\pywinauto\findwindows.py in find_elements(class_name, class_name_re, parent, process, title, title_re, top_level_only, visible_only, enabled_only, best_match, handle, ctrl_index, found_index, predicate_func, active_only, control_id, control_type, auto_id, framework_id, backend, depth) 279 return title_regex.match(t) 280 return False --> 281 elements = [elem for elem in elements if _title_match(elem)] 282 283 if visible_only: d:\Anaconda3\lib\site-packages\pywinauto\findwindows.py in (.0) 279 return title_regex.match(t) 280 return False --> 281 elements = [elem for elem in elements if _title_match(elem)] 282 283 if visible_only: d:\Anaconda3\lib\site-packages\pywinauto\findwindows.py in _title_match(w) 275 def _title_match(w): 276 """Match a window title to the regexp""" --> 277 t = w.rich_text 278 if t is not None: 279 return title_regex.match(t) d:\Anaconda3\lib\site-packages\pywinauto\win32_element_info.py in rich_text(self) 81 def rich_text(self): 82 """Return the text of the window""" ---> 83 return handleprops.text(self.handle) 84 85 name = rich_text d:\Anaconda3\lib\site-packages\pywinauto\handleprops.py in text(handle) 86 length += 1 87 ---> 88 buffer_ = ctypes.create_unicode_buffer(length) 89 90 ret = win32functions.SendMessage( d:\Anaconda3\lib\ctypes_init_.py in create_unicode_buffer(init, size) 286 return buf 287 elif isinstance(init, int): --> 288 buftype = c_wchar * init 289 buf = buftype() 290 return buf OverflowError: cannot fit 'int' into an index-sized integer
1
1
0
0
0
0
How can I train the semantic role labeling model in AllenNLP? I am aware of the allennlp.training.trainer function but I don't know how to use it to train the semantic role labeling model. Let's assume that the training samples are BIO tagged, e.g.: Remove B_O the B_ARG1 fish I_ARG1 in B_LOC the I_LOC background I_LOC
1
1
0
0
0
0
I am expecting the following code to tokenize "this is an example 123" into ['this', 'is', 'an', 'example 123'], but it doesn't treat the number as part of the word. Any suggestion? import re from nltk.tokenize import RegexpTokenizer pattern=re.compile(r"[\w\s\d]+") tokenizer_number=RegexpTokenizer(pattern) tokenizer_number.tokenize("this is an example 123")
1
1
0
0
0
0
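The pattern [\w\s\d]+ includes whitespace, so it swallows the whole sentence as a single token. If the intent is a word optionally followed by a trailing number (so that "example 123" stays together), one pattern that does this is sketched below; the exact pattern is an assumption about the desired output:

```python
from nltk.tokenize import RegexpTokenizer

# a word, optionally followed by whitespace and digits
tokenizer_number = RegexpTokenizer(r'\w+(?:\s+\d+)*')
print(tokenizer_number.tokenize("this is an example 123"))
# ['this', 'is', 'an', 'example 123']
```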
I have the following dict: {'time pickup': 8, 'pickup drop': 7, 'bus good': 5, 'good bus': 5, 'best service': 4, 'rest stop': 4, 'comfortable journey': 4, 'good service': 4, 'everything good': 3, 'staff behaviour': 3, ...} You can see that the entries at index 2 and 3 contain the same words; I need to remove one of them, and removing the less meaningful one is preferred. I am reversing the phrase and later I will remove one by checking if the two match, but its complexity could be high if there are more words. def remDups(s): words = s.split(' ') string =[] for word in words: string.insert(0, word) print("Reversed String:") return (" ".join(string)).strip() If anybody knows an efficient method, please help me out with this.
1
1
0
0
0
0
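Instead of reversing strings and comparing pairs, the phrases can be grouped by their unordered set of words in a single pass, which is roughly O(n) in the number of entries. A sketch (keeping the first phrase seen per word set is an arbitrary choice; deciding which variant is more "meaningful" would need extra logic, e.g. a language-model score):

```python
phrase_counts = {'time pickup': 8, 'pickup drop': 7, 'bus good': 5, 'good bus': 5,
                 'best service': 4, 'rest stop': 4, 'comfortable journey': 4,
                 'good service': 4, 'everything good': 3, 'staff behaviour': 3}

seen = {}
for phrase, count in phrase_counts.items():
    key = frozenset(phrase.split())   # order-insensitive key, O(1) lookup
    if key not in seen:
        seen[key] = (phrase, count)

deduped = dict(seen.values())
print(deduped)   # 'good bus' is dropped because 'bus good' was seen first
```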
I am trying to create a predictive model where the model tells whether the given sentence is correct or not by checking the order of the words in the sentence. The model checks whether the particular sequence of words has already occurred in a huge corpus and makes sense or not. I tried doing this with the word2vec model and computed the cosine similarity or WMD distance of the two sentences, but that only gives the similarity based on the word vector similarity and not the sequence of the words. So if we give the input as 2 sentences: Sentence 1- "I am going to the shop" Sentence 2- "going I am the shop to" the output should indicate that the sentence is invalid or with a similarity of 20% or less. Whereas the word2vec model shows 100% similarity as the words entered are the same irrespective of the order. So I guess it cannot be used for comparing the word order. Any other suggestions could also be very helpful.
1
1
0
0
0
0
I'm new here, and I have a question to ask regarding indexing of tensors in Keras / Tensorflow: I have a vector of length N, which contains indices of words in a vocabulary (indices may repeat). This vector represents a sentence, for example (40, 25, 99, 26, 34, 99, 100, 100...) I also have another vector, or actually a matrix (since it's a batch of examples), of the same length N, where each word in the original vector is assigned a weight W_i. I want to sum up the weights for a specific word across the whole sentence so that I can get a map from word index to the sum of weights for that word in the sentence, and I want to do it in a vectorized way. For example, assuming the sentence is (1, 2, 3, 4, 5, 3), and the weights are (0, 1, 0.5, 0.1, 0.6, 0.5), I want the result to be some mapping: 1->0 2->1 3->1 4->0.1 5->0.6 How can I achieve something like that without the need to iterate through each element? I was thinking something along the direction of a sparse tensor (since the possible vocabulary is very large), but I don't know how to implement it efficiently. Can anyone help? I basically want to implement a pointer-generator network and this part is required when calculating the probabilty of copying an input word rather than generating one.
1
1
0
1
0
0
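For a single example this is exactly what a segment sum does: the word ids act as segment ids and the weights are summed per id in one vectorized op. A TensorFlow 2 sketch (the vocabulary size is an assumption; for a batch, one common trick is to offset each example's ids by example_index * vocab_size before the same call, and for very large vocabularies a sparse representation may be preferable):

```python
import tensorflow as tf

vocab_size = 6                                            # assumed tiny vocab
word_ids = tf.constant([1, 2, 3, 4, 5, 3])                # the sentence
weights  = tf.constant([0.0, 1.0, 0.5, 0.1, 0.6, 0.5])    # one weight per position

# sum the weights of repeated word ids; index = word id
per_word = tf.math.unsorted_segment_sum(weights, word_ids,
                                         num_segments=vocab_size)
print(per_word.numpy())   # [0.  0.  1.  1.  0.1 0.6]
```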
How to find the frequency of an individual word from the corpus using Tf-idf. Below is my sample code, now I want to print the frequency of a word. How can I achieve this? from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer() corpus = ['This is the first document.', 'This is the second second document.', 'And the third one.', 'Is this the first document?',] X = vectorizer.fit_transform(corpus) X print(vectorizer.get_feature_names()) X.toarray() vectorizer.vocabulary_.get('document') print(vectorizer.get_feature_names()) X.toarray() vectorizer.vocabulary_.get('document')
1
1
0
0
0
0
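CountVectorizer already holds raw term counts, so the corpus-wide frequency of a word is just the column sum of X (note this is a count, not a tf-idf weight; TfidfVectorizer would give weights instead). A sketch reusing the question's corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = ['This is the first document.',
          'This is the second second document.',
          'And the third one.',
          'Is this the first document?']

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)

counts = X.sum(axis=0).A1                      # column sums as a flat array
freq = dict(zip(vectorizer.get_feature_names(), counts))  # get_feature_names_out() on newer sklearn
print(freq['document'])   # 3
print(freq['second'])     # 2
```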
I came up with a CFG for a text input having left recursion and I would like to eliminate it using the well known rule of adding another production including a null production. Can someone please guide me as to how to add a null production in NLTK grammar string? I tried with NT -> '', but it did not work.
1
1
0
0
0
0
I am using a word embeddings model (FastText via Gensim library) to expand the terms of a search. So, basically if the user write "operating system" my goal is to expand that term with very similar terms like "os", "windows", "ubuntu", "software" and so on. The model works very well but now the time has come to improve the model with "external information", with "external information" i mean OOV (out-of-vocabulary) terms OR terms that do not have good context. Following the example i wrote above when the user writes operating system i would like to expand the query with the "general" terms: Terms built in the FastText model: windows ubuntu software AND terms that represent (organizations/companies) like "Microsoft", "Apple" so the complete query will be: term: operating system query: operating system, os, software, windows, ios, Microsoft, Apple My problem is that i DO NOT have companies inside the corpus OR, if present, i do not have to much context to "link" Microsoft to "operating system". For example if i extract a piece inside the corpus i can read "... i have started working at Microsoft in November 2000 with my friend John ..." so, as you can see, i cannot contextualize "Microsoft" word because i do not have good context, indeed. A small recap: I have a corpus where the companies (terms) have poor context I have a big database with companies and the description of what they do. What i need to do: I would like to include the companies in my FastText model and set "manually" their words context/cloud of related terms. Ideas?
1
1
0
1
0
0
I am trying to initialize an array with another after making changes to it. Using Numpy library function on python working on default pydataset import numpy as np from pydataset import data iris_data=data('iris') iris_arr=iris_data.values sp_l = iris_arr[:,0] #sepal.length sp_w = iris_arr[:,1] #sepal.width sp_l = np.array(sp_l) sp_w = np.array(sp_w) if(sp_l.any() <= 5 and sp_w.any() <= 3): sp_le = np.asarray(sp_l) sp_we = np.asarray(sp_w) NameError: name 'sp_le' is not defined I expected sp_le to be initialized
1
1
0
0
0
0
I'm making a beginner AI to distinguish apples from oranges by weight and texture, and it comes up with a syntax error on labels. Here's the code: from sklearn import tree ## In Features 1 = Smooth, 0 = Bumpy features = [[140, 1], [130, 1], [150, 0], [170, 0] labels = ["apple", "apple", "orange", "orange"] clf = tree.DecisionTreeClassifier() clf = clf.fit(features, labels) print clf.predict([[150, 0]])
1
1
0
0
0
0
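For reference, a version that parses and runs: the features list in the question is missing its closing bracket (which is why the error is reported on the labels line), and on Python 3 print needs parentheses. The rest is unchanged:

```python
from sklearn import tree

# Features: [weight, texture]; 1 = smooth, 0 = bumpy
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = ["apple", "apple", "orange", "orange"]

clf = tree.DecisionTreeClassifier()
clf = clf.fit(features, labels)
print(clf.predict([[150, 0]]))   # most likely ['orange']
```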
I am trying to read different language encoding models like golve, fasttext and word3vec and detecting the sarcasm but I am unable to read google's language encoding file. It's giving permission denied error. what should I do? I tried different encoding and giving all permission to the file as well but still no luck EMBEDDING_FILE = 'C:/Users/Abhishek/Documents/sarcasm/GoogleNews-vectors-negative300.bin/' def get_coefs(word, *arr): return word, np.asarray(arr, dtype='float32') embeddings_index = dict(get_coefs(*o.rstrip().rsplit(' ')) for o in open(EMBEDDING_FILE,encoding="ISO-8859-1")) embed_size = 300 word_index = tokenizer.word_index nb_words = min(max_features, len(word_index)) embedding_matrix = np.zeros((nb_words, embed_size)) for word, i in word_index.items(): if i >= max_features: continue embedding_vector = embeddings_index.get(word) if embedding_vector is not None: embedding_matrix[i] = embedding_vector PermissionError Traceback (most recent call last) <ipython-input-10-5d122ae40ef0> in <module> 1 EMBEDDING_FILE = 'C:/Users/Abhishek/Documents/sarcasm/GoogleNews-vectors-negative300.bin/' 2 def get_coefs(word, *arr): return word, np.asarray(arr, dtype='float32') ----> 3 embeddings_index = dict(get_coefs(*o.rstrip().rsplit(' ')) for o in open(EMBEDDING_FILE,encoding="ISO-8859-1")) 4 embed_size = 300 5 word_index = tokenizer.word_index PermissionError: [Errno 13] Permission denied: 'C:/Users/Abhishek/Documents/sarcasm/GoogleNews-vectors-negative300.bin/'
1
1
0
0
0
0
Dataframe with below structure - ID text 0 Language processing in python th is great 1 Relace the string Dictionary named custom fix {'Relace': 'Replace', 'th' : 'three'} Tried the code and the output is coming as - Current output - ID text 0 Language processing in pythirdon three is great 1 Replace threee string Code: def multiple_replace(dict, text): # Create a regular expression from the dictionary keys regex = re.compile("(%s)" % "|".join(map(re.escape, dict.keys()))) # For each match, look-up corresponding value in dictionary return regex.sub(lambda mo: dict[mo.string[mo.start():mo.end()]], text) df['col1'] = df.apply(lambda row: multiple_replace(custom_fix, row['text']), axis=1) Expected Output - ID text 0 Language processing in python three is great 1 Replace the string
1
1
0
0
0
0
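The extra replacements ("pythirdon", "threee") come from 'th' matching inside 'python' and 'the'; anchoring the alternation with \b word boundaries restricts it to whole words. A sketch with a stand-in dataframe (only the dictionary and texts are taken from the question):

```python
import re
import pandas as pd

custom_fix = {'Relace': 'Replace', 'th': 'three'}
df = pd.DataFrame({'text': ['Language processing in python th is great',
                            'Relace the string']})

def multiple_replace(mapping, text):
    # \b...\b makes 'th' match only as a standalone word
    pattern = re.compile(r'\b(%s)\b' % '|'.join(map(re.escape, mapping.keys())))
    return pattern.sub(lambda m: mapping[m.group(0)], text)

df['text'] = df['text'].apply(lambda t: multiple_replace(custom_fix, t))
print(df['text'].tolist())
# ['Language processing in python three is great', 'Replace the string']
```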
I'm implementing an application that tracks the locations of Australia's sharks through analysing a Twitter dataset. So I'm using shark as the keyword and search for the tweets that contain "shark" and a location phrase. So the question is how to identify that "Airlie Beach at Hardy Reef" is the one that is correlated to "shark"? If it's possible, can anyone provide working Python code to demonstrate? Thank you so much!
1
1
0
0
0
0
I use a LineairSVM to predict the sentiment of tweets. The LSVM classifies the tweets as neutral or positive. I use a Pipeline to (in order) clean, vectorize and classify the tweets. But when predicting the sentiment I'm only able to get a 0 (for neg) or 4 (neg). I want to get predicting scores between -1 and 1 in decimal digits to get a better scale/understanding of 'how' positive and negative the tweets are: the code: #read in influential twitter users on stock market twitter_users = pd.read_csv('core/infl_users.csv', encoding = "ISO-8859-1") twitter_users.columns = ['users'] df = pd.DataFrame() #MODEL TRAINING #read trainingset for model : csv to dataframe df = pd.read_csv("../trainingset.csv", encoding='latin-1') #label trainingsset dataframe columns frames = [df] for colnames in frames: colnames.columns = ["target","id","data","query","user","text"] #remove unnecessary columns df = df.drop("id",1) df = df.drop("data",1) df = df.drop("query",1) df = df.drop("user",1) pat1 = r'@[A-Za-z0-9_]+' # remove @ mentions fron tweets pat2 = r'https?://[^ ]+' # remove URL's from tweets combined_pat = r'|'.join((pat1, pat2)) #addition of pat1 and pat2 www_pat = r'www.[^ ]+' # remove URL's from tweets negations_dic = {"isn't":"is not", "aren't":"are not", "wasn't":"was not", "weren't":"were not", # converting words like isn't to is not "haven't":"have not","hasn't":"has not","hadn't":"had not","won't":"will not", "wouldn't":"would not", "don't":"do not", "doesn't":"does not","didn't":"did not", "can't":"can not","couldn't":"could not","shouldn't":"should not","mightn't":"might not", "mustn't":"must not"} neg_pattern = re.compile(r'\b(' + '|'.join(negations_dic.keys()) + r')\b') def tweet_cleaner(text): # define tweet_cleaner function to clean the tweets soup = BeautifulSoup(text, 'lxml') # call beautiful object souped = soup.get_text() # get only text from the tweets try: bom_removed = souped.decode("utf-8-sig").replace(u"\ufffd", "?") # remove utf-8-sig codeing except: bom_removed = souped stripped = re.sub(combined_pat, '', bom_removed) # calling combined_pat stripped = re.sub(www_pat, '', stripped) #remove URL's lower_case = stripped.lower() # converting all into lower case neg_handled = neg_pattern.sub(lambda x: negations_dic[x.group()], lower_case) # converting word's like isn't to is not letters_only = re.sub("[^a-zA-Z]", " ", neg_handled) # will replace # by space words = [x for x in tok.tokenize(letters_only) if len(x) > 1] # Word Punct Tokenize and only consider words whose length is greater than 1 return (" ".join(words)).strip() # join the words # Build a list of stopwords to use to filter stopwords = list(STOP_WORDS) # Use the punctuations of string module punctuations = string.punctuation # Creating a Spacy Parser parser = English() class predictors(TransformerMixin): def transform(self, X, **transform_params): return [clean_text(text) for text in X] def fit(self, X, y=None, **fit_params): return self def get_params(self, deep=True): return {} # Basic function to clean the text def clean_text(text): return text.strip().lower() def spacy_tokenizer(sentence): mytokens = parser(sentence) mytokens = [word.lemma_.lower().strip() if word.lemma_ != "-PRON-" else word.lower_ for word in mytokens] #mytokens = [word.lemma_.lower().strip() for word in mytokens] mytokens = [word for word in mytokens if word not in stopwords and word not in punctuations] #mytokens = preprocess2(mytokens) return mytokens # Vectorization # Convert a collection of text documents to a matrix of token counts # ngrams : 
extension of the unigram model by taking n words together # big advantage: it preserves context. -> words that appear together in the text will also appear together in a n-gram # n-grams can increase the accuracy in classifying pos & neg vectorizer = CountVectorizer(tokenizer = spacy_tokenizer, ngram_range=(1,1)) # Linear Support Vector Classification. # "Similar" to SVC with parameter kernel=’linear’ # more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples. # LinearSVC take as input two arrays: an array X of size [n_samples, n_features] holding the training samples, and an array y of class labels (strings or integers), size [n_samples]: classifier = LinearSVC(C=0.5) # Using Tfidf tfvectorizer = TfidfVectorizer(tokenizer=spacy_tokenizer) #put tweet-text in X and target in ylabels to train model X = df['text'] ylabels = df['target'] #T he next step is to load the data and split it into training and test datasets. In this example, # we will use 80% of the dataset to train the model.This 80% is then splitted again in 80-20. 80% tot train the model, 20% to test results. # the remaining 20% is kept to train the final model X_tr, X_kast, y_tr, y_kast = train_test_split(X, ylabels, test_size=0.2, random_state=42) X_train, X_test, y_train, y_test = train_test_split(X_tr, y_tr, test_size=0.2, random_state=42) # Create the pipeline to clean, tokenize, vectorize, and classify # Tying together different pieces of the ML process is known as a pipeline. # Each stage of a pipeline is fed data processed from its preceding stage # Pipelines only transform the observed data (X). # Pipeline can be used to chain multiple estimators into one. # The pipeline object is in the form of (key, value) pairs. # Key is a string that has the name for a particular step # value is the name of the function or actual method. #Fit all the transforms one after the other and transform the data, then fit the transformed data using the final estimator. pipe_tfid = Pipeline([("cleaner", predictors()), ('vectorizer', tfvectorizer), ('classifier', classifier)]) # Fit our data, fit = training the model pipe_tfid.fit(X_train,y_train) # Predicting with a test dataset #sample_prediction1 = pipe_tfid.predict(X_test) accur = pipe_tfid.score(X_test,y_test) when predicting a sentiment score i do pipe_tfid.predict('textoftweet')
1
1
0
0
0
0
I realize this question has been asked before, but none of those solutions seem to be relevant to my problem. I am trying to implement a basic binary classification algorithm using logistic regression to identify whether an image is a cat or a dog. I believe I am structuring the data properly, I am adding a flatten layer before the initial dense layer which I believe is accepting the proper shape, then I run it through two more dense layers with the final one having only 2 outputs (which as I understand it, is the way it should be for a binary classification such as this). Please take a look at my code and advise what I can do better to: 1.) Make the prediction output vary (not always choose one or the other) 2.) Make my accuracy and loss vary after the second epoch. I have tried: - varying the number of dense layers and their parameters - changing the size of my dataset (hence the count variable when processing files) - changing the number of epochs - changing the kind model from sgd to adam dataset initialization import numpy as np import cv2 import os import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split import random import keras dataDir = '/content/gdrive/My Drive/AI' categories = ['dog', 'cat'] x, y = [], [] imgSize = 100 for cat in categories: folderPath = os.path.join(dataDir, cat) # path to the respective folders classNum = categories.index(cat) # sets classification number (0 = dog, 1 = cat) count = 0 # used for limiting the number of images to test for file in os.listdir(folderPath): count = count + 1 try: # open image and convert to grayscale img = cv2.imread(os.path.join(folderPath, file), cv2.IMREAD_GRAYSCALE) # resize to a square of predefined dimensions newImg = cv2.resize(img, (imgSize, imgSize)) # add images to x and labels to y x.append(newImg) y.append(classNum) if count >= 100: break; # some images may be broken except Exception as e: pass # y array to categorical y = keras.utils.to_categorical(y, num_classes=2) # shuffle data to increase training random.shuffle(x) random.shuffle(y) x = np.array(x).reshape(-1, imgSize, imgSize, 1) y = np.array(y) # split data into default sized groups (75% train, 25% test) xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size=0.25) # display bar chart objects = ('xTrain', 'xTest', 'yTrain', 'yTest') y_pos = np.arange(len(objects)) maxItems = int((len(x) / 2 ) + 1) arrays = [len(xTrain), len(xTest), len(yTrain), len(yTest)] plt.bar(y_pos, arrays, align='center') plt.xticks(y_pos, objects) plt.ylabel('# of items') plt.title('Items in Arrays') plt.show() model setup from keras.layers import Dense, Flatten from keras.models import Sequential shape = xTest.shape model = Sequential([Flatten(), Dense(100, activation = 'relu', input_shape = shape), Dense(50, activation = 'relu'), Dense(2, activation = 'softmax')]) model.compile(loss = keras.losses.binary_crossentropy, optimizer = keras.optimizers.sgd(), metrics = ['accuracy']) model.fit(xTrain, yTrain, epochs=3, verbose=1, validation_data=(xTest, yTest)) model.summary() which outputs: Train on 150 samples, validate on 50 samples Epoch 1/3 150/150 [==============================] - 1s 6ms/step - loss: 7.3177 - acc: 0.5400 - val_loss: 1.9236 - val_acc: 0.8800 Epoch 2/3 150/150 [==============================] - 0s 424us/step - loss: 3.4198 - acc: 0.7867 - val_loss: 1.9236 - val_acc: 0.8800 Epoch 3/3 150/150 [==============================] - 0s 430us/step - loss: 3.4198 - acc: 0.7867 - val_loss: 1.9236 - val_acc: 0.8800 
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= flatten_13 (Flatten) (None, 10000) 0 _________________________________________________________________ dense_45 (Dense) (None, 100) 1000100 _________________________________________________________________ dense_46 (Dense) (None, 50) 5050 _________________________________________________________________ dense_47 (Dense) (None, 2) 102 ================================================================= Total params: 1,005,252 Trainable params: 1,005,252 Non-trainable params: 0 prediction y_pred = model.predict(xTest) for y in y_pred: print(y) which outputs: [1. 0.] [1. 0.] [1. 0.] . . . [1. 0.]
1
1
0
0
0
0
I have an input image 416x416. How can I create an output of 4 x 10, where 4 is number of columns and 10 the number of rows? My label data is 2D array with 4 columns and 10 rows. I know about the reshape() method but it requires that the resulted shape has same number of elements as the input. With 416 x 416 input size and max pools layers I can get max 13 x 13 output. Is there a way to achieve 4x10 output without loss of data? My input label data looks like for example like [[ 0 0 0 0] [ 0 0 0 0] [ 0 0 0 0] [ 0 0 0 0] [ 0 0 0 0] [ 0 0 0 0] [ 0 0 0 0] [116 16 128 51] [132 16 149 52] [ 68 31 77 88] [ 79 34 96 92] [126 37 147 112] [100 41 126 116]] Which indicates there are 6 objects on my images that i want to detect, first value is xmin, second ymin , third xmax, fourth ymax. The last layer of my networks looks like (None, 13, 13, 1024)
1
1
0
0
0
0
I'm currently using a custom version of YOLO v2 from pjreddie.com written with Tensorflow and Keras. I've successfully got the model to start and finish training over 100 epochs with 10000 training images and 2400 testing images which I randomly generated along with the associated JSON files all on some Titan X gpus with CUDA. I only wish to detect two classes. However, after leaving the training going, the loss function decreases but the test accuracy hovers at below 3%. All the images appear to be getting converted to black and white. The model seems to perform reasonably on one of the classes when using the training data, so the model appears overfitted. What can I do to my code to get the model to become accurate?
1
1
0
0
0
0
I am trying to catch occurrences of general classes of objects using NLTK. For example, trout and herring are types of fish, eagles and sparrows are types of birds. Is there any functionality in NLTK (or any other library for that matter) to help me do this? I am not looking for synonyms since they are just another way of saying the same thing... so for example using wordnet.synsets I get the following as synonyms for 'sparrow': 'hedge_sparrow', 'dunnock', 'Prunella_modularis', 'sparrow', 'true_sparrow'. And for synonyms of 'bird' I get: 'doll', 'snort', 'skirt', 'birdwatch', 'chick', 'hiss', 'hoot', 'raspberry', 'bird', 'Bronx_cheer', 'boo', 'shuttlecock', 'razzing', 'birdie', 'shuttle', 'wench', 'fowl', 'dame', 'razz'. I am looking for a way to say that sparrow is a type of bird.
1
1
0
0
0
0
I will be performing sentiment analysis on fiction. I'll be working with around 300 books of 350 pages. Can I limit the dictionary size by ignoring less frequent words? If so, what is the rule for defining the size?
1
1
0
1
0
0
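There is no fixed rule for the cutoff; a common heuristic is to drop words below some minimum count (or keep only the top-N types) and then check how much of the running text the kept vocabulary still covers. A Counter-based sketch, where the `books` argument is assumed to be an iterable of raw text strings and the thresholds are placeholders to tune:

```python
from collections import Counter

def build_vocab(books, min_count=5, max_size=50000):
    """Keep frequent words and report how much of the corpus they cover."""
    counts = Counter()
    for text in books:
        counts.update(text.lower().split())
    kept = [w for w, c in counts.most_common(max_size) if c >= min_count]
    coverage = sum(counts[w] for w in kept) / sum(counts.values())
    print('kept %d of %d word types, covering %.1f%% of all tokens'
          % (len(kept), len(counts), 100 * coverage))
    return set(kept)

vocab = build_vocab(['a tiny toy corpus a toy', 'another toy text'], min_count=1)
```

For sentiment work it is usually safer to check that sentiment-bearing words survive the cut, or to let a downstream tool such as gensim's Dictionary.filter_extremes do the pruning.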
Problem: I have pairs of sentences that lack a period and a capitalized letter in between them. Need to segment them from each other. I'm looking for some help in picking the good features to improve the model. Background: I'm using pycrfsuite to perform sequence classification and find the end of the first sentence, like so: From brown corpus, I join every two sentences together and get their pos tags. Then, I label every token in the sentence with 'S' if the space follows it and 'P' if the period follows it in the sentence. Then I delete a period between the sentences, and lower the following token. I get something like this: Input: data = ['I love Harry Potter.', 'It is my favorite book.'] Output: sent = [('I', 'PRP'), ('love', 'VBP'), ('Harry', 'NNP'), ('Potter', 'NNP'), ('it', 'PRP'), ('is', 'VBZ'), ('my', 'PRP$'), ('favorite', 'JJ'), ('book', 'NN')] labels = ['S', 'S', 'S', 'P', 'S', 'S', 'S', 'S', 'S'] At the moment, I extract these general features: def word2features2(sent, i): word = sent[i][0] postag = sent[i][1] # Common features for all words features = [ 'bias', 'word.lower=' + word.lower(), 'word[-3:]=' + word[-3:], 'word[-2:]=' + word[-2:], 'word.isupper=%s' % word.isupper(), 'word.isdigit=%s' % word.isdigit(), 'postag=' + postag ] # Features for words that are not # at the beginning of a document if i > 0: word1 = sent[i-1][0] postag1 = sent[i-1][1] features.extend([ '-1:word.lower=' + word1.lower(), '-1:word.isupper=%s' % word1.isupper(), '-1:word.isdigit=%s' % word1.isdigit(), '-1:postag=' + postag1 ]) else: # Indicate that it is the 'beginning of a sentence' features.append('BOS') # Features for words that are not # at the end of a document if i < len(sent)-1: word1 = sent[i+1][0] postag1 = sent[i+1][1] features.extend([ '+1:word.lower=' + word1.lower(), '+1:word.isupper=%s' % word1.isupper(), '+1:word.isdigit=%s' % word1.isdigit(), '+1:postag=' + postag1 ]) else: # Indicate that it is the 'end of a sentence' features.append('EOS') And train crf with these parameters: trainer = pycrfsuite.Trainer(verbose=True) # Submit training data to the trainer for xseq, yseq in zip(X_train, y_train): trainer.append(xseq, yseq) # Set the parameters of the model trainer.set_params({ # coefficient for L1 penalty 'c1': 0.1, # coefficient for L2 penalty 'c2': 0.01, # maximum number of iterations 'max_iterations': 200, # whether to include transitions that # are possible, but not observed 'feature.possible_transitions': True }) trainer.train('crf.model') Results: Accuracy report shows: precision recall f1-score support S 0.99 1.00 0.99 214627 P 0.81 0.57 0.67 5734 micro avg 0.99 0.99 0.99 220361 macro avg 0.90 0.79 0.83 220361 weighted avg 0.98 0.99 0.98 220361 What are some ways I could edit word2features2() in order to improve the model? (or any other part) Here is the link to the full code as it is today. Also, I am just a beginner in nlp so I would greatly appreciate any feedback overall, links to relevant or helpful sources, and rather simple explanations. Thank you very-very much!
1
1
0
1
0
0
I'm trying to solve quite a difficult problem - building a generic parser for job descriptions. The idea is, given a job description, the parser should be able to identify and extract different sections such as job title, location, job description, responsibilities, qualifications etc. The job description will basically be scraped from a web page. A rule based approach (such as regular expressions) doesn't work since the scenario is too generic. My next approach was to train a custom NER classifier using SpaCy; I've done this numerous times before. However, I'm running into several problems. The entities can be very small in size (location, job title etc.) or very large (responsibilities, qualifications etc.). I'm not sure how well NER works if the entities are several lines or a paragraph long? Most of the use cases I've seen are those in which the entities aren't longer than a few words max. Does Spacy's NER work well if the text of the entities I want to identify is quite long in size? (I can give examples if required to make it clearer). Is there any other strategy besides NER that I can use to parse these job descriptions as I've mentioned? Any help here would be greatly appreciated. I've been banging my head along different walls for a few months, and I have made some progress, but I'm not sure if I'm on the right track, or if a better approach exists.
1
1
0
0
0
0
I am trying to extract the text in a Javadoc before the Javadoc tags in python. I have so far been able to avoid the parameter tag, but there are other Javadoc tags that could be mentioned all at once. Is there a better way to do this? parameterTag = "@param" if (parameterTag in comments): splitComments = subsentence.split(my_string[my_string.find(start) + 1: my_string.find(parameterTag)]) Input: /** * Checks if the given node is inside the graph and * throws exception if the given node is null * @param a single node to be check * @return true if given node is contained in graph, * return false otherwise * @requires given node != null */ public boolean containsNode(E node){ if(node==null){ throw new IllegalArgumentException(); } if(graph.containsKey(node)){ return true; } return false; } Output: /** * Checks if the given node is inside the graph and * throws exception if the given node is null */ public boolean containsNode(E node){ if(node==null){ throw new IllegalArgumentException(); } if(graph.containsKey(node)){ return true; } return false; }
1
1
0
0
0
0
What i'm trying to do is to load a machine learning model for summary generation in a pickle object so that when i deploy the code to my web app, it doesn't do the manual loading over and over again. That takes quite a bit of time and I can't afford having the user wait for 10-15 min while the model loads and then the summary is generated. import cPickle as pickle from skip_thoughts import configuration from skip_thoughts import encoder_manager import en_coref_md def load_models(): VOCAB_FILE = "skip_thoughts_uni/vocab.txt" EMBEDDING_MATRIX_FILE = "skip_thoughts_uni/embeddings.npy" CHECKPOINT_PATH = "skip_thoughts_uni/model.ckpt-501424" encoder = encoder_manager.EncoderManager() print "loading skip model" encoder.load_model(configuration.model_config(), vocabulary_file=VOCAB_FILE, embedding_matrix_file=EMBEDDING_MATRIX_FILE, checkpoint_path=CHECKPOINT_PATH) print "loaded" return encoder encoder= load_models() print "Starting cPickle dumping" pickle.dump(encoder, open('/path_to_loaded_model/loaded_model.pkl', "wb")) print "pickle.dump executed" print "starting cpickle loading" loaded_model = pickle.load(open('loaded_model.pkl', 'r')) print "pickle load done" cPickle was initially pickle, but none of them did this in enough time. The first time i tried doing this, the pickle file being created was 11.2GB, which is waaay too big i think. And it took well over an hour rendering my pc useless in the meantime. And the code wasn't done executing, i force restarted my pc because it was taking too long. I'd really appreciate it if anyone could help.
1
1
0
0
0
0
Is it possible to apply a sentence-level LDA model using Gensim as proposed in Bao and Datta(2014)? The paper is here. The distinct feature is that it makes the "one topic per sentence assumption" (p.1376). This is different from other sentence-level methods, which typically allow each sentence to include multiple topics. "The most straightforward method is to treat each sentence as a document and apply the LDA model on the collection of sentences rather than documents." (P.1376). But, I think it is more reasonable to assume that one sentence deals with one topic. Thank you!
1
1
0
0
0
0
I need to understand how the epochs/iterations affect the training of a deep learning model. I am training a NER model with Spacy 2.1.3; my documents are very long, so I cannot train more than 200 documents per iteration. So basically I do: from document 0 to document 200 -> 20 epochs, from document 201 to document 400 -> 20 epochs, and so on. Maybe it is a stupid question, but should the epochs of the next batches be the same as for the first 0-200? So if I chose 20 epochs, must I train the next batches with 20 epochs too? Thanks
1
1
0
1
0
0
Assume that we have an input string I need to buy some chicken. After working a bit on this string, suppose that we've reduced it to buy chicken. My question is, how can we understand that chicken is something related to cafe or supermarket, but not related to locksmith or post office. More specifically, I have n number of point of interest types and I am trying to come up with n probabilities p_1, p_2, ..., p_n where each probability represents the likelihood (or meaningfulness) of string-type pairs. My ultimate goal is to have an unequality containing these n probabilities, which should of course be meaningful. I want to have: p(chicken, synagogue) < p(chicken, supermarket) But not: p(chicken, train_station) > p(chicken, café) I have tried to do google searches and determine these probabilities according to the number of results but it wasn't satisfying at all. For example, when I searched chicken breast EMBASSY: I got 24,500,000 results. For chicken breast SUPERMARKET, number of results was 11,600,000. If we compute the probabilities by only taking these numbers into account, we'd arrive at a conclusion where p(chicken, supermarket) < p(chicken, embassy) which would of course be wrong. Do you have any suggestions on how to approach this problem?
1
1
0
0
0
0
I pretrained a word embedding using wang2vec (https://github.com/wlin12/wang2vec), and i loaded it in python through gensim. When i tried to get the vector of some words not in vocabulary, i obviously get: KeyError: "word 'kjklk' not in vocabulary" So, i thought about adding an item to the vocabulary to map oov (out of vocabulary) words, let's say <OOV>. Since the vocabulary is in Dict format, i would simply add the item {"<OOV>":0}. But, i searched an item of the vocabulary, with model = gensim.models.KeyedVectors.load_word2vec_format(w2v_ext, binary=False, unicode_errors='ignore') dict(list(model.vocab.items())[5:6]) The output was something like {'word': <gensim.models.keyedvectors.Vocab at 0x7fc5aa6007b8>} So, is there a way to add the <OOV> token to the vocabulary of a pretrained word embedding loaded through gensim, and avoid the KeyError? I looked at gensim doc and i found this: https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec.build_vocab but it seems not work with the update parameter.
1
1
0
0
0
0
My problem is a little bit different from simple word similarity. The question is: is there any algorithm for calculating the similarity between a mail address and a name? For example: mail Abd_tml_1132@gmail.com Name Abdullah temel Levenshtein/Hamming distance 11, Jaro distance 0.52, but most likely this mail address belongs to this name.
1
1
0
0
0
0
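One heuristic sketch (not an established algorithm): split the e-mail's local part into fragments, then score each fragment against the name tokens, rewarding prefixes and abbreviation-like subsequences, so 'abd' and 'tml' both line up with 'Abdullah temel' even though plain edit distance on the full strings is large:

```python
import re
from difflib import SequenceMatcher

def name_email_score(name, email):
    """Rough 0..1 similarity between a person name and an e-mail address."""
    local = email.split('@')[0].lower()
    fragments = [f for f in re.split(r'[._\-0-9]+', local) if f]   # ['abd', 'tml']
    tokens = name.lower().split()                                  # ['abdullah', 'temel']
    scores = []
    for frag in fragments:
        char_sim = max(SequenceMatcher(None, frag, tok).ratio() for tok in tokens)
        is_prefix = any(tok.startswith(frag) for tok in tokens)
        scores.append(1.0 if is_prefix else char_sim)
    return sum(scores) / len(scores) if scores else 0.0

print(name_email_score('Abdullah temel', 'Abd_tml_1132@gmail.com'))  # high, roughly 0.8-0.9
```

Training a classifier on labelled (name, address) pairs with features like these would likely beat any single string distance.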
Dummy example: I want the NER to be able to detect locations, animals and sport groups a Matcher \ PhraseMatcher\ EntityRuler (which is more relevant for this use case?) could be used to add "simple" rules like: locations: Chicago, New York animals: Bull, Chicken groups: Chicago Bulls The NER layer should be able to learn that Chicago Bulls is a group and not a location and animal (like using a matcher alone would give) and that other combinations of location + animal are sport groups and not location animal pairs (even if the specific combination didn't exists in the training set) TLDR: I don't want to use the rule based extracted entities as-is, but as hints for another layer that will use them to improve the entity extraction
1
1
0
0
0
0
I am training a GloVe model with my own corpus and I have troubles to save it/load it in an utf-8 format. Here what I tried: from glove import Corpus, Glove #data lines = [['woman', 'umbrella', 'silhouetted'], ['person', 'black', 'umbrella']] #GloVe training corpus = Corpus() corpus.fit(lines, window=4) glove = Glove(no_components=4, learning_rate=0.1) glove.fit(corpus.matrix, epochs=10, no_threads=8, verbose=True) glove.add_dictionary(corpus.dictionary) glove.save('glove.model.txt') The saved file glove.model.txt is unreadable and I can't succeed to save it with a utf-8 encoding. When I try to read it, for exemple by converting it in a Word2Vec format: from gensim.models.keyedvectors import KeyedVectors from gensim.scripts.glove2word2vec import glove2word2vec glove2word2vec(glove_input_file="glove.model.txt", word2vec_output_file="gensim_glove_vectors.txt") model = KeyedVectors.load_word2vec_format("gensim_glove_vectors.txt", binary=False) I have the following error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte Any idea on how I could use my own GloVe model ?
1
1
0
0
0
0
Lets say I am trying to compute the average distance between a word and a document using distances() or compute cosine similarity between two documents using n_similarity(). However, lets say these new documents contain words that the original model did not. How does gensim deal with that? I have been reading through the documentation and cannot find what gensim does with unfound words. I would prefer gensim to not count those in towards the average. So, in the case of distances(), it should simply not return anything or something I can easily delete later before I compute the mean using numpy. In the case of n_similarity, gensim of course has to do it by itself.... I am asking because the documents and words that my program will have to classify will in some instances contain unknown words, names, brands etc that I do not want to be taken into consideration during classification. So, I want to know if I'll have to preprocess every document that I am trying to classify.
1
1
0
0
0
0
I am working on a project to extract a keyword from short texts (3-4 sentences). Using the spaCy library I extract noun phrases and NER and use them as keywords. However, I would like to sort them based on their importance wrt the original text. I tried standard informational retrieval approaches, like tfidf, and even a couple of graph-based algorithms but having such short text the results weren't so great. I was thinking that maybe using a NN with an attention mechanism could help me rank those keywords. Is there any way to use the pre-trained models that come with spaCy to do some kind of ranking?
1
1
0
0
0
0
I'm working on some Artificial Intelligence project and I want to predict the bitcoin trend but while using the model.predict function from Keras with my test_set, the prediction is always equal to 1 and the line in my diagram is therefor always straight. import csv import matplotlib.pyplot as plt import numpy as np import pandas as pd from cryptory import Cryptory from keras.models import Sequential, Model, InputLayer from keras.layers import LSTM, Dropout, Dense from sklearn.preprocessing import MinMaxScaler def format_to_3d(df_to_reshape): reshaped_df = np.array(df_to_reshape) return np.reshape(reshaped_df, (reshaped_df.shape[0], 1, reshaped_df.shape[1])) crypto_data = Cryptory(from_date = "2014-01-01") bitcoin_data = crypto_data.extract_coinmarketcap("bitcoin") sc = MinMaxScaler() for col in bitcoin_data.columns: if col != "open": del bitcoin_data[col] training_set = bitcoin_data; training_set = sc.fit_transform(training_set) # Split the data into train, validate and test train_data = training_set[365:] # Split the data into x and y x_train, y_train = train_data[:len(train_data)-1], train_data[1:] model = Sequential() model.add(LSTM(units=4, input_shape=(None, 1))) # 128 -- neurons**? # model.add(Dropout(0.2)) model.add(Dense(units=1, activation="softmax")) # activation function could be different model.compile(optimizer="adam", loss="mean_squared_error") # mse could be used for loss, look into optimiser model.fit(format_to_3d(x_train), y_train, batch_size=32, epochs=15) test_set = bitcoin_data test_set = sc.transform(test_set) test_data = test_set[:364] input = test_data input = sc.inverse_transform(input) input = np.reshape(input, (364, 1, 1)) predicted_result = model.predict(input) print(predicted_result) real_value = sc.inverse_transform(input) plt.plot(real_value, color='pink', label='Real Price') plt.plot(predicted_result, color='blue', label='Predicted Price') plt.title('Bitcoin Prediction') plt.xlabel('Time') plt.ylabel('Prices') plt.legend() plt.show() The training set performance looks like this: 1566/1566 [==============================] - 3s 2ms/step - loss: 0.8572 Epoch 2/15 1566/1566 [==============================] - 1s 406us/step - loss: 0.8572 Epoch 3/15 1566/1566 [==============================] - 1s 388us/step - loss: 0.8572 Epoch 4/15 1566/1566 [==============================] - 1s 388us/step - loss: 0.8572 Epoch 5/15 1566/1566 [==============================] - 1s 389us/step - loss: 0.8572 Epoch 6/15 1566/1566 [==============================] - 1s 392us/step - loss: 0.8572 Epoch 7/15 1566/1566 [==============================] - 1s 408us/step - loss: 0.8572 Epoch 8/15 1566/1566 [==============================] - 1s 459us/step - loss: 0.8572 Epoch 9/15 1566/1566 [==============================] - 1s 400us/step - loss: 0.8572 Epoch 10/15 1566/1566 [==============================] - 1s 410us/step - loss: 0.8572 Epoch 11/15 1566/1566 [==============================] - 1s 395us/step - loss: 0.8572 Epoch 12/15 1566/1566 [==============================] - 1s 386us/step - loss: 0.8572 Epoch 13/15 1566/1566 [==============================] - 1s 385us/step - loss: 0.8572 Epoch 14/15 1566/1566 [==============================] - 1s 393us/step - loss: 0.8572 Epoch 15/15 1566/1566 [==============================] - 1s 397us/step - loss: 0.8572 I'm supposed to print a plot with the Real Price and the Predicted Price, the Real Price is displayed properly but the Predicted price is only a straight line because of that model.predict that only contains the value 1. Thanks in advance!
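The flat output in the question above is consistent with the model itself: a softmax over a single unit always returns exactly 1.0, which also explains the constant loss across epochs. A minimal sketch of a regression-style head for a single continuous target; the layer sizes are kept as in the original and are otherwise illustrative.

from keras.models import Sequential
from keras.layers import LSTM, Dense

# A single linear output with MSE is the usual setup for predicting one continuous value;
# softmax on one unit collapses every prediction to 1.0.
model = Sequential()
model.add(LSTM(units=4, input_shape=(None, 1)))
model.add(Dense(units=1, activation="linear"))
model.compile(optimizer="adam", loss="mean_squared_error")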
1
1
0
0
0
0
I am currently working on an AI agent that will be able to identify both the start state and the goal state of the famous old plumbing game found here: https://i.imgur.com/o1kFcT3.png http://html5.gamedistribution.com/a73b1e79af45414a88ce3fa091307084/ The idea is to allow the water to flow from the start point to the exit point; the AI can only rotate the tiles, and not all tiles are populated. The problem is that this will be an unguided search. I am really lost, and any help will be appreciated. What I thought about is assigning a number to each tile and rotation and building a series of allowed sequences, but I am not sure that is the best way to go, because the number of sequences will be 10!, which is huge. The other approach could be labelling the openings of each pipe as North, West, South, East and checking whether adjacent tiles link up. The solution should be flexible, and the tiles might shuffle/change, so assigning the goal state manually won't work. Any idea will be greatly appreciated.
1
1
0
0
0
0
How does nltk.pos_tag() work? Does it involve the use of any corpus? I found the source code (nltk.tag - NLTK 3.0 documentation) and it says _POS_TAGGER = 'taggers/maxent_treebank_pos_tagger/english.pickle'. Loading _POS_TAGGER gives an object, nltk.tag.sequential.ClassifierBasedPOSTagger, which does not appear to have been trained on a corpus. The tagging is incorrect when I use a few adjectives in series before a noun (e.g. the quick brown fox). I wonder if I can improve the result by using a better tagging method or by somehow training on a better corpus. Any suggestions?
1
1
0
0
0
0
I've recently started learning Python and I am trying to classify a list of words into positive 'pos' and negative 'neg'. The result I'm looking for is: ('joli': 'pos', 'bravo': 'pos', 'magnifique': 'pos') ('arnaque': 'neg', 'désagréable': 'neg', 'mauvais': 'neg') I have the following code: def word_feats(words): return dict([(word, True) for word in words]) vocab_positif = [ 'joli', 'bravo', 'magnifique'] vocab_negatif = [ 'arnaque', 'désagréable','mauvais'] positive_features = [(word_feats(vocab_positif), 'pos')] negative_features = [(word_feats(vocab_negatif), 'neg')] output : ({'joli': True, 'bravo': True, 'magnifique': True}, 'pos') ({'arnaque': True, 'désagréable': True, 'mauvais': True}, 'neg')
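If the goal in the question above is the word-to-label mapping shown at the top (rather than NLTK-style feature dictionaries), a plain dict comprehension is enough. A minimal sketch:

vocab_positif = ['joli', 'bravo', 'magnifique']
vocab_negatif = ['arnaque', 'désagréable', 'mauvais']

# Map each word directly to its label instead of to True.
labels = {word: 'pos' for word in vocab_positif}
labels.update({word: 'neg' for word in vocab_negatif})

print(labels)              # {'joli': 'pos', 'bravo': 'pos', ..., 'mauvais': 'neg'}
print(labels['mauvais'])   # 'neg'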
1
1
0
0
0
0
I'm trying to calculate the tf-idf of selected words in a corpus, but it doesn't work when I use a regex in the selected words. Below is an example I copied from another question on Stack Overflow and changed slightly to reflect my problem. The code works if I write "chocolate" and "chocolates" separately, but doesn't work if I write 'chocolate|chocolates'. Can someone help me understand why and suggest possible solutions to this problem? keywords = ['tim tam', 'jam', 'fresh milk', 'chocolate|chocolates', 'biscuit pudding'] corpus = {1: "making chocolate biscuit pudding easy first get your favourite biscuit chocolates", 2: "tim tam drink new recipe that yummy and tasty more thicker than typical milkshake that uses normal chocolates", 3: "making chocolates drink different way using fresh milk egg"} tfidf = TfidfVectorizer(vocabulary = keywords, stop_words = 'english', ngram_range=(1,3)) tfs = tfidf.fit_transform(corpus.values()) feature_names = tfidf.get_feature_names() corpus_index = [n for n in corpus] rows, cols = tfs.nonzero() for row, col in zip(rows, cols): print((feature_names[col], corpus_index[row]), tfs[row, col]) tfidf_results = pd.DataFrame(tfs.T.todense(), index=feature_names, columns=corpus_index).T I expect the results to be: ('biscuit pudding', 1) 0.652490884512534 ('chocolates', 1) 0.3853716274664007 ('chocolate', 1) 0.652490884512534 ('chocolates', 2) 0.5085423203783267 ('tim tam', 2) 0.8610369959439764 ('chocolates', 3) 0.5085423203783267 ('fresh milk', 3) 0.8610369959439764 But, now it returns: ('biscuit pudding', 1) 1.0 ('tim tam', 2) 1.0 ('fresh milk', 3) 1.0
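The entries in the vocabulary argument are treated by TfidfVectorizer as literal tokens or n-grams, not as regular expressions, so 'chocolate|chocolates' can never match anything. A hedged sketch of one workaround: collapse the surface variants to a single term in a custom preprocessor and keep only that term in the vocabulary. The function name is illustrative; note that passing preprocessor replaces the default lowercasing, which is why it is done inside the function.

import re
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = {1: "making chocolate biscuit pudding easy first get your favourite biscuit chocolates",
          2: "tim tam drink new recipe that yummy and tasty more thicker than typical milkshake that uses normal chocolates",
          3: "making chocolates drink different way using fresh milk egg"}

keywords = ['tim tam', 'jam', 'fresh milk', 'chocolate', 'biscuit pudding']

def collapse_variants(text):
    # Fold "chocolates" into "chocolate" before tokenization.
    return re.sub(r'\bchocolates\b', 'chocolate', text.lower())

tfidf = TfidfVectorizer(vocabulary=keywords, stop_words='english',
                        ngram_range=(1, 3), preprocessor=collapse_variants)
tfs = tfidf.fit_transform(corpus.values())

feature_names = tfidf.get_feature_names()   # get_feature_names_out() on newer scikit-learn
corpus_index = list(corpus)
for row, col in zip(*tfs.nonzero()):
    print((feature_names[col], corpus_index[row]), tfs[row, col])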
1
1
0
0
0
0
I'm trying to learn Keras with something very simple. I have created a dataframe of 200,000 random letters with two columns, letter and is_x; is_x is set to 1 (or True) if the letter is a capital "X". Here's what I have done so far: model = Sequential() model.add(Dense(32, activation='tanh', input_shape=(X_train.shape[1],))) model.add(Dense(16, activation='tanh')) model.add(Dense(y_train.shape[1], activation='sigmoid')) #model.compile(optimizer=SGD(), loss='categorical_crossentropy', metrics=['accuracy']) model.compile(optimizer=Adam(lr=0.05), loss='binary_crossentropy', metrics=['accuracy']) model.fit(X_train, y_train, batch_size=128, epochs=10, verbose=1, validation_data=(X_test, y_test)) results = model.evaluate(X_test, y_test) y_predict = model.predict(X_test) print(results) print("---") for i in y_predict: print(i) and here are the results: [0.09158177] [0.09158175] [0.09158177] [0.09158177] [0.09158175] [0.09158177] [0.09158173] What I'm trying to get is 1 or 0 depending on whether is_x is True. I simply feed letter as X_ and is_x as y_, but I only get numbers that all look the same, like 0.996 and so on. Also, the accuracy is something like 0.99, but it's far from reality. I'm very confused about the activation, optimizer and loss; I can't understand which to choose or how to solve this simple problem. I have watched lots of training videos on Udemy, but no one explains why and how they use these functions.
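For the question above: a sigmoid output returns a probability in [0, 1], so getting hard 0/1 labels is just a matter of thresholding the predictions. A minimal sketch reusing model and X_test from the snippet; also note that with only one letter out of many being "X", the target is very imbalanced, so an accuracy around 0.99 can be reached by always predicting 0.

import numpy as np

y_probs = model.predict(X_test)          # sigmoid probabilities
y_labels = (y_probs > 0.5).astype(int)   # hard 0/1 decisions
print(y_labels[:10].ravel())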
1
1
0
0
0
0
I have started learning ML from online courses and find it very exciting. The examples are pretty easy to understand (written in Python) and the results are amazing, but all the examples are quite simple and don't explain how to decide how many hidden layers and neurons are needed in the hidden layers, so I searched on Google. Most of the results say it's art and experience. I found one article, beginners-ask-how-many-hidden-layers-neurons-to-use-in-artificial-neural-networks, that shows how, but for large data sets with a lot of parameters I can't really draw boundaries. Is there a way to do it programmatically, or a better way to know how many hidden layers and neurons I need?
1
1
0
1
0
0
I want to create a dictonnary from another one but with keeping the same key. I tried to update the value of my first dictionary but I got the error. So now I am trying to create a new dictionary from the first one. I would like the key to be the same and not change them. # -*- coding: UTF-8 -*- import codecs import re import os import sys, argparse import subprocess import pprint import csv from itertools import islice import pickle try: import treetaggerwrapper from treetaggerwrapper import TreeTagger, make_tags print("import TreeTagger OK") except: print("Import TreeTagger pas Ok") from itertools import islice from collections import defaultdict #export le lexique de sentiments pickle_in = open("dict_pickle", "rb") dico_lexique = pickle.load(pickle_in) # extraction colonne verbatim d = {} with open(sys.argv[1], 'r', encoding="cp1252",) as csv_file: csv_file.readline() for line in csv_file: token = line.split(';') d[token[0]] = token[1] #print(d) #Writing in a new csv file with open('result.csv','wb', sep=';', encoding='Cp1252') as f: w = csv.writer(f) w.writerows(d.items()) tagger = treetaggerwrapper.TreeTagger(TAGLANG='fr') d_tag = {} for key,val in d.items(): newvalues = tagger.tag_text(val) #print(newvalues) for key,val in d_tag.items(): d_tag[key] = d[key] d_tag[val] = newvalues print(d_tag) #Writing in a new csv file, Writing the key to be sure it coincides with open('result.csv','wb', sep=';', encoding='Cp1252') as f: w = csv.writer(f) w.writerows(d_tag.items()) file (this is an example , the original has approx. 6000 lines in the csv id;Verbatim;score 1;tu es laid;5 2;Je suis belle; 6 3;Je n'aime pas la viande;7 What it looks after extracting the first and second column : {'1': 'tu es laid ', '2': 'Je suis belle ', '3': "Je n'aime pas la viande"} Expected answer , I would like the tag to correspond to the key of their original sentence d_tag = { "1" : ['tu\tPRO:PER\ttu', 'es\tVER:pres\têtre', 'laid\tADJ\tlaid'], "2" : ['Je\tPRO:PER\tje', 'suis\tVER:pres\tsuivre|être', 'belle\tADJ\tbeau'], "3" : ['Je\tPRO:PER\tje', "n'\tADV\tne", 'aime\tVER:pres\taimer', 'pas\tADV\tpas', 'la\tDET:ART\tle', 'viande\tNOM\tviande']} Later, I would like to extract only the third word (looping over the second dictionary and rewriting a new one with the same key but containing only the lemma which are at the index[2]. That means obtaining something like this : d_lemma = { "1" : ['tu', 'être', 'laid'], "2" : ['Je', 'suivre|être', 'beau'], "3" : ['Je', "ne", 'aimer', 'pas', 'le', 'viande']} the code above is not working , any idea how to change it, to get the result I am expecting for the second dictionary. Unfortunately, I have to use the key to preserve the sentences so that I will be able to write the values one by one in the csv either at each step or at the end.
1
1
0
0
0
0
TLDR MCTS agent implementation runs without errors locally, achieving win-rates of >40% against heuristic driven minimax but fails the autograder - which is a requirement before the project can be submitted. Autograder throws IndexError: Cannot choose from an empty sequence. I'm looking for suggestions on the part of the code that is most likely to throw this exception. Hi, I am currently stuck at this project, which I need to clear before I get to complete the program that I'm enrolled in, in 2 weeks' time. My task, which I have already completed, is to implement an agent to play against the heuristic-driven minimax agent in a game of Isolation between two chess knights. Full implementation details of the game can be found here. For my project, the game will be played on a board measuring 9 x 11, using bitboard encoding. My implementation of MCTS is straightforward, following closely the pseudocode provided in this paper (pg 6). In essence, the general MCTS approach comprises these 4 parts and they are each implemented by the following nested functions in the CustomPlayer class: Selection - tree_policy Expansion - best_child, expand Simulation - default_policy Backpropagation - backup_negamax, update_scores import math import random import time import logging from copy import deepcopy from collections import namedtuple from sample_players import DataPlayer class CustomPlayer(DataPlayer): """ Implement your own agent to play knight's Isolation The get_action() method is the only required method for this project. You can modify the interface for get_action by adding named parameters with default values, but the function MUST remain compatible with the default interface. ********************************************************************** NOTES: - The test cases will NOT be run on a machine with GPU access, nor be suitable for using any other machine learning techniques. - You can pass state forward to your agent on the next turn by assigning any pickleable object to the self.context attribute. ********************************************************************** """ def get_action(self, state): """ Employ an adversarial search technique to choose an action available in the current state calls self.queue.put(ACTION) at least This method must call self.queue.put(ACTION) at least once, and may call it as many times as you want; the caller will be responsible for cutting off the function after the search time limit has expired. See RandomPlayer and GreedyPlayer in sample_players for more examples. ********************************************************************** NOTE: - The caller is responsible for cutting off search, so calling get_action() from your own code will create an infinite loop! Refer to (and use!) the Isolation.play() function to run games. 
********************************************************************** """ logging.info("Move %s" % state.ply_count) self.queue.put(random.choice(state.actions())) i = 1 statlist = [] while (self.queue._TimedQueue__stop_time - 0.05) > time.perf_counter(): next_action = self.uct_search(state, statlist, i) self.queue.put(next_action) i += 1 def uct_search(self, state, statlist, i): plyturn = state.ply_count % 2 Stat = namedtuple('Stat', 'state action utility visit nround') def tree_policy(state): statecopy = deepcopy(state) while not statecopy.terminal_test(): # All taken actions at this depth tried = [s.action for s in statlist if s.state == statecopy] # See if there's any untried actions left untried = [a for a in statecopy.actions() if a not in tried] topop = [] toappend = [] if len(untried) > 0: next_action = random.choice(untried) statecopy = expand(statecopy, next_action) break else: next_action = best_child(statecopy, 1) for k, s in enumerate(statlist): if s.state == statecopy and s.action == next_action: visit1 = statlist[k].visit + 1 news = statlist[k]._replace(visit=visit1) news = news._replace(nround=i) topop.append(k) toappend.append(news) break update_scores(topop, toappend) statecopy = statecopy.result(next_action) return statecopy def expand(state, action): """ Returns a state resulting from taking an action from the list of untried nodes """ statlist.append(Stat(state, action, 0, 1, i)) return state.result(action) def best_child(state, c): """ Returns the state resulting from taking the best action. c value between 0 (max score) and 1 (prioritize exploration) """ # All taken actions at this depth tried = [s for s in statlist if s.state == state] maxscore = -999 maxaction = [] # Compute the score for t in tried: score = (t.utility/t.visit) + c * math.sqrt(2 * math.log(i)/t.visit) if score > maxscore: maxscore = score del maxaction[:] maxaction.append(t.action) elif score == maxscore: maxaction.append(t.action) if len(maxaction) < 1: logging.error("IndexError: maxaction is empty!") return random.choice(maxaction) def default_policy(state): """ The simulation to run when visiting unexplored nodes. Defaults to uniform random moves """ while not state.terminal_test(): state = state.result(random.choice(state.actions())) delta = state.utility(self.player_id) if abs(delta) == float('inf') and delta < 0: delta = -1 elif abs(delta) == float('inf') and delta > 0: delta = 1 return delta def backup_negamax(delta): """ Propagates the terminal utility up the search tree """ topop = [] toappend = [] for k, s in enumerate(statlist): if s.nround == i: if s.state.ply_count % 2 == plyturn: utility1 = s.utility + delta news = statlist[k]._replace(utility=utility1) elif s.state.ply_count % 2 != plyturn: utility1 = s.utility - delta news = statlist[k]._replace(utility=utility1) topop.append(k) toappend.append(news) update_scores(topop, toappend) return def update_scores(topop, toappend): # Remove outdated tuples. Order needs to be in reverse or pop will fail! for p in sorted(topop, reverse=True): statlist.pop(p) # Add the updated ones for a in toappend: statlist.append(a) return next_state = tree_policy(state) if not next_state.terminal_test(): delta = default_policy(next_state) backup_negamax(delta) return best_child(state, 0) The lack of color formatting does make the code really hard to read. So, please feel free to check it out at my github. I have no issues running the game locally, with my MCTS agent achieving win-rates of >40% (under a 150ms/move limit) against the minimax player. 
However, when I try submitting my code to the autograder, it gets rejected with the IndexError: Cannot choose from an empty sequence exception. From my discussion with the course representatives, we believe that the error is most likely caused by the usage of random.choice(). There are 4 instances of its usage in my implementation: Line 39, before the MCTS algorithm, to feed a random move to the queue Line 66, to randomly select one move that has not been tried Line 114, to randomly select an action should there be a tie in the score of the best moves Line 122, to simulate the game randomly until a terminal state for a chosen move I assume that the game implementation is correct and that calling state.actions() will always return a list of possible moves as long as the state is not terminal. Therefore, the only instance that can trigger this exception is item 3: items 1 and 4 simply select randomly from the available actions, and item 2 has an explicit check in place to make sure that random.choice() is not fed an empty list. Hence, I added logging to item 3 (even though no exception had been thrown while running locally) and, sure enough, did not catch any exception after 50 games. I apologize for the lengthy post, but I do hope that someone out there may be able to spot something that I have missed in my implementation.
1
1
0
0
0
0
I am following the steps mentioned here - http://www.nltk.org/book/ch10.html to load and parse data using a cfg file. When I use the code below I don't face any issue. cp = load_parser('grammars/book_grammars/sql0.fcfg') query = 'What cities are located in China' trees = list(cp.parse(query.split())) answer = trees[0].label()['SEM'] answer = [s for s in answer if s] q = ' '.join(answer) print(q) What I wish to do is take out the sql0.fcfg, make changes to it and load it into the parser again to test it with my own sentences. It is here that I run into issues. I copied the contents of the sql0.fcg file into a txt file, stored in my local system, renamed it as .cfg but when I am parsing it like below I get an error saying nltk.download('C:'). cp = load_parser('C:/Users/212757677/Desktop/mygrammar.fcfg') The second method I tried was to copy the grammar from the fcfg file and try to load it in the following manner. Here I get an error saying 'Unable to parse line 2. Expected arrow' import nltk groucho_grammar = nltk.CFG.fromstring(""" S[SEM=(?np + WHERE + ?vp)] -> NP[SEM=?np] VP[SEM=?vp] VP[SEM=(?v + ?pp)] -> IV[SEM=?v] PP[SEM=?pp] VP[SEM=(?v + ?ap)] -> IV[SEM=?v] AP[SEM=?ap] NP[SEM=(?det + ?n)] -> Det[SEM=?det] N[SEM=?n] PP[SEM=(?p + ?np)] -> P[SEM=?p] NP[SEM=?np] AP[SEM=?pp] -> A[SEM=?a] PP[SEM=?pp] NP[SEM='Country="greece"'] -> 'Greece' NP[SEM='Country="china"'] -> 'China' Det[SEM='SELECT'] -> 'Which' | 'What' N[SEM='City FROM city_table'] -> 'cities' IV[SEM=''] -> 'are' A[SEM=''] -> 'located' P[SEM=''] -> 'in' """) cp = load_parser(groucho_grammar) query = 'What cities are located in China' trees = list(cp.parse(query.split())) answer = trees[0].label()['SEM'] answer = [s for s in answer if s] q = ' '.join(answer) print(q) ValueError: Unable to parse line 2: S[SEM=(?np + WHERE + ?vp)] -> NP[SEM=?np] VP[SEM=?vp] Expected an arrow I just want to edit the existing grammar in sql0.fcfg and parse it. Can someone suggest how to go about this ?
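For the question above, the "Expected an arrow" error comes from feeding a feature grammar (FCFG, with [SEM=...] annotations) to nltk.CFG.fromstring, which only understands plain CFG productions. A sketch of parsing an edited copy of the grammar from a string with NLTK's feature-grammar classes instead; the grammar body is the one quoted above, and FeatureChartParser is the same parser class that load_parser returns for .fcfg resources.

from nltk.grammar import FeatureGrammar
from nltk.parse import FeatureChartParser

sql_grammar = FeatureGrammar.fromstring("""
% start S
S[SEM=(?np + WHERE + ?vp)] -> NP[SEM=?np] VP[SEM=?vp]
VP[SEM=(?v + ?pp)] -> IV[SEM=?v] PP[SEM=?pp]
VP[SEM=(?v + ?ap)] -> IV[SEM=?v] AP[SEM=?ap]
NP[SEM=(?det + ?n)] -> Det[SEM=?det] N[SEM=?n]
PP[SEM=(?p + ?np)] -> P[SEM=?p] NP[SEM=?np]
AP[SEM=?pp] -> A[SEM=?a] PP[SEM=?pp]
NP[SEM='Country="greece"'] -> 'Greece'
NP[SEM='Country="china"'] -> 'China'
Det[SEM='SELECT'] -> 'Which' | 'What'
N[SEM='City FROM city_table'] -> 'cities'
IV[SEM=''] -> 'are'
A[SEM=''] -> 'located'
P[SEM=''] -> 'in'
""")

cp = FeatureChartParser(sql_grammar)
query = 'What cities are located in China'
trees = list(cp.parse(query.split()))
answer = trees[0].label()['SEM']
print(' '.join(s for s in answer if s))   # SELECT City FROM city_table WHERE Country="china"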
1
1
0
0
0
0
I am struggling to train a doc2vec model on a Wikipedia dump. I'm not experienced in setting up a server, and a local machine is out of the question because of the RAM the training requires. I couldn't find a pre-trained model, except for outdated copies for Python 2.
1
1
0
0
0
0
I have used an activation function that I have created on my own (not usually) and I used for my LSTM. Everything went well, I trained my model and saved it as .h5 file. Here is my customized activation function: from keras import backend as k def activate(ab): a = k.exp(ab[:, 0]) b = k.softplus(ab[:, 1]) a = k.reshape(a, (k.shape(a)[0], 1)) b = k.reshape(b, (k.shape(b)[0], 1)) return k.concatenate((a, b), axis=1) def weibull_loglik_discrete(y_true, ab_pred, name=None): y_ = y_true[:, 0] u_ = y_true[:, 1] a_ = ab_pred[:, 0] b_ = ab_pred[:, 1] hazard0 = k.pow((y_ + 1e-35) / a_, b_) hazard1 = k.pow((y_ + 1) / a_, b_) return -1 * k.mean(u_ * k.log(k.exp(hazard1 - hazard0) - 1.0) - hazard1) model = Sequential() model.add(Masking(mask_value=0., input_shape=(max_time, 39))) model.add(LSTM(20, input_dim=11)) model.add(Dense(2)) # Apply the custom activation function mentioned above model.add(Activation(activate)) # discrete log-likelihood for Weibull survival data as my loss function model.compile(loss=weibull_loglik_discrete, optimizer=RMSprop(lr=.001)) # Fit! model.fit(train_x, train_y, nb_epoch=250, batch_size=2000, verbose=2, validation_data=(test_x, test_y)) After training, I save my model as follow: from keras.models import load_model model.save("model_baseline_lstm.h5") Later, when I try to load the model, I run this : from keras.models import load_model model= load_model("model_baseline_lstm.h5") BUT, I get this error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-11-d3f9f7415b5c> in <module>() 13 # model.save("model_baseline_lsm.h5") 14 from keras.models import load_model ---> 15 model= load_model("model_baseline_lsm.h5") /anaconda3/lib/python3.6/site-packages/keras/models.py in load_model(filepath, custom_objects, compile) 238 raise ValueError('No model found in config file.') 239 model_config = json.loads(model_config.decode('utf-8')) --> 240 model = model_from_config(model_config, custom_objects=custom_objects) 241 242 # set weights /anaconda3/lib/python3.6/site-packages/keras/models.py in model_from_config(config, custom_objects) 312 'Maybe you meant to use ' 313 '`Sequential.from_config(config)`?') --> 314 return layer_module.deserialize(config, custom_objects=custom_objects) 315 316 /anaconda3/lib/python3.6/site-packages/keras/layers/__init__.py in deserialize(config, custom_objects) 53 module_objects=globs, 54 custom_objects=custom_objects, ---> 55 printable_module_name='layer') /anaconda3/lib/python3.6/site-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 138 return cls.from_config(config['config'], 139 custom_objects=dict(list(_GLOBAL_CUSTOM_OBJECTS.items()) + --> 140 list(custom_objects.items()))) 141 with CustomObjectScope(custom_objects): 142 return cls.from_config(config['config']) /anaconda3/lib/python3.6/site-packages/keras/models.py in from_config(cls, config, custom_objects) 1321 model = cls() 1322 for conf in config: -> 1323 layer = layer_module.deserialize(conf, custom_objects=custom_objects) 1324 model.add(layer) 1325 return model /anaconda3/lib/python3.6/site-packages/keras/layers/__init__.py in deserialize(config, custom_objects) 53 module_objects=globs, 54 custom_objects=custom_objects, ---> 55 printable_module_name='layer') /anaconda3/lib/python3.6/site-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 140 
list(custom_objects.items()))) 141 with CustomObjectScope(custom_objects): --> 142 return cls.from_config(config['config']) 143 else: 144 # Then `cls` may be a function returning a class. /anaconda3/lib/python3.6/site-packages/keras/engine/topology.py in from_config(cls, config) 1251 A layer instance. 1252 """ -> 1253 return cls(**config) 1254 1255 def count_params(self): /anaconda3/lib/python3.6/site-packages/keras/layers/core.py in __init__(self, activation, **kwargs) 289 super(Activation, self).__init__(**kwargs) 290 self.supports_masking = True --> 291 self.activation = activations.get(activation) 292 293 def call(self, inputs): /anaconda3/lib/python3.6/site-packages/keras/activations.py in get(identifier) 93 if isinstance(identifier, six.string_types): 94 identifier = str(identifier) ---> 95 return deserialize(identifier) 96 elif callable(identifier): 97 if isinstance(identifier, Layer): /anaconda3/lib/python3.6/site-packages/keras/activations.py in deserialize(name, custom_objects) 85 module_objects=globals(), 86 custom_objects=custom_objects, ---> 87 printable_module_name='activation function') 88 89 /anaconda3/lib/python3.6/site-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 158 if fn is None: 159 raise ValueError('Unknown ' + printable_module_name + --> 160 ':' + function_name) 161 return fn 162 else: ValueError: Unknown activation function:activate
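For the load error above: keras.models.load_model cannot resolve custom function names stored in the .h5 file on its own; they have to be supplied through the custom_objects argument. A minimal sketch reusing the two functions defined earlier in the question.

from keras.models import load_model

# Map the names recorded in the saved file back to the Python callables.
model = load_model("model_baseline_lstm.h5",
                   custom_objects={"activate": activate,
                                   "weibull_loglik_discrete": weibull_loglik_discrete})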
1
1
0
0
0
0
I just installed PyCharm and Anaconda. I installed PyTorch in Anaconda, and I can even use "import torch" in Anaconda. I've created a new project in PyCharm with the Anaconda interpreter, but I still can't use PyTorch in PyCharm.
1
1
0
0
0
0
I have a list of strings containing 50 million search queries. [1-500+ words in each query]. I also have a list of strings containing 500 words and phrases I need to return indices of search queries (1) that contain any word or phrase (2). The goal is to only keep queries related to a certain topic (movies) and then use NLP to cluster these filtered queries (stemming -> tf_idf -> pca -> kmeans). I tried to filter queries using nested loops, but it would take more than 10 hours to finish. filtered = [] with open('search_logs.txt', 'r', encoding='utf-8') as f: for i, line in enumerate(f): query, timestamp = line.strip().split('\t') for word in key_words: if word in query: filtered.append(i) I looked into solutions which use regex (word1|word2|...|wordN), but the problem is that i cannot combine queries into a large string since i need to filter irrelevant queries. UPDATE: examples of logs and keywords search_logs.txt 'query timestamp ' 'the dark knight 2019-02-17 19:05:12 ' 'how to do a barrel roll 2019-02-17 19:05:13 ' 'watch movies 2019-02-17 19:05:13 ' 'porn 2019-02-17 19:05:13 ' 'news 2019-02-17 19:05:14 ' 'rami malek 2019-02-17 19:05:14 ' 'Traceback (most recent call last): File "t.py" 2019-02-17 19:05:15 ' .......... # millions of other search queries key_words = [ 'movie', 'movies', 'cinema', 'oscar', 'oscars', 'george lucas', 'ben affleck', 'netflix', .... # hundreds of other words and phrases ]
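A hedged sketch for the filtering step in the question above: compile the whole keyword list into a single alternation with word boundaries, so each query is scanned once by the regex engine instead of looping over hundreds of keywords in Python; irrelevant queries are simply skipped, so nothing needs to be concatenated. The keyword list here is truncated for illustration.

import re

key_words = ['movie', 'movies', 'cinema', 'oscar', 'oscars',
             'george lucas', 'ben affleck', 'netflix']   # the full 500-entry list in practice

# One compiled pattern; \b stops 'oscar' from matching inside unrelated words.
pattern = re.compile(r'\b(?:' + '|'.join(map(re.escape, key_words)) + r')\b')

filtered = []
with open('search_logs.txt', 'r', encoding='utf-8') as f:
    f.readline()                         # skip the header line
    for i, line in enumerate(f):
        query = line.split('\t')[0]
        if pattern.search(query):
            filtered.append(i)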
1
1
0
0
0
0
How to approach the problem of building a Punctuation Predictor? The working demo for the question can be found in this link. Input Text is as below: "its been a little while Kirk tells me its actually been three weeks now that Ive been using this device right here that is of course the Galaxy S ten I mean Ive just been living with this phone this has been my phone has the SIM card in it I took photos I lived live I sent tweets whatsapp slack email whatever other app this was my smart phone"
1
1
0
0
0
0
I have data with 2 important columns, Product Name and Product Category, and I want to classify a search term into a category. My approach (in Python, using scikit-learn and Dask-ML) to creating a classifier was: clean the Product Name column of stopwords, numbers, etc.; create a 90%/10% train-test split; convert the text to vectors using OneHotEncoder; create a classifier (Naive Bayes) on the training data; test the classifier. I realized the OneHotEncoder (or any encoder) converts the text to numbers by creating a matrix that takes into account where and how many times a word occurs. Q1. Do I need to convert words to vectors before or after the train-test split? Q2. When I search for new words (which may not already be in the text), how will I classify them, given that if I encode the search term it will not match the encoder fitted on the training data? Can anybody help me with an approach so that I can classify a search term into a category even if the term doesn't exist in the training data?
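A minimal sketch of the usual ordering for the two questions above: split first, fit the vectorizer on the training text only, and only transform everything else. A fitted CountVectorizer/TfidfVectorizer silently ignores tokens it has never seen, so a new search term still maps into the same feature space (possibly as an all-zero vector). The product names and categories below are illustrative placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

train_names = ["red cotton shirt", "blue denim shirt", "wireless bluetooth headphones",
               "usb charging cable", "ceramic coffee mug", "steel water bottle"]
train_categories = ["clothing", "clothing", "electronics", "electronics", "kitchen", "kitchen"]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_names)        # vocabulary comes from training data only
clf = MultinomialNB().fit(X_train, train_categories)

# New search term: transform() only; unseen words ("green") are dropped, known ones ("shirt") are kept.
print(clf.predict(vectorizer.transform(["green shirt"])))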
1
1
0
1
0
0
Based on the grammar in the chapter 7 of the NLTK Book: grammar = r""" NP: {<DT|JJ|NN.*>+} # ... """ I want to expand NP (noun phrase) to include multiple NP joined by CC (coordinating conjunctions: and) or , (commas) to capture noun phrases like: The house and tree The apple, orange and mango Car, house, and plane I cannot get my modified grammar to capture those as a single NP: import nltk grammar = r""" NP: {<DT|JJ|NN.*>+(<CC|,>+<NP>)?} """ sentence = 'The house and tree' chunkParser = nltk.RegexpParser(grammar) words = nltk.word_tokenize(sentence) tagged = nltk.pos_tag(words) print(chunkParser.parse(tagged)) Results in: (S (NP The/DT house/NN) and/CC (NP tree/NN)) I've tried moving the NP to the beginning: NP: {(<NP><CC|,>+)?<DT|JJ|NN.*>+} but I get the same result (S (NP The/DT house/NN) and/CC (NP tree/NN))
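One way to phrase the grammar in the question above without referring back to NP (RegexpParser tag patterns cannot recurse within a single rule) is to repeat the noun-phrase tag sequence after an optional conjunction/comma group. A hedged sketch; the exact output depends on the POS tags assigned by nltk.pos_tag.

import nltk

# Repeat the tag sequence after each <CC|,> instead of referencing <NP>.
grammar = r"""
NP: {<DT|JJ|NN.*>+(<CC|,>+<DT|JJ|NN.*>+)*}
"""

chunkParser = nltk.RegexpParser(grammar)
for sentence in ['The house and tree', 'The apple, orange and mango']:
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    print(chunkParser.parse(tagged))
# e.g. (S (NP The/DT house/NN and/CC tree/NN))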
1
1
0
0
0
0
Is there any way we can extract all the wikipedia entities from the text using Wikipedia2Vec? Or is there any other way to do the same. Example: Text : "Scarlett Johansson is an American actress." Entities : [ 'Scarlett Johansson' , 'American' ] I want to do it in Python Thanks
1
1
0
0
0
0
I'm trying to use Textacy to calculate the TF-IDF score for a single word across the standard corpus, but am a bit unclear about the result I am receiving. I was expecting a single float which represented the frequency of the word in the corpus. So why am I receiving a list (?) of 7 results? "acculer" is actually a French word, so was expecting a result of 0 from an English corpus. word = 'acculer' vectorizer = textacy.Vectorizer(tf_type='linear', apply_idf=True, idf_type='smooth') tf_idf = vectorizer.fit_transform(word) logger.info("tf_idf:") logger.info(tfidf) Output tf_idf: (0, 0) 2.386294361119891 (1, 1) 1.9808292530117262 (2, 1) 1.9808292530117262 (3, 5) 2.386294361119891 (4, 3) 2.386294361119891 (5, 2) 2.386294361119891 (6, 4) 2.386294361119891 The second part of the question is how can I provide my own corpus to the TF-IDF function in Textacy, esp. one in a different language? EDIT As mentioned by @Vishal I have logged the ouput using this line: logger.info(vectorizer.vocabulary_terms) It seems the provided word acculer has been split into characters. {'a': 0, 'c': 1, 'u': 5, 'l': 3, 'e': 2, 'r': 4} (1) How can I get the TF-IDF for this word against the corpus, rather than each character? (2) How can I provide my own corpus and point to it as a param? (3) Can TF-IDF be used at a sentence level? ie: what is the relative frequency of this sentence's terms against the corpus.
1
1
0
1
0
0
I built an NLP classifier based on Naive Bayes using Python and scikit-learn. The point is that I want my classifier to classify a new text that does not belong to my training or testing data set. In another model, like regression, I can extract the theta values so that I can predict any new value; however, I know that Naive Bayes works by calculating the probability of each word against every class. For example, my data set includes 1000 records of short texts such as "it was so good", "i like it", "i don't like this movie", etc., and each text is classified as either +ve or -ve. I split my data set into training and testing sets, and everything is OK. Now I want to classify a brand new text like "Oh, I like this movie and the sound track was perfect". How do I make my model predict on this text? Here is the code: from sklearn.feature_extraction.text import CountVectorizer cv = CountVectorizer(max_features=850) X = cv.fit_transform(corpus).toarray() y = dataset.iloc[:, 1].values from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 10) from sklearn.naive_bayes import GaussianNB classifier = GaussianNB() classifier.fit(X_train, y_train) y_pred = classifier.predict() from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) Now I am expecting to feed in new texts like "good movie and nice sound track" and "acting was so bad" and let my classifier predict whether each is good or bad: Xnew = [["good movie and nice sound track"], ["acting was so bad"]] ynew = classifier.predict(Xnew) but I get an error: jointi = np.log(self.class_prior_[i]) 436 n_ij = - 0.5 * np.sum(np.log(2. * np.pi * self.sigma_[i, :])) --> 437 n_ij -= 0.5 * np.sum(((X - self.theta_[i, :]) ** 2) / 438 (self.sigma_[i, :]), 1) 439 joint_log_likelihood.append(jointi + n_ij) TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U32') dtype('<U32') dtype('<U32') I also wonder whether I can get the probability of each word in the bag-of-words of my corpus. Thanks in advance.
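For the error above: new text has to go through the same fitted CountVectorizer before predict(), and each example should be a plain string rather than a nested list, otherwise the classifier receives raw strings and the dtype('<U32') subtraction fails. A minimal sketch reusing cv and classifier from the snippet.

# Transform (not fit_transform) the new texts with the vectorizer fitted on the training corpus.
new_reviews = ["good movie and nice sound track", "acting was so bad"]
X_new = cv.transform(new_reviews).toarray()
print(classifier.predict(X_new))

# Per-word, per-class statistics learned by GaussianNB are exposed on the fitted model,
# e.g. classifier.theta_ (per-class feature means), aligned with cv.get_feature_names()
# (get_feature_names_out() on newer scikit-learn).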
1
1
0
0
0
0
I am cleaning a column in my data frame, Sumcription, and am trying to do 3 things: Tokenize Lemmantize Remove stop words import spacy nlp = spacy.load('en_core_web_sm', parser=False, entity=False) df['Tokens'] = df.Sumcription.apply(lambda x: nlp(x)) spacy_stopwords = spacy.lang.en.stop_words.STOP_WORDS spacy_stopwords.add('attach') df['Lema_Token'] = df.Tokens.apply(lambda x: " ".join([token.lemma_ for token in x if token not in spacy_stopwords])) However, when I print for example: df.Lema_Token.iloc[8] The output still has the word attach in it: attach poster on the wall because it is cool Why does it not remove the stop word? I also tried this: df['Lema_Token_Test'] = df.Tokens.apply(lambda x: [token.lemma_ for token in x if token not in spacy_stopwords]) But the str attach still appears.
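For the question above: the membership test compares spaCy Token objects against a set of strings, which is never true, so nothing is ever filtered out. Testing token.text (or the lemma) against the set works; a minimal sketch reusing df and spacy_stopwords from the snippet.

# token is a spaCy Token, spacy_stopwords holds plain strings, so compare the text instead.
df['Lema_Token'] = df.Tokens.apply(
    lambda x: " ".join(token.lemma_ for token in x
                       if token.text.lower() not in spacy_stopwords))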
1
1
0
0
0
0
I have a dataframe as follows: df = pd.DataFrame(data=[[1, 'Berlin',], [2, 'Paris', ], [3, 'Lausanne', ], [4, 'Bayswater',], [5, 'Table Bay', ], [6, 'Bejing',], [7, 'Bombay',], [8, 'About the IIS']], columns=['id', 'text'],) and I want to use jaro_winkler from the jellyfish library to calculate the similarity score of each string compared with all the rest, and either output the most similar one or get a similarity score matrix like this: str1 str2 str3 str1 1 0.6 0.7 str2 0.6 1 0.3 str3 0.7 0.3 1 How can I get this result in a fast way? Right now I am just using a loop to compare each pair and storing the results in a list: def sim_cal(string1, string2): similar = jellyfish.jaro_winkler(string1, string2) return similar But if the data becomes large this gets very slow, so is there any way to speed it up? Thanks.
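A hedged sketch for the question above: the matrix is symmetric with ones on the diagonal, so each pair only needs to be computed once, which roughly halves the work; jellyfish's jaro_winkler is already C-backed, so beyond this the usual speed-up is multiprocessing over row blocks. It reuses df from the snippet and assumes a jellyfish version exposing jaro_winkler().

import numpy as np
import pandas as pd
import jellyfish

texts = df['text'].tolist()
n = len(texts)
sim = np.eye(n)                              # diagonal: every string matches itself

for i in range(n):
    for j in range(i + 1, n):                # each pair once, mirrored across the diagonal
        score = jellyfish.jaro_winkler(texts[i], texts[j])
        sim[i, j] = sim[j, i] = score

sim_df = pd.DataFrame(sim, index=texts, columns=texts)
print(sim_df)
print(sim_df.mask(np.eye(n, dtype=bool)).idxmax())   # most similar other string for each one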
1
1
0
0
0
0
So, basically, I'm trying to code a program that solves mazes. I did many tests with different mazes and I realize that my program isn't able to solve all kinds of mazes, only a few, since there some specific dead ends that my program gets stuck and cannot go back. The logic behind my code is basically something that runs all over the maze until it finds the exit, and, if it finds a dead end during this processes, it should be able to go back and find a new unexplored path. My code was working well until I start to test it with more complex mazes with different kinds of tricky dead ends. For Example: maze = ( [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,0,1,1,1,0,1,1,1,1,1,1,1,1,1], [1,0,0,0,0,0,0,0,1,0,0,0,0,0,1], [1,1,1,0,1,1,1,0,1,0,1,0,1,0,1], [1,1,1,0,1,0,1,0,1,0,1,1,1,0,1], [1,1,1,1,1,0,1,0,1,0,0,1,0,0,1], [1,1,0,0,0,0,1,0,1,1,0,1,0,1,1], [1,1,0,1,1,0,0,0,0,1,0,1,0,1,1], [1,1,0,0,1,1,0,1,0,1,0,1,0,0,1], [1,1,0,1,0,0,1,0,0,0,1,0,0,1,1], [1,1,1,1,1,1,1,1,1,1,1,3,1,1,1] ) This an example of a maze with dead ends that my program can solve, whereupon 1 is a wall, 0 is a free path, 3 is the end and the beginning is maze[1][1] maze = ( [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,0,0,0,0,0,0,0,0,0,0,0,1,1,1], [1,0,1,1,1,0,1,1,1,1,1,1,0,1,1], [1,0,1,0,0,0,0,0,0,1,1,1,0,1,1], [1,0,0,0,1,1,1,1,0,0,0,1,0,0,1], [1,1,0,1,1,0,0,1,1,1,0,1,1,0,1], [1,1,0,1,1,1,0,0,0,1,0,1,0,0,1], [1,0,0,1,1,1,1,1,0,1,0,1,1,0,1], [1,0,1,1,0,0,0,0,0,1,0,0,0,0,1], [1,0,0,0,0,1,1,1,1,1,0,1,1,1,1], [1,1,1,1,1,1,1,1,1,1,3,1,1,1,1] ) Now a maze that my program cannot solve. I suppose that the problem with this maze is that it has dead ends with an "L" shape or something like a zigzag, but to be honest, I don't know what is happening. For example, the dead end in maze[5][5] (where my program is stuck) Before showing the code I want to explain some topics about it: All the strings that I'm printing are in English and in Portuguese, the languages are separated by "/" so don't worry about that. To explore the maze the program uses a recursive strategy and mark with 2 the paths that were explored. The program works with x and y coordinates based on the array. x + 1 goes down, x - 1 goes up, y + 1 goes right and y - 1 goes left. If it gets stuck in a dead end, the strategy is looking at what is around to discover which kind of dead end it is, and then going back until it finds a new path marked with 0. Those are the cases of dead ends that you will see in my code. If possible I want some tips about how can I improve my code or make it clean. 
So, here we go: maze = ( [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,0,0,0,0,0,0,0,0,0,0,0,1,1,1], [1,0,1,1,1,0,1,1,1,1,1,1,0,1,1], [1,0,1,0,0,0,0,0,0,1,1,1,0,1,1], [1,0,0,0,1,1,1,1,0,0,0,1,0,0,1], [1,1,0,1,1,0,0,1,1,1,0,1,1,0,1], [1,1,0,1,1,1,0,0,0,1,0,1,0,0,1], [1,0,0,1,1,1,1,1,0,1,0,1,1,0,1], [1,0,1,1,0,0,0,0,0,1,0,0,0,0,1], [1,0,0,0,0,1,1,1,1,1,0,1,1,1,1], [1,1,1,1,1,1,1,1,1,1,3,1,1,1,1] ) def solve(x, y): if maze[x][y] == 0 or maze[x][y] == 2: # To walk around and explore the maze print('Visiting/Visitando xy({},{})'.format(x, y)) maze[x][y] = 2 if x < (len(maze) - 1) and (maze[x + 1][y] == 0 or maze[x + 1][y] == 3): solve(x + 1, y) elif y < (len(maze) - 1) and (maze[x][y + 1] == 0 or maze[x][y + 1] == 3): solve(x, y + 1) elif x > 0 and (maze[x - 1][y] == 0 or maze[x][y + 1] == 3): solve(x - 1, y) elif y > 0 and (maze[x][y - 1] == 0 or maze[x][y + 1] == 3): solve(x, y - 1) else: # If it gets stuck in a dead end step_back = 1 dead_end = True # each possible kind of dead ends and the strategy to go back if (maze[x + 1][y] == 1 or maze[x + 1][y] == 2) and maze[x][y - 1] == 1 and \ maze[x][y + 1] == 1 and maze[x - 1][y] == 2: print('Dead end in/Caminho sem saída encontrado em xy({},{})'.format(x, y)) while dead_end is True: if maze[x - step_back][y - 1] == 0 or maze[x - step_back][y - 1] == 3: solve(x - step_back, y - 1) elif maze[x - step_back][y + 1] == 0 or maze[x - step_back][y + 1] == 3: solve(x - step_back, y + 1) else: print("Going back/Voltando") dead_end = False step_back += 1 solve(x - step_back, y) elif (maze[x - 1][y] == 1 or maze[x - 1][y] == 2) and maze[x][y - 1] == 1 and \ maze[x][y + 1] == 1 and maze[x + 1][y] == 2: print('Dead end in/Caminho sem saída encontrado em xy({},{})'.format(x, y)) while dead_end is True: if maze[x + step_back][y - 1] == 0 or maze[x - step_back][y - 1] == 3: solve(x + step_back, y - 1) elif maze[x + step_back][y + 1] == 0 or maze[x - step_back][y + 1] == 3: solve(x + step_back, y + 1) else: print("Going back/Voltando") dead_end = False step_back += 1 solve(x + step_back, y) elif (maze[x][y + 1] == 1 or maze[x][y + 1] == 2) and maze[x + 1][y] == 1 and \ maze[x - 1][y] == 1 and maze[x][y - 1] == 2: print('Dead end in/Caminho sem saída encontrado em xy({},{})'.format(x, y)) while dead_end is True: if maze[x][y - step_back] == 0 or maze[x][y - step_back] == 3: solve(x - step_back, y) elif maze[x][y + step_back] == 0 or maze[x][y + step_back] == 3: solve(x + step_back, y) else: print("Going back/Voltando") dead_end = False step_back += 1 solve(x, y + step_back) elif (maze[x][y - 1] == 1 or maze[x][y - 1] == 2) and maze[x + 1][y] == 1 and \ maze[x - 1][y] == 1 and maze[x][y + 1] == 2: print('Dead end in/Caminho sem saída encontrado em xy({},{})'.format(x, y)) while dead_end is True: if maze[x][y - step_back] == 0 or maze[x][y - step_back] == 3: solve(x - step_back, y) elif maze[x][y + step_back] == 0 or maze[x][y + step_back] == 3: solve(x + step_back, y) else: print("Going back/Voltando") dead_end = False step_back += 1 solve(x, y - step_back) # to check if the end of the maze were found if maze[x + 1][y] == 3 or maze[x - 1][y] == 3 or maze[x][y + 1] == 3 or maze[x][y - 1] == 3: print('Exit found in/Saída encontrada em xy({},{})'.format(x, y)) solve(1,1)
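Much of the dead-end bookkeeping in the code above disappears if the recursion itself backtracks: mark a cell as visited, try the four neighbours, and return False when none of them works, so the caller automatically moves on to its next direction. A hedged sketch of that structure on the same encoding (0 free, 1 wall, 2 visited, 3 exit), working on a copy of the maze defined above.

def solve_dfs(maze, x, y):
    if not (0 <= x < len(maze) and 0 <= y < len(maze[0])):
        return False                          # outside the grid
    if maze[x][y] == 3:
        print('Exit found in/Saída encontrada em xy({},{})'.format(x, y))
        return True
    if maze[x][y] != 0:
        return False                          # wall (1) or already visited (2)

    maze[x][y] = 2                            # mark as visited
    # Trying the next direction after a False return is exactly the "go back from
    # a dead end" behaviour, with no special cases per dead-end shape.
    return (solve_dfs(maze, x + 1, y) or solve_dfs(maze, x, y + 1) or
            solve_dfs(maze, x - 1, y) or solve_dfs(maze, x, y - 1))

maze_copy = [list(row) for row in maze]
print(solve_dfs(maze_copy, 1, 1))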
1
1
0
0
0
0
I'm making a program that can predict the corresponding Business unit according to the data in text. I've set up a vocabulary to find word occurances in the text that correspond to a certain unit but I'm unsure on how to use that data with Machine Learning models for predictions. There are four units that it can possibly predict which are: MicrosoftTech, JavaTech, Pythoneers and JavascriptRoots. In the vocabulary I have put words that indicate to certain units. For example JavaTech: Java, Spring, Android; MicrosoftTech: .Net, csharp; and so on. Now I'm using the bag of words model with my custom vocabulary to find how often those words occur. This is my code for getting the word count data: def bagOfWords(description, vocabulary): bag = np.zeros(len(vocabulary)).astype(int) for sw in description: for i,word in enumerate(vocabulary): if word == sw: bag[i] += 1 print("Bag: ", bag) return bag So lets say the vocabulary is: [java, spring, .net, csharp, python, numpy, nodejs, javascript]. And the description is: "Company X is looking for a Java Developer. Requirements: Has worked with Java. 3+ years experience with Java, Maven and Spring." Running the code would output the following: Bag: [3,1,0,0,0,0,0,0] How do I use this data for my predictions with ML algoritms? My code so far: import pandas as pd import numpy as np import warnings import tkinter as tk from tkinter import filedialog from nltk.tokenize import TweetTokenizer warnings.filterwarnings("ignore", category=FutureWarning) root= tk.Tk() canvas1 = tk.Canvas(root, width = 300, height = 300, bg = 'lightsteelblue') canvas1.pack() def getExcel (): global df vocabularysheet = pd.read_excel (r'Filepath\filename.xlsx') vocabularydf = pd.DataFrame(vocabularysheet, columns = ['Word']) vocabulary = vocabularydf.values.tolist() unitlabelsdf = pd.DataFrame(vocabularysheet, columns = ['Unit']) unitlabels = unitlabelsdf.values.tolist() for voc in vocabulary: index = vocabulary.index(voc) voc = vocabulary[index][0] vocabulary[index] = voc for label in unitlabels: index = unitlabels.index(label) label = unitlabels[index][0] unitlabels[index] = label import_file_path = filedialog.askopenfilename() testdatasheet = pd.read_excel (import_file_path) descriptiondf = pd.DataFrame(testdatasheet, columns = ['Description']) descriptiondf = descriptiondf.replace(' ',' ', regex=True).replace('\xa0',' ', regex=True).replace('•', ' ', regex=True).replace('u200b', ' ', regex=True) description = descriptiondf.values.tolist() tokenized_description = tokanize(description) for x in tokenized_description: index = tokenized_description.index(x) tokenized_description[index] = bagOfWords(x, vocabulary) def tokanize(description): for d in description: index = description.index(d) tknzr = TweetTokenizer() tokenized_description = list(tknzr.tokenize((str(d).lower()))) description[index] = tokenized_description return description def wordFilter(tokenized_description): bad_chars = [';', ':', '!', "*", ']', '[', '.', ',', "'", '"'] if(tokenized_description in bad_chars): return False else: return True def bagOfWords(description, vocabulary): bag = np.zeros(len(vocabulary)).astype(int) for sw in description: for i,word in enumerate(vocabulary): if word == sw: bag[i] += 1 print("Bag: ", bag) return bag browseButton_Excel = tk.Button(text='Import Excel File', command=getExcel, bg='green', fg='white', font=('helvetica', 12, 'bold')) predictionButton = tk.Button(text='Button', command=getExcel, bg='green', fg='white', font=('helvetica', 12, 'bold')) canvas1.create_window(150, 150, 
window=browseButton_Excel) root.mainloop()
1
1
0
1
0
0
I have a string type list of lists that has words that need to be replaced. This replacement can occur by checking if there is a match in list 1, if there is then the position of the match is searched in list 2. The word is picked from that position in list 2 and replaced in the original list of lists. Note: The matches might not occur in all the sublists of my list of lists. list_of_list = [['apple', 'ball', 'cat'], ['apple', 'table'], ['cat', 'mouse', 'bootle'], ['mobile', 'black'], ['earphone']] list_1 = ["apple", "bootle", "earphone"] list_2 = ["fruit", "object", "electronic"] This is what I have tried so far, but it doesn't seem to work the way I want it to. for word in list_of_list: for idx, item in enumerate(word): for w in list_1: if (w in item): index_list_1= list_1.index(w) word_from_list_2 = list_2[index_list_1] word[idx] = word_from_list_2 print(list_of_list) I want something like: Input: list_of_list = [['apple', 'ball', 'cat'], ['apple', 'table'], ['cat', 'mouse', 'bootle'], ['mobile', 'black'], ['earphone']] Output: list_of_list = [['fruit', 'ball', 'cat'], ['fruit', 'table'], ['cat', 'mouse', 'object'], ['mobile', 'black'], ['electronic']]
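A minimal sketch for the question above: build a single lookup dict from the two lists and apply it with dict.get, which sidesteps both the triple nested loop and the substring behaviour of "w in item" (a plain string `in` test also matches partial words).

list_of_list = [['apple', 'ball', 'cat'], ['apple', 'table'],
                ['cat', 'mouse', 'bootle'], ['mobile', 'black'], ['earphone']]
list_1 = ["apple", "bootle", "earphone"]
list_2 = ["fruit", "object", "electronic"]

replacements = dict(zip(list_1, list_2))      # {'apple': 'fruit', 'bootle': 'object', ...}
result = [[replacements.get(word, word) for word in sub] for sub in list_of_list]
print(result)
# [['fruit', 'ball', 'cat'], ['fruit', 'table'], ['cat', 'mouse', 'object'],
#  ['mobile', 'black'], ['electronic']]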
1
1
0
0
0
0
Suppose we have an array of stock symbol strings: ['AMD','AMZN','BABA','FB']. I need to be able to convert the supplied stock symbol to 1 and the others to 0. For example, if we supply 'AMZN' against the array above, the resulting array should look like [0,1,0,0]; if we supply 'FB', the result should look like [0,0,0,1]. I need to feed this into an AI algorithm.
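A minimal sketch for the question above: one-hot encoding over a fixed symbol list is a single comparison per position, wrapped in numpy since most ML libraries expect arrays.

import numpy as np

symbols = ['AMD', 'AMZN', 'BABA', 'FB']

def one_hot(symbol, symbols=symbols):
    # 1 where the symbol matches, 0 everywhere else.
    return np.array([1 if s == symbol else 0 for s in symbols])

print(one_hot('AMZN'))   # [0 1 0 0]
print(one_hot('FB'))     # [0 0 0 1]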
1
1
0
0
0
0