question_id     int64    (values 1.48k – 40.1M)
title           string   (lengths 15 – 142)
question_body   string   (lengths 46 – 12.1k)
question_type   string   (5 classes)
question_date   string   (length 20)
35,178,812
Counting unique index values in Pandas groupby
<p>In Pandas, there is a very clean way to count the distinct values in a column within a group by operation. For example</p> <pre><code>ex = pd.DataFrame([[1, 2, 3], [6, 7, 8], [1, 7, 9]], columns=["A", "B", "C"]).set_index(["A", "B"]) ex.groupby(level="A").C.nunique() </code></pre> <p>will return</p> <pre><code>A 1 2 6 1 Name: C, dtype: int64 </code></pre> <p>I would also like to count the distinct values in index level <code>B</code> while grouping by <code>A</code>. I can't find a clean way to access the levels of <code>B</code> from the <code>groupby</code> object. The best I've been able to come up with is:</p> <pre><code>ex.reset_index("B", drop=False).groupby(level="A").B.nunique() </code></pre> <p>which correctly returns:</p> <pre><code>A 1 2 6 1 Name: B, dtype: int64 </code></pre> <p>Is there a way for me to do this on the groupby without resetting the index or using an <code>apply</code> function?</p>
howto
2016-02-03T13:45:53Z
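One way to get this without `reset_index` or `apply` (a minimal sketch; not from the original thread, and API details vary across pandas versions): pull both index levels out as arrays, wrap the `B` level in a Series, and group it by the `A` level values.

```python
import pandas as pd

ex = pd.DataFrame([[1, 2, 3], [6, 7, 8], [1, 7, 9]],
                  columns=["A", "B", "C"]).set_index(["A", "B"])

# Extract each index level as an array, then group one by the other.
a = ex.index.get_level_values("A")
b = pd.Series(ex.index.get_level_values("B"))
result = b.groupby(a.values).nunique()
print(result)
```

This gives 2 distinct `B` values for `A == 1` and 1 for `A == 6`, matching the `reset_index` version.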
35,197,854
Python keyword arguments unpack and return dictionary
<p>I have a function definition as below and I am passing keyword arguments. How do I get to return a dictionary with the same name as the keyword arguments?</p> <p>Manually I can do:</p> <pre><code>def generate_student_dict(first_name=None, last_name=None , birthday=None, gender =None): return { 'first_name': first_name, 'last_name': last_name, 'birthday': birthday, 'gender': gender } </code></pre> <p>But I don't want to do that. Is there any way that I can make this work without actually typing the dict?</p> <pre><code> def generate_student_dict(self, first_name=None, last_name=None, birthday=None, gender=None): return # Packed value from keyword argument. </code></pre>
howto
2016-02-04T10:00:38Z
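A sketch of one way to do this while keeping the named keyword defaults: `locals()` at the very top of a function body is exactly the dict of parameters, keyed by parameter name (copy it before creating any other local variables).

```python
def generate_student_dict(first_name=None, last_name=None,
                          birthday=None, gender=None):
    # locals() here contains only the four parameters.
    return dict(locals())

d = generate_student_dict(first_name='John', last_name='Doe')
print(d['first_name'], d['gender'])  # John None
```

If the fixed parameter names are not needed, `def generate_student_dict(**kwargs): return kwargs` does the same thing but without defaults for missing keys.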
35,199,556
call function through variable or without parentheses in python
<p>I want to create an alias to call a function without the parentheses. Something like: </p> <pre><code>&gt;ls=os.getcwd() &gt;ls &gt;"/path1" &gt;os.chdir("/path2") &gt;ls &gt;"/path1" ( the wanted output would be "/path2" ) </code></pre> <p>Indeed "ls" always has the same value, the value from the moment of the assignment. </p> <p>Of course I can do:</p> <pre><code>&gt;ls=os.getcwd </code></pre> <p>and then call with </p> <pre><code>&gt;ls() </code></pre> <p>but what I want is to call the function without parentheses (of course only when the function doesn't require an argument). </p> <p>I tried</p> <pre><code>&gt;def ListDir(): &gt; print(os.getcwd()) &gt; &gt;ls=ListDir() </code></pre> <p>But that doesn't work. How can I do something like this? Is it possible? (Only if it is easy to do.) </p>
howto
2016-02-04T11:15:13Z
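One trick for the interactive prompt (a sketch; the class name is made up for illustration): the REPL calls `repr()` on any bare expression it echoes, so a wrapper whose `__repr__` invokes the function makes typing the bare name re-run it each time.

```python
import os

class CallableAlias:
    """Evaluating the bare name at the interactive prompt triggers
    repr(), which we use to call the wrapped zero-argument function."""
    def __init__(self, func):
        self.func = func

    def __repr__(self):
        return str(self.func())

ls = CallableAlias(os.getcwd)
# At the REPL, typing just `ls` now shows the *current* directory,
# because the function is re-run every time the value is displayed.
```

This only works for display at the prompt; inside a script, `ls` is still just an object, not a call.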
35,205,400
How to randomly pick numbers from ranked groups in python, to create a list of specific length
<p>I am trying to create a sequence of length 6 which consists of numbers randomly picked from ranked groups. <em>The first element of the sequence has to be drawn from the first group, and the last element has to be drawn from the last group</em>. </p> <p>Let the new sequence be called "seq". Then, if</p> <pre><code>a = [1,2,3] b = [9] c = [5,6] d = [11,12,4] seq[0] in a == 1 seq[-1] in d == 1 </code></pre> <p>The intermediate elements have to come from lists a,b,c,d. But, if the second element is randomly drawn from 'a', then the third one, has to be drawn either from a later 'a' element, or from b/c/d. Similarly, if the third element is drawn from 'c', then the other ones have to come from later ranks like d.The groups are ranked this way. </p> <p>The number of groups given now, is arbitrary (maximum of 6 groups). The length for the sequence ( len(seq) == 6 ) is standard.</p> <p>One element from <strong>each</strong> group has to be in the final sequence. Repetition of elements is not allowed. All group elements are unique (and they are always numbers in the range of 1-12).</p>
howto
2016-02-04T15:45:50Z
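One possible reading of the constraints, sketched below (assumptions: every group contributes at least one element, within-group position counts as rank, and the requested counts are retried when a group is too small - the helper name is made up):

```python
import random

def ranked_sequence(groups, length=6):
    n = len(groups)
    # Randomly decide how many elements each group contributes
    # (at least one each); retry if a group cannot supply that many.
    while True:
        counts = [1] * n
        for _ in range(length - n):
            counts[random.randrange(n)] += 1
        if all(c <= len(g) for c, g in zip(counts, groups)):
            break
    seq = []
    for g, c in zip(groups, counts):
        picks = random.sample(g, c)
        picks.sort(key=g.index)   # keep within-group rank order
        seq.extend(picks)
    return seq

print(ranked_sequence([[1, 2, 3], [9], [5, 6], [11, 12, 4]]))
```

Because groups are appended in order, the first element always comes from the first group and the last from the last group, and every group is represented exactly as required.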
35,208,832
TensorFlow Resize image tensor to dynamic shape
<p>I am trying to read some image input for an image classification problem with TensorFlow.</p> <p>Of course, I am doing this with <code>tf.image.decode_jpeg(...)</code>. My images have variable size and hence I am not able to specify a fixed shape for the image tensor.</p> <p>But I need to scale the images depending on their actual size. Specifically, I want to scale the shorter side to a fixed value and the longer side in a way that the aspect ratio is preserved.</p> <p>I can get the actual shape of a certain image by <code>shape = tf.shape(image)</code>. I am also able to do the computation for the new longer edge like </p> <pre><code>shape = tf.shape(image) height = shape[0] width = shape[1] new_shorter_edge = 400 if height &lt;= width: new_height = new_shorter_edge new_width = ((width / height) * new_shorter_edge) else: new_width = new_shorter_edge new_height = ((height / width) * new_shorter_edge) </code></pre> <p>My problem now is that I cannot pass <code>new_height</code> and <code>new_width</code> to <code>tf.image.resize_images(...)</code> because one of them is a tensor and <code>resize_images</code> expects integers as height and width inputs.</p> <p>Is there a way to "pull out" the integer of the tensor or is there any other way to do my task with TensorFlow?</p> <p>Thanks in advance.</p> <hr> <p><strong>Edit</strong></p> <p>Since I also had <a href="https://github.com/tensorflow/tensorflow/issues/1029" rel="nofollow">some other issues</a> with <code>tf.image.resize_images</code>, here's the code that worked for me:</p> <pre><code>shape = tf.shape(image) height = shape[0] width = shape[1] new_shorter_edge = tf.constant(400, dtype=tf.int32) height_smaller_than_width = tf.less_equal(height, width) new_height_and_width = tf.cond( height_smaller_than_width, lambda: (new_shorter_edge, _compute_longer_edge(height, width, new_shorter_edge)), lambda: (_compute_longer_edge(width, height, new_shorter_edge), new_shorter_edge) ) image = tf.expand_dims(image, 0) 
image = tf.image.resize_bilinear(image, tf.pack(new_height_and_width)) image = tf.squeeze(image, [0]) </code></pre>
howto
2016-02-04T18:29:47Z
35,208,997
Running blocks of code inside vim
<p>I recently added the following code to my .vimrc:</p> <pre><code>" Runs python inside vim autocmd FileType python nnoremap &lt;buffer&gt; &lt;F9&gt; :exec '!clear; python' shellescape(@%, 1)&lt;cr&gt; </code></pre> <p>which allows me to run the entire python script from inside vim when the F9 key is pressed. Nevertheless, often I do not want to run the entire python script but just one line or even a block of lines. I googled for this behavior but could not find any solution that worked, at least for me.</p> <p>Can someone help me with this?</p> <p>Thanks</p>
howto
2016-02-04T18:38:52Z
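A common approach (an untested sketch in the same style as the mapping above; the key choice is a matter of taste) is a visual-mode mapping that pipes only the selected lines to a fresh python process:

```vim
" Run the visually selected lines through python (F9 in visual mode)
autocmd FileType python xnoremap <buffer> <F9> :w !python<CR>
```

Pressing `:` while a visual selection is active pre-fills the `'<,'>` range, so `:w !python` writes just the selection to python's stdin. Note the lines run in a fresh interpreter, so they must be self-contained.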
35,209,114
Fastest way to remove subsets of lists from a list in Python
<p>Suppose I have a list of lists like the one below (the actual list is much longer):</p> <pre><code>fruits = [['apple', 'pear'], ['apple', 'pear', 'banana'], ['banana', 'pear'], ['pear', 'pineapple'], ['apple', 'pear', 'banana', 'watermelon']] </code></pre> <p>In this case, all the items in the lists <code>['banana', 'pear']</code>, <code>['apple', 'pear']</code> and <code>['apple', 'pear', 'banana']</code> are contained in the list <code>['apple', 'pear', 'banana', 'watermelon']</code> (the order of items does not matter), so I would like to remove <code>['banana', 'pear']</code>, <code>['apple', 'pear']</code>, and <code>['apple', 'pear', 'banana']</code> as they are subsets of <code>['apple', 'pear', 'banana', 'watermelon']</code>. </p> <p>My current solution is shown below. I first use <code>ifilter</code> and <code>imap</code> to create a generator for the supersets that each list might have. Then for those cases that do have supersets, I use <code>compress</code> and <code>imap</code> to drop them.</p> <pre><code>from itertools import imap, ifilter, compress supersets = imap(lambda a: list(ifilter(lambda x: len(a) &lt; len(x) and set(a).issubset(x), fruits)), fruits) new_list = list(compress(fruits, imap(lambda x: 0 if x else 1, supersets))) new_list #[['pear', 'pineapple'], ['apple', 'pear', 'banana', 'watermelon']] </code></pre> <h3>I wonder if there are more efficient ways to do this?</h3>
howto
2016-02-04T18:44:54Z
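One simplification sketch: convert each list to a set once up front, then keep a list only if it is not a proper subset of any other set. This is still quadratic, but the inner comparisons are cheap set operations rather than repeated `set(a).issubset(x)` constructions (sorting by descending length and short-circuiting could prune further).

```python
fruits = [['apple', 'pear'], ['apple', 'pear', 'banana'], ['banana', 'pear'],
          ['pear', 'pineapple'], ['apple', 'pear', 'banana', 'watermelon']]

sets = [set(f) for f in fruits]
# `s < other` is the proper-subset test, so maximal lists survive and
# exact duplicates are kept rather than removed.
new_list = [f for f, s in zip(fruits, sets)
            if not any(s < other for other in sets)]
print(new_list)
# [['pear', 'pineapple'], ['apple', 'pear', 'banana', 'watermelon']]
```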
35,220,031
retrieve minimum/maximum values of a ctype
<p>I parse some ASCII text using Python, which returns strings like: UI8, SI32, etc...</p> <p>Based on those strings I need to compute the maximum value of the types and to replace them with the following strings: unsigned char, signed long, etc...</p> <p>I found Python's <a href="https://docs.python.org/3/library/ctypes.html" rel="nofollow">ctypes</a> lib, but I was unable to find how to get the maximum/minimum.</p> <p>Does Python have something equivalent to <code>std::numeric_limits</code> in C++?</p>
howto
2016-02-05T09:02:36Z
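There is no direct `numeric_limits` equivalent in `ctypes`, but the limits follow from the type's byte size, so they can be computed (a sketch; the helper name is made up):

```python
import ctypes

def c_limits(ctype, signed):
    """Derive numeric_limits-style (min, max) from the type's byte size."""
    bits = 8 * ctypes.sizeof(ctype)
    if signed:
        return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return 0, 2 ** bits - 1

print(c_limits(ctypes.c_uint8, signed=False))  # (0, 255)
print(c_limits(ctypes.c_int32, signed=True))   # (-2147483648, 2147483647)
```

A small mapping from the parsed strings (`"UI8"` -> `(ctypes.c_uint8, False)`, etc.) then covers both the limits and the C type-name replacement.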
35,232,897
Grouping in Python
<p>I have a list of dictionaries (which I uploaded using a CSV), and I would like to run a "group by" equivalent based on one of the "columns". I am trying to group on teamID and sum the "R" columns based on those groupings.</p> <p>I am trying the following code:</p> <pre><code>import itertools for key, group in itertools.groupby(batting, lambda item: item["teamID"]): print key, sum([item["R"] for item in group]) </code></pre> <p>However, I am not seeing them grouped correctly. There will be multiple instances of the same team ID.</p> <p>For example:</p> <pre><code>RC1 30 CL1 28 WS3 28 RC1 29 FW1 9 RC1 0 BS1 66 FW1 1 BS1 13 CL1 18 </code></pre>
howto
2016-02-05T20:25:45Z
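The repeated team IDs are the expected behavior here: `itertools.groupby` only merges *consecutive* items with equal keys. Sorting by the same key first fixes it (a sketch with a trimmed-down stand-in for the `batting` list):

```python
import itertools

batting = [{"teamID": "RC1", "R": 30}, {"teamID": "CL1", "R": 28},
           {"teamID": "RC1", "R": 29}, {"teamID": "CL1", "R": 18}]

key = lambda item: item["teamID"]
# groupby only groups adjacent equal keys, so sort by the key first.
for team, group in itertools.groupby(sorted(batting, key=key), key):
    print(team, sum(item["R"] for item in group))
```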
35,234,823
Python Find n words before and after a certain words
<p>Let's say that I have a text file, which I should read, and it will be like:</p> <pre><code> ... Department of Something is called (DoS) and then more texts and more text... </code></pre> <p>and then "while" I am reading the text file I find an acronym, here it is </p> <pre><code>DoS </code></pre> <p>So for finding the acronym I wrote:</p> <pre><code>import re import numpy # open the file? test_string = " a lot of text read from file ... Department of Something is called (DoS) and then more texts and more text..." regex = r'\b[A-Z][a-zA-Z\.]*[A-Z]\b\.?' found= re.findall(regex, test_string) print found </code></pre> <p>and the output is:</p> <pre><code>['DoS'] </code></pre> <p>What I want to do is: </p> <ol> <li>WHILE I am reading the file and find an acronym (here it is DoS), </li> <li>calculate the number of characters of what I found (here 3 chars for DoS), </li> <li><p>find 2 times as many (here 2x3=6) words BEFORE and AFTER the 'DoS'. Here that would be:</p> <pre><code>3.1 pre= Department of Something is called 3.2 acronym= DoS 3.3 post= and then more texts and more </code></pre></li> <li>put these 3 (pre, acronym, post) in an array. </li> </ol> <p>Any help will be appreciated since I am new to Python. </p>
howto
2016-02-05T22:45:02Z
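A sketch of one way to combine the steps (assuming "words" means whitespace-separated tokens, so the `pre` window may also pick up stray tokens such as the `...`):

```python
import re

test_string = ("a lot of text read from file ... Department of Something "
               "is called (DoS) and then more texts and more text...")

regex = r'\b[A-Z][a-zA-Z\.]*[A-Z]\b\.?'
words = test_string.split()
results = []
for i, w in enumerate(words):
    m = re.search(regex, w)
    if m:
        acronym = m.group()
        n = 2 * len(acronym)              # 2x the acronym's length, in words
        pre = ' '.join(words[max(0, i - n):i])
        post = ' '.join(words[i + 1:i + 1 + n])
        results.append((pre, acronym, post))

print(results)
```

Scanning word by word keeps the acronym's position, so the before/after windows come straight from list slices.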
35,242,055
Getting crawled information in dictionary format
<p>I am getting the information as plain text; however, I want the output in key/value format. E.g.: </p> <pre><code>{'Base pay':'$140,000.00 - $160,000.00 /Year'}, {'Employment Type':'Full-Time'}, {'Job Type':'Information Technology, Engineering, Professional Services'} </code></pre> <p>This is my code:</p> <pre><code>from bs4 import BeautifulSoup import urllib2 website = 'http://www.careerbuilder.com/jobseeker/jobs/jobdetails.aspx?APath=2.21.0.0.0&amp;job_did=J3H7FW656RR51CLG5HC&amp;showNewJDP=yes&amp;IPath=RSKV' html = urllib2.urlopen(website).read() soup = BeautifulSoup(html) for elm in soup.find_all('section',{"id":"job-snapshot-section"}): dn = elm.get_text() print dn </code></pre> <p>This is the output from my code:</p> <pre><code>Job Snapshot Base Pay $140,000.00 - $160,000.00 /Year Employment Type Full-Time Job Type Information Technology, Engineering, Professional Services Education 4 Year Degree Experience At least 5 year(s) Manages Others Not Specified Relocation No Industry Computer Software, Banking - Financial Services, Biotechnology Required Travel Not Specified Job ID EE-1213256 </code></pre> <p>I have edited the code as requested, including the required imports of libraries.</p>
howto
2016-02-06T14:18:28Z
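One post-processing sketch, assuming the section's text really does alternate label and value lines after the "Job Snapshot" heading, as in the printed output (the real page markup may offer cleaner element pairs to iterate with BeautifulSoup instead; the `text` literal below is a stand-in for `dn`):

```python
text = """Job Snapshot
Base Pay
$140,000.00 - $160,000.00 /Year
Employment Type
Full-Time
Job Type
Information Technology, Engineering, Professional Services"""

# Skip the heading, then pair alternating label / value lines.
lines = [l.strip() for l in text.splitlines() if l.strip()]
snapshot = dict(zip(lines[1::2], lines[2::2]))
print(snapshot['Employment Type'])  # Full-Time
```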
35,252,265
matching between two columns and taking value from another in pandas
<p>First of all I am sorry if this question is already answered clearly. I have seen there are very similar answers, but I couldn't use it. So my problem is to match between two sets of columns <code>(UsedFName==FName and UsedLName==LName)</code> and then fill the <code>Usedid</code> Column with the ids from <strong>'id'</strong> column when it fully matches.</p> <p>So here is a toy data set</p> <pre><code>&gt;&gt; df FName LName id UsedFName UsedLName Usedid 0 Tanvir Hossain 2001 Tanvir Hossain NaN 1 Nadia Alam 2002 Tanvir Hossain NaN 2 Pia Naime 2003 Tanvir Hossain NaN 3 Koethe Talukdar 2004 Koethe Talukdar NaN 4 Manual Hausman 2005 Koethe Talukdar NaN 5 Constantine Pape NaN Max Weber NaN 6 Andreas Kai 2006 Max Weber NaN 7 Max Weber 2007 Manual Hausman NaN 8 Weber Mac 2008 Manual Hausman NaN 9 Plank Ingo 2009 Manual Hausman NaN 10 Tanvir Hossain 2001 Pia Naime NaN 11 Weber Mac 2008 Pia Naime NaN 12 Manual Hausman 2005 Tanvir Hossain NaN 13 Max Weber 2007 Tanvir Hossain NaN 14 Nadia Alam 2002 Manual Hausman NaN 15 Weber Mac 2008 Manual Hausman NaN 16 Pia Naime 2003 Koethe Talukdar NaN 17 Pia Naime 2003 Koethe Talukdar NaN 18 Constantine Pape NaN Koethe Talukdar NaN 19 Koethe Talukdar 2004 Koethe Talukdar NaN 20 Koethe Talukdar 2005 Manual Hausman NaN 21 NaN NaN NaN Manual Hausman NaN 22 NaN NaN NaN Manual Hausman NaN 23 NaN NaN NaN Manual Hausman NaN 24 NaN NaN NaN Manual Hausman NaN 25 NaN NaN NaN Manual Hausman NaN 26 NaN NaN NaN Manual Hausman NaN 27 NaN NaN NaN Manual Hausman NaN </code></pre> <p>This is the output </p> <pre><code>&gt;&gt;&gt; df FName LName id UsedFName UsedLName Usedid 0 Tanvir Hossain 2001 Tanvir Hossain 2001 1 Nadia Alam 2002 Tanvir Hossain 2001 2 Pia Naime 2003 Tanvir Hossain 2001 3 Koethe Talukdar 2004 Koethe Talukdar 2005 4 Manual Hausman 2005 Koethe Talukdar 2005 5 Constantine Pape NaN Max Weber 2007 6 Andreas Kai 2006 Max Weber 2007 7 Max Weber 2007 Manual Hausman 2005 8 Weber Mac 2008 Manual Hausman 2005 9 Plank Ingo 2009 Manual 
Hausman 2005 10 Tanvir Hossain 2001 Pia Naime 2003 11 Weber Mac 2008 Pia Naime 2003 12 Manual Hausman 2005 Tanvir Hossain 2001 13 Max Weber 2007 Tanvir Hossain 2001 14 Nadia Alam 2002 Manual Hausman 2005 15 Weber Mac 2008 Manual Hausman 2005 16 Pia Naime 2003 Koethe Talukdar 2005 17 Pia Naime 2003 Koethe Talukdar 2005 18 Constantine Pape NaN Koethe Talukdar 2005 19 Koethe Talukdar 2004 Koethe Talukdar 2005 20 Koethe Talukdar 2005 Manual Hausman 2005 21 NaN NaN NaN Manual Hausman 2005 22 NaN NaN NaN Manual Hausman 2005 23 NaN NaN NaN Manual Hausman 2005 24 NaN NaN NaN Manual Hausman 2005 25 NaN NaN NaN Manual Hausman 2005 26 NaN NaN NaN Manual Hausman 2005 27 NaN NaN NaN Manual Hausman 2005 </code></pre> <p>Actually I was able to do it using nested for loops, here is the code:</p> <pre><code>for i in df['UsedFName'].index: for j in df['FName'].index: if df['UsedFName'][i]==df['FName'][j] &amp; df['UsedLName'][i]==df['LName'][j]: df.ix[i,'Usedid'] = df.ix[j,'id'] </code></pre> <p>But using nested for loops here is computationally very expensive. I have a huge data set. Is it possible to use it without nested loops? Is there any simple Pythonic ways or Pandas/Numpy ways that I can use here?</p> <p>Many thanks in advance for the help...looking forward to learn Python.</p>
howto
2016-02-07T10:24:05Z
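A vectorized sketch of the nested-loop logic: build a de-duplicated `(FName, LName) -> id` lookup and left-merge the `Used*` columns against it (toy frames below; with repeated name pairs you would pick a `drop_duplicates` policy, e.g. `keep="last"`, to match your intended id):

```python
import pandas as pd

df = pd.DataFrame({
    "FName": ["Tanvir", "Nadia", "Koethe"],
    "LName": ["Hossain", "Alam", "Talukdar"],
    "id": [2001, 2002, 2004],
    "UsedFName": ["Koethe", "Tanvir", "Tanvir"],
    "UsedLName": ["Talukdar", "Hossain", "Hossain"],
})

# De-duplicated name -> id lookup table.
lookup = (df[["FName", "LName", "id"]]
          .dropna()
          .drop_duplicates(["FName", "LName"]))
merged = df.merge(lookup,
                  left_on=["UsedFName", "UsedLName"],
                  right_on=["FName", "LName"],
                  how="left", suffixes=("", "_match"))
df["Usedid"] = merged["id_match"].values
print(df["Usedid"].tolist())  # [2004, 2001, 2001]
```

A single hash-based merge replaces the O(n²) double loop.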
35,253,971
How to check if all values of a dictionary are 0, in Python?
<p>I want to check if all the values, i.e values corresponding to all keys in a dictionary are 0. Is there any way to do it without loops? If so how?</p>
howto
2016-02-07T13:18:27Z
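A one-liner sketch with `all()` (the loop happens inside the builtin, not in your code):

```python
d = {'a': 0, 'b': 0, 'c': 0}

print(all(v == 0 for v in d.values()))  # True
# `not any(d.values())` is shorter, but it also treats False, '' and
# None as "zero", so the explicit comparison is safer.
```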
35,254,886
Obtaining dictionary value in Python
<p>Here is the dict:</p> <pre><code>sozluk_ata = {20225: 17, 20232: 9, 20233: 22, 20234: 3, 20235: 28, 20236: 69, ..} </code></pre> <p>And here is my code to get value of an element in dictionary.</p> <pre><code>ders_adi_entry_1 = entry_1.get() ders_crn_entry_1 = int(entry_11.get()) y1 = "sozluk_%s[%d]" %(ders_adi_entry_1, ders_crn_entry_1) print (y1) </code></pre> <p>This gives me <code>sozluk_ata[20225]</code>, not the value <code>17</code>.</p>
howto
2016-02-07T14:46:35Z
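The `%` formatting only builds the *string* `"sozluk_ata[20225]"`; it never looks anything up. A sketch of the usual fix: keep the per-course dicts inside one enclosing dict and index it with the entry values (the stand-in values below replace the Tkinter `entry.get()` calls):

```python
sozluk_ata = {20225: 17, 20232: 9, 20233: 22}

# Map the course-name suffix to its dict instead of assembling a
# variable name inside a string.
tables = {'ata': sozluk_ata}

ders_adi_entry_1 = 'ata'      # stand-in for entry_1.get()
ders_crn_entry_1 = 20225      # stand-in for int(entry_11.get())
y1 = tables[ders_adi_entry_1][ders_crn_entry_1]
print(y1)  # 17
```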
35,261,899
Selenium scraping with multiple urls
<p>Following my previous <a href="http://stackoverflow.com/questions/35235516/reformatting-scraped-selenium-table">question</a>, i'm now trying to scrape multiple pages of a url (all the pages with games in a given season). I'm also trying to scrape multiple parent urls (seasons):</p> <pre><code>from selenium import webdriver import pandas as pd import time url = ['http://www.oddsportal.com/hockey/austria/ebel-2014-2015/results/#/page/', 'http://www.oddsportal.com/hockey/austria/ebel-2013-2014/results/#/page/'] data = [] for i in url: for j in range(1,8): print i+str(j) driver = webdriver.PhantomJS() driver.implicitly_wait(10) driver.get(i+str(j)) for match in driver.find_elements_by_css_selector("div#tournamentTable tr.deactivate"): home, away = match.find_element_by_class_name("table-participant").text.split(" - ") date = match.find_element_by_xpath(".//preceding::th[contains(@class, 'first2')][1]").text if " - " in date: date, event = date.split(" - ") else: event = "Not specified" data.append({ "home": home.strip(), "away": away.strip(), "date": date.strip(), "event": event.strip() }) driver.close() time.sleep(3) print str(j)+" was ok" df = pd.DataFrame(data) print df # ok for six results then socket.error: [Errno 10054] An existing connection was forcibly closed by the remote host # ok for two results, then infinite load # added time.sleep(3) # ok for first result, infinite load after that # added implicitly wait # no result, infinite load </code></pre> <p>At first I tried the code twice without either the implicit wait on line 14 or the sleep on 35. First result gave the socket error. Second result stalled with no error after two good scraped pages. </p> <p>Then added the time waits as noted above and they haven't helped. </p> <p>Since the results are not consistent, my guess is connection be reset between the end of the loop &amp; next run. I'd like to know if that's a likely solution and how to implement. 
I checked the robots.txt of the site and can't see anything that prevents scraping after a set interval.</p> <p>Secondly, say the scraper gets 90% of the pages, then stalls (infinite wait). Is there a way to have it retry that loop after x seconds so as to save what you've got and retry from the stalled point again?</p>
howto
2016-02-08T03:20:58Z
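For the retry part, a framework-independent sketch (wrap the per-page fetch-and-parse work in a function and call it through a helper; the names are illustrative, not Selenium API):

```python
import time

def fetch_with_retry(fetch, url, retries=3, delay=3):
    """Call fetch(url); on any exception wait and retry, re-raising only
    after the last attempt so rows scraped so far are not lost."""
    for attempt in range(retries):
        try:
            return fetch(url)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```

For the infinite hangs (which raise nothing), setting a page-load timeout on the driver, e.g. `driver.set_page_load_timeout(30)`, turns the stall into a catchable timeout exception that this helper can then retry. Reusing a single driver across the loop instead of creating one per page should also reduce connection churn.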
35,269,374
get count of values associated with key in dict python
<p>My list of dicts looks like this:</p> <pre><code>[{'id': 19, 'success': True, 'title': u'apple'}, {'id': 19, 'success': False, 'title': u'some other '}, {'id': 19, 'success': False, 'title': u'dont know'}] </code></pre> <p>I want a count of how many dicts have <code>success</code> as <code>True</code>.</p> <p>I have tried:</p> <pre><code>len(filter(lambda x: x, [i['success'] for i in s])) </code></pre> <p>How can I make it more elegant, in a Pythonic way?</p>
howto
2016-02-08T12:10:46Z
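Since `bool` is a subclass of `int` (True sums as 1), `sum()` over a generator is the usual idiom:

```python
s = [{'id': 19, 'success': True, 'title': u'apple'},
     {'id': 19, 'success': False, 'title': u'some other '},
     {'id': 19, 'success': False, 'title': u'dont know'}]

count = sum(d['success'] for d in s)   # True counts as 1, False as 0
print(count)  # 1
```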
35,281,863
OR style permissions for DjangoRestFramework
<p>I'm wondering if anyone has found a good way to <em>reverse</em> the way permissions work in DRF (use OR instead of AND). Right now, if any of the checks fail, the request is not authenticated. I would like a way to make it so that if any of the checks pass, the request <em>is</em> authenticated. ie.</p> <pre><code># currently: permission_classes=(HasNiceHat, HasNicePants) </code></pre> <p>The request will only succeed for someone with a nice hat <strong>and</strong> nice pants. What I would like:</p> <pre><code># goal: AND_permission_classes=(HasNiceHat, HasNicePants) </code></pre> <p>Will succeed if user has nice hat <strong>or</strong> nice pants.</p> <p>I will assume that all users are logged in (must be for either check to pass), and that implementation of the permission is not limited in any way.</p>
howto
2016-02-09T00:25:31Z
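A framework-free sketch of the OR-composite idea, following the `has_permission(request, view)` interface DRF calls (the permission classes here are dummies; newer DRF releases also support composing permissions directly with `|`, e.g. `permission_classes = [HasNiceHat | HasNicePants]`, so check your version first):

```python
class AnyOf:
    """Composite permission: passes if ANY wrapped permission passes."""
    def __init__(self, *permission_classes):
        self.permissions = [cls() for cls in permission_classes]

    def has_permission(self, request, view):
        # OR semantics: one passing check is enough.
        return any(p.has_permission(request, view) for p in self.permissions)

class HasNiceHat:
    def has_permission(self, request, view):
        return getattr(request, "nice_hat", False)

class HasNicePants:
    def has_permission(self, request, view):
        return getattr(request, "nice_pants", False)
```

To slot something like this into `permission_classes` (which expects classes, not instances), you would wrap it in a small class factory or subclass per combination.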
35,288,428
How to create sub list with fixed length from given number of inputs or list in Python?
<p>I want to create sub-lists with fixed list length, from given number of inputs in Python.</p> <p>For example, my inputs are: <code>['a','b','c',......'z']</code>... Then I want to put those values in several lists. Each list length should be 6. So I want something like this:</p> <pre><code>first list = ['a','b','c','d','e','f'] </code></pre> <p><code>second list = ['g','h','i','j','k','l']</code></p> <pre><code>last list = [' ',' ',' ',' ',' ','z' ] </code></pre> <p>How can I achieve this?</p>
howto
2016-02-09T09:36:58Z
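A slicing sketch: step through the input in strides of 6, then left-pad the final chunk with spaces to match the example output (whether you want the padding at the front or the back is a design choice):

```python
letters = list('abcdefghijklmnopqrstuvwxyz')

size = 6
chunks = [letters[i:i + size] for i in range(0, len(letters), size)]
# Left-pad the last chunk with spaces so every sub-list has length 6.
chunks[-1] = [' '] * (size - len(chunks[-1])) + chunks[-1]
print(chunks[0])   # ['a', 'b', 'c', 'd', 'e', 'f']
print(chunks[-1])  # [' ', ' ', ' ', ' ', 'y', 'z']
```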
35,300,343
Transposing dataframe and sorting
<p>I have a df like so (the data represents a matrix):</p> <pre><code> Arnston Berg Carlson Arnston 0.00 1.00 2.00 Berg 1.00 0.00 3.00 Carlson 2.00 3.00 0.00 </code></pre> <p>and I want to transpose it so that the row and column names are linked, and their associated value is displayed as a new column with it sorted from smallest to largest. I only need to keep one of the row-column combinations because they are always the same (e.g. Arnston, Berg == 1.00 and Berg, Arnston == 1.00)</p> <p>My desired output is:</p> <pre><code>Arnston, Arnston 0.00 Berg, Berg 0.00 Carlson, Carlson 0.00 Arnston, Berg 1.00 Arnston, Carlson 2.00 Berg, Carlson 3.00 </code></pre> <p>I hope that makes sense.</p>
howto
2016-02-09T19:14:16Z
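A sketch of the stack-and-sort idea: mask out the lower triangle so each unordered pair survives exactly once, then `stack()` (which drops the NaNs) and sort by value:

```python
import numpy as np
import pandas as pd

names = ["Arnston", "Berg", "Carlson"]
df = pd.DataFrame([[0.0, 1.0, 2.0],
                   [1.0, 0.0, 3.0],
                   [2.0, 3.0, 0.0]], index=names, columns=names)

# Keep the upper triangle (incl. diagonal); the mirrored lower half
# becomes NaN and is dropped by stack().
upper = df.where(np.triu(np.ones(df.shape, dtype=bool)))
pairs = upper.stack().sort_values()
print(pairs)
```

The result is a Series whose MultiIndex holds the linked row/column names and whose values run from smallest to largest.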
35,306,419
Where is the configuration information of installed packages?
<p>When I install something via <code>pip</code>, where is the information about the installed package?</p>
howto
2016-02-10T03:41:53Z
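A short sketch of where to look: packages land in the interpreter's site-packages directory, and pip keeps each package's metadata next to the code there.

```python
import sysconfig

# Installed packages live under this path; pip stores each package's
# metadata alongside it in a *.dist-info (or legacy *.egg-info) folder.
print(sysconfig.get_paths()["purelib"])
```

From the command line, `pip show <package>` prints the recorded metadata (version, location, dependencies), and `pip show -f <package>` adds the list of installed files.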
35,318,700
Convert dataFrame to list
<p>I have a pandas dataframe that I convert to numpy array as follows:</p> <pre><code>df.values </code></pre> <p>which gives the following output:</p> <pre><code>array([[2], [0], [1], ..., [0], [1], [0]], dtype=int64) </code></pre> <p>However I want to obtain the list as follows:</p> <pre><code>[0, 2, 3] </code></pre> <p>Any idea how to do this?</p>
howto
2016-02-10T15:05:40Z
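Depending on whether the target list means all the values flattened or only the distinct ones (the `[0, 2, 3]` example suggests distinct), either `ravel` or `np.unique` on the underlying array does it (a sketch with made-up data in place of `df.values`):

```python
import numpy as np

arr = np.array([[2], [0], [1], [0], [1], [0]], dtype=np.int64)

flat = arr.ravel().tolist()       # every value:   [2, 0, 1, 0, 1, 0]
unique = np.unique(arr).tolist()  # sorted distinct values: [0, 1, 2]
print(flat, unique)
```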
35,322,452
Is there a way to sandbox test execution with pytest, especially filesystem access?
<p>I'm interested in executing potentially untrusted tests with pytest in some kind of sandbox, like docker, similarly to what continuous integration services do.</p> <p>I understand that to properly sandbox a python process you need OS-level isolation, like running the tests in a disposable chroot/container, but in my use case I don't need to protect against intentionally malicious code, only from dangerous behaviour of pairing "randomly" functions with arguments. So lesser strict sandboxing may still be acceptable. But I didn't find any plugin that enables any form of sandboxing.</p> <p>What is the best way to sandbox tests execution in pytest?</p> <p><strong>Update</strong>: This question is not about <a href="http://stackoverflow.com/questions/3068139/how-can-i-sandbox-python-in-pure-python">python sandboxing in general</a> as the tests' code is run by pytest and I can't change the way it is executed to use <code>exec</code> or <code>ast</code> or whatever. Also using pypy-sandbox is not an option unfortunately as it is "a prototype only" as per the <a href="http://pypy.org/features.html" rel="nofollow">PyPy feature page</a>.</p> <p><strong>Update 2</strong>: Hoger Krekel on the pytest-dev mailing list <a href="https://mail.python.org/pipermail/pytest-dev/2016-February/003394.html" rel="nofollow">suggests using a dedicated testuser via pytest-xdist</a> for user-level isolation:</p> <pre><code>py.test --tx ssh=OTHERUSER@localhost --dist=each </code></pre> <p>which <a href="https://mail.python.org/pipermail/pytest-dev/2016-February/003399.html" rel="nofollow">made me realise</a> that for my CI-like use case:</p> <blockquote> <p>having a "disposable" environment is as important as having a isolated one, so that every test or every session runs from the same initial state and it is not influenced by what older sessions might have left on folders writable by the <em>testuser</em> (/home/testuser, /tmp, /var/tmp, etc).</p> </blockquote> <p>So the testuser+xdist is 
close to a solution, but not quite there.</p> <p>Just for context I need isolation to run <a href="https://pytest-nodev.readthedocs.org" rel="nofollow">pytest-nodev</a>.</p>
howto
2016-02-10T17:51:38Z
35,346,425
Printing inherited class in Python
<p>I am attempting to combine two classes into one class. Towards the end of the code block you will see a class called starwarsbox. This incorporates the character and box classes. The goal is print out a box made out of asterisks and the information of a Star Wars character (this is for my learning). I have tried looking up how to use repr but have had no luck implementing it. I appreciate your help. </p> <p>I get <code>&lt;__main__.starwarsbox object at 0x000000000352A128&gt;</code></p> <pre><code>class character: 'common base class for all star wars characters' charCount = 0 def __init__(self, name, occupation, affiliation, species): self.name = name self.occupation = occupation self.affiliation = affiliation self.species = species character.charCount +=1 def displayCount(self): print ("Total characters: %d" % character.charCount) def displayCharacter(self): print ('Name :', self.name, ', Occupation:', self.occupation, ', Affiliation:', self.affiliation, ', Species:', self.species) darth_vader = character('Darth Vader', 'Sith Lord', 'Sith', 'Human') chewbacca = character('Chewbacca', 'Co-pilot and first mate on Millenium Falcon', 'Galactic Republic &amp; Rebel Alliance', 'Wookiee') class box: """let's print a box bro""" def __init__(self, x, y, title): self.x = x self.y = y self.title = title def createbox(self): for i in range(self.x): for j in range(self.y): print('*' if i in [0, self.x-1] or j in [0, self.y-1] else ' ', end='') print() vaderbox = box(10, 10, 'box') vaderbox.createbox() class starwarsbox(character, box): def __init__(self, name, occupation, affiliation, species, x, y, title): character.__init__(self, name, occupation, affiliation, species) box.__init__(self, x, y, title) def __str__(self): return box.__str__(self) + character.__str__(self) newbox = starwarsbox('luke','jedi','republic','human',10,10,'box') print(repr(newbox)) </code></pre>
howto
2016-02-11T17:44:42Z
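The default `<__main__.starwarsbox object at ...>` appears because neither parent defines `__str__` (or `__repr__`), so the child's `box.__str__(self)` call just reaches `object`'s default. A trimmed-down sketch (fields reduced for brevity) where each parent builds its own string and the child composes them:

```python
class Character:
    def __init__(self, name, occupation):
        self.name = name
        self.occupation = occupation

    def __str__(self):
        return "Name: %s, Occupation: %s" % (self.name, self.occupation)

class Box:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __str__(self):
        # Build the asterisk frame as a string instead of printing it.
        return "\n".join(
            "".join("*" if i in (0, self.x - 1) or j in (0, self.y - 1)
                    else " " for j in range(self.y))
            for i in range(self.x))

class StarWarsBox(Character, Box):
    def __init__(self, name, occupation, x, y):
        Character.__init__(self, name, occupation)
        Box.__init__(self, x, y)

    def __str__(self):
        return Box.__str__(self) + "\n" + Character.__str__(self)

print(StarWarsBox("luke", "jedi", 4, 4))
```

Use `print(newbox)` (which calls `__str__`); `print(repr(newbox))` only picks up the custom text if you also define `__repr__`.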
35,354,005
Filtering histogram edges and counts
<p>Consider a histogram calculation of a numpy array that returns percentages:</p> <pre><code># 500 random numbers between 0 and 10,000 values = np.random.uniform(0,10000,500) # Histogram using e.g. 200 buckets perc, edges = np.histogram(values, bins=200, weights=np.zeros_like(values) + 100/values.size) </code></pre> <p>The above returns two arrays:</p> <ul> <li><code>perc</code> containing the <code>%</code> (i.e. percentages) of values within each pair of consecutive <code>edges[ix]</code> and <code>edges[ix+1]</code> out of the total.</li> <li><code>edges</code> of length <code>len(hist)+1</code></li> </ul> <p>Now, say that I want to filter <code>perc</code> and <code>edges</code> so that I only end up with the percentages and edges for <strong>values</strong> contained within a new range <code>[m, M]</code>.</p> <p>That is, I want to work with the <strong>sub-arrays</strong> of <code>perc</code> and <code>edges</code> corresponding to the interval of values within <code>[m, M]</code>. Needless to say, the new array of percentages would still refer to the total fraction count of the input array. We just want to filter <code>perc</code> and <code>edges</code> to end up with the correct sub-arrays.</p> <p>How can I post-process <code>perc</code> and <code>edges</code> to do so?</p> <p>The values of <code>m</code> and <code>M</code> can be any number of course. In the example above, we can assume e.g. <code>m = 0</code> and <code>M = 200</code>.</p>
howto
2016-02-12T02:50:45Z
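A sketch of one interpretation (keep exactly the bins whose full `[edges[i], edges[i+1]]` extent lies inside `[m, M]`; partially overlapping border bins are dropped rather than re-weighted):

```python
import numpy as np

np.random.seed(0)                      # fixed seed for a reproducible sketch
values = np.random.uniform(0, 10000, 500)
perc, edges = np.histogram(values, bins=200,
                           weights=np.zeros_like(values) + 100 / values.size)

m, M = 0, 200
# edges is sorted, so searchsorted finds the first edge >= m and the
# last edge <= M; the bins between them lie entirely inside [m, M].
lo = np.searchsorted(edges, m, side="left")
hi = max(np.searchsorted(edges, M, side="right") - 1, lo)
sub_perc = perc[lo:hi]
sub_edges = edges[lo:hi + 1]
```

The sub-array percentages are untouched, so they still express fractions of the *original* total, as required.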
35,373,082
What's the most efficient way to accumulate dataframes in pyspark?
<p>I have a dataframe (or could be any RDD) containing several million rows in a well-known schema like this:</p> <pre><code>Key | FeatureA | FeatureB -------------------------- U1 | 0 | 1 U2 | 1 | 1 </code></pre> <p>I need to load a dozen other datasets from disk that contain different features for the same number of keys. Some datasets are up to a dozen or so columns wide. Imagine:</p> <pre><code>Key | FeatureC | FeatureD | FeatureE ------------------------------------- U1 | 0 | 0 | 1 Key | FeatureF -------------- U2 | 1 </code></pre> <p>It feels like a fold or an accumulation where I just want to iterate all the datasets and get back something like this:</p> <pre><code>Key | FeatureA | FeatureB | FeatureC | FeatureD | FeatureE | FeatureF --------------------------------------------------------------------- U1 | 0 | 1 | 0 | 0 | 1 | 0 U2 | 1 | 1 | 0 | 0 | 0 | 1 </code></pre> <p>I've tried loading each dataframe and then joining, but that takes forever once I get past a handful of datasets. Am I missing a common pattern or efficient way of accomplishing this task? </p>
howto
2016-02-12T22:12:51Z
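The fold intuition is right: it is a `reduce` over outer joins on `Key`. The sketch below uses pandas as a stand-in to show the shape; with Spark dataframes the same `reduce` works with `lambda a, b: a.join(b, "Key", "outer")`, and the usual performance levers are pre-partitioning every dataset by `Key` (so the joins avoid repeated shuffles) or broadcasting the small ones - details depend on your cluster and Spark version.

```python
from functools import reduce
import pandas as pd

dfs = [
    pd.DataFrame({"Key": ["U1", "U2"], "FeatureA": [0, 1], "FeatureB": [1, 1]}),
    pd.DataFrame({"Key": ["U1"], "FeatureC": [0], "FeatureD": [0], "FeatureE": [1]}),
    pd.DataFrame({"Key": ["U2"], "FeatureF": [1]}),
]

# Fold all datasets into one wide frame keyed on "Key".
wide = reduce(lambda a, b: a.merge(b, on="Key", how="outer"), dfs).fillna(0)
print(wide)
```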
35,389,648
Convert empty dictionary to empty string
<pre><code>&gt;&gt;&gt; d = {} &gt;&gt;&gt; s = str(d) &gt;&gt;&gt; print s {} </code></pre> <p>I need an empty string instead.</p>
howto
2016-02-14T08:03:36Z
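An empty dict is falsy, so a conditional expression covers it:

```python
d = {}
s = str(d) if d else ''
print(repr(s))  # ''
```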
35,414,625
pandas: how to run a pivot with a multi-index?
<p>I would like to run a pivot on a pandas dataframe, with the index being two columns, not one. For example, one field for the year, one for the month, an 'item' field which shows 'item 1' and 'item 2' and a 'value' field with numerical values. I want the index to be year + month.</p> <p>The only way I managed to get this to work was to combine the two fields into one, then separate them again. is there a better way?</p> <p>Minimal code copied below. Thanks a lot!</p> <p>PS Yes, I am aware there are other questions with the keywords 'pivot' and 'multi-index', but I did not understand if/how they can help me with this question.</p> <pre><code>import pandas as pd import numpy as np df= pd.DataFrame() month = np.arange(1,13) values1 = np.random.randint(0,100,12) values2 = np.random.randint(200,300,12) df['month'] = np.hstack(( month, month )) df['year']=2004 df['value'] = np.hstack(( values1, values2 )) df['item']= np.hstack(( np.repeat('item 1',12), np.repeat('item 2',12) )) # This doesn't work: ValueError: Wrong number of items passed 24, placement implies 2 # mypiv = df.pivot( ['year', 'month'], 'item' ,'value' ) #This doesn't work, either: #df.set_index(['year', 'month'], inplace=True) # ValueError: cannot label index with a null key #mypiv = df.pivot(columns='item', values='value') #This below works but is not ideal: I have to first concatenate then separate the fields I need df['new field']= df['year'] * 100 + df['month'] mypiv = df.pivot('new field', 'item', 'value').reset_index() mypiv['year'] = mypiv['new field'].apply( lambda x: int(x) / 100) mypiv['month'] = mypiv['new field'] % 100 </code></pre>
howto
2016-02-15T16:43:23Z
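`pivot()` only accepts a single index column, but `pivot_table()` takes a list, which avoids the concatenate/split workaround (a sketch reusing the question's setup; `pivot_table` aggregates with the mean by default, which is a no-op here because each (year, month, item) cell is unique):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame()
month = np.arange(1, 13)
df['month'] = np.hstack((month, month))
df['year'] = 2004
df['value'] = np.hstack((np.random.randint(0, 100, 12),
                         np.random.randint(200, 300, 12)))
df['item'] = np.repeat(['item 1', 'item 2'], 12)

# Multi-column index works directly with pivot_table:
mypiv = df.pivot_table(index=['year', 'month'], columns='item', values='value')
# equivalently: df.set_index(['year', 'month', 'item'])['value'].unstack('item')
print(mypiv.shape)  # (12, 2)
```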
35,427,814
Get the number of all keys in a dictionary of dictionaries in Python
<p>I have a dictionary of dictionaries in Python 2.7.</p> <p>I need to quickly count the number of all keys, including the keys within each of the dictionaries.</p> <p>So in this example I would need the number of all keys to be 6:</p> <pre><code>dict_test = {'key2': {'key_in3': 'value', 'key_in4': 'value'}, 'key1': {'key_in2': 'value', 'key_in1': 'value'}} </code></pre> <p>I know I can iterate through each key with for loops, but I am looking for a quicker way to do this, since I will have thousands/millions of keys and doing this is just ineffective:</p> <pre><code>count_the_keys = 0 for key in dict_test.keys(): for key_inner in dict_test[key].keys(): count_the_keys += 1 # something like this would be more effective # of course .keys().keys() doesn't work print len(dict_test.keys()) * len(dict_test.keys().keys()) </code></pre>
howto
2016-02-16T08:50:37Z
35,428,388
Extracting a feature by feature name in scikit dict vectorizer
<p>I have a list of dictionaries which I convert to a vectorial representation using the <code>DictVectorizer</code> in <code>scikit-learn</code>:</p> <pre><code>from sklearn.feature_extraction import DictVectorizer vec = DictVectorizer() D = [{'foo': 'city1', 'bar': 2, 'label':'c1'}, {'foo': 'city2', 'baz': 1, 'label':'c2'}] dictVector = vec.fit_transform(D) </code></pre> <p>Now, from each row I want to extract the values of all the tuples for the feature 'label' and then remove them from the vectors. This will let me use the vectors as input for a decision tree classifier in <code>scikit</code> and the corresponding labels as ground truth for the classifier.</p> <p>But when I tried calling the feature name as a dictionary key, I was asked to use only integers and not strings. How can this be resolved?</p>
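One common workaround (a sketch, not the only option) is to pull the labels out of the dictionaries *before* vectorizing, so no 'label' feature ever enters the vectors:

```python
D = [{'foo': 'city1', 'bar': 2, 'label': 'c1'},
     {'foo': 'city2', 'baz': 1, 'label': 'c2'}]

# separate the ground-truth labels from the feature dicts
labels = [d.pop('label') for d in D]
print(labels)  # ['c1', 'c2']

# D now contains only real features; it can be passed to
# DictVectorizer().fit_transform(D), with `labels` as the target vector
```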
howto
2016-02-16T09:20:08Z
35,431,172
Creating a table out of data in python
<p>So I am looking to create a table, with 4 columns.</p> <p>In my program I have already formed a list with the information. I can split the data into chunks of 4 (for each line), or make a separate list for every column.</p> <p>Is it possible to create a table including these values inside the python program or would I need to export the data first? </p> <p>EDIT: For Fauxpas</p> <pre><code>Column 1 Column 2 Column 3 Column 4 10 10 10 30 20 10 10 40 20 20 10 50 </code></pre>
howto
2016-02-16T11:20:52Z
35,432,378
Python reshape list to ndim array
<p>Hi, I have a list <code>flat</code> which is length 2800; it contains 100 results for each of 28 variables. Below is an example of 4 results for 2 variables:</p> <pre><code>[0, 0, 1, 1, 2, 2, 3, 3] </code></pre> <p>I would like to reshape the list to an array (2,4) so that the results for each variable are in a single element. </p> <pre><code>[[0,1,2,3], [0,1,2,3]] </code></pre> <p>The below gives me the values in the same order, but this is not correct:</p> <pre><code>np.shape = (2,4) </code></pre> <p>e.g.</p> <pre><code>[[0,0,0,0] [1,1,1,1]] </code></pre>
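Assuming the flat list interleaves the variables result by result (as in the example), reshaping to (results, variables) and then transposing gives each variable its own row — a sketch:

```python
import numpy as np

flat = [0, 0, 1, 1, 2, 2, 3, 3]  # 4 results for 2 variables, interleaved

# rows = results, columns = variables; transpose so each row holds
# all results for one variable
arr = np.array(flat).reshape(4, 2).T
print(arr.tolist())  # [[0, 1, 2, 3], [0, 1, 2, 3]]

# for the real data: np.array(flat).reshape(100, 28).T  -> shape (28, 100)
```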
howto
2016-02-16T12:16:03Z
35,469,417
Selecting Tags With Multiple Part Class in BeautifulSoup
<p>I'm trying to scrape some data off a webpage, that has <code>div</code> tags that have multiple part tags. E.g. <code>&lt;div class="A"&gt;</code>, <code>&lt;div class="A B"&gt;</code> and <code>&lt;div class="A X Y"&gt;</code>. I want to collect the tags of the first two types, but not the last.</p> <p>I thought that this would be simple enough using BeautifulSoup:</p> <pre><code>from bs4 import BeautifulSoup import re from urllib import request url_request = request.Request(url) html = request.urlopen(url_request) soup = BeautifulSoup(html, "lxml") divs = soup.find_all("div", {"class": re.compile("A( B)?$")}) </code></pre> <p>When I look at <code>divs</code> though, I see that all the <code>&lt;div class="A X Y"&gt;</code> tags are there too.</p> <p>The problem appears to be that BeautifulSoup considers the class to be a list, rather than a string:</p> <pre><code>&gt;&gt; div['class'] &gt;&gt; ['A', 'X', 'Y'] </code></pre> <p>Can I use regular expressions (or anything else) to remove the unwanted tags? I'd prefer to ignore them, rather than post-select the tags I want.</p>
howto
2016-02-17T22:54:55Z
35,481,842
python: how can I get a new value in round robin style every time i invoke the script
<p>I have a file called subnets which contains a list of available subnets in the system. I want my script to return one subnet at each invocation, in round-robin style.</p> <p>Example:</p> <p>subnets file:</p> <pre><code>subnet1 subnet2 subnet3 </code></pre> <p>expected output:</p> <pre><code>python next_available_subnet.py output: subnet1 python next_available_subnet.py output: subnet2 python next_available_subnet.py output: subnet3 python next_available_subnet.py output: subnet1 </code></pre> <p>How can I achieve this? I have tried global variables and iterators, but I lose the value by the time I invoke the script again, and it always gives the same subnet as output.</p>
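Since each invocation is a separate process, the position has to be persisted somewhere between runs — for example in a small state file next to the subnets file. A sketch (file names are illustrative):

```python
import os
import tempfile

def next_available_subnet(subnets_file, state_file):
    with open(subnets_file) as f:
        subnets = [line.strip() for line in f if line.strip()]

    # read the index handed out last time; -1 means "never run before"
    last = -1
    if os.path.exists(state_file):
        with open(state_file) as f:
            last = int(f.read().strip() or -1)

    idx = (last + 1) % len(subnets)
    with open(state_file, 'w') as f:
        f.write(str(idx))
    return subnets[idx]

# demo with a throwaway subnets file
d = tempfile.mkdtemp()
path = os.path.join(d, 'subnets')
state = os.path.join(d, 'subnets.state')
with open(path, 'w') as f:
    f.write('subnet1\nsubnet2\nsubnet3\n')

results = [next_available_subnet(path, state) for _ in range(4)]
print(results)  # ['subnet1', 'subnet2', 'subnet3', 'subnet1']
```

If several processes may run concurrently, the state file would additionally need locking.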
howto
2016-02-18T12:44:09Z
35,485,675
How to create a vector of Matrices in python
<p>In my code I have calculated multiple flow maps with respect to time and want to store them in one list. This is what I want to do in my code:</p> <p><a href="http://i.stack.imgur.com/xPOOo.png" rel="nofollow"><img src="http://i.stack.imgur.com/xPOOo.png" alt="enter image description here"></a></p>
howto
2016-02-18T15:31:50Z
35,488,781
Selecting the value in a row closest to zero in a pandas DataFrame
<p>We have a pandas <code>DataFrame</code> with two columns: </p> <pre><code>pd.DataFrame(pd.np.arange(12).reshape(6,2), columns=list('ab')) % 3 - 1.2 a b 0 -1.2 -0.2 1 0.8 -1.2 2 -0.2 0.8 3 -1.2 -0.2 4 0.8 -1.2 5 -0.2 0.8 </code></pre> <p>What's the best way to get the values closest to zero? The expected output of the above would be </p> <pre><code> x 0 -0.2 1 0.8 2 -0.2 3 -0.2 4 0.8 5 -0.2 </code></pre> <p>I tried using <code>df.idxmin(axis=1)</code>, and then <code>lookup</code>, but I'm betting there's an easier way?</p>
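A sketch using the underlying NumPy array: take the column index of the smallest absolute value in each row, then index the rows with it.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12).reshape(6, 2), columns=list('ab')) % 3 - 1.2

vals = df.values
cols = np.abs(vals).argmin(axis=1)            # column closest to zero, per row
x = pd.Series(vals[np.arange(len(df)), cols], index=df.index, name='x')
print(x)
```

`df.abs().idxmin(axis=1)` followed by a lookup does the same thing at the pandas level; the array route just avoids the label indirection.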
howto
2016-02-18T17:49:02Z
35,489,107
Django URL matching any 140 characters
<p>I am trying to map a view to any URL of the form home/text where text is any 140 characters (including spaces). Is there a simple way to do this? I tried:</p> <pre><code>url(r'^home/(?P&lt;text&gt;[\w]+)$',....), </code></pre> <p>but that did not allow spaces nor did it enforce any limit on length.</p> <p>Any advice would be appreciated, as I couldn't find anything similar in the documentation and am new to Django</p>
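A sketch of a pattern that allows word characters and spaces and caps the length at 140; the same regex would go into `url(r'...')`. (Note: Django decodes percent-encoded characters such as `%20` before URL matching, so literal spaces can appear in the path — worth verifying on your Django version.)

```python
import re

# {1,140} bounds the length; [\w ] allows word characters and spaces
pattern = re.compile(r'^home/(?P<text>[\w ]{1,140})$')

print(bool(pattern.match('home/hello world')))   # True
print(bool(pattern.match('home/' + 'a' * 141)))  # False
```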
howto
2016-02-18T18:04:36Z
35,493,086
Compare values in 2 columns and output the result in a third column in pandas
<p>My data looks like below, where I am trying to create the column output with the given values.</p> <pre><code> a_id b_received c_consumed 0 sam soap oil 1 sam oil NaN 2 sam brush soap 3 harry oil shoes 4 harry shoes oil 5 alice beer eggs 6 alice brush brush 7 alice eggs NaN </code></pre> <p>The code for producing the dataset is</p> <pre><code>df = pd.DataFrame({'a_id': 'sam sam sam harry harry alice alice alice'.split(), 'b_received': 'soap oil brush oil shoes beer brush eggs'.split(), 'c_consumed': 'oil NaN soap shoes oil eggs brush NaN'.split()}) </code></pre> <p>I want a new column called Output which looks like this </p> <pre><code> a_id b_received c_consumed output 0 sam soap oil 1 1 sam oil NaN 1 2 sam brush soap 0 3 harry oil shoes 1 4 harry shoes oil 1 5 alice beer eggs 0 6 alice brush brush 1 7 alice eggs NaN 1 </code></pre> <p>So the search is: if sam received soap, oil and brush, look for those values in the 'consumed' column for products he consumed, so if soap was consumed the output will be 1, but since brush wasn't consumed the output is 0. </p> <p>Similarly for harry, he received oil and shoes, then look for oil and shoes in the consumed column, if oil was consumed, the output is 1.</p> <p>To make it more clear, the output value corresponds to the first column (received), contingent on the value being present in the second column (consumed). </p> <p>I tried using this code</p> <pre><code> a=[] for i in range(len(df.b_received)): if any(df.c_consumed == df.b_received[i] ): a.append(1) else: a.append(0) df['output']=a </code></pre> <p>This gives me the output </p> <pre><code> a_id b_received c_consumed output 0 sam soap oil 1 1 sam oil NaN 1 2 sam brush soap 1 3 harry oil shoes 1 4 harry shoes oil 1 5 alice beer eggs 0 6 alice brush brush 1 7 alice eggs NaN 1 </code></pre> <p>The problem is that since sam didn't consume brush, the output should be 0 but the output is 1, since brush was consumed by a different person (alice). 
I need to make sure that doesn't happen. The output needs to be specific to each person's consumption.</p> <p>I know this is confusing, so if I have not made myself very clear, please do ask, I will answer your comments. </p>
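A sketch that keeps the lookup inside each person's own rows by building one consumed-set per buyer first:

```python
import pandas as pd

df = pd.DataFrame({'a_id': 'sam sam sam harry harry alice alice alice'.split(),
                   'b_received': 'soap oil brush oil shoes beer brush eggs'.split(),
                   'c_consumed': 'oil NaN soap shoes oil eggs brush NaN'.split()})

# set of items each person actually consumed
consumed = df.groupby('a_id')['c_consumed'].apply(set)

# flag a received item only if the *same* person consumed it
df['output'] = [int(b in consumed[a]) for a, b in zip(df['a_id'], df['b_received'])]
print(df)
```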
howto
2016-02-18T21:48:21Z
35,510,590
How to get the location of a Zope installation from inside an instance?
<p>We are working on an <a href="https://github.com/collective/collective.fingerpointing/pull/5" rel="nofollow">add-on that writes to a log file</a> and we need to figure out where the default <code>var/log</code> directory is located (the value of the <code>${buildout:directory}</code> variable).</p> <p>Is there an easy way to accomplish this?</p>
howto
2016-02-19T16:55:07Z
35,526,501
how to groupby pandas dataframe on some condition
<p>I have a pandas dataframe like the following:</p> <pre><code>buyer_id item_id order_id date 139 57 387 2015-12-28 140 9 388 2015-12-28 140 57 389 2015-12-28 36 9 390 2015-12-28 64 49 404 2015-12-29 146 49 405 2015-12-29 81 49 406 2015-12-29 140 80 407 2015-12-30 139 81 408 2015-12-30 </code></pre> <p>There are a lot of rows in the above dataframe. What I am trying to work out is whether introducing new dishes drives my users to come back. <code>item_id</code> is mapped to a dish name. What I want to see is if a specific user orders a different dish on a different day. e.g. <code>buyer_id 140 has ordered two dishes item_id (9,57) on 28th Dec and same buyer has ordered different dish (item_id = 80) on 30th Dec</code> Then I want to flag this user as <code>1</code>.</p> <p>This is how I am doing it in python:</p> <pre><code>item_wise_order.groupby(['date','buyer_id'])['item_id'].apply(lambda x: x.tolist()) </code></pre> <p>It gives me the following output:</p> <pre><code>date buyer_id 2015-12-28 139 [57] 140 [9,57] 36 [9] 2015-12-29 64 [49] 146 [49] 81 [49] 2015-12-30 140 [80] 139 [81] </code></pre> <p>Desired output:</p> <pre><code> buyer_id item_id order_id date flag 139 57 387 2015-12-28 1 140 9 388 2015-12-28 1 140 57 389 2015-12-28 1 36 9 390 2015-12-28 0 64 49 404 2015-12-29 0 146 49 405 2015-12-29 0 81 49 406 2015-12-29 0 140 80 407 2015-12-30 1 139 81 408 2015-12-30 1 </code></pre>
howto
2016-02-20T17:20:31Z
35,559,958
Check list of tuples where first element of tuple is specified by defined string
<p>This question is similar to <a href="http://stackoverflow.com/questions/9703088/check-that-list-of-tuples-has-tuple-with-1st-element-as-defined-string">Check that list of tuples has tuple with 1st element as defined string</a> but no one has properly answered the "wildcard" question.</p> <p>Say I have <code>[('A', 2), ('A', 1), ('B', 0.2)]</code></p> <p>And I want to identify the tuples where the FIRST element is A. How do I return just the following?</p> <p><code>[('A', 2), ('A', 1)]</code></p>
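A list comprehension is enough here — keep the tuples whose first element equals 'A':

```python
pairs = [('A', 2), ('A', 1), ('B', 0.2)]

matches = [t for t in pairs if t[0] == 'A']
print(matches)  # [('A', 2), ('A', 1)]
```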
howto
2016-02-22T17:25:05Z
35,560,606
Modifying HTML using python html package
<p>Can the <a href="https://pypi.python.org/pypi/html" rel="nofollow">html package</a> handle modifying the built HTML? If not, what's the best package to use that can <em>build</em> and <em>query</em>/<em>modify</em> the built HTML?</p> <p>For example, if I want to modify a table I load from a string:</p> <pre><code>table = HTML(html_table_string) # Select first td element and set its content to 'Something' table.select('td')[0] = 'Something' </code></pre>
howto
2016-02-22T17:58:13Z
35,561,635
Packaging a python application ( with enthought, matplotlib, wxpython) into executable
<p>My Python 2.7 application uses the matplotlib, enthought (mayavi, traits) and wxpython libraries. I need to package it into an executable on Windows, which, after some research and experimenting, does not seem like a straightforward task. </p> <p>I have so far experimented with PyInstaller and bbfreeze. In both of them I specify hidden imports/includes (which I could gather from random information on the web) to import the Enthought packages. Both manage to create an executable (for bbfreeze I excluded the matplotlib part of my application so far), but when I run it, both return the same error: </p> <pre><code>Traceback (most recent call last): File "&lt;string&gt;", line 6, in &lt;module&gt; File "__main__.py", line 128, in &lt;module&gt; File "__main__test__.py", line 23, in &lt;module&gt; File "traitsui/api.py", line 36, in &lt;module&gt; File "traitsui/editors/__init__.py", line 23, in &lt;module&gt; File "traitsui/editors/api.py", line 49, in &lt;module&gt; File "traitsui/editors/table_editor.py", line 37, in &lt;module&gt; File "traitsui/table_filter.py", line 35, in &lt;module&gt; File "traitsui/menu.py", line 128, in &lt;module&gt; File "pyface/toolkit.py", line 98, in __init__ NotImplementedError: the wx pyface backend doesn't implement MenuManager </code></pre> <p>Any ideas what I should do? Alternatively, has anyone had experience with creating such an executable and can recommend a tool or method? So far I have seen only <a href="http://www.geophysique.be/2011/08/01/pack-an-enthought-traits-app-inside-a-exe-using-py2exe-ets-4-0-edit/" rel="nofollow">this tutorial</a> but it uses py2exe and apparently requires downloading the whole ETS - if nothing else I'm gonna give it a try...</p>
howto
2016-02-22T18:56:34Z
35,561,743
python comprehension loop for dictionary
<p>Beginner question.</p> <p>I have a dictionary as such: </p> <pre><code>tadas = {'tadas':{'one':True,'two':2}, 'john':{'one':True,'two':True}} </code></pre> <p>I would like to count the True values where key is 'one'. How should I modify my code?</p> <pre><code>sum(x == True for y in tadas.values() for x in y.values() ) </code></pre>
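Only the inner value stored under the key 'one' needs to be checked, so one generator over the outer values does it — a sketch:

```python
tadas = {'tadas': {'one': True, 'two': 2}, 'john': {'one': True, 'two': True}}

# `is True` avoids counting truthy non-booleans such as 2
count = sum(1 for inner in tadas.values() if inner.get('one') is True)
print(count)  # 2
```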
howto
2016-02-22T19:02:26Z
35,580,801
Chunking bytes (not strings) in Python 2 and 3
<p>This is turning out to be trickier than I expected. I have a byte string:</p> <pre><code>data = b'abcdefghijklmnopqrstuvwxyz' </code></pre> <p>I want to read this data in chunks of <em>n</em> bytes. Under Python 2, this is trivial using a minor modification to the <code>grouper</code> recipe from the <code>itertools</code> documentation:</p> <pre><code>def grouper(iterable, n, fillvalue=None): "Collect data into fixed-length chunks or blocks" # grouper('ABCDEFG', 3, 'x') --&gt; ABC DEF Gxx args = [iter(iterable)] * n return (''.join(x) for x in izip_longest(fillvalue=fillvalue, *args)) </code></pre> <p>With this in place, I can call:</p> <pre><code>&gt;&gt;&gt; list(grouper(data, 2)) </code></pre> <p>And get:</p> <pre><code>['ab', 'cd', 'ef', 'gh', 'ij', 'kl', 'mn', 'op', 'qr', 'st', 'uv', 'wx', 'yz'] </code></pre> <p>Under Python 3, this gets trickier. The <code>grouper</code> function as written simply falls over:</p> <pre><code>&gt;&gt;&gt; list(grouper(data, 2)) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "&lt;stdin&gt;", line 5, in &lt;genexpr&gt; TypeError: sequence item 0: expected str instance, int found </code></pre> <p>And this is because in Python 3, when you iterate over a bytestring (like <code>b'foo'</code>), you get a list of integers, rather than a list of bytes:</p> <pre><code>&gt;&gt;&gt; list(b'foo') [102, 111, 111] </code></pre> <p>The python 3 <code>bytes</code> function will help out here:</p> <pre><code>def grouper(iterable, n, fillvalue=None): "Collect data into fixed-length chunks or blocks" # grouper('ABCDEFG', 3, 'x') --&gt; ABC DEF Gxx args = [iter(iterable)] * n return (bytes(x) for x in izip_longest(fillvalue=fillvalue, *args)) </code></pre> <p>Using that, I get what I want:</p> <pre><code>&gt;&gt;&gt; list(grouper(data, 2)) [b'ab', b'cd', b'ef', b'gh', b'ij', b'kl', b'mn', b'op', b'qr', b'st', b'uv', b'wx', b'yz'] </code></pre> <p>But (of course!) 
the <code>bytes</code> function under Python 2 does not behave the same way. It's just an alias for <code>str</code>, so that results in:</p> <pre><code>&gt;&gt;&gt; list(grouper(data, 2)) ["('a', 'b')", "('c', 'd')", "('e', 'f')", "('g', 'h')", "('i', 'j')", "('k', 'l')", "('m', 'n')", "('o', 'p')", "('q', 'r')", "('s', 't')", "('u', 'v')", "('w', 'x')", "('y', 'z')"] </code></pre> <p>...which is not at all helpful. I ended up writing the following:</p> <pre><code>def to_bytes(s): if six.PY3: return bytes(s) else: return ''.encode('utf-8').join(list(s)) def grouper(iterable, n, fillvalue=None): "Collect data into fixed-length chunks or blocks" # grouper('ABCDEFG', 3, 'x') --&gt; ABC DEF Gxx args = [iter(iterable)] * n return (to_bytes(x) for x in izip_longest(fillvalue=fillvalue, *args)) </code></pre> <p>This seems to work, but is this really the way to do it?</p>
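A sketch of an alternative that sidesteps the iteration difference entirely: slicing a byte string returns bytes on both Python 2 and Python 3. (Unlike the grouper recipe, the last chunk simply comes out shorter instead of being padded with a fill value.)

```python
data = b'abcdefghijklmnopqrstuvwxyz'

def chunked(data, n):
    # slicing bytes yields bytes on both Python 2 and Python 3
    return [data[i:i + n] for i in range(0, len(data), n)]

print(chunked(data, 2))
# [b'ab', b'cd', ..., b'yz']
```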
howto
2016-02-23T14:58:01Z
35,595,836
pysvn: How to find out if local dir is under version control?
<p>Using <code>pysvn</code> to check some SVN working copy properties. </p> <p>What is the easy way of finding out if a local directory <code>c:\SVN\dir1</code> is under version control or not?</p>
howto
2016-02-24T07:28:29Z
35,596,269
Replacing strings in specific positions into a text and then rewriting all the text
<p>I'm working on a text with several columns and many lines as given below:</p> <p>I want to replace "A" in index[4] with "B".</p> <pre><code>ATOM 1 N ARG A 88 63.055 9.295 9.736 1.00 25.54 N ATOM 2 CA ARG A 88 61.952 10.108 10.353 1.00 26.02 C </code></pre> <p>and rewrite my text as:</p> <pre><code>ATOM 1 N ARG B 88 63.055 9.295 9.736 1.00 25.54 N ATOM 2 CA ARG B 88 61.952 10.108 10.353 1.00 26.02 C </code></pre> <p>I'm using this script but it changes all "A" to "B".</p> <pre><code>file = open('1qib.pdb', 'r') file2 = open('new.pdb', 'w') for i, line in enumerate(file): s = line.split()[4] file2.write(line.replace(s, "B")) file.close() file2.close() </code></pre>
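A sketch that rewrites only the fifth whitespace-separated field while preserving the original spacing (both 'A' and 'B' are one character wide, so the columns stay aligned; this assumes the lines do not start with whitespace):

```python
import re

def set_chain(line, new='B'):
    # split but keep the whitespace separators: tokens sit at even
    # indices, separators at odd indices
    parts = re.split(r'(\s+)', line)
    parts[8] = new  # the 5th field (index 4) is at position 2 * 4 = 8
    return ''.join(parts)

line = 'ATOM      1  N   ARG A  88      63.055   9.295   9.736  1.00 25.54           N\n'
print(set_chain(line), end='')
```

Applied inside the question's loop, each line would be written out as `file2.write(set_chain(line))`.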
howto
2016-02-24T07:54:19Z
35,608,326
pyspark - multiple input files into one RDD and one output file
<p>I have a wordcount in Python that I want to run on Spark with multiple text files and get ONE output file, so the words are counted in all files altogether. I tried a few solutions for example the ones found <a href="http://stackoverflow.com/questions/24029873/how-to-read-multiple-text-files-into-a-single-rdd">here</a> and <a href="http://stackoverflow.com/questions/23397907/spark-context-textfile-load-multiple-files/23407980">here</a>, but it still gives the same number of output files as the number of input files.</p> <pre><code>rdd = sc.textFile("file:///path/*.txt") input = sc.textFile(join(rdd)) </code></pre> <p>or</p> <pre><code>rdd = sc.textFile("file:///path/f0.txt,file:///path/f1.txt,...") rdds = Seq(rdd) input = sc.textFile(','.join(rdds)) </code></pre> <p>or</p> <pre><code>rdd = sc.textFile("file:///path/*.txt") input = sc.union(rdd) </code></pre> <p>don't work. Can anybody suggest a solution how to make one RDD of a few input text files?</p> <p>Thanks in advance...</p>
howto
2016-02-24T16:56:27Z
35,609,991
How do I print a sorted Dictionary in Python 3.4.3
<p>I am studying for my GCSE, part of which requires me to print a dictionary sorted alphabetically by key, and the print should include the associated value.</p> <p>I have spent hours trying to find the answer to this and have looked at various posts on this forum but most are too complex for my limited knowledge.</p> <p>I can print alphabetically sorted keys and I can print sorted values, but not alphabetically sorted keys with the values attached.</p> <p>This is my simple test code:</p> <pre><code>class1 = { 'Ethan':'9','Ian':'3','Helen':'8','Holly':'6' } # create dictionary print(sorted(class1)) # prints sorted Keys print(sorted(class1.values())) # Prints sorted values # need to print sorted keys with values - how to do that? for k,v in class1.items(): print(k,v) # prints out in the format I want but not alphabetically sorted </code></pre> <p>The simplest solution you can provide would be gratefully received.</p>
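A sketch: iterating over `sorted(class1)` walks the keys alphabetically, and the matching value can be looked up inside the loop:

```python
class1 = {'Ethan': '9', 'Ian': '3', 'Helen': '8', 'Holly': '6'}

for name in sorted(class1):          # keys in alphabetical order
    print(name, class1[name])
# Ethan 9
# Helen 8
# Holly 6
# Ian 3
```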
howto
2016-02-24T18:20:17Z
35,611,992
recursive way to go through a nested list and remove all of a select value
<p>I'm trying to remove all empty lists from a nested list recursively. </p> <pre><code>def listcleaner(lst): if isinstance(lst[0], int): return listcleaner(lst[1:]) if isinstance(lst[0], list): if len(lst[0]) == []: lst[0].remove([]) return listcleaner(lst) return listcleaner(lst[0]) return lst </code></pre> <p>and what I'd like the function to do is</p> <pre><code>&gt;&gt;&gt; a = listcleaner([1, [], [2, []], 5]) &gt;&gt;&gt; print(a) [1, [2], 5] </code></pre>
howto
2016-02-24T20:06:21Z
35,619,038
Generate random numbers without using the last n values in Python
<p>I have a Python function that generates a random number between 0 and 100:</p> <pre><code>def get_next_number(): value = randint(0,100) </code></pre> <p>Every time I call this function, I need it to return a random number but that number cannot be one of the last n random numbers it returned (lets say 5 for this example).</p> <p>Here are some examples:</p> <p>55, 1, 67, 12, 88, 91, 100, 54 (This is fine as there are no duplicates in the last 5 numbers returned)</p> <p>77, 42, 2, 3, 88, 2... (When the function gets a random number of 2, I need it to try again since 2 was already returned 3 numbers prior)</p> <p>89, 23, 29, 81, 99, 100, 6, 8, 23... (This one is fine because 23 occurred more than 5 times before)</p> <p>Is there something built into the random function to accomplish this?</p>
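Nothing built into `random` does this directly, but a `deque` with `maxlen=5` remembers exactly the last five returned values and keeps the retry loop short — a sketch:

```python
import random
from collections import deque

recent = deque(maxlen=5)  # automatically drops values older than the last 5

def get_next_number():
    while True:
        value = random.randint(0, 100)
        if value not in recent:   # retry if seen in the last 5 draws
            recent.append(value)
            return value

sample = [get_next_number() for _ in range(10)]
print(sample)
```

With 101 possible values and only 5 excluded, the retry loop terminates quickly in practice.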
howto
2016-02-25T05:23:40Z
35,631,192
Element-wise constraints in scipy.optimize.minimize
<p>I'm using <code>scipy.optimize.minimize</code>'s COBYLA method to find a matrix of parameters for a categorical distribution. I need to impose the constraint that each parameter is greater than zero, and that the sum of the rows of the parameter matrix is a column of ones.</p> <p>It's not clear to me how to implement this in <code>scipy.minimize</code>, because the constraints are checked for non-negativity rather than truth. The minimization raises an exception if I just pass the arrays as the constraint.</p> <p>Does anyone know how to go about implementing these kinds of constraints?</p>
howto
2016-02-25T15:13:29Z
35,633,421
How to remove/omit smaller contour lines using matplotlib
<p>I am trying to plot <code>contour</code> lines of pressure level. I am using a netCDF file which contain the higher resolution data (ranges from 3 km to 27 km). Due to higher resolution data set, I get lot of pressure values which are not required to be plotted (rather I don't mind omitting certain contour line of insignificant values). I have written some plotting script based on the examples given in this link <a href="http://matplotlib.org/basemap/users/examples.html" rel="nofollow">http://matplotlib.org/basemap/users/examples.html</a>. </p> <p>After plotting the image looks like this </p> <p><a href="http://i.stack.imgur.com/q2pHB.png" rel="nofollow"><img src="http://i.stack.imgur.com/q2pHB.png" alt="Contour Plot"></a></p> <p>From the image I have encircled the contours which are small and not required to be plotted. Also, I would like to plot all the <code>contour</code> lines smoother as mentioned in the above image. Overall I would like to get the contour image like this:-</p> <p><a href="http://i.stack.imgur.com/hKU5D.gif" rel="nofollow"><img src="http://i.stack.imgur.com/hKU5D.gif" alt="Internet Image"></a></p> <p>Possible solution I think of are</p> <ol> <li>Find out the number of points required for plotting contour and mask/omit those lines if they are small in number.</li> </ol> <p><strong>or</strong></p> <ol start="2"> <li>Find the area of the contour (as I want to omit only circled contour) and omit/mask those are smaller.</li> </ol> <p><strong>or</strong></p> <ol start="3"> <li>Reduce the resolution (only contour) by increasing the distance to 50 km - 100 km.</li> </ol> <p>I am able to successfully get the points using SO thread <a href="http://stackoverflow.com/questions/18304722/python-find-contour-lines-from-matplotlib-pyplot-contour">Python: find contour lines from matplotlib.pyplot.contour()</a></p> <p>But I am not able to implement any of the suggested solution above using those points.</p> <p>Any solution to implement the above suggested 
solution is really appreciated.</p> <p><strong>Edit:-</strong></p> <p>@ Andras Deak I used <code>print 'diameter is ', diameter</code> line just above <code>del(level.get_paths()[kp])</code> line to check if the code filters out the required diameter. Here is the filterd messages when I set <code>if diameter &lt; 15000:</code>:</p> <pre><code>diameter is 9099.66295612 diameter is 13264.7838257 diameter is 445.574234531 diameter is 1618.74618114 diameter is 1512.58974168 </code></pre> <p>However the resulting image does not have any effect. All look same as posed image above. I am pretty sure that I have saved the figure (after plotting the wind barbs).</p> <p>Regarding the solution for reducing the resolution, <code>plt.contour(x[::2,::2],y[::2,::2],mslp[::2,::2])</code> it works. I have to apply some filter to make the curve smooth.</p> <p><strong>Full working example code for removing lines:-</strong></p> <p>Here is the example code for your review</p> <pre><code>#!/usr/bin/env python from netCDF4 import Dataset import matplotlib matplotlib.use('agg') import matplotlib.pyplot as plt import numpy as np import scipy.ndimage from mpl_toolkits.basemap import interp from mpl_toolkits.basemap import Basemap # Set default map west_lon = 68 east_lon = 93 south_lat = 7 north_lat = 23 nc = Dataset('ncfile.nc') # Get this variable for later calucation temps = nc.variables['T2'] time = 0 # We will take only first interval for this example # Draw basemap m = Basemap(projection='merc', llcrnrlat=south_lat, urcrnrlat=north_lat, llcrnrlon=west_lon, urcrnrlon=east_lon, resolution='l') m.drawcoastlines() m.drawcountries(linewidth=1.0) # This sets the standard grid point structure at full resolution x, y = m(nc.variables['XLONG'][0], nc.variables['XLAT'][0]) # Set figure margins width = 10 height = 8 plt.figure(figsize=(width, height)) plt.rc("figure.subplot", left=.001) plt.rc("figure.subplot", right=.999) plt.rc("figure.subplot", bottom=.001) plt.rc("figure.subplot", top=.999) 
plt.figure(figsize=(width, height), frameon=False) # Convert Surface Pressure to Mean Sea Level Pressure stemps = temps[time] + 6.5 * nc.variables['HGT'][time] / 1000. mslp = nc.variables['PSFC'][time] * np.exp(9.81 / (287.0 * stemps) * nc.variables['HGT'][time]) * 0.01 + ( 6.7 * nc.variables['HGT'][time] / 1000) # Contour only at 2 hpa interval level = [] for i in range(mslp.min(), mslp.max(), 1): if i % 2 == 0: if i &gt;= 1006 and i &lt;= 1018: level.append(i) # Save mslp values to upload to SO thread # np.savetxt('mslp.txt', mslp, fmt='%.14f', delimiter=',') P = plt.contour(x, y, mslp, V=2, colors='b', linewidths=2, levels=level) # Solution suggested by Andras Deak for level in P.collections: for kp,path in enumerate(level.get_paths()): # include test for "smallness" of your choice here: # I'm using a simple estimation for the diameter based on the # x and y diameter... verts = path.vertices # (N,2)-shape array of contour line coordinates diameter = np.max(verts.max(axis=0) - verts.min(axis=0)) if diameter &lt; 15000: # threshold to be refined for your actual dimensions! #print 'diameter is ', diameter del(level.get_paths()[kp]) # no remove() for Path objects:( #level.remove() # This does not work. produces ValueError: list.remove(x): x not in list plt.gcf().canvas.draw() plt.savefig('dummy', bbox_inches='tight') plt.close() </code></pre> <p>After the plot is saved I get the same image</p> <p><a href="http://i.stack.imgur.com/mj3IQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/mj3IQ.png" alt="Pic of working example"></a></p> <p>You can see that the lines are not removed yet. 
Here is the link to the <code>mslp</code> array which we are trying to play with: <a href="http://www.mediafire.com/download/7vi0mxqoe0y6pm9/mslp.txt" rel="nofollow">http://www.mediafire.com/download/7vi0mxqoe0y6pm9/mslp.txt</a></p> <p>If you want the <code>x</code> and <code>y</code> data which are being used in the above code, I can upload them for your review.</p> <p><strong>Smooth line</strong></p> <p>Your code to remove the smaller circles works perfectly. However, the other question I asked in the original post (smooth lines) does not seem to work. I have used your code to slice the array to get minimal values and contoured it. I have used the following code to reduce the array size:</p> <pre><code>slice = 15 CS = plt.contour(x[::slice,::slice],y[::slice,::slice],mslp[::slice,::slice], colors='b', linewidths=1, levels=levels) </code></pre> <p>The result is below.</p> <p><a href="http://i.stack.imgur.com/SPum8.png" rel="nofollow"><img src="http://i.stack.imgur.com/SPum8.png" alt="irregular line"></a></p> <p>After searching for a few hours I found this SO thread with a similar issue:</p> <p><a href="http://stackoverflow.com/questions/25544110/regridding-regular-netcdf-data">Regridding regular netcdf data</a></p> <p>But none of the solutions provided there work. The questions similar to mine above do not have proper solutions. If this issue is solved then the code is perfect and complete. </p>
howto
2016-02-25T16:50:48Z
35,636,806
Authentication to use for user notifications using Crossbar/Autobahn?
<p>I'm currently trying to implement a user notification system using Websockets via Crossbar/Autobahn. I have done multiple tests and gone through the documentation, however, I'm not sure if there's a solution to having the following workflow work: </p> <ol> <li>User signs in with web app -- this is done through JWT </li> <li>Frontend establishes a websocket connection to a running <code>crossbar</code> instance. </li> <li>Frontend attempts to subscribe to a URI specifically for the user's notifications: i.e. <code>com.example.notifications.user.23</code> or <code>com.example.user.23.notifications</code>, where <code>23</code> is the user id. </li> <li>User's JWT is checked to see if user is allowed to access subscription.</li> <li>When activity is generated and causes a notification, the backend publishes the user-specific URIs.</li> </ol> <p>For step 3, I can't tell if the currently supported auth methods have what I need. Ideally, I would like an auth method which I can customize (in order to implement a JWT authenticator within Crossbar) that I can apply to a URI pattern, but NOT give access to the entire pattern to the subscribing user. This is partially solved by the dynamic auth methods, but is missing the latter half:</p> <p>For example (my ideal workflow):</p> <ol> <li>User attempts to subscribe to a URI <code>com.example.user.23.notifications</code>.</li> <li>URI matches <code>com.example.user..notifications</code> (wildcard pattern in <a href="http://crossbar.io/docs/Pattern-Based-Subscriptions/" rel="nofollow">http://crossbar.io/docs/Pattern-Based-Subscriptions/</a>)</li> <li>Auth token is validated and user is given access to <em>only</em> <code>com.example.user.23.notifications</code>. </li> </ol> <p>Is the above achievable in a simple way? 
From what I can tell, it may only be possible if I somehow generate a <code>.crossbar/config.json</code> which contains URI permutations of all user ids...and automatically generate a new config for each new user -- which is completely not a reasonable solution. </p> <p>Any help is appreciated!</p>
howto
2016-02-25T19:42:42Z
35,656,186
Sun Grid Engine, force one job per node
<p>I am running many repeats of the same job using numpy on a cluster that uses sun grid engine to distribute jobs (starcluster). Each of my nodes has 2 cores (c3.large on AWS). So say I have 5 nodes, each with 2 cores.</p> <p>The matrix operations in numpy are able to use more than one core at a time. What I'm finding is that SGE will send out 10 jobs to run at once, each job using a core. This is causing longer runtimes for the jobs. Looking at htop, it looks like the two jobs on each core are fighting over resources.</p> <p>How can I tell qsub to distribute 1 job per node. So that when I submit my jobs, only 5 will be running at once, not 10?</p>
howto
2016-02-26T15:54:08Z
35,664,103
Iterator that supports pushback
<p>I'm looking for a way to convert a regular iterator into one that supports pushing items back into it. E.g.</p> <pre><code>item = next(my_iterator) if went_too_far(item): my_iterator.pushback(item) break; </code></pre> <p>This is similar, but not identical to, an iterator that supports <code>peek</code>; with the latter, the above would look more like this:</p> <pre><code>if went_too_far(my_iterator.peek()): break else: item = next(my_iterator) </code></pre>
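A small wrapper class with an internal stack gives pushback (and peek could be built on top of it) — a sketch:

```python
class PushbackIterator(object):
    def __init__(self, iterable):
        self.it = iter(iterable)
        self.pushed = []              # stack of values handed back

    def __iter__(self):
        return self

    def __next__(self):
        if self.pushed:
            return self.pushed.pop()  # serve pushed-back items first
        return next(self.it)

    next = __next__                   # Python 2 compatibility

    def pushback(self, item):
        self.pushed.append(item)

my_iterator = PushbackIterator([1, 2, 3])
item = next(my_iterator)              # 1
my_iterator.pushback(item)
print(next(my_iterator))              # 1 again
```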
howto
2016-02-27T00:21:41Z
35,667,931
How to transform a pair of values into a sorted unique array?
<p>I have a result like this:</p> <pre><code>[(196, 128), (196, 128), (196, 128), (128, 196), (196, 128), (128, 196), (128, 196), (196, 128), (128, 196), (128, 196)] </code></pre> <p>And I'd like to convert it to unique values like this, in sorted order:</p> <pre><code>[128, 196] </code></pre> <p>And I'm pretty sure there's something like a one-liner trick in Python (batteries included) but I can't find one.</p>
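A near one-liner with the standard library: flatten the pairs, deduplicate with a set, then sort:

```python
from itertools import chain

pairs = [(196, 128), (196, 128), (196, 128), (128, 196), (196, 128),
         (128, 196), (128, 196), (196, 128), (128, 196), (128, 196)]

result = sorted(set(chain.from_iterable(pairs)))
```

A set comprehension works too: `sorted({x for t in pairs for x in t})`.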
howto
2016-02-27T09:18:12Z
35,668,472
How can i search a array from a large array by numpy
<p>I am beginning at numpy! Has numpy some function can search an array from another one ,and return the similar ones? Thanks!</p> <pre><code>import numpy as np def searchBinA(B = ['04','22'],A): result = [] ?......? numpy.search(B,A)? "is this correct?" return result A = [['03', '04', '18', '22', '25', '29','30'], ['02', '04', '07', '09', '14', '29','30'], \ ['06', '08', '11', '13', '17', '19','30'], ['04', '08', '22', '23', '27', '29','30'], \ ['03', '05', '15', '22', '24', '25','30']] print(str(searchBinA())) output:[['03', '04', '18', '22', '25', '29','30'], ['04', '08', '22', '23', '27', '29','30']] </code></pre>
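There is no single `numpy.search` call for this, but a subset test per row expresses the intent directly (plain Python shown; with NumPy, `np.isin(A, B)` gives a boolean mask that can be reduced per row):

```python
def searchBinA(B, A):
    """Return the rows of A that contain every element of B."""
    wanted = set(B)
    return [row for row in A if wanted.issubset(row)]

A = [['03', '04', '18', '22', '25', '29', '30'],
     ['02', '04', '07', '09', '14', '29', '30'],
     ['06', '08', '11', '13', '17', '19', '30'],
     ['04', '08', '22', '23', '27', '29', '30'],
     ['03', '05', '15', '22', '24', '25', '30']]

rows = searchBinA(['04', '22'], A)
```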
howto
2016-02-27T10:16:13Z
35,678,083
Pandas: Delete rows of a DataFrame if total count of a particular column occurs only 1 time
<p>I'm looking to delete rows of a DataFrame if total count of a particular column occurs only 1 time</p> <p>Example of raw table (values are arbitrary for illustrative purposes):</p> <pre><code>print df Country Series Value 0 Bolivia Population 123 1 Kenya Population 1234 2 Ukraine Population 12345 3 US Population 123456 5 Bolivia GDP 23456 6 Kenya GDP 234567 7 Ukraine GDP 2345678 8 US GDP 23456789 9 Bolivia #McDonalds 3456 10 Kenya #Schools 3455 11 Ukraine #Cars 3456 12 US #Tshirts 3456789 </code></pre> <p>Intended outcome:</p> <pre><code>print df Country Series Value 0 Bolivia Population 123 1 Kenya Population 1234 2 Ukraine Population 12345 3 US Population 123456 5 Bolivia GDP 23456 6 Kenya GDP 234567 7 Ukraine GDP 2345678 8 US GDP 23456789 </code></pre> <p>I know that <code>df.Series.value_counts()&gt;1</code> will identify which <code>df.Series</code> occur more than 1 time; and that the code returned will look something like the following:</p> <pre><code> Population True GDP True #McDonalds False #Schools False #Cars False #Tshirts False </code></pre> <p>I want to write something like the following so that my new DataFrame drops column values from df.Series that occur only 1 time, but this doesn't work: <code>df.drop(df.Series.value_counts()==1,axis=1,inplace=True)</code></p>
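A possible approach (a sketch, not from the question): build a boolean mask by mapping each `Series` value to its total count, then keep only rows whose count exceeds 1:

```python
import pandas as pd

# abbreviated version of the question's data
df = pd.DataFrame({
    'Country': ['Bolivia', 'Kenya', 'US', 'Bolivia', 'Kenya', 'US'],
    'Series':  ['Population', 'Population', 'Population', 'GDP', 'GDP', '#Tshirts'],
    'Value':   [123, 1234, 123456, 23456, 234567, 3456789],
})

keep = df['Series'].map(df['Series'].value_counts()) > 1
filtered = df[keep]
```

`df[df.groupby('Series')['Series'].transform('size') > 1]` is an equivalent groupby-based spelling.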
howto
2016-02-28T02:40:38Z
35,707,224
Functional statement in Python to return the sum of certain lists in a list of lists
<p>I'm trying to return the count of the total number of elements contained in all sublists with a length > 1 contained in a parent list:</p> <pre><code>x = [[4], [6, 4, 9], [4, 6], [0], []] # 1) Filter on x for only lists whose length is &gt; 1 # 2) Reduce the new list to a sum of the lengths of each sublist # result should be 5 </code></pre> <p>This is what I have tried:</p> <pre><code># Invalid as y is a list reduce((lambda x, y: len(x) + y), filter((lambda x: len(x) &gt; 1), x)) </code></pre> <p>I think a map might be involved somehow, but I'm not sure how to structure it.</p>
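The missing piece is that the reduction should accumulate an integer, not combine two lists, so the accumulator needs an initial value of 0; a plain `sum` over a generator says the same thing more simply:

```python
from functools import reduce

x = [[4], [6, 4, 9], [4, 6], [0], []]

# functional style: filter, then reduce with 0 as the initializer
total = reduce(lambda acc, sub: acc + len(sub),
               filter(lambda s: len(s) > 1, x), 0)

# idiomatic equivalent
total2 = sum(len(sub) for sub in x if len(sub) > 1)
```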
howto
2016-02-29T18:49:20Z
35,711,059
Extract dictionary value from column in data frame
<p>I'm looking for a way to optimize my code. </p> <p>I have entry data in this form:</p> <pre><code>import pandas as pn a=[{'Feature1': 'aa1','Feature2': 'bb1','Feature3': 'cc2' }, {'Feature1': 'aa2','Feature2': 'bb2' }, {'Feature1': 'aa1','Feature2': 'cc1' } ] b=['num1','num2','num3'] df= pn.DataFrame({'num':b, 'dic':a }) </code></pre> <p>I would like to extract element 'Feature3' from dictionaries in column 'dic'(if exist) in above data frame. So far I was able to solve it but I don't know if this is the fastest way, it seems to be a little bit over complicated.</p> <pre><code>Feature3=[] for idx, row in df['dic'].iteritems(): l=row.keys() if 'Feature3' in l: Feature3.append(row['Feature3']) else: Feature3.append(None) df['Feature3']=Feature3 print df </code></pre> <p>Is there a better/faster/simpler way do extract this Feature3 to separate column in the dataframe?</p> <p>Thank you in advance for help.</p>
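A shorter version of the same logic (a sketch, assuming every cell in `dic` is a dict): `dict.get` already returns `None` for missing keys, so the explicit membership test and manual list can collapse into one `apply`:

```python
import pandas as pd

a = [{'Feature1': 'aa1', 'Feature2': 'bb1', 'Feature3': 'cc2'},
     {'Feature1': 'aa2', 'Feature2': 'bb2'},
     {'Feature1': 'aa1', 'Feature2': 'cc1'}]
b = ['num1', 'num2', 'num3']
df = pd.DataFrame({'num': b, 'dic': a})

df['Feature3'] = df['dic'].apply(lambda d: d.get('Feature3'))
```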
howto
2016-02-29T22:32:11Z
35,720,234
Pivotting via Python and Pandas
<p>Has a table like this:</p> <pre><code>ID Word 1 take 2 the 3 long 4 long 5 road 6 and 7 walk 8 it 9 walk 10 it </code></pre> <p>Wanna to use pivot table in pandas to get distinct words in columns and 1 and 0 in Values. Smth like this matrix:</p> <pre><code>ID Take The Long Road And Walk It 1 1 0 0 0 0 0 0 2 0 1 0 0 0 0 0 3 0 0 1 0 0 0 0 4 0 0 1 0 0 0 0 5 0 0 0 1 0 0 0 </code></pre> <p>and so on</p> <p>Trying to use pivot table but not familiar with pandas syntax yet:</p> <pre><code>import pandas as pd data = pd.read_csv('dataset.txt', sep='|', encoding='latin1') table = pd.pivot_table(data,index=["ID"],columns=pd.unique(data["Word"].values),fill_value=0) </code></pre> <p>How can I rewrite pivot table function to deal with it?</p>
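Since each ID carries exactly one word here, `pd.crosstab` produces the desired 1/0 indicator matrix without needing `pivot_table`'s aggregation machinery (a sketch; note the columns come out in sorted order, not order of first appearance):

```python
import pandas as pd

data = pd.DataFrame({'ID': range(1, 11),
                     'Word': ['take', 'the', 'long', 'long', 'road',
                              'and', 'walk', 'it', 'walk', 'it']})

table = pd.crosstab(data['ID'], data['Word'])
```

`pd.get_dummies(data.set_index('ID')['Word'])` is a close alternative.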
howto
2016-03-01T10:19:40Z
35,720,330
Getting specific field from chosen Row in Pyspark DataFrame
<p>I have a Spark DataFrame built through <em>pyspark</em> from a JSON file as </p> <pre><code>sc = SparkContext() sqlc = SQLContext(sc) users_df = sqlc.read.json('users.json') </code></pre> <p>Now, I want to access a <em>chosen_user</em> data, where this is its _id field. I can do</p> <pre><code>print users_df[users_df._id == chosen_user].show() </code></pre> <p>and this gives me the full Row of the user. But suppose I just want one specific field in the Row, say the user gender, how would I obtain it?</p>
howto
2016-03-01T10:23:27Z
35,734,026
Numpy drawing from urn
<p>I want to run a relatively simple random draw in numpy, but I can't find a good way to express it. I think the best way is to describe it as drawing from an urn without replacement. I have an urn with k colors, and n_k balls of every color. I want to draw m balls, and know how many balls of every color I have.</p> <p>My current attempt is</p> <p><code>np.bincount(np.random.permutation(np.repeat(np.arange(k), n_k))[:m], minlength=k)</code></p> <p>here, <code>n_k</code> is an array of length k with the counts of the balls.</p> <p>It seems that's equivalent to <code>np.bincount(np.random.choice(k, m, n_k / n_k.sum(), minlength=k))</code> which is a bit better, but still not great.</p>
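This urn model is exactly the multivariate hypergeometric distribution, and recent NumPy (1.18+) samples it directly via the `Generator` API, avoiding the full permutation (a sketch with made-up counts):

```python
import numpy as np

n_k = np.array([5, 3, 2])        # balls of each of the k colors
m = 6                            # number of balls drawn without replacement

rng = np.random.default_rng(0)
counts = rng.multivariate_hypergeometric(n_k, m)
```

Note the `np.random.choice` variant in the question draws *with* replacement, so it is not actually equivalent to the permutation approach.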
howto
2016-03-01T21:36:36Z
35,774,261
Read a dense matrix from a file directly into a sparse numpy array?
<p>I have a matrix stored in a tab-separated format in a text file. It is stored densely, but I know it is very sparse. I want to load this matrix into one of Python's sparse formats. The matrix is very large, so doing a <code>scipy.loadtxt(...)</code> and then converting the resulting dense array to a sparse format would take too much RAM memory in the intermediate step, so that is not an option.</p>
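One way to keep memory bounded (a sketch, assuming SciPy is available) is to stream the file line by line into a `lil_matrix`, storing only the nonzeros, and convert to CSR at the end; a file object can replace the `StringIO` used here for illustration:

```python
import io
from scipy.sparse import lil_matrix

raw = "0\t0\t3\n0\t0\t0\n5\t0\t0\n"   # stand-in for the dense text file
lines = raw.splitlines()
n_rows = len(lines)
n_cols = len(lines[0].split('\t'))

mat = lil_matrix((n_rows, n_cols))
for i, line in enumerate(lines):
    for j, tok in enumerate(line.split('\t')):
        v = float(tok)
        if v != 0:                     # only nonzeros are ever stored
            mat[i, j] = v

csr = mat.tocsr()
```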
howto
2016-03-03T14:01:21Z
35,775,207
Remove unnecessary whitespace from Jinja rendered template
<p>I'm using <code>curl</code> to watch the output of my web app. When Flask and Jinja render templates, there's a lot of unnecessary white space in the output. It seems to be added by rendering various components from Flask-WTF and Flask-Bootstrap. I could strip this using <code>sed</code>, but is there a way to control this from Jinja?</p>
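Jinja's own knobs for this are the `trim_blocks` and `lstrip_blocks` environment options (in Flask, set them on `app.jinja_env`); a standalone sketch:

```python
from jinja2 import Environment

env = Environment(trim_blocks=True, lstrip_blocks=True)
template = env.from_string(
    "{% for item in items %}\n"
    "{{ item }}\n"
    "{% endfor %}\n")

out = template.render(items=['a', 'b'])
```

`{%- ... -%}` tag markers give per-tag control when you don't want to change the whole environment.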
howto
2016-03-03T14:42:30Z
35,781,083
python- combining list and making them a dictionary
<p>I have two list</p> <pre><code>one=[1,3] elements=["a","b","c","d"] </code></pre> <p>So, I want to create a dictionary- where <code>one</code> is treated as key and <code>elements</code> treated as values and output should be like this</p> <pre><code>{'1': ['a'], '3': [["b"],["c"],["d"]] } </code></pre>
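Reading the numbers in `one` as chunk sizes for consecutive slices of `elements`, a sketch that reproduces the asked-for (slightly irregular) nesting, where one-element chunks stay flat and larger chunks wrap each element in its own list:

```python
one = [1, 3]
elements = ["a", "b", "c", "d"]

result = {}
pos = 0
for n in one:
    chunk = elements[pos:pos + n]
    result[str(n)] = chunk if n == 1 else [[e] for e in chunk]
    pos += n
```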
howto
2016-03-03T19:17:54Z
35,797,523
Create new list by taking first item from first list, and last item from second list
<p>How do I loop through my 2 lists so that I can use</p> <pre><code>a=[1,2,3,8,12] b=[2,6,4,5,6] </code></pre> <p>to get </p> <pre><code>[1,6,2,5,3,8,6,12,2] </code></pre> <p>OR use</p> <pre><code>d=[a,b,c,d] e=[w,x,y,z] </code></pre> <p>to get</p> <pre><code>[a,z,b,y,c,x,d,w] </code></pre> <p>(1st element from 1st list, last element from 2nd list)<br> (2nd element from 1st list, 2nd to last element from 2nd list)</p>
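Pairing the first list with the second list reversed and flattening gives the interleave; a sketch (the question's numeric sample output appears to drop one element, but the letter example comes out exactly as asked):

```python
a = [1, 2, 3, 8, 12]
b = [2, 6, 4, 5, 6]
merged = [item for pair in zip(a, reversed(b)) for item in pair]
# merged == [1, 6, 2, 5, 3, 4, 8, 6, 12, 2]

d = ['a', 'b', 'c', 'd']
e = ['w', 'x', 'y', 'z']
merged2 = [item for pair in zip(d, reversed(e)) for item in pair]
```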
howto
2016-03-04T13:38:43Z
35,805,891
How to get only even numbers from list
<pre><code>def only_evens(lst): """ (list of list of int) -&gt; list of list of int Return a list of the lists in lst that contain only even integers. &gt;&gt;&gt; only_evens([[1, 2, 4], [4, 0, 6], [22, 4, 3], [2]]) [[4, 0, 6], [2]] """ even_lists = [] for sublist in lst: for i in sublist: if i % 2 == 0: even_lists.append(i) return even_lists </code></pre> <p>I can't do this because it returns everything in one list[] But how can I return sublist that consists only with even integers?</p>
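The fix is to keep whole sublists, appending a sublist only when `all` of its items are even, rather than appending individual even numbers; a sketch (note an empty sublist would also pass the `all` test):

```python
def only_evens(lst):
    """Return the sublists of lst that contain only even integers."""
    return [sub for sub in lst if all(i % 2 == 0 for i in sub)]
```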
howto
2016-03-04T20:57:16Z
35,840,403
Python - filling a list of tuples with zeros in places of missing indexes
<p>I have a list of tuples:</p> <pre><code>[(0.0, 287999.70000000007), (1.0, 161123.23000000001), (2.0, 93724.140000000014), (3.0, 60347.309999999983), (4.0, 55687.239999999998), (5.0, 29501.349999999999), (6.0, 14993.920000000002), (7.0, 14941.970000000001), (8.0, 13066.229999999998), (9.0, 10101.040000000001), (10.0, 4151.6900000000005), (11.0, 2998.8899999999999), (12.0, 1548.9300000000001), (15.0, 1595.54), (16.0, 1435.98), (17.0, 1383.01)] </code></pre> <p>As can be seen, there are missing indexes (13 and 14). I want to fill the missing indexes with zeros:</p> <pre><code>[(0.0, 287999.70000000007), (1.0, 161123.23000000001), (2.0, 93724.140000000014), (3.0, 60347.309999999983), (4.0, 55687.239999999998), (5.0, 29501.349999999999), (6.0, 14993.920000000002), (7.0, 14941.970000000001), (8.0, 13066.229999999998), (9.0, 10101.040000000001), (10.0, 4151.6900000000005), (11.0, 2998.8899999999999), (12.0, 1548.9300000000001), (13.0, 0), (14.0, 0), (15.0, 1595.54), (16.0, 1435.98), (17.0, 1383.01)] </code></pre> <p>I did something ugly with <code>for loop</code> (I didn't add it cause I don't think it will contribute to anything...), but I was wondering is there any elegant way to resolve this problem? (maybe 3-4 lines with <code>list comprehension</code>). </p>
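Turning the pairs into a dict makes the gap-filling a single comprehension with `dict.get` supplying the zeros (a sketch on an abbreviated sample):

```python
data = [(0.0, 287999.70000000007), (1.0, 161123.23), (12.0, 1548.93),
        (15.0, 1595.54), (16.0, 1435.98), (17.0, 1383.01)]

d = dict(data)
filled = [(float(i), d.get(float(i), 0)) for i in range(int(max(d)) + 1)]
```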
howto
2016-03-07T09:41:31Z
35,847,865
Retaining category order when charting/plotting ordered categorical Series
<p>Consider the following categorical <code>pd.Series</code>:</p> <pre><code>In [37]: df = pd.Series(pd.Categorical(['good', 'good', 'ok', 'bad', 'bad', 'awful', 'awful'], categories=['bad', 'ok', 'good', 'awful'], ordered=True)) In [38]: df Out[38]: 0 good 1 good 2 ok 3 bad 4 bad 5 awful 6 awful dtype: category Categories (4, object): [bad &lt; ok &lt; good &lt; awful] </code></pre> <p>I want to draw a pie or bar chart of these. According to <a href="http://stackoverflow.com/a/31029857/2071807">this SO answer</a>, plotting categorical <code>Series</code> requires me to plot the <code>value_counts</code>, but this does not retain the categorical ordering:</p> <pre><code>df.value_counts().plot.bar() </code></pre> <p>How can I plot an ordered categorical variable and retain the ordering? <a href="http://i.stack.imgur.com/XmrXS.png" rel="nofollow"><img src="http://i.stack.imgur.com/XmrXS.png" alt="Bar chart"></a></p>
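One possible fix is to reindex the `value_counts` result by the series' own category order before plotting (a sketch; the plotting call is left commented out):

```python
import pandas as pd

s = pd.Series(pd.Categorical(
    ['good', 'good', 'ok', 'bad', 'bad', 'awful', 'awful'],
    categories=['bad', 'ok', 'good', 'awful'], ordered=True))

counts = s.value_counts().reindex(s.cat.categories)
# counts.plot.bar()   # bars now appear as bad, ok, good, awful
```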
howto
2016-03-07T15:42:07Z
35,867,650
Python multiprocessing and shared numpy array
<p>I have a problem, which is similar to this:</p> <pre><code>import numpy as np C = np.zeros((100,10)) for i in range(10): C_sub = get_sub_matrix_C(i, other_args) # shape 10x10 C[i*10:(i+1)*10,:10] = C_sub </code></pre> <p>So, apparently there is no need to run this as a serial calculation, since each submatrix can be calculated independently. I would like to use the multiprocessing module and create up to 4 processes for the for loop. I read some tutorials about multiprocessing, but wasn't able to figure out how to use this to solve my problem.</p> <p>Thanks for your help</p>
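Since each submatrix is independent, a `Pool` can compute them in parallel and the parent can assemble the result — no shared memory needed when the pieces are small enough to pickle back. A sketch, with a stand-in (hypothetical) `get_sub_matrix_C`:

```python
import numpy as np
from multiprocessing import Pool

def get_sub_matrix_C(i):
    # stand-in for the real computation; returns a 10x10 block
    return np.full((10, 10), i, dtype=float)

def build_C():
    C = np.zeros((100, 10))
    with Pool(4) as pool:                      # up to 4 worker processes
        subs = pool.map(get_sub_matrix_C, range(10))
    for i, sub in enumerate(subs):
        C[i * 10:(i + 1) * 10, :10] = sub
    return C

if __name__ == "__main__":
    C = build_C()
```

If `get_sub_matrix_C` needs extra arguments, `pool.starmap` over `[(i, other_args) for i in range(10)]` is the usual route.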
howto
2016-03-08T12:39:31Z
35,883,459
Creating a list of dictionaries in python
<p>I have a following data set that I read in from a text file:</p> <pre><code>all_examples= ['A,1,1', 'B,2,1', 'C,4,4', 'D,4,5'] </code></pre> <p>I need to create a list of dictionary as follows:</p> <pre><code>lst = [ {"A":1, "B":2, "C":4, "D":4 }, {"A":1, "B":1, "C":4, "D":5 } ] </code></pre> <p>I tried using an generator function but it was hard to create a list as such. </p> <pre><code>attributes = 'A,B,C' def get_examples(): for value in examples: yield dict(zip(attributes, value.strip().replace(" ", "").split(','))) </code></pre>
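The transpose is the key step: split each row into a key and its values, then `zip(*...)` turns the value columns into one dict per column (a sketch):

```python
all_examples = ['A,1,1', 'B,2,1', 'C,4,4', 'D,4,5']

rows = [line.strip().split(',') for line in all_examples]
keys = [r[0] for r in rows]
lst = [dict(zip(keys, map(int, col)))
       for col in zip(*(r[1:] for r in rows))]
```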
howto
2016-03-09T05:16:57Z
35,890,697
Python sorting array according to date
<p>How can i sort python list according to maximum date which is combined with string</p> <pre><code>['', 'q//Attachments/Swoop_coverletter_311386_20120103.doc', 'q//Attachments/Swoop_RESUME_311386_20091012.doc', 'q//Attachments/Swoop_Resume_311386_20100901.doc', 'q//Attachments/Swoop_reSume_311386_20120103.doc', 'q//Attachments/Swoop_coverletter_311386_20100901.doc', 'q//Attachments/Swoop_coverletter_311386_20091012.doc'] </code></pre> <p>above is the list and expected result is this</p> <pre><code>['q//Attachments/Swoop_coverletter_311386_20120103.doc','q//Attachments/Swoop_reSume_311386_20120103.doc','q//Attachments/Swoop_Resume_311386_20100901.doc','q//Attachments/Swoop_coverletter_311386_20100901.doc','q//Attachments/Swoop_RESUME_311386_20091012.doc','q//Attachments/Swoop_coverletter_311386_20091012.doc',''] </code></pre> <p>I wrote a script which is not sorting but priniting only one value at end</p> <pre><code>a = ['q//Attachments/Swoop_coverletter_311386_20120103.doc','q//Attachments/Swoop_reSume_311386_20120103.doc','q//Attachments/Swoop_Resume_311386_20100901.doc','q//Attachments/Swoop_coverletter_311386_20100901.doc','q//Attachments/Swoop_RESUME_311386_20091012.doc','q//Attachments/Swoop_coverletter_311386_20091012.doc',''] print max(a) </code></pre> <p>Result:</p> <pre><code>q//Attachments/Swoop_reSume_311386_20120103.doc </code></pre> <p>How can i get expected output like this</p> <p>Expected output:</p> <pre><code>['q//Attachments/Swoop_coverletter_311386_20120103.doc','q//Attachments/Swoop_reSume_311386_20120103.doc','q//Attachments/Swoop_Resume_311386_20100901.doc','q//Attachments/Swoop_coverletter_311386_20100901.doc','q//Attachments/Swoop_RESUME_311386_20091012.doc','q//Attachments/Swoop_coverletter_311386_20091012.doc',''] </code></pre>
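`max` only finds one element; sorting needs `sorted` with a key that extracts the embedded `YYYYMMDD` date, newest first. A sketch (the empty string gets an empty key so it sorts last, and Python's stable sort keeps the original order among equal dates):

```python
import re

def date_key(path):
    m = re.search(r'_(\d{8})\.', path)
    return m.group(1) if m else ''     # entries without a date sort last

files = ['', 'q//Attachments/Swoop_coverletter_311386_20120103.doc',
         'q//Attachments/Swoop_RESUME_311386_20091012.doc',
         'q//Attachments/Swoop_Resume_311386_20100901.doc',
         'q//Attachments/Swoop_reSume_311386_20120103.doc',
         'q//Attachments/Swoop_coverletter_311386_20100901.doc',
         'q//Attachments/Swoop_coverletter_311386_20091012.doc']

ordered = sorted(files, key=date_key, reverse=True)
```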
howto
2016-03-09T11:44:30Z
35,913,509
Array from interpolated plot in python
<p>I have an array with a grid size of 150x80. I plotted the data and interpolate them. This is my figure:</p> <p><img src="http://i.stack.imgur.com/3ZZGN.png" alt="This is my figure:"> </p> <p>Here is my problem: I would like to do some calculations on the interpolated data, that's why I need to write the image back in an array with more details as the previous. Is it possible?</p> <p>----- NEW EDIT -----</p> <p>This is my code. As you can see I interpolate data in the plot, so I don't actually have a callable interpolation, but an image.</p> <pre><code>clip=abs(np.percentile(sint_h, 0.999)) fig, ax = plt.subplots(2,figsize=(10,10)) im0=ax[0].imshow(sint_h,extent=[0,sint_h.shape[1],sint_h.shape[0]*sr,0],interpolation='bilinear',aspect='auto',cmap='RdBu',vmin=-clip,vmax=clip) im1=ax[1].imshow(specamps_h,extent=[0,specamps_h.shape[1],ufreq_range_h.max(),0],interpolation='bilinear',aspect='auto',cmap='rainbow') ax[0].set_ylabel('TWT [s]') ax[1].set_ylabel('FREQUENCY [Hz]') ax[1].set_y </code></pre>
howto
2016-03-10T10:15:03Z
35,952,815
Python: Binning one coordinate and averaging another based on these bins
<p>I have two vectors <code>rev_count</code> and <code>stars</code>. The elements of those form pairs (let's say <code>rev_count</code> is the x coordinate and <code>stars</code> is the y coordinate). </p> <p>I would like to bin the data by <code>rev_count</code> and then average the <code>stars</code> in a single <code>rev_count bin</code> (I want to bin along the x axis and compute the average y coordinate in that bin). </p> <p>This is the code that I tried to use (inspired by my matlab background):</p> <pre><code>import matplotlib.pyplot as plt import numpy binwidth = numpy.max(rev_count)/10 revbin = range(0, numpy.max(rev_count), binwidth) revbinnedstars = [None]*len(revbin) for i in range(0, len(revbin)-1): revbinnedstars[i] = numpy.mean(stars[numpy.argwhere((revbin[i]-binwidth/2) &lt; rev_count &lt; (revbin[i]+binwidth/2))]) print('Plotting binned stars with count') plt.figure(3) plt.plot(revbin, revbinnedstars, '.') plt.show() </code></pre> <p>However, this seems to be incredibly slow/inefficient. Is there a more natural way to do this in python?</p>
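The vectorized NumPy idiom replaces the Python loop entirely: `np.digitize` assigns each x value to a bin, and two `np.bincount` calls (one weighted by y) give per-bin sums and counts. A sketch with small made-up vectors:

```python
import numpy as np

rev_count = np.array([1, 2, 3, 10, 11, 12, 25, 26])
stars = np.array([5.0, 4.0, 3.0, 2.0, 4.0, 3.0, 1.0, 5.0])

nbins = 3
edges = np.linspace(rev_count.min(), rev_count.max(), nbins + 1)
idx = np.digitize(rev_count, edges[1:-1], right=True)   # bin index per point

sums = np.bincount(idx, weights=stars, minlength=nbins)
counts = np.bincount(idx, minlength=nbins)
means = sums / np.maximum(counts, 1)    # guard against empty bins
```

`scipy.stats.binned_statistic(rev_count, stars, statistic='mean', bins=nbins)` does the same in one call if SciPy is available.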
howto
2016-03-12T01:53:17Z
35,962,295
How to create a Dictionary in Python with 2 string keys to access an integer?
<p>I'm very new to Python and I am trying to create a 2D dictionary that has 2 strings keys. One key will represent a starting location and the other an ending location. The return value needs to be an integer that represents the distance between those two cities. </p> <p>For example if I have:</p> <pre><code>dict['New York']['Chicago'] </code></pre> <p>I want to return the integer that represents the distance between those two cities. </p> <p>I have parsed all the locations and distances from a text file, but I have no idea how to setup the dictionary values for each of those 3 components as I read each value.</p> <p>Input is being received in the order: Start, Finish, Distance. Which is repeated until the end of the file is reached. Any help would be greatly appreciated. Thanks!</p>
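A `defaultdict` of dicts makes the two-key lookup work without pre-creating inner dicts; a sketch with hypothetical distances standing in for the parsed file:

```python
from collections import defaultdict

distances = defaultdict(dict)

# (start, finish, distance) triples as parsed from the text file
records = [('New York', 'Chicago', 791), ('Chicago', 'Denver', 1003)]
for start, finish, dist in records:
    distances[start][finish] = dist

distances['New York']['Chicago']
```

A flat dict keyed by tuples, `distances[('New York', 'Chicago')] = 791`, is an equally common design when you never need "all destinations from X".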
howto
2016-03-12T19:28:23Z
35,966,940
finding the max of a column in an array
<pre><code>def maxvalues(): for n in range(1,15): dummy=[] for k in range(len(MotionsAndMoorings)): dummy.append(MotionsAndMoorings[k][n]) max(dummy) L = [x + [max(dummy)]] ## to be corrected (adding columns with value max(dummy)) ## suggest code to add new row to L and for next function call, it should save values here. </code></pre> <p>i have an array of size (k x n) and i need to pick the max values of the first column in that array. Please suggest if there is a simpler way other than what i tried? and my main aim is to append it to L in columns rather than rows. If i just append, it is adding values at the end. I would like to this to be done in columns for row 0 in L, because i'll call this function again and add a new row to L and do the same. Please suggest.</p>
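Column maxima fall out of a generator over rows, or of `zip(*rows)` for all columns at once (a sketch with a small made-up array; with NumPy, `arr.max(axis=0)` is the equivalent):

```python
MotionsAndMoorings = [[3, 7], [9, 1], [4, 5]]

col_max = max(row[0] for row in MotionsAndMoorings)        # first column only
col_maxes = [max(col) for col in zip(*MotionsAndMoorings)]  # every column
```

Appending `col_maxes` as a new row of a results list `L` is then just `L.append(col_maxes)`.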
howto
2016-03-13T05:38:41Z
36,041,797
Python list to txt
<p>I need this function to output to a .txt file, with what I have below it is returning only last row of integers. </p> <pre><code>def random_grid(file): grid = [] num_rows = raw_input("How many raws would you like in your grid? ") num_columns = raw_input("How many columns would you like in your grid? ") min_range = raw_input("What is the minimum number you would like in your grid? ") max_range = raw_input("what is the maximum number you would like in your grid? ") for row in range(int(num_rows)): grid.append([]) for column in range(int(num_columns)): grid[row].append(random.randint((int(min_range)),(int(max_range)))) for row in grid: x = (' '.join([str(x) for x in row])) print x with open(r"test.txt", 'w') as text_file: text_file.write(x) </code></pre> <p>If the user choose a 3 by 3 grid, a low number of 1 and high number of 9 it could print like this. </p> <pre><code>1 2 3 4 5 6 7 8 9 </code></pre> <p>I am only getting </p> <pre><code>7 8 9 </code></pre> <p>in my outputted .txt file</p>
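The bug is that the file is reopened in `'w'` mode inside the loop, so each row overwrites the previous one; opening once and writing every row fixes it. A sketch with the prompts replaced by parameters for testability:

```python
import random

def random_grid(path, num_rows, num_columns, min_range, max_range):
    grid = [[random.randint(min_range, max_range) for _ in range(num_columns)]
            for _ in range(num_rows)]
    with open(path, 'w') as text_file:          # open ONCE, outside the loop
        for row in grid:
            text_file.write(' '.join(str(v) for v in row) + '\n')
    return grid
```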
howto
2016-03-16T16:41:49Z
36,050,713
Using Py_BuildValue() to create a list of tuples in C
<p>I am trying to use <code>Py_BuildValue()</code> to create a list of tuples in C. </p> <p>What I am trying to build would look like this:</p> <pre><code>[ (...), (...), ... ] </code></pre> <p>I don't know the amount of tuples to create at compilation, so I can't use some static amount here. </p> <p>Essentially using <code>Py_BuildValue()</code> with one tuple here is what it would look like for the code:</p> <pre><code>PyObject * Py_BuildValue("[(siis)]", name, num1, num2, summary); </code></pre> <p>But that would only be for one tuple. I need to have multiple tuples in the list that I could add via a for loop. How can I accomplish this?</p>
howto
2016-03-17T02:40:26Z
36,061,608
Erasing list of phrases from list of texts in python
<p>I am trying to erase specific words found in a list. Lets say that I have the following example:</p> <pre><code>a= ['you are here','you are there','where are you','what is that'] b = ['you','what is'] </code></pre> <p>The desired output should be the following:</p> <pre><code>['are here', 'are there', 'where are', 'that'] </code></pre> <p>I created the following code for that task:</p> <pre><code>import re def _find_word_and_remove(w,strings): """ w:(string) strings:(string) """ temp= re.sub(r'\b({0})\b'.format(w),'',strings).strip()# removes word from string return re.sub("\s{1,}", " ", temp)# removes double spaces def find_words_and_remove(words,strings): """ words:(list) strings:(list) """ if len(words)==1: return [_find_word_and_remove(words[0],word_a) for word_a in strings] else: temp =[_find_word_and_remove(words[0],word_a) for word_a in strings] return find_words_and_remove(words[1:],temp) find_words_and_remove(b,a) &gt;&gt;&gt; ['are here', 'are there', 'where are', 'that'] </code></pre> <p>It seems that I am over-complicating the 'things' by using recursion for this task. Is there a more simple and readable way to do this task?</p>
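The recursion can collapse into a single compiled alternation: join all phrases into one pattern (longest first, so multi-word phrases win) and apply it once per string. A sketch:

```python
import re

def remove_phrases(phrases, strings):
    # longest phrases first so 'what is' is removed before a bare 'what'
    pattern = r'\b(?:' + '|'.join(
        re.escape(p) for p in sorted(phrases, key=len, reverse=True)) + r')\b'
    return [re.sub(r'\s+', ' ', re.sub(pattern, '', s)).strip()
            for s in strings]

a = ['you are here', 'you are there', 'where are you', 'what is that']
b = ['you', 'what is']
```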
howto
2016-03-17T13:13:17Z
36,066,726
python - numpy: read csv into numpy with proper value type
<p>Here is my test_data.csv:</p> <pre><code>A,1,2,3,4,5 B,6,7,8,9,10 C,11,12,13,14,15 A,16,17,18,19,20 </code></pre> <hr> <p>And I am reading it to a numpy array using the code below:</p> <pre><code>def readCSVToNumpyArray(dataset): with open(dataset) as f: values = [i for i in csv.reader(f)] data = numpy.array(values) return data </code></pre> <hr> <p>In the main code, I have:</p> <pre><code> numpyArray = readCSVToNumpyArray('test_data.csv') print(numpyArray) </code></pre> <p>which gives me the output:</p> <pre><code>(array([['A', '1', '2', '3', '4', '5'], ['B', '6', '7', '8', '9', '10'], ['C', '11', '12', '13', '14', '15'], ['A', '16', '17', '18', '19', '20']], dtype='|S2')) </code></pre> <p>But all the numbers in the array is treated as <code>string</code>, is there a good way to make them stored as <code>float</code> without going through each element and assign the type?</p> <p>Thanks!</p>
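With mixed string and numeric columns, `np.genfromtxt(..., dtype=None)` infers a per-column type and returns a structured array, so the numbers come back as integers; a sketch using `StringIO` in place of the file:

```python
import io
import numpy as np

raw = "A,1,2,3,4,5\nB,6,7,8,9,10\nC,11,12,13,14,15\nA,16,17,18,19,20\n"
data = np.genfromtxt(io.StringIO(raw), delimiter=',', dtype=None,
                     encoding=None)

labels = data['f0']                       # the string column
numbers = np.stack([data[name] for name in data.dtype.names[1:]], axis=1)
```

A 2D homogeneous array is impossible with a string column mixed in, which is why the structured array (or splitting label/number columns as above) is the usual shape.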
howto
2016-03-17T16:41:43Z
36,071,592
Find difference between two multi dimensional lists
<p>I would like to find intersection between multi dimensional list with first element, but I am not able to find solution.</p> <p>Example : </p> <pre><code>a = [[greg ,1.2 ,400 ,234] [top ,9.0 ,5.1 ,2300] [file ,5.7 ,2.2, 900] [stop, 1.6 ,6.7 ,200] b = [[hall,5.2 ,460 ,234] [line ,5.3 ,5.91 ,100] [file ,2.7 ,3.3, 6.4] [stop, 6.6 ,5.7 ,230] </code></pre> <p>What I need :</p> <p>1.element not in a but in b , I want to compare only with <code>element[0]</code></p> <p>Expecting output = <code>[[hall,5.2 ,460 ,234] [line ,5.3 ,5.91 ,100]]</code></p> <p>2.element not in b but in a ,I want to compare only with <code>element[0]</code></p> <p>Expecting output = <code>[greg ,1.2 ,400 ,234]</code></p> <p>Then append missing list to a and b vice versa.</p> <p>I have sample code, but not working.</p> <pre><code>at = map(tuple,a) bt = map(tuple,b) st1 = set(at) st2 = set(bt) s1 = st1.intersection(st2) s2 = st2.intersection(st1) </code></pre>
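Comparing only on `element[0]` means building a set of the first elements of each list, then filtering the other; a sketch (note that with this sample data `top` is also absent from `b`, so it shows up alongside `greg`):

```python
a = [['greg', 1.2, 400, 234], ['top', 9.0, 5.1, 2300],
     ['file', 5.7, 2.2, 900], ['stop', 1.6, 6.7, 200]]
b = [['hall', 5.2, 460, 234], ['line', 5.3, 5.91, 100],
     ['file', 2.7, 3.3, 6.4], ['stop', 6.6, 5.7, 230]]

names_a = {row[0] for row in a}
names_b = {row[0] for row in b}

only_in_b = [row for row in b if row[0] not in names_a]
only_in_a = [row for row in a if row[0] not in names_b]

a_extended = a + only_in_b    # append each side's missing rows
b_extended = b + only_in_a
```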
howto
2016-03-17T21:00:01Z
36,086,075
Unique duplicate rows with range
<p>I have a file like so:</p> <pre><code>x 48012 F 1.000 x 48169 R 0.361 x 87041 R 0.118 x 9032 R 0.176 x 9150 R 0.521 </code></pre> <p>I wanted to filter out rows having in the result file a unique value based on whether column 1,2 and 3 are the same - with a tolerance of +/- 200 for column2. So for example the first two rows</p> <pre><code>x 48012 F 1.000 x 48169 R 0.361 </code></pre> <p>would become </p> <pre><code>x 48012 F 1.000 </code></pre> <p>because 48169-48012 is 157 and that is in the ±200 range</p> <p>Overall, the end file would be </p> <pre><code> x 48012 F 1.000 x 87041 R 0.118 x 9032 R 0.176 </code></pre> <p>I've tried</p> <pre><code>out=open('result.txt', 'w') my_file= open('test.txt', 'r') seen = set() for line in my_file: line=line.strip().split('\t') if line[0]==seen[0] and line[2]==seen[2] and ((int(line[1])==int(seen[1]-200)) or (int(line[1])==(seen[1]-200))): out.write(line) </code></pre> <p>but sets can't be indexed</p>
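Instead of a set (which can't do "within ±200" lookups), keep a list of accepted rows and test each new row against it. Going by the worked example — where `48169 R` is dropped against `48012 F` — only the first column plus the ±200 window on the second seem to matter, so that is what this sketch matches on:

```python
def dedupe(rows, tol=200):
    kept = []
    for name, pos, strand, score in rows:
        pos = int(pos)
        if not any(k[0] == name and abs(k[1] - pos) <= tol for k in kept):
            kept.append((name, pos, strand, score))
    return kept

rows = [('x', 48012, 'F', 1.000), ('x', 48169, 'R', 0.361),
        ('x', 87041, 'R', 0.118), ('x', 9032, 'R', 0.176),
        ('x', 9150, 'R', 0.521)]
result = dedupe(rows)
```

If the third column should also have to match, add `and k[2] == strand` to the condition.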
howto
2016-03-18T13:46:36Z
36,144,303
Python - split list of lists by value
<p>I want to split the following list of lists</p> <pre><code>a = [["aa",1,3] ["aa",3,3] ["sdsd",1,3] ["sdsd",6,0] ["sdsd",2,5] ["fffffff",1,3]] </code></pre> <p>into the three following lists of lists:</p> <pre><code>a1 = [["aa",1,3] ["aa",3,3]] a2 = [["sdsd",1,3] ["sdsd",6,0] ["sdsd",2,5]] a3 = [["fffffff",1,3]] </code></pre> <p>That is, according to the first value of each list. I need to do this for a list of lists with thousands of elements... How can I do it efficiently?</p>
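`itertools.groupby` does this in a single pass when rows with the same first value are contiguous (as in the sample); for scattered keys, sort by `itemgetter(0)` first or accumulate into a `defaultdict(list)`. A sketch:

```python
from itertools import groupby
from operator import itemgetter

a = [['aa', 1, 3], ['aa', 3, 3], ['sdsd', 1, 3],
     ['sdsd', 6, 0], ['sdsd', 2, 5], ['fffffff', 1, 3]]

groups = [list(g) for _, g in groupby(a, key=itemgetter(0))]
```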
howto
2016-03-22T00:45:29Z
36,149,707
Modify a python script with bash and execute it with the changes
<p>I have a python script that is a command-line interface. I execute it from a bash script that saves the output in a txt file:</p> <pre><code>#!/bin/bash for ... in ... do echo "Foo:$foo" echo "Bar:$bar" ./pythonScript.py --argument1 "arg" # make changes done </code></pre> <p>What I want to do is modify a specific line of the python script and execute it again with the new line changed until the for loop finishes.</p> <p>The piece of code from the python script that I want to change is similar to that:</p> <pre><code>QUERY = 'www.foo.com' + '/bar?' \ + '&amp;title=%(title)s' \ + '&amp;start=0' \ + '&amp;num=%(num)s' </code></pre> <p>The <code>start</code> parameter of the query must be increased by 20 units every time the for loop is executed. So, after 5 executions <code>start</code> should be 100.</p> <p>Is there any way to do this?</p>
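Rewriting the script's source from bash is fragile; the cleaner route is to make `start` a command-line argument and let bash pass `$((i * 20))`. A sketch of the Python side (`parse_args` is fed a list here only so the example is self-contained; normally it reads `sys.argv`):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--argument1')
parser.add_argument('--start', type=int, default=0)
args = parser.parse_args(['--argument1', 'arg', '--start', '40'])

QUERY = ('www.foo.com' + '/bar?'
         + '&title=%(title)s'
         + '&start=' + str(args.start)
         + '&num=%(num)s')
```

The bash loop then becomes `./pythonScript.py --argument1 "arg" --start $(( i * 20 ))` with no file editing at all.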
howto
2016-03-22T08:44:37Z
36,165,854
BeautifulSoup scraping information from multiple divs using loops into JSON
<p>I am scraping titles, descriptions, links, and people's names from a multiple divs that follow the same structure. I am using BeautifulSoup, and I am able to scrape everything out of the first div. However, I'm having trouble scraping from my long list of divs, and getting the data in a portable format like CSV or JSON.</p> <p>How can I scrape each item from my long list of divs, and store that information in JSON objects together for each mp3? </p> <p>The divs look like this: </p> <pre><code>&lt;div class="audioBoxWrap clearBoth"&gt; &lt;h3&gt;Title 1&lt;/h3&gt; &lt;p&gt;Description 1&lt;/p&gt; &lt;div class="info" style="line-height: 1px; height: 1px; font-size: 1px;"&gt;&lt;/div&gt; &lt;div class="audioBox" style="display: none;"&gt; stuff &lt;/div&gt; &lt;div&gt; [ &lt;a href="link1.mp3"&gt;Right-click to download&lt;/a&gt;] &lt;/div&gt; &lt;/div&gt; &lt;div class="audioBoxWrap clearBoth"&gt; &lt;h3&gt;Title 2&lt;/h3&gt; &lt;p&gt;Description 2&lt;/p&gt; &lt;div class="info" style="line-height: 1px; height: 1px; font-size: 1px;"&gt;&lt;/div&gt; &lt;div class="audioBox" style="display: none;"&gt; stuff &lt;/div&gt; &lt;div&gt; [ &lt;a href="link2.mp3"&gt;Right-click to download&lt;/a&gt;] &lt;/div&gt; &lt;/div&gt; </code></pre> <p>I've figured out how to scrape from the first div, but I cannot grab the info for each div. For example, my code below only spits out the h3 for the first div over and over. </p> <p>I know that I can create a python list for titles, descriptions, etc, but how do I keep the metadata structure like JSON, so that title1, link1, and description1 stay together, as well as title2's information. </p> <pre><code>with open ('soup.html', 'r') as myfile: html_doc = myfile.read() soup = BeautifulSoup(html_doc, 'html.parser') audio_div = soup.find_all('div', {'class':"audioBoxWrap clearBoth"}) print len(audio_div) #create dictionary for storing scraped data. I don't know how to store the values for each mp3 separately. 
for i in audio_div: print soup.find('h3').text </code></pre> <p>I want my JSON to look something like this: </p> <pre><code>{ "podcasts":[ { "title":"title1", "description":"description1", "link":"link1" }, { "title":"title2", "description":"description2", "link":"link2" } ] } </code></pre>
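The repeated-h3 bug comes from calling `soup.find('h3')` (always the document's first match) instead of searching within each `div`; scoping the lookups to the loop variable and appending a dict per div yields the JSON shape directly. A sketch on a trimmed copy of the sample HTML:

```python
import json
from bs4 import BeautifulSoup

html_doc = """
<div class="audioBoxWrap clearBoth">
  <h3>Title 1</h3><p>Description 1</p>
  <div>[ <a href="link1.mp3">Right-click to download</a>]</div>
</div>
<div class="audioBoxWrap clearBoth">
  <h3>Title 2</h3><p>Description 2</p>
  <div>[ <a href="link2.mp3">Right-click to download</a>]</div>
</div>"""

soup = BeautifulSoup(html_doc, 'html.parser')
podcasts = []
for div in soup.find_all('div', {'class': 'audioBoxWrap'}):
    podcasts.append({
        'title': div.find('h3').get_text(strip=True),       # scoped to div
        'description': div.find('p').get_text(strip=True),
        'link': div.find('a')['href'],
    })

result = json.dumps({'podcasts': podcasts})
```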
howto
2016-03-22T21:45:47Z
36,186,624
parse list of tuple in python and eliminate doubles
<p>I have the following problem :</p> <p>I have a list of tuple representing packages and their version (some packages don't have a specified version so no problem with that) like so :</p> <pre><code> ('lib32c-dev', '', '', '') ('libc6-i386', '2.4', '', '') ('lib32c-dev', '', '', '') ('libc6-i386', '1.06', '', '') ('libc6-i386', '2.4', '', '') ('lib32c-dev', '', '', '') ('libc6-i386', '2.16', '', '') ('libc6-dev', '', '', '') ('', '', 'libc-dev', '') ('libc6-dev', '', '', '') ('', '', 'libc-dev', '') ('libncurses5-dev', '5.9+20150516-2ubuntu1', '', '') ('libc6-dev-x32', '', '', '') ('libc6-x32', '2.16', '', '') ('libncursesw5-dev', '5.9+20150516-2ubuntu1', '', '') ('libc6-dev-x32', '', '', '') ('libc6-x32', '2.16', '', '') ('libc6-dev-x32', '', '', '') ('libc6-x32', '2.16', '', '') ('libncurses5-dev', '5.9+20150516-2ubuntu1', '', '') ('libncursesw5-dev', '5.9+20150516-2ubuntu1', '', '') </code></pre> <p>As you can see, some packages are listed in tuples more than once but with a different version.</p> <p>What I need is to parse the list of tuple in order to have for each package the most recent version before transforming the list into a dictionary.</p> <p>PS : The position of the package name and it's version are not fixed. But we can say that the version is always after the package name so can we say that the version will always be at position 1 and 3 ?</p> <p>Thank you for your help.</p>
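One pass with a dict keeps the highest version per name. Since the name's position varies, the sketch below takes the first non-empty field as the name and the field after it (if any) as the version; the numeric-chunks comparison key is a deliberate simplification — real Debian version ordering needs a proper comparator:

```python
import re

def version_key(v):
    # crude: compare runs of digits numerically ('2.16' > '2.4')
    return [int(x) for x in re.findall(r'\d+', v)]

pkgs = [('lib32c-dev', '', '', ''), ('libc6-i386', '2.4', '', ''),
        ('libc6-i386', '1.06', '', ''), ('libc6-i386', '2.16', '', ''),
        ('', '', 'libc-dev', '')]

best = {}
for tup in pkgs:
    fields = [f for f in tup if f]
    if not fields:
        continue
    name = fields[0]                              # version follows the name
    version = fields[1] if len(fields) > 1 else ''
    if name not in best or version_key(version) > version_key(best[name]):
        best[name] = version
```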
howto
2016-03-23T19:03:40Z
36,190,533
python: check if an numpy array contains any element of another array
<p>What is the best way to check if an numpy array contains any element of another array?</p> <p>example:</p> <pre><code>array1 = [10,5,4,13,10,1,1,22,7,3,15,9] array2 = [3,4,9,10,13,15,16,18,19,20,21,22,23]` </code></pre> <p>I want to get a <code>True</code> if <code>array1</code> contains any value of <code>array2</code>, otherwise a <code>False</code>.</p>
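`np.isin` builds the element-wise membership mask, and `.any()` collapses it to the single boolean asked for:

```python
import numpy as np

array1 = np.array([10, 5, 4, 13, 10, 1, 1, 22, 7, 3, 15, 9])
array2 = np.array([3, 4, 9, 10, 13, 15, 16, 18, 19, 20, 21, 22, 23])

overlap = np.isin(array1, array2).any()
```

For plain Python lists, `not set(array1).isdisjoint(array2)` says the same thing without NumPy.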
howto
2016-03-23T23:19:59Z
36,193,225
Numpy Array Rank All Elements
<p>I have a two-dimensional numpy array and I am wondering how I can create a new two-dimensional numpy array represent the ranking of the values based on <strong>all</strong> items in the original 2d array.</p> <p>I would like to use the following array :</p> <pre><code>anArray = [[ 18.5, 25.9, 7.4, 11.1, 11.1] [ 33.3, 37. , 14.8, 22.2, 25.9] [ 29.6, 29.6, 11.1, 14.8, 11.1] [ 25.9, 25.9, 14.8, 14.8, 11.1] [ 29.6, 25.9, 14.8, 11.1, 7.4]] </code></pre> <p>to create a new rank ordered array [based on all values and having same rank for multiple numbers] :</p> <pre><code>anOrder = [[ 6, 4, 9, 8, 8] [ 2, 1, 7, 5, 4] [ 3, 3, 8, 7, 8] [ 4, 4, 7, 7, 8] [ 3, 4, 7, 8, 9]] </code></pre> <p>Thank you.</p>
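The sample output is a dense descending ranking (largest value gets rank 1, ties share a rank, the next distinct value gets the next integer), which `np.unique` plus `np.searchsorted` produces directly:

```python
import numpy as np

anArray = np.array([[18.5, 25.9,  7.4, 11.1, 11.1],
                    [33.3, 37.0, 14.8, 22.2, 25.9],
                    [29.6, 29.6, 11.1, 14.8, 11.1],
                    [25.9, 25.9, 14.8, 14.8, 11.1],
                    [29.6, 25.9, 14.8, 11.1,  7.4]])

uniq = np.unique(anArray)                      # sorted ascending, no ties
ranks = uniq.size - np.searchsorted(uniq, anArray)
```

`scipy.stats.rankdata(-anArray.ravel(), method='dense')` gives the same ranking if SciPy is on hand.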
howto
2016-03-24T04:19:46Z
36,226,959
Collect values of pandas dataframe column A if column B is NaN (Python)
<p>I have a pandas dataframe.</p> <p>I want to collect/print the values of column A where column B is NaN.</p> <p><strong>Question</strong> How do I do this?</p> <p><strong>Edit</strong> Further: Say I have a set of columns (b,c,d). I want to select the values of column a if either b,c, or d is NaN.</p> <p>(The trick for identifying NaNs is a bit different than simply "==" etc.)</p> <p>Thank you</p>
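The NaN trick is `isnull()` (since `NaN != NaN`, a plain `==` comparison won't work); combined with `.loc` it selects column A directly, and `.any(axis=1)` extends it to several columns. A sketch with made-up column names:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4],
                   'b': [1.0, np.nan, 3.0, np.nan],
                   'c': [np.nan, 2.0, 3.0, 4.0]})

vals = df.loc[df['b'].isnull(), 'a']                       # a where b is NaN
vals_any = df.loc[df[['b', 'c']].isnull().any(axis=1), 'a']  # b OR c is NaN
```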
howto
2016-03-25T19:48:13Z
36,241,474
How to plot real-time graph, with both axis dependent on time?
<p>I want to create an animation showing a diver jumping into water.</p> <p>Given the diver's original height above the water, <code>h</code>, and his mass, <code>m</code>, I defined a procedure in Python to calculate the moment he touches the water, <code>Tc</code>.</p> <p>Knowing that he jumps vertically, the X axis is fixed, and the Y axis obeys the equation (1/2)*g*t^2 + h (g is the gravitational constant).</p> <p>How do I plot a graph, while time <code>t</code> is in range(<code>Tc</code>), whose X and Y axes show the projection of the diver? (<code>x</code> is fixed and <code>y</code> depends on time <code>t</code>.)</p> <p>In the graphic window we are supposed to see a dot that 'jumps' from a certain height vertically downwards, without seeing the line/trace of the projection.</p> <p>Here is part of my work. I don't know where to introduce <code>Tc</code> in the procedure:</p> <pre><code>import numpy as np from matplotlib import pyplot as plt from matplotlib import animation # First set up the figure, the axis, and the plot element we want to animate fig = plt.figure() ax = plt.axes(xlim=(0, 2), ylim=(-2, 2)) line, = ax.plot([], [], lw=2) # initialization function: plot the background of each frame def init(): line.set_data([], []) return line, # animation function. This is called sequentially def animate(i): x = np.empty(n) ; x.fill(1) # the vertical position is fixed on x-axis y = 0.5*g*i^2 + h # the equation of diver's displacement on y axis line.set_data(x, y) return line, # call the animator. blit=True means only re-draw the parts that have changed. anim = animation.FuncAnimation(fig, animate, init_func=init, frames=200, interval=20, blit=True) plt.show() </code></pre> <p>Edit:</p> <p>Here is the whole program. I applied and modified the suggestion given by @Mike Muller, but it didn't work. I don’t understand where it goes wrong.
Hope you can clarify my doubts.</p> <pre><code># -*- coding: utf-8 -*- from math import * import numpy as np from matplotlib import pyplot as plt from matplotlib import animation def Plongeon(): h = input("height = ") h = float(h) m = input(" mass = ") m = float(m) global g g = 9.8 g = float(g) global Tc #calculate air time, Tc Tc = sqrt(2*h/g) Tc = round(Tc,2) print Tc # First set up the figure, the axis, and the plot element we want to animate fig = plt.figure() ax = plt.axes(xlim=(0, 2), ylim=(-2, h+1)) #ymax : initial height+1 line, = ax.plot([], [], ' o', lw=2) Tc = int(Tc+1) #make Tc an integer to be used later in def get_y() xs = [1] # the vertical position is fixed on x-axis ys = [h, h] # initialization function: plot the background of each frame def init(): line.set_data([], []) return line, # animation function. This is called sequentially def animate(y): ys[-1] = y line.set_data(xs, ys) return line, def get_y(): for step in range(Tc): t = step / 100.0 y = -0.5*g*t**2 + h # the equation of diver's displacement on y axis yield y # call the animator. blit=True means only re-draw the parts that have changed. anim = animation.FuncAnimation(fig, animate, frames=get_y, interval=100) plt.show() Plongeon() </code></pre>
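Not from the original post: a minimal sketch of how the pieces might fit together, separating the physics (which is where <code>Tc</code> enters, as the end of the time axis) from the animation wiring. The height <code>h</code> and the frame rate <code>fps</code> are made-up values for illustration.

```python
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation

h, g = 10.0, 9.8                      # example height; gravitational constant
Tc = np.sqrt(2 * h / g)               # moment the diver touches the water

fps = 50                              # frames per second (arbitrary choice)
times = np.linspace(0, Tc, int(fps * Tc) + 1)
ys = h - 0.5 * g * times ** 2         # y(t); reaches ~0 exactly at t = Tc

fig, ax = plt.subplots()
ax.set_xlim(0, 2)
ax.set_ylim(-2, h + 1)
dot, = ax.plot([], [], 'o')           # a lone marker, so no trace is drawn

def animate(i):
    dot.set_data([1], [ys[i]])        # x stays at 1, only y moves
    return dot,

anim = animation.FuncAnimation(fig, animate, frames=len(times),
                               interval=1000 / fps, blit=True)
# plt.show()                          # uncomment to watch the dive
```

Precomputing <code>ys</code> over <code>[0, Tc]</code> means the animation stops at the water line by construction, instead of trying to thread <code>Tc</code> through the frame callback.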
howto
2016-03-26T22:09:11Z
36,242,061
Most efficient way to delete needless newlines in Python
<p>I'm looking to find out how to use Python to get rid of needless newlines in text like what you get from Project Gutenberg, where their plain-text files are formatted with newlines every 70 characters or so. In Tcl, I could do a simple <code>string map</code>, like this:</p> <pre><code>set newtext [string map "{\r} {} {\n\n} {\n\n} {\n\t} {\n\t} {\n} { }" $oldtext] </code></pre> <p>This would keep paragraphs separated by two newlines (or a newline and a tab) separate, but run together the lines that ended with a single newline (substituting a space), and drop superfluous CR's. Since Python doesn't have <code>string map</code>, I haven't yet been able to find out the most efficient way to dump all the needless newlines, although I'm pretty sure it's <em>not</em> just to search for each newline in order and replace it with a space. I could just evaluate the Tcl expression in Python, if all else fails, but I'd like to find out the best Pythonic way to do the same thing. Can some Python connoisseur here help me out?</p>
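A regex-based sketch (not from the original post) mirroring the Tcl map: drop CRs, preserve any newline that is part of a paragraph break (adjacent to another newline or followed by a tab), and turn each remaining lone newline into a space.

```python
import re

def unwrap(text):
    # drop carriage returns first
    text = text.replace('\r', '')
    # a newline is replaced by a space only when it is not preceded by a
    # newline and not followed by a newline or a tab
    return re.sub(r'(?<!\n)\n(?![\n\t])', ' ', text)

sample = "first line\nof a paragraph.\n\nSecond\r\nparagraph.\n\tIndented."
print(unwrap(sample))
```

Like the Tcl version, this is a single pass over the text, so it should behave well even on book-length Gutenberg files.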
howto
2016-03-26T23:21:37Z
36,282,772
How to perform a 'one-liner' assignment on all elements of a list of lists in python
<p>Given a list of lists <code>lol</code>, I would like to do the following in one line:</p> <pre><code>for ele in lol: ele[1] = -2 </code></pre> <p>I tried</p> <pre><code>lol = map(lambda x: x[1] = -2, lol) </code></pre> <p>But it's not possible to perform an assignment in a lambda function.</p>
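A sketch of a one-liner (not from the original post) that sidesteps the lambda-assignment restriction by rebuilding each row in a list comprehension instead of mutating in place:

```python
lol = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# build new rows with element 1 replaced, rather than assigning in a lambda
lol = [row[:1] + [-2] + row[2:] for row in lol]
print(lol)   # [[1, -2, 3], [4, -2, 6], [7, -2, 9]]
```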
howto
2016-03-29T11:17:09Z
36,296,993
avoiding regex in pandas str.replace
<p>I have the following pandas dataframe. For the sake of simplicity, let's assume it only has two columns: <code>id</code> and <code>search_term</code></p> <pre><code>id search_term 37651 inline switch </code></pre> <p>I do:</p> <pre><code>train['search_term'] = train['search_term'].str.replace("in."," in. ") </code></pre> <p>expecting that the dataset above is unaffected, but I get in return for this dataset:</p> <pre><code>id search_term 37651 in. in. switch </code></pre> <p>which means <code>inl</code> is replaced by <code>in.</code> and <code>ine</code> is replaced by <code>in.</code>, as if I were using a regular expression, where the dot means any character.</p> <p>How do I restate the first command so that, literally, <code>in.</code> is replaced by <code>in.</code> but any <code>in</code> not followed by a dot is untouched, as in:</p> <pre><code>a = 'inline switch' a = a.replace('in.','in. ') a &gt;&gt;&gt; 'inline switch' </code></pre>
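A sketch (not from the original post): on pandas versions that accept a <code>regex</code> flag, passing <code>regex=False</code> makes <code>str.replace</code> treat the pattern as a literal string; on older versions that always use regex, escaping the pattern with <code>re.escape('in.')</code> achieves the same effect.

```python
import pandas as pd

train = pd.DataFrame({'id': [37651], 'search_term': ['inline switch']})

# the dot is taken literally, so 'inline' is left alone
train['search_term'] = train['search_term'].str.replace('in.', ' in. ',
                                                        regex=False)
print(train['search_term'].tolist())   # ['inline switch']
```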
howto
2016-03-29T23:33:17Z
36,336,637
how to know the type of sql query result before it is executed in sqlalchemy
<p>As far as I know, no such feature exists that gives the expected type of a query result in SQL without executing the query. However, I think it's possible to implement, and there could be some tricks for it.</p> <p>I'm using sqlalchemy, so I hope the solution is easy to implement with sqlalchemy. Any idea how to do this?</p>
howto
2016-03-31T14:42:33Z
36,344,619
Getting the key and value of br.forms() in Mechanize
<p>Using Mechanize, I am able to get all the forms of the page.</p> <pre><code>for f in br.forms(): print f </code></pre> <p>For my page, it gives me information like this:</p> <pre><code>&lt;HiddenControl(assoc_term_in=201535) (readonly)&gt; &lt;HiddenControl(CRN_IN=34688) (readonly)&gt; &lt;HiddenControl(start_date_in=03/28/2016) (readonly)&gt; &lt;HiddenControl(end_date_in=06/11/2016) (readonly)&gt; &lt;HiddenControl(SUBJ=ECEC) (readonly)&gt; &lt;HiddenControl(CRSE=451) (readonly)&gt; &lt;HiddenControl(SEC=001) (readonly)&gt; &lt;HiddenControl(LEVL=Undergraduate Quarter) (readonly)&gt; &lt;HiddenControl(CRED= 3.000) (readonly)&gt; &lt;HiddenControl(GMOD=Standard Letter) (readonly)&gt; &lt;HiddenControl(TITLE=Computer Arithmetic) (readonly)&gt; &lt;HiddenControl(MESG=DUMMY) (readonly)&gt; &lt;SelectControl(RSTS_IN=[*, WR])&gt; &lt;HiddenControl(assoc_term_in=201535) (readonly)&gt; &lt;HiddenControl(CRN_IN=31109) (readonly)&gt; &lt;HiddenControl(start_date_in=03/28/2016) (readonly)&gt; &lt;HiddenControl(end_date_in=06/11/2016) (readonly)&gt; &lt;HiddenControl(SUBJ=BIO) (readonly)&gt; &lt;HiddenControl(CRSE=141) (readonly)&gt; &lt;HiddenControl(SEC=073) (readonly)&gt; &lt;HiddenControl(LEVL=Undergraduate Quarter) (readonly)&gt; &lt;HiddenControl(CRED= 0.000) (readonly)&gt; &lt;HiddenControl(GMOD=Non Gradeable Unit) (readonly)&gt; &lt;HiddenControl(TITLE=Essential Biology) (readonly)&gt; &lt;HiddenControl(MESG=DUMMY) (readonly)&gt; &lt;SelectControl(RSTS_IN=[*, WD])&gt; </code></pre> <p>However, I want to print out just the values within the <code>f</code> variable, such as printing just the <code>TITLE</code>, <code>SUBJ</code> and <code>CRSE</code></p> <pre><code>ECEC 451 Computer Arithmetic </code></pre> <p>I tried using <code>f.value</code>, <code>f.value</code>, <code>f['TITLE']</code>, but no luck.</p> <p>I got this working before, but I lost the code when I removed that comment to commit the code to version control</p>
howto
2016-03-31T22:01:51Z
36,364,188
Python - dataframe conditional index value selection
<p>I have a dataframe similar to the below:</p> <pre><code> close_price short_lower_band long_lower_band Equity(8554) 180.530 184.235603 183.964306 Equity(2174) 166.830 157.450404 157.160282 Equity(23921) 124.670 127.243468 126.072039 Equity(26807) 117.910 108.761587 107.190081 Equity(42950) 108.070 97.491851 96.868036 Equity(4151) 97.380 98.954371 98.335786 </code></pre> <p>I want to generate a list of index values where 'close_price' is less than 'short_lower_band' and less than 'long_lower_band'. So from the sample dataframe above we would get:</p> <pre><code>long_secs = [Equity(8554), Equity(23921), Equity(4151)] </code></pre> <p>Any help in figuring out how to do this would be appreciated.</p>
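A boolean-mask sketch (not from the original post), with the numbers copied from the sample and string labels standing in for the Equity objects:

```python
import pandas as pd

df = pd.DataFrame({
    'close_price':      [180.53, 166.83, 124.67, 117.91, 108.07, 97.38],
    'short_lower_band': [184.235603, 157.450404, 127.243468,
                         108.761587, 97.491851, 98.954371],
    'long_lower_band':  [183.964306, 157.160282, 126.072039,
                         107.190081, 96.868036, 98.335786],
}, index=['Equity(8554)', 'Equity(2174)', 'Equity(23921)',
          'Equity(26807)', 'Equity(42950)', 'Equity(4151)'])

# both conditions combined with &; df.index[mask] keeps matching labels
mask = (df['close_price'] < df['short_lower_band']) & \
       (df['close_price'] < df['long_lower_band'])
long_secs = df.index[mask].tolist()
print(long_secs)   # ['Equity(8554)', 'Equity(23921)', 'Equity(4151)']
```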
howto
2016-04-01T19:15:19Z
36,364,512
alternate for multiple constructors
<p>I have a class that has a constructor like this:</p> <pre><code>class MyClass: def __init__(self, options): self.options = options ..... </code></pre> <p>At times I instantiate this class by passing <code>options</code>, for example:</p> <pre><code>parser = argparse.ArgumentParser(description='something') parser.add_argument('-v', '--victor', dest='vic') options = parser.parse_args() x = MyClass(options) </code></pre> <p>This works fine; however, there are some scenarios where no options will be passed, so for those scenarios I've created a method in <code>MyClass</code> that creates default options, like this:</p> <pre><code>class MyClass: def __init__(self, options): self.options = options def create_default_parser(self): parser = argparse.ArgumentParser(description='something') parser.add_argument('-v', '--victor', dest='vic', default="winning") options = parser.parse_args() self.options = options </code></pre> <p><strong>Question</strong></p> <p>How should I instantiate this class when I don't want to pass any <code>options</code> and want to use <code>create_default_parser</code>? Ideally I would make another constructor that doesn't accept any parameters and in that constructor I would call <code>create_default_parser</code>, like this:</p> <pre><code>def __init__(self): self.create_default_parser() </code></pre>
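One common pattern (a sketch, not from the original post) is a <code>classmethod</code> factory acting as the alternate constructor, so <code>__init__</code> stays the single real constructor; the factory name <code>with_default_options</code> is made up here.

```python
import argparse

class MyClass:
    def __init__(self, options):
        self.options = options

    @classmethod
    def with_default_options(cls):
        # build the default options, then delegate to the normal constructor
        parser = argparse.ArgumentParser(description='something')
        parser.add_argument('-v', '--victor', dest='vic', default='winning')
        options = parser.parse_args([])   # parse no argv, so defaults apply
        return cls(options)

x = MyClass.with_default_options()
print(x.options.vic)   # winning
```

Passing <code>[]</code> to <code>parse_args</code> keeps the factory from reading <code>sys.argv</code>, which matters when the real command line belongs to the caller.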
howto
2016-04-01T19:35:25Z
36,381,230
How to find row of 2d array in 3d numpy array
<p>I'm trying to find the row in which a 2d array appears in a 3d numpy ndarray. Here's an example of what I mean. Given:</p> <pre><code>arr = [[[0, 3], [3, 0]], [[0, 0], [0, 0]], [[3, 3], [3, 3]], [[0, 3], [3, 0]]] </code></pre> <p>I'd like to find all occurrences of:</p> <pre><code>[[0, 3], [3, 0]] </code></pre> <p>The result I'd like is:</p> <pre><code>[0, 3] </code></pre> <p>I tried to use <code>argwhere</code> but that unfortunately got me nowhere. Any ideas?</p>
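A broadcasting sketch (not from the original post): comparing the (4, 2, 2) array against the (2, 2) target broadcasts the target across the first axis, and reducing with <code>all</code> over the last two axes leaves one boolean per row.

```python
import numpy as np

arr = np.array([[[0, 3], [3, 0]],
                [[0, 0], [0, 0]],
                [[3, 3], [3, 3]],
                [[0, 3], [3, 0]]])
target = np.array([[0, 3], [3, 0]])

# compare every 2x2 slice to the target; keep rows where all entries match
rows = np.where((arr == target).all(axis=(1, 2)))[0]
print(rows)   # [0 3]
```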
howto
2016-04-03T03:11:25Z
36,408,096
is there a way to change the return value of a function without changing the function's body?
<pre><code>def f(x): return (x - 2)/2 def g(x): return x </code></pre> <p>This code will do this:</p> <pre><code>func = g(f) </code></pre> <p>Now func(1) = -1/2.</p> <p>What if I want to modify g(x) (and not f(x)) so that</p> <pre><code>func = g(f) func(1) = 1/2 </code></pre> <p>Is there a way to do this?</p> <p>Thank you.</p> <p>Edit: f(x) can be any function that possibly returns a negative number.</p>
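One reading of the question (a sketch, not from the original post): instead of returning the function it receives unchanged, let <code>g</code> return a wrapped version whose negative results are flipped to positive, leaving <code>f</code> untouched.

```python
def f(x):
    return (x - 2) / 2

def g(func):
    # return a new function that calls func and drops the sign
    def wrapper(x):
        return abs(func(x))
    return wrapper

func = g(f)
print(func(1))   # 0.5
```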
howto
2016-04-04T16:32:37Z