Dataset columns:
question_id: int64 (range 1.48k – 40.1M)
title: string (length 15 – 142)
question_body: string (length 46 – 12.1k)
question_type: string (5 classes)
question_date: string (length 20)
36,409,213
Modifying a recursive function that counts no. of paths, to get sequence of all paths
<p>I've written a simple recursive Python program to find the number of paths in a triangle of characters.</p> <p>Btw, this is an attempt to solve Project Euler's P18.</p> <pre><code>triangle = """\ 75 95 64 17 47 82 18 35 87 10 20 04 82 47 65 19 01 23 75 03 34 88 02 77 73 07 63 67 99 65 04 28 06 16 70 92 41 41 26 56 83 40 80 70 33 41 48 72 33 47 32 37 16 94 29 53 71 44 65 25 43 91 52 97 51 14 70 11 33 28 77 73 17 78 39 68 17 57 91 71 52 38 17 14 91 43 58 50 27 29 48 63 66 04 68 89 53 67 30 73 16 69 87 40 31 04 62 98 27 23 09 70 98 73 93 38 53 60 04 23""" grid = triangle.split("\n") grid[:] = [[int(n) for n in (line.split())] for line in grid] def find_paths(x,y): n = 0 if x == 14: return 1 n += find_paths(x+1,y+1) n += find_paths(x+1,y) return n print find_paths(0, 0) </code></pre> <p>This successfully prints 16384. However, how can I modify this same function to simply get a list of all the paths for eg. [[(0,0),(1,0)...(14,0)],[(0,0),(1,0)...]] ? Or if it takes too much memory, simply print each path instead of storing them in a list..</p> <p>Thanks!</p>
howto
2016-04-04T17:34:37Z
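One possible modification (an editor's sketch, not from the original thread): pass the path accumulated so far down the recursion and collect complete paths at the base case. Shown on a 3-row triangle so the output stays readable; the question's base case `x == 14` becomes `x == LAST_ROW`, and the two recursive calls keep the question's order.

```python
# Sketch: collect every root-to-bottom path through a triangle as a list
# of (row, col) coordinates. LAST_ROW is 2 here (14 in the question).
LAST_ROW = 2

def find_paths(x, y, path=None, paths=None):
    if paths is None:
        paths = []
    path = (path or []) + [(x, y)]      # extend a copy so siblings don't share state
    if x == LAST_ROW:
        paths.append(path)              # one complete path
        return paths
    find_paths(x + 1, y + 1, path, paths)   # same call order as the question
    find_paths(x + 1, y, path, paths)
    return paths

all_paths = find_paths(0, 0)
```

For a triangle of n rows this yields 2**(n-1) paths, matching the 16384 the question's counter prints for 15 rows; to avoid storing them, `paths.append(path)` could be replaced with a `print`.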
36,436,065
Parsing through a file
<p>I have a .txt file with 10 lines and 3 columns, all tab delimited. The columns contain numbers or a ?. I want to parse through each line of the file and, where a ? is found, call a certain function relating to the column the ? is found in. I have three functions, so if a ? is found in column 1 then function_a is called, if it's found in column 2 then function_b is called, and if it's found in column 3 then function_c is called. </p> <p>I have looked at trying this:</p> <pre><code>for line in fileinput.readlines(): print(line.split("?")) </code></pre> <p>but am not sure how to get a specific function called. </p>
howto
2016-04-05T20:08:58Z
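A sketch of one way to dispatch on the column index where the "?" appears (the handler names follow the question; the sample lines are hypothetical):

```python
# Sketch: map the column index of "?" to a handler function.
def function_a(line): return "a"     # stand-ins for the real handlers
def function_b(line): return "b"
def function_c(line): return "c"

handlers = {0: function_a, 1: function_b, 2: function_c}

def dispatch(lines):
    results = []
    for line in lines:
        fields = line.rstrip("\n").split("\t")   # tab-delimited columns
        for col, value in enumerate(fields):
            if value == "?":
                results.append(handlers[col](line))
    return results

sample = ["1\t?\t3", "?\t5\t6", "7\t8\t?"]
calls = dispatch(sample)
```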
36,436,953
Removing word and replacing character in a column of strings
<p>I need to change values in the column <code>DSFS</code> of a dataframe I've imported. </p> <pre><code>MemberID,Year,DSFS,DrugCount 48925661,Y2,9-10 months,7+ 90764620,Y3,8- 9 months,3 61221204,Y1,2- 3 months,1 </code></pre> <p>For example, "9-10 months" needs to be changed to 9_10. </p> <p>How would I do this?</p>
howto
2016-04-05T20:59:49Z
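A sketch using pandas string methods (column name from the question; the chain of replacements assumes the values all follow the "9-10 months" pattern, including the stray spaces visible in "8- 9 months"):

```python
import pandas as pd

df = pd.DataFrame({"DSFS": ["9-10 months", "8- 9 months", "2- 3 months"]})

# Drop the " months" suffix, remove stray spaces, and turn "-" into "_".
df["DSFS"] = (df["DSFS"]
              .str.replace(" months", "", regex=False)
              .str.replace(" ", "", regex=False)
              .str.replace("-", "_", regex=False))
```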
36,445,193
splitting one csv into multiple files in python
<p>I have a CSV file of about 5000 rows in Python and I want to split it into five files.</p> <p>I wrote code for it but it is not working</p> <pre><code>import codecs import csv NO_OF_LINES_PER_FILE = 1000 def again(count_file_header,count): f3 = open('write_'+count_file_header+'.csv', 'at') with open('import_1458922827.csv', 'rb') as csvfile: candidate_info_reader = csv.reader(csvfile, delimiter=',', quoting=csv.QUOTE_ALL) co = 0 for row in candidate_info_reader: co = co + 1 count = count + 1 if count &lt;= count: pass elif count &gt;= NO_OF_LINES_PER_FILE: count_file_header = count + NO_OF_LINES_PER_FILE again(count_file_header,count) else: writer = csv.writer(f3,delimiter = ',', lineterminator='\n',quoting=csv.QUOTE_ALL) writer.writerow(row) def read_write(): f3 = open('write_'+NO_OF_LINES_PER_FILE+'.csv', 'at') with open('import_1458922827.csv', 'rb') as csvfile: candidate_info_reader = csv.reader(csvfile, delimiter=',', quoting=csv.QUOTE_ALL) count = 0 for row in candidate_info_reader: count = count + 1 if count &gt;= NO_OF_LINES_PER_FILE: count_file_header = count + NO_OF_LINES_PER_FILE again(count_file_header,count) else: writer = csv.writer(f3,delimiter = ',', lineterminator='\n',quoting=csv.QUOTE_ALL) writer.writerow(row) read_write() </code></pre> <p>The above code creates many files with empty content.</p> <p>How can I split one file into five CSV files?</p>
howto
2016-04-06T08:12:41Z
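A much simpler sketch than the recursive attempt in the question: read the source once and start a new output file every N rows. The filenames and the temporary directory are assumptions so the example is self-contained.

```python
import csv
import os
import tempfile

def split_csv(src_path, out_dir, rows_per_file=1000):
    """Write chunks of rows_per_file rows to part_0.csv, part_1.csv, ..."""
    out_paths = []
    writer = None
    with open(src_path, newline="") as src:
        for i, row in enumerate(csv.reader(src)):
            if i % rows_per_file == 0:           # start a new output file
                if writer is not None:
                    out_file.close()
                out_path = os.path.join(out_dir, "part_%d.csv" % (i // rows_per_file))
                out_file = open(out_path, "w", newline="")
                writer = csv.writer(out_file)
                out_paths.append(out_path)
            writer.writerow(row)
    if writer is not None:
        out_file.close()
    return out_paths

# Demo: 5000 rows split into 5 files of 1000 rows each.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "import.csv")
with open(src, "w", newline="") as f:
    csv.writer(f).writerows([[i, "x"] for i in range(5000)])
parts = split_csv(src, tmp)
```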
36,454,494
Calculate mean for selected rows for selected columns in pandas data frame
<p>I have pandas df with say, 100 rows, 10 columns, (actual data is huge). I also have row_index list which contains, which rows to be considered to take mean. I want to calculate mean on say columns 2,5,6,7 and 8. Can we do it with some function for dataframe object?</p> <p>What I know is do a for loop, get value of row for each element in row_index and keep doing mean. Do we have some direct function where we can pass row_list, and column_list and axis, for ex <code>df.meanAdvance(row_list,column_list,axis=0)</code> ?</p> <p>I have seen DataFrame.mean() but it didn't help I guess.</p> <pre><code> a b c d q 0 1 2 3 0 5 1 1 2 3 4 5 2 1 1 1 6 1 3 1 0 0 0 0 </code></pre> <p>I want mean of <code>0, 2, 3</code> rows for each <code>a, b, d</code> columns </p> <pre><code> a b d 0 1 1 2 </code></pre>
howto
2016-04-06T14:43:15Z
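There is no `meanAdvance`, but `.loc` with a row list and a column list does exactly this in one call (a sketch using the question's small example frame):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 1, 1],
                   "b": [2, 2, 1, 0],
                   "c": [3, 3, 1, 0],
                   "d": [0, 4, 6, 0],
                   "q": [5, 5, 1, 0]})

row_list = [0, 2, 3]
col_list = ["a", "b", "d"]

# Select rows and columns first, then take one mean per selected column.
means = df.loc[row_list, col_list].mean()
```

This reproduces the expected `a=1, b=1, d=2` row from the question without any Python-level loop.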
36,458,482
How to not render an entire string with jinja2
<p>I'm building a blog from scratch for a homework assignment on Google App Engine in Python, and I'm using jinja2 to render my HTML. My problem is that, like every blog, when an entry is too long the main page should render only part of the entry. When the main page is rendered I take the post from the database and pass it to jinja. Are there any filters or functions to tell jinja that, for example, this string cannot be longer than x characters?</p>
howto
2016-04-06T17:42:37Z
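jinja2 ships a built-in `truncate` filter for this (a minimal sketch; the 20-character limit and the sample post are arbitrary, and the exact cut point assumes a recent jinja2 where `end` counts toward the length):

```python
from jinja2 import Template

# truncate(length, killwords, end): cut the value down to ~length characters;
# killwords=True allows cutting mid-word, end is appended to the cut string.
tmpl = Template("{{ post | truncate(20, True, '...') }}")
short = tmpl.render(post="This entry is far too long for the front page")
```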
36,459,148
Pandas: Collapse first n rows in each group by aggregation
<p>I have a dataframe which is grouped by id. There are many groups, and each group has a variable number of rows. The first three rows of all groups do not contain interesting data. I would like to "collapse" the first three rows in each group to form a single row in the following way: </p> <p>'id', and 'type' will remain the same in the new 'collapsed' row.<br> 'grp_idx' will be renamed "0" when the aggregation of the first three rows occurs<br> col_1 will be the sum of the first three rows<br> col_2 will be the sum of the first three rows<br> The 'flag' in the "collapsed" row will be 0 if the values are all 0 in the first 3 rows. 'flag' will be 1 if it is 1 in any of the first three rows. (A simple sum will suffice for this logic, since the flag is only set in one row for all groups) </p> <p>Here is an example of what the dataframe looks like:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame.from_items([ ('id', [283,283,283,283,283,283,283,756,756,756]), ('type', ['A','A','A','A','A','A','A','X','X','X']), ('grp_idx', [1,2,3,4,5,6,7,1,2,3]), ('col_1', [2,4,6,8,10,12,14,5,10,15]), ('col_2', [3,6,9,12,15,18,21,1,2,3]), ('flag', [0,0,0,0,0,0,1,0,0,1]), ]); print(df) id type grp_idx col_1 col_2 flag 0 283 A 1 2 3 0 1 283 A 2 4 6 0 2 283 A 3 6 9 0 3 283 A 4 8 12 0 4 283 A 5 10 15 0 5 283 A 6 12 18 0 6 283 A 7 14 21 1 7 756 X 1 5 1 0 8 756 X 2 10 2 0 9 756 X 3 15 3 1 </code></pre> <p>After processing, I expect the dataframe to look like:</p> <pre><code>ID Type grp_idx col_1 col_2 flag 283 A 0 12 18 0 283 A 4 8 12 0 283 A 5 10 15 0 283 A 6 12 18 0 283 A 7 14 21 1 756 X 0 30 6 1 </code></pre> <p>I'm not sure how to proceed. I was trying to play around with</p> <p>df.groupby('id').head(3).sum() </p> <p>but this is not doing what I need. Any help, suggestions, code snippet would be really appreciated.</p>
howto
2016-04-06T18:14:18Z
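A sketch of one approach: aggregate `head(3)` per group, then concatenate the collapsed rows with the untouched remainder (column names and data from the question; `max` on `flag` implements the "1 if any of the first three rows is 1" rule):

```python
import pandas as pd

df = pd.DataFrame({
    "id":      [283]*7 + [756]*3,
    "type":    ["A"]*7 + ["X"]*3,
    "grp_idx": [1, 2, 3, 4, 5, 6, 7, 1, 2, 3],
    "col_1":   [2, 4, 6, 8, 10, 12, 14, 5, 10, 15],
    "col_2":   [3, 6, 9, 12, 15, 18, 21, 1, 2, 3],
    "flag":    [0, 0, 0, 0, 0, 0, 1, 0, 0, 1],
})

first3 = df.groupby("id").head(3)          # rows to collapse
rest = df.drop(first3.index)               # rows kept as-is

collapsed = (first3.groupby(["id", "type"], as_index=False)
                   .agg({"col_1": "sum", "col_2": "sum", "flag": "max"}))
collapsed["grp_idx"] = 0                   # the merged row's new index

out = (pd.concat([collapsed, rest], sort=False)
         .sort_values(["id", "grp_idx"])
         .reset_index(drop=True)
         [df.columns.tolist()])
```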
36,464,357
Matplotlib in Pyside with Qt designer (PySide)
<p>I have been looking for a working example how to embed a matplotlib plot in pyside that is created with the QT designer while keeping the logic in a separate file. I know that there are numerous examples on the web but none of them actually uses the QT designer and then creates a separate file to add the logic where the matplitlib plot is added to a widget. I found an example that 'almost' works <a href="http://blog.rcnelson.com/building-a-matplotlib-gui-with-qt-designer-part-1/" rel="nofollow">http://blog.rcnelson.com/building-a-matplotlib-gui-with-qt-designer-part-1/</a> but but in my version it's not possible to "Change the layoutName property from “verticalLayout” to “mplvl”".</p> <p>So I have the following specific questions: I'm not clear into what item that plot can be embedded to in Pyside Qt designer. Is it a simple "widget" (as there is no matplotlib widget available in pyside). If so, how can I then add the plot to that widget? Or do I have to create a 'FigureCanvas' with Qt Designer? Is this possible at all? If so, how?</p> <p>Here is the simplest possible design I can make with the Pyside Qt designer in embedding a widget (is this correct?). How can I now add a matplotlib plot on top of it?</p> <p>As suggested in one of the answers I have now promoted the Qwidget to MyStaticMplCanvas and edited the name of Qwidget to mplvl.</p> <p>Automatically generated file with Pyside Qt designer and compiled with pyside-uic ui.ui -o ui.py -x</p> <p>ui.py looks like this:</p> <pre><code># -*- coding: utf-8 -*- # Form implementation generated from reading ui file 'gui.ui' # # Created: Wed Apr 20 14:00:02 2016 # by: pyside-uic 0.2.15 running on PySide 1.2.2 # # WARNING! All changes made in this file will be lost! 
from PySide import QtCore, QtGui class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(444, 530) self.centralwidget = QtGui.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.mplvl = MyStaticMplCanvas(self.centralwidget) self.mplvl.setGeometry(QtCore.QRect(120, 190, 221, 161)) self.mplvl.setObjectName("mplvl") MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtGui.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 444, 21)) self.menubar.setObjectName("menubar") MainWindow.setMenuBar(self.menubar) self.statusbar = QtGui.QStatusBar(MainWindow) self.statusbar.setObjectName("statusbar") MainWindow.setStatusBar(self.statusbar) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): MainWindow.setWindowTitle(QtGui.QApplication.translate("MainWindow", "MainWindow", None, QtGui.QApplication.UnicodeUTF8)) from mystaticmplcanvas import MyStaticMplCanvas if __name__ == "__main__": import sys app = QtGui.QApplication(sys.argv) MainWindow = QtGui.QMainWindow() ui = Ui_MainWindow() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec_()) </code></pre> <p>how can I now add a plot into the mplvl object from a separate .py file?</p>
howto
2016-04-06T23:57:24Z
36,479,374
Python: identify in which interval a number falls
<p>I would like to run a <code>for</code> loop in Python that checks, given a certain amount of intervals, for each element of the loop, in which interval it is. For instance:</p> <pre><code>interval_1 = [1; 10] interval_2 = [11; 58] </code></pre> <p>I was looking for a more elegant solution than a large <code>if/elif/else</code> condition; for instance, my idea was to load an Excel worksheet containing <code>n</code> couples of numbers corresponding to the interval extremities, and use a function that tells me in which interval my number lies.</p> <p>Does a similar function exist in Python? Otherwise, how could this be done?</p>
howto
2016-04-07T14:33:33Z
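The stdlib `bisect` module covers this: keep the interval start points sorted and binary-search each number (the boundaries below are the question's two example intervals, plus an end sentinel):

```python
import bisect

# Sorted interval start points: interval i covers starts[i-1] .. starts[i]-1.
starts = [1, 11, 59]          # interval_1 = [1, 10], interval_2 = [11, 58]

def interval_of(n):
    """Return the 1-based interval index for n, or None if out of range."""
    i = bisect.bisect_right(starts, n)
    if i == 0 or i == len(starts):
        return None
    return i

which = [interval_of(n) for n in (1, 7, 10, 11, 58)]
```

The boundary pairs could equally be read from a worksheet (e.g. with `openpyxl`) into `starts` first; the lookup itself stays O(log n) per number.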
36,495,903
Convert pandas datetime objects
<p>I have a column in a pandas data frame that contains datetime objects after applying the pd.to_datetime() method. So far so good. Now the column contains the dates in the form '2016-02-08 09:59:00.510'. However, I would like to somehow 'drop' the date information, producing output in the form HH:MM:SS, like this:</p> <p>2016-02-08 09:59:00.510 --> 09:59:00</p> <p>I was wondering if that is possible, and if so, I would really appreciate some hints on the right way to do it.</p> <p>Below there is a small working example. I was able to convert the datetime objects to integers (ns?) but I couldn't find out how to convert the objects in column 'Date' to the format I want.</p> <p>As mentioned: Any help is highly appreciated! </p> <pre><code>import pandas as pd import time s1 = {'Timestamp' : ['20160208_095900.51','20160208_095901.51','20160208_095902.51','20160208_095903.51', '20160208_095904.51','20160208_095905.51','20160208_095906.51','20160208_095907.51', '20160208_095908.51','20160208_095909.51']} df = pd.DataFrame(s1) df['Date'] = pd.to_datetime(df['Timestamp'], format = '%Y%m%d_%H%M%S.%f') df['ns'] = (df['Date'].astype(np.int64) / int(1e6)) print df </code></pre>
howto
2016-04-08T09:18:15Z
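The `.dt` accessor on a datetime column gives this directly (a sketch using two of the question's timestamps):

```python
import pandas as pd

df = pd.DataFrame({"Timestamp": ["20160208_095900.51", "20160208_095901.51"]})
df["Date"] = pd.to_datetime(df["Timestamp"], format="%Y%m%d_%H%M%S.%f")

# As "HH:MM:SS" strings:
df["TimeStr"] = df["Date"].dt.strftime("%H:%M:%S")
# As datetime.time objects (drops the date, keeps sub-second precision):
df["TimeObj"] = df["Date"].dt.time
```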
36,518,800
Sort a list in python based on another sorted list
<p>I would like to sort a list in Python based on a pre-sorted list</p> <pre><code>presorted_list = ['2C','3C','4C','2D','3D','4D'] unsorted_list = ['3D','2C','4D','2D'] </code></pre> <p>Is there a way to sort the list to reflect the presorted list despite the fact that not all the elements are present in the unsorted list?</p> <p>I want the result to look something like this:</p> <pre><code>after_sort = ['2C','2D','3D','4D'] </code></pre> <p>Thanks!</p>
howto
2016-04-09T15:22:38Z
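One sketch: build a rank lookup from the pre-sorted list and use it as the sort key (elements absent from the unsorted list simply never get looked up):

```python
presorted_list = ['2C', '3C', '4C', '2D', '3D', '4D']
unsorted_list = ['3D', '2C', '4D', '2D']

# Map each element to its position in the reference ordering.
rank = {card: i for i, card in enumerate(presorted_list)}
after_sort = sorted(unsorted_list, key=rank.__getitem__)
```

If `unsorted_list` might contain elements missing from the reference list, `key=lambda c: rank.get(c, len(rank))` would push the unknowns to the end instead of raising `KeyError`.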
36,542,169
Extract first and last row of a dataframe in pandas
<p><strong>How can I extract the first and last rows of a given dataframe as a new dataframe in pandas?</strong></p> <p>I've tried to use <code>iloc</code> to select the desired rows and then <code>concat</code> as in:</p> <pre><code>df=pd.DataFrame({'a':range(1,5), 'b':['a','b','c','d']}) pd.concat([df.iloc[0,:], df.iloc[-1,:]]) </code></pre> <p>but this does not produce a pandas dataframe:</p> <pre><code>a 1 b a a 4 b d dtype: object </code></pre>
howto
2016-04-11T07:16:27Z
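Passing a *list* of positions to `iloc` keeps the result a DataFrame; concatenating the two row Series, as in the question, is what collapses it into one object-dtype Series:

```python
import pandas as pd

df = pd.DataFrame({'a': range(1, 5), 'b': ['a', 'b', 'c', 'd']})

# One indexing call, two positions: first row and last row.
first_last = df.iloc[[0, -1]]
```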
36,549,666
Pythonic way of comparing all adjacent elements in a list
<p>I want to know if there's a more Pythonic way of doing the following:</p> <pre><code>A = some list i = 0 j = 1 for _ in range(1, len(A)): #some operation between A[i] and A[j] i += 1 j += 1 </code></pre> <p>I feel like this should/could be done differently. Ideas?</p> <p>EDIT: Since some are asking for requirements. I wanted a <strong>general-purpose</strong> answer. Maybe to check if A[i], A[j] are between a certain range, or if they're equal. Or maybe I wanted to do a "trickle-up" of elements. The more general, the better. </p>
howto
2016-04-11T13:09:22Z
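The usual idiom is `zip` against a shifted copy of the list, which yields every adjacent pair without manual index bookkeeping; the operation on each pair stays general-purpose (an ascending check is used here only as a demonstration):

```python
A = [3, 1, 4, 1, 5, 9]

# Each iteration sees one adjacent pair (A[i], A[i+1]).
pairs = list(zip(A, A[1:]))
ascending = [x < y for x, y in pairs]
```

For very long sequences, `zip(A, itertools.islice(A, 1, None))` avoids copying the list.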
36,550,795
Pythonic way of looping over variable that is either an element or a list
<p>I am trying to use a for loop with a variable that is either an element (of a list) or a list. The code I have now, I find very ugly.</p> <pre><code>for x in test if isinstance(test, list) else [test]: print(x) </code></pre> <p>Any pythonic way of improving on this?</p>
howto
2016-04-11T13:54:56Z
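A common cleanup is a tiny helper that normalises the input before the loop, so the conditional expression disappears from the loop header (sketch; the helper name is an invention):

```python
def as_list(value):
    """Wrap a single element in a list; pass lists and tuples through."""
    return list(value) if isinstance(value, (list, tuple)) else [value]

collected = []
for test in ("alone", ["a", "b"]):
    for x in as_list(test):      # loop body no longer cares about the shape
        collected.append(x)
```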
36,554,940
Gnuplot: use a function to transform a column of a data file and plot the transformed data and the function
<p>I have a file that is called: <code>energy_vs_volume.dat</code> which is the following:</p> <pre><code> # Volume Energy 123.598570 -1.883015140352E+03 124.960411 -1.883015431207E+03 126.122583 -1.883015514359E+03 126.332431 -1.883015514750E+03 </code></pre> <p>I have a function in that converts the <code>energy</code> to <code>pressure</code>, which is the following:</p> <pre><code> E0=-1883.01544309 B0=32.13 V0=126.4025 B0_prime=-0.95 f0=(3.0/2.0)*B0 f1(x)=((V0/x)**(7.0/3.0))-((V0/x)**(5.0/3.0)) f2(x)=((V0/x)**(2.0/3.0))-1 P(x)= f0*f1(x)*(1+(3.0/4.0)*(B0_prime-4)*f2(x)) </code></pre> <p>I would like to plot in the same plot:</p> <p><strong>Goal #1</strong>: The function <code>P(x)</code></p> <p><strong>Goal #2</strong>: All 4 rows of discrete data of <code>Energy</code> in the file <code>energy_vs_volume.dat</code>, convert it to pressure using the function, and plot these 4 points as <code>y axis: pressure</code>; <code>x axis: volume</code></p> <p>So far, I have achieved this:</p> <pre><code> set encoding utf8 set termoption enhanced E0=-1883.01544309 B0=32.13 V0=126.4025 B0_prime=-0.95 f0=(3.0/2.0)*B0 f1(x)=((V0/x)**(7.0/3.0))-((V0/x)**(5.0/3.0)) f2(x)=((V0/x)**(2.0/3.0))-1 P_Birch_Murnaghan(x)= f0*f1(x)*(1+(3.0/4.0)*(B0_prime-4)*f2(x)) set xrange [123:138] plot P_Birch_Murnaghan(x) with line lt -1 lw 3 </code></pre> <p>This prints the continuous function of <code>P(x)</code> -> <strong>Goal #1</strong></p> <p>In order to make <strong>Goal #2</strong> possible, I add to the script the following:</p> <pre><code> plot "energy_vs_volume.dat" using (P(x)):2 with line lt -1 lw 3 </code></pre> <p>or similar with <code>$</code>, but it does not work.</p> <p>I would appreciate very much if you could help me.</p> <p><strong>EDIT:</strong></p> <p>I would need to know the exact values of the conversion (the pressures generated for the 4 data of volumes). Is there a way of redirecting the output to a file? 
something similar to the idea of <code>plot "energy_vs_volume.dat" using 1:(P($1)) &gt; file.txt</code>?</p>
howto
2016-04-11T17:05:32Z
36,579,996
Python: Loop through all nested key-value pairs created by xmltodict
<p>Getting a specific value based on the layout of an xml-file is pretty straightforward. (See: <a href="http://stackoverflow.com/questions/36253405/python-get-value-with-xmltodict">StackOverflow</a>)</p> <p>But when I don't know the xml-elements, I can't recurse over it, since xmltodict nests OrderedDicts in OrderedDicts. These nested OrderedDicts are typed by Python as 'unicode', and not (yet) as OrderedDicts. Therefore, looping over it like this doesn't work:</p> <pre><code>def myprint(d): for k, v in d.iteritems(): if isinstance(v, list): myprint(v) else: print "Key :{0}, Value: {1}".format(k, v) </code></pre> <p>What I basically want is to recurse over the whole xml-file so that every key-value pair is shown. And when a value of a key is another list of key-value pairs, it should recurse into it.</p> <p>With this xml-file as input:</p> <pre><code>&lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;session id="2934" name="Valves" docVersion="5.0.1"&gt; &lt;docInfo&gt; &lt;field name="Employee" isMandotory="True"&gt;Jake Roberts&lt;/field&gt; &lt;field name="Section" isOpen="True" isMandotory="False"&gt;5&lt;/field&gt; &lt;field name="Location" isOpen="True" isMandotory="False"&gt;Munchen&lt;/field&gt; &lt;/docInfo&gt; &lt;/session&gt; </code></pre> <p>and the above listed code, all data under session is added as a value to the key session.</p> <p>Example output:</p> <pre><code>Key :session, Value: OrderedDict([(u'@id', u'2934'), (u'@name', u'Valves'), (u'@docVersion', u'5.0.1'), (u'docInfo', OrderedDict([(u'field', [OrderedDict([(u'@name', u'Employee'), (u'@isMandotory', u'True'), ('#text', u'Jake Roberts')]), OrderedDict([(u'@name', u'Section'), (u'@isOpen', u'True'), (u'@isMandotory', u'False'), ('#text', u'5')]), OrderedDict([(u'@name', u'Location'), (u'@isOpen', u'True'), (u'@isMandotory', u'False'), ('#text', u'Munchen')])])]))]) </code></pre> <p>And this is obviously not what I want.</p>
howto
2016-04-12T17:17:58Z
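The recursion needs to branch on both mappings and lists, not only lists as in the question's `myprint`. A sketch, with plain dicts standing in for xmltodict's OrderedDicts (`isinstance(node, dict)` covers both, since OrderedDict subclasses dict) and a structure adapted from the question's XML:

```python
def walk(node, prefix=""):
    """Yield (dotted_key, value) pairs for every leaf of a nested structure."""
    if isinstance(node, dict):
        for k, v in node.items():
            yield from walk(v, prefix + "." + k if prefix else k)
    elif isinstance(node, list):
        for i, v in enumerate(node):
            yield from walk(v, "%s[%d]" % (prefix, i))
    else:
        yield prefix, node           # a scalar leaf: attribute or text

doc = {"session": {"@id": "2934",
                   "docInfo": {"field": [
                       {"@name": "Employee", "#text": "Jake Roberts"},
                       {"@name": "Section", "#text": "5"}]}}}
leaves = dict(walk(doc))
```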
36,597,386
Match C++ Strings and String Literals using regex in Python
<p>I am trying to match <strong>Strings</strong> (both between <strong>double &amp; single quotes</strong>) and <strong>String Literals</strong> in <strong>C++ source files</strong>. I am using the <code>re</code> library in Python.</p> <p>I have reached the point where I can match double quotes with <code>r'"(.*?)"'</code>, but I am having trouble with the syntax for extending the above regex to also match single-quoted strings (I am confused by the <code>\</code> and how to escape the quotes in a Python regex).</p> <p>Also, from <a href="http://en.cppreference.com/w/cpp/language/string_literal" rel="nofollow">here</a> I want to be able to match each of these cases:</p> <ul> <li><blockquote> <p>" (unescaped_character|escaped_character)* "</p> </blockquote></li> <li><blockquote> <p>L " (unescaped_character|escaped_character)* " </p> </blockquote></li> <li><blockquote> <p>u8 " (unescaped_character|escaped_character)* " </p> </blockquote></li> <li><blockquote> <p>u " (unescaped_character|escaped_character)* " </p> </blockquote></li> <li><blockquote> <p>U " (unescaped_character|escaped_character)* "</p> </blockquote></li> <li><blockquote> <p>prefix(optional) R "delimiter( raw_characters )delimiter"</p> </blockquote></li> </ul> <p>I am quite confused by regexes and everything I try fails. Any suggestions and example code would help me gain understanding and, hopefully, build all these regexes.</p>
howto
2016-04-13T11:46:16Z
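A sketch covering double- and single-quoted literals with escapes, plus the optional encoding prefixes. Raw strings (`R"delim(...)delim"`) need backreference handling and are deliberately left out here:

```python
import re

# (prefix)?  " (escaped char | plain char)* "   — same shape for '...'.
STRING_RE = re.compile(
    r'(?:u8|u|U|L)?'                  # optional encoding prefix
    r'(?:"(?:\\.|[^"\\\n])*"'         # double-quoted string literal
    r"|'(?:\\.|[^'\\\n])*')"          # single-quoted char/string literal
)

src = r'''const char *a = "hi \"there\""; wchar_t b = L'x'; auto c = u8"ok";'''
literals = STRING_RE.findall(src)
```

The key pieces: `\\.` consumes any escaped character (so `\"` inside a string does not end it), and `[^"\\\n]` consumes everything else up to the closing quote. Note this naive scan will also "find" string-like text inside comments.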
36,623,789
How to convert unicode text to normal text
<p>I am learning Beautiful Soup in Python.</p> <p>I am trying to parse a simple webpage with list of books.</p> <p>E.g</p> <pre><code>&lt;a href="https://www.nostarch.com/carhacking"&gt;The Car Hacker’s Handbook&lt;/a&gt; </code></pre> <p>I use the below code.</p> <pre><code>import requests, bs4 res = requests.get('http://nostarch.com') res.raise_for_status() nSoup = bs4.BeautifulSoup(res.text,"html.parser") elems = nSoup.select('.product-body a') #elems[0] gives &lt;a href="https://www.nostarch.com/carhacking"&gt;The Car Hacker\u2019s Handbook&lt;/a&gt; </code></pre> <p>And</p> <pre><code>#elems[0].getText() gives u'The Car Hacker\u2019s Handbook' </code></pre> <p>But I want the proper text which is given by,</p> <pre><code>s = elems[0].getText() print s &gt;&gt;&gt;The Car Hacker’s Handbook </code></pre> <p>How to modify my code in order to give "The Car Hacker’s Handbook" output instead of "u'The Car Hacker\u2019s Handbook'" ?</p> <p>Kindly help.</p>
howto
2016-04-14T12:55:07Z
36,633,059
Make a pandas series by running a function on all adjacent values
<p>I have a pandas series, s1, and I want to make a new series, s2, by applying a function that takes two inputs to create one new value. This function would be applied to a 2-value window on s1. The resulting series, s2, should have one fewer value than s1. There are many ways to accomplish this but I'm looking for a way to do it very efficiently. This is on Linux and I'm currently running python 2.7 and 3.4 and pandas 15.2, though I can update pandas if that's necessary. Here's a simplification of my problem. My series consists of musical pitches represented as strings.</p> <pre><code>import pandas s1 = pandas.Series(['C4', 'E-4', 'G4', 'A-4']) </code></pre> <p>I'd like to use this function:</p> <pre><code>def interval_func(event1, event2): ev1 = music21.note.Note(event1) ev2 = music21.note.Note(event2) intrvl = music21.interval.Interval(ev1, ev2) return intrvl.name </code></pre> <p>On s1 and a shifted version of s1, to get the following series:</p> <pre><code>s2 = pandas.Series(['m3', 'M3', 'm2']) </code></pre>
howto
2016-04-14T20:12:30Z
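A generic sketch of the pairing step (music21 may not be importable here, so a toy numeric function stands in for the question's `interval_func`): zip the series with a shifted view of itself and apply the function to each pair.

```python
import pandas as pd

s1 = pd.Series([60, 63, 67, 68])      # stand-in values (think MIDI numbers)

def interval_func(a, b):
    return b - a                      # toy replacement for the music21 logic

# Pair each value with its successor; the last value has no successor,
# so the result is one element shorter than s1.
s2 = pd.Series([interval_func(a, b) for a, b in zip(s1, s1.iloc[1:])])
```

The same shape works with the real string-based `interval_func`; if the function were vectorisable, `s1.shift(-1)` against `s1` would avoid the Python-level loop entirely.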
36,661,837
How to retrieve only arabic texts from a string using regular expression?
<p>I have a string which has both Arabic and English sentences. What I want is to extract the Arabic sentences only.</p> <pre><code>my_string=""" What is the reason ذَلِكَ الْكِتَابُ لَا رَيْبَ فِيهِ هُدًى لِلْمُتَّقِينَ behind this? ذَلِكَ الْكِتَابُ لَا رَيْبَ فِيهِ هُدًى لِلْمُتَّقِينَ """ </code></pre> <p><a href="http://jrgraphix.net/r/Unicode/0600-06FF" rel="nofollow">This Link</a> shows that the Unicode range for Arabic letters is <code>0600-06FF</code>. </p> <p>So, the very basic attempt that came to my mind is:</p> <pre><code>import re print re.findall(r'[\u0600-\u06FF]+',my_string) </code></pre> <p>But, this fails miserably as it returns the following list.</p> <pre><code>['What', 'is', 'the', 'reason', 'behind', 'this?'] </code></pre> <p>As you can see, this is exactly the opposite of what I want. What am I missing here?</p> <p>N.B. </p> <p>I know I can match the Arabic letters by using inverse matching like below:</p> <pre><code>print re.findall(r'[^a-zA-Z\s0-9]+',my_string) </code></pre> <p>But, I don't want that.</p>
howto
2016-04-16T08:16:37Z
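The likely culprit: in Python 2 the pattern must itself be a *unicode* string. In a byte-string pattern `\u0600` is not interpreted as a code point, so the character class degenerates into matching ordinary letters and digits, which is exactly the observed output. With a unicode pattern (the default in Python 3) the range works as intended; a short Arabic sample stands in for the question's text:

```python
import re

my_string = u"What is the reason \u0630\u064e\u0644\u0650\u0643\u064e behind this?"

# The pattern is a unicode string, so \u0600-\u06FF is a real code-point range.
arabic = re.findall(u'[\u0600-\u06FF]+', my_string)
```

In Python 2 the equivalent would be `re.findall(ur'[\u0600-\u06FF]+', my_string)` with `my_string` also unicode.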
36,672,440
How to force sympy to extract specific subexpressions?
<p>I have a sympy result that winds up like this:</p> <pre><code>from sympy import * Vin,Vc,C1,Cs,R1,Rs,t=symbols(r'V_{in},V_{C},C_1,C_S,R_1,R_S,t') k1=Symbol('k_1') eqVc=Eq(Vc(t),(Rs*(exp(t*(R1+Rs)/(R1*Rs*(C1+Cs))) - 1)*Heaviside(t) + k1*(R1+Rs))*exp(-t*(R1+Rs)/(R1*Rs*(C1+Cs)))/(R1+Rs)) </code></pre> <p>The expression <code>eqVc</code> comes out like this:</p> <p><img src="http://i.stack.imgur.com/qlLoE.png" width="500"></p> <p>I know this function to be of the form:</p> <p><img src="http://i.stack.imgur.com/sCFIM.png" width="300"></p> <p>My goal is to get the values of Vcinit, Vcfinal, and tau, but particularly tau.</p> <p>Is there a way to get Sympy to extract these values? cse() doesn't quite do what I want-- I can get it to replace C1+Cs, for example by using cse([C1+Cs,eqVc]), and it does recognize that the exponent to e is a common subexpression, but it tends to include the t in the subexpression.</p>
howto
2016-04-17T03:43:01Z
36,709,837
Slicing and arranging dataframe in pandas
<p>I want to arrange the data from a data frame into multiple dataframes or groups. The input data is </p> <pre><code>id channel path 15 direct a1 15 direct a2 15 direct a3 15 direct a4 213 paid b2 213 paid b1 2222 direct as25 2222 direct dw46 2222 direct 32q 3111 paid d32a 3111 paid 23ff 3111 paid www32 3111 paid 2d2 </code></pre> <p>The desired output should be like</p> <pre><code>id channel p1 p2 213 paid b2 b1 id channel p1 p2 p3 2222 direct as25 dw46 32q id channel p1 p2 p3 p4 15 direct a1 a2 a3 a4 3111 paid d32a 23ff www32 2d2 </code></pre> <p>Please tell me how I can achieve it. Thanks</p>
howto
2016-04-19T06:05:10Z
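A sketch of the reshaping step: number the paths within each id with `cumcount`, then `pivot` into one row per id (shown on a subset of the question's data):

```python
import pandas as pd

df = pd.DataFrame({
    "id":      [15, 15, 15, 15, 213, 213],
    "channel": ["direct"] * 4 + ["paid"] * 2,
    "path":    ["a1", "a2", "a3", "a4", "b2", "b1"],
})

# Label each row p1, p2, ... within its id, preserving the original order.
df["p"] = "p" + (df.groupby("id").cumcount() + 1).astype(str)

wide = df.pivot(index="id", columns="p", values="path")
wide["channel"] = df.groupby("id")["channel"].first()
wide = wide.reset_index()
```

Ids with fewer paths get NaN in the trailing columns; splitting `wide` by the count of non-null path columns then yields the separate per-length tables shown in the question.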
36,719,792
How can I pack images? -Pygame -PyInstaller
<p>So I got <code>main.py</code> which I turn into <code>main.exe</code> through PyInstaller <code>--onefile</code>, but <code>main.exe</code> still needs image file <code>img.png</code> in <code>\data</code> folder which is located in same folder as <code>main.py/main.exe</code>...</p> <pre><code>img_l = pygame.image.load(os.path.join('data', 'img.png')) screen.blit(img_l, (0, 0)) </code></pre> <p>How can I pack these images correctly? I've been messing around for a while now with <code>pygame.image.tostring</code> and then trying to save that into <code>.txt</code> file so I can use that <code>.txt</code> file in <code>data</code> folder instead of <code>img.png</code>, from which then I could use <code>pygame.image.fromstring</code> (maybe) but I have not figured out how to make it work.</p> <p>I am not even sure if it is the right/ok way to go about it.</p> <p>Any ideas/suggestions <em>sincerely</em> appreciated.</p>
howto
2016-04-19T13:22:23Z
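The usual pattern is not to serialise the image at all: bundle it with `--add-data` and resolve paths through `sys._MEIPASS`, the temporary directory a PyInstaller one-file build unpacks into. A sketch (the pygame call is shown only as a comment, since this snippet is about path resolution):

```python
import os
import sys

def resource_path(relative):
    """Resolve a data file both in development and inside a PyInstaller
    --onefile bundle, where files are unpacked under sys._MEIPASS."""
    base = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base, relative)

path = resource_path(os.path.join("data", "img.png"))
# img_l = pygame.image.load(path)   # as in the question's loading code
```

The matching build step would be something like `pyinstaller --onefile --add-data "data/img.png:data" main.py` (`;` instead of `:` on Windows).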
36,731,365
Check Type: How to check if something is a RDD or a dataframe?
<p>I'm using python, and this is Spark Rdd/dataframes.</p> <p>I tried isinstance(thing, RDD) but RDD wasn't recognized.</p> <p>The reason I need to do this:</p> <p>I'm writing a function where both RDD and dataframes could be passed in, so I'll need to do input.rdd to get the underlying rdd if a dataframe is passed in.</p>
howto
2016-04-19T23:44:54Z
36,744,627
Network capturing with Selenium/PhantomJS
<p>I want to capture the traffic to sites I'm browsing to using Selenium with python and since the traffic will be https using a proxy won't get me far.</p> <p>My idea was to run phantomJS with selenium to and use phantomJS to execute a script (not on the page using webdriver.execute_script(), but on phantomJS itself). I was thinking of the netlog.js script (from here <a href="https://github.com/ariya/phantomjs/blob/master/examples/netlog.js" rel="nofollow">https://github.com/ariya/phantomjs/blob/master/examples/netlog.js</a>).</p> <p>Since it works like this in the command line</p> <pre><code>phantomjs --cookies-file=/tmp/foo netlog.js https://google.com </code></pre> <p>there must be a similar way to do this with selenium?</p> <p>Thanks in advance</p> <p><strong>Update:</strong></p> <p>Solved it with browsermob-proxy.</p> <pre><code>pip3 install browsermob-proxy </code></pre> <p>Python3 code</p> <pre><code>from selenium import webdriver from browsermobproxy import Server server = Server(&lt;path to browsermob-proxy&gt;) server.start() proxy = server.create_proxy({'captureHeaders': True, 'captureContent': True, 'captureBinaryContent': True}) service_args = ["--proxy=%s" % proxy.proxy, '--ignore-ssl-errors=yes'] driver = webdriver.PhantomJS(service_args=service_args) proxy.new_har() driver.get('https://google.com') print(proxy.har) # this is the archive # for example: all_requests = [entry['request']['url'] for entry in proxy.har['log']['entries']] </code></pre>
howto
2016-04-20T13:00:28Z
36,753,799
Join unique values into new data frame (python, pandas)
<p>I have two dataFrames, from where I extract the unique values of a column into a and b</p> <pre><code>a = df1.col1.unique() b = df2.col2.unique() </code></pre> <p>now a and b are something like this</p> <pre><code>['a','b','c','d'] #a [1,2,3] #b </code></pre> <p>they are now type numpy.ndarray</p> <p>I want to join them to have a DataFrame like this</p> <pre><code> col1 col2 0 a 1 1 a 2 3 a 3 4 b 1 5 b 2 6 b 3 7 c 1 . . . </code></pre> <p>Is there a way to do it not using a loop?</p>
howto
2016-04-20T19:49:47Z
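This is a cross join, and `itertools.product` builds it without an explicit nested loop (sketch with the question's sample values):

```python
import itertools
import pandas as pd

a = ['a', 'b', 'c', 'd']   # e.g. df1.col1.unique()
b = [1, 2, 3]              # e.g. df2.col2.unique()

# Every (col1, col2) combination, one row each.
crossed = pd.DataFrame(list(itertools.product(a, b)),
                       columns=['col1', 'col2'])
```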
36,765,952
Python: Is there a shortcut to finding which substring(from a set of substrings) comes first in a string?
<p>Let's say I have a string: </p> <pre><code>s = "Hello, stack exchange. Let's solve my query" </code></pre> <p>And let's say I have 3 substrings</p> <pre><code>s1 = "solve" s2 = "stack" s3 = "Not present" </code></pre> <p>Is there a shortcut to determine which substring comes first in s?</p> <p>I know I can write a function which finds the indexes of the substrings, store each substring-index pair in a dictionary, and then compare all non-negative indexes, but is there a shorter or more Pythonic way of doing this?</p>
howto
2016-04-21T09:52:04Z
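A compact sketch of exactly that idea: one dict comprehension for the indices, then `min` over the substrings actually present:

```python
s = "Hello, stack exchange. Let's solve my query"
subs = ["solve", "stack", "Not present"]

found = {sub: s.find(sub) for sub in subs}            # -1 means absent
present = {sub: i for sub, i in found.items() if i != -1}
first = min(present, key=present.get) if present else None
```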
36,779,891
Insert values in lists following a pattern
<p>I have the following example list:</p> <pre><code>list_a = [(1, 6), (6, 66), (66, 72), (72, 78), (78, 138), (138, 146), (154, 208), (208, 217), (217, 225), (225, 279), (279, 288) ..... ] </code></pre> <p>And what I need is:</p> <ol> <li>After every 6 elements on a list, insert in this place a new tuple formed by the last number of the previous one, and the first number in the previous 6 tuples. </li> <li>After the tuple is inserted, insert another formed by the first number of the previous one plus 1, and the first number of the next tuple.</li> </ol> <p>so the result may look like:</p> <pre><code>list_a = [(1, 6), (6, 66), (66, 72), (72, 78), (78, 138), (138, 146), (146, 1), # &lt;- first part (147, 154), # &lt;- second part (154, 208), (208, 217), (217, 225), (225, 279), (279, 288) (288, 147) # &lt;- first part ..... ] </code></pre> <p>I have tried this, but the last elements are missing</p> <pre><code>for i in range(0, len(list_a)+1, 6): if i &gt; 0: list_a.insert(i, (list_a[i - 1][1], list_a[i - 6][0])) list_a.insert(i + 1, (list_a[i - 1][1] + 1, list_a[i + 1][0],)) </code></pre>
howto
2016-04-21T20:33:17Z
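One reading of the pattern that reproduces the sample output (an editor's sketch): build a new list instead of inserting in place, and note that in the sample the second inserted tuple, e.g. (147, 154), itself counts as the first element of the next group of six — which is why (288, 147) closes the second group.

```python
def close_groups(tuples):
    """Append the two connector tuples after every group of six (a sketch)."""
    out, group, i = [], [], 0
    while i < len(tuples):
        group.append(tuples[i])
        out.append(tuples[i])
        i += 1
        if len(group) == 6:
            last, first = group[-1][1], group[0][0]
            out.append((last, first))              # first inserted tuple
            group = []
            if i < len(tuples):
                bridge = (last + 1, tuples[i][0])  # second inserted tuple
                out.append(bridge)
                group = [bridge]                   # it opens the next group of six
    return out

list_a = [(1, 6), (6, 66), (66, 72), (72, 78), (78, 138), (138, 146),
          (154, 208), (208, 217), (217, 225), (225, 279), (279, 288)]
result = close_groups(list_a)
```

Building a fresh list also sidesteps the question's in-place problem, where each `insert` shifts every later index the loop still wants to use.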
36,783,166
Use map over a list of 50 generated colours to count, using filter, and reduce, or len, the frequency of occurence
<blockquote> <p>Given:</p> <pre><code>c = ["red", "blue", "green", "yellow", "purple", "orange", "white", "black"] </code></pre> <p>Generate and print a list of 50 random colours. You will need to use the <code>random</code> module to get <code>random</code> numbers. </p> <p>Use <code>range</code> and <code>map</code> to generate the required amount of numbers. </p> <p>Then use <code>map</code> to translate numbers to colours. Then use <code>map</code> over the colours to count (using <code>filter</code>, and <code>reduce</code> or <code>len</code>) how often each colour occurs. Print the result.</p> </blockquote> <p>This is what I've got so far:</p> <pre><code>import random colours = ['red', 'blue', 'green', 'yellow', 'purple', 'orange', 'white', 'black'] nums = map(lambda x: random.randint(0,7), range(50)) c = map(lambda y: colours[y], nums) </code></pre> <p>Which when printed, gives me the required set of 50 random colours from the given list. I'm sort of confused as to where to move from here.</p>
howto
2016-04-22T01:28:08Z
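For the colour-counting exercise above, a sketch of how the counting step could continue from the two `map` calls already written (not from the original post): each colour is mapped to the length of the `filter` of the picks equal to it, with a `reduce` version shown as the alternative the exercise allows:

```python
import random
from functools import reduce

colours = ['red', 'blue', 'green', 'yellow', 'purple', 'orange', 'white', 'black']

nums = list(map(lambda x: random.randint(0, 7), range(50)))
picked = list(map(lambda n: colours[n], nums))

# For each colour, filter the 50 picks down to that colour and count them.
counts = list(map(lambda c: (c, len(list(filter(lambda p: p == c, picked)))),
                  colours))

# Same counts with reduce instead of len (True adds as 1, False as 0).
counts_reduce = list(map(
    lambda c: (c, reduce(lambda acc, p: acc + (p == c), picked, 0)),
    colours))
```

In Python 3 `map` and `filter` return iterators, so the `list(...)` wrappers are needed before printing or iterating over the results more than once.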
36,785,204
Conditionally replace several columns with default values in Pandas
<p>Let's say I have a dataframe <code>df</code> with numerical columns <code>"A", "B", "C"...</code> as well as a boolean column <code>"DEFAULT"</code>. I also have a list of special columns, for example <code>special = ["A", "D", "E", "H", ...]</code>, and a list of corresponding default values: <code>default = [a, d, e, h, ...]</code>. What I want to do is: for every row where <code>DEFAULT</code> is <code>True</code>, replace the values of the special columns by the corresponding default values.</p> <p>Of course I can manually loop through the dataframe to do so, but that's ugly and probably slow. </p> <p>I've tried all sorts of intuitive ways like:</p> <pre><code>df[df.DEFAULT][special] = default </code></pre> <p>or</p> <pre><code>df[special] = df[special].where(not df.DEFAULT, default, axis = 1) </code></pre> <p>but none of my attempts worked. I've also read many <em>similar</em> questions but none seemed to work for me. Sorry if I missed the right one. </p> <p>Example of input data:</p> <pre><code>df = pd.DataFrame(np.random.rand(10,10)) df.columns = list('ABCDEFGHIJ') df["DEFAULT"] = [False,False,True,False,True,False,False,True,True,False] special = list("ADGI") default = [1,2,3,4] </code></pre>
howto
2016-04-22T05:02:07Z
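For the conditional-default question above, one way this could be done without an explicit loop (a sketch, not from the original post): boolean-mask `.loc` indexing assigns in place, and a list the same length as the column selection is broadcast across all selected rows:

```python
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.rand(10, 10), columns=list('ABCDEFGHIJ'))
df['DEFAULT'] = [False, False, True, False, True,
                 False, False, True, True, False]
special = list('ADGI')
default = [1, 2, 3, 4]

# Rows where DEFAULT is True get the defaults, one value per special column.
df.loc[df['DEFAULT'], special] = default
```

The chained form `df[df.DEFAULT][special] = ...` fails because it assigns into a temporary copy, which is exactly what the single `.loc[row_mask, column_list]` call avoids.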
36,794,619
Customizing time of the datetime object in python
<p>For the billing purpose, I have code which will generate necessary data as needed.</p> <p>But there is one issue I am currently facing with the time gap.I have idea of fixing the same by adding the default time value.</p> <p>To be precise consider the following date time object value.</p> <pre><code>2016-03-22 05:36:07.703078 2016-04-21 05:36:07.703078 </code></pre> <p>One is the value of begin date and whereas another one is the value of end date.</p> <p>Now I need to set the begin date as "2016-03-22 00:00:00" and end date as "2016-04-21 23:59:59".</p> <p>Here it is a code, I have used for creation of begin date and end date.</p> <pre><code> # begin date and end date set begin = date - dateutil.relativedelta.relativedelta(months=1) print(begin) print(type(begin)) end = date - dateutil.relativedelta.relativedelta(days=1) print(end) </code></pre> <p>Someone let me know the way to achieve this.</p>
howto
2016-04-22T13:02:32Z
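For the billing-period question above, a sketch of how the boundaries could be pinned (not from the original post): `datetime.replace` returns a copy with only the named fields changed, so the begin date is clamped to midnight and the end date to the last second of the day:

```python
import datetime

begin = datetime.datetime(2016, 3, 22, 5, 36, 7, 703078)
end = datetime.datetime(2016, 4, 21, 5, 36, 7, 703078)

# Clamp to 00:00:00 and 23:59:59 respectively.
begin = begin.replace(hour=0, minute=0, second=0, microsecond=0)
end = end.replace(hour=23, minute=59, second=59, microsecond=0)
```

`datetime.datetime.combine(end.date(), datetime.time.max)` is an equivalent way to get the end-of-day timestamp (it keeps microseconds at 999999); the `replace` form works the same way on the values produced by the `relativedelta` arithmetic in the question.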
36,798,227
Python CSVkit compare CSV files
<p>I have two CSV files that look like this..</p> <p><strong>CSV 1</strong></p> <pre><code>reference | name | house ---------------------------- 2348A | john | 37 5648R | bill | 3 RT48 | kate | 88 76A | harry | 433 </code></pre> <p><strong>CSV2</strong></p> <pre><code>reference --------- 2348A 76A </code></pre> <p>Using Python and CSVkit I am trying to create an output CSV of the rows in CSV1 by comparing it to CSV2. Does anybody have an example they can point me in the direction of?</p>
howto
2016-04-22T15:47:29Z
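For the two-CSV question above, one way this could be done with the standard `csv` module alone (a sketch, not from the original post; the in-memory `io.StringIO` sources stand in for the two files): collect the references from CSV2 into a set, then keep only the matching rows of CSV1:

```python
import csv
import io

csv1 = io.StringIO("reference,name,house\n"
                   "2348A,john,37\n5648R,bill,3\nRT48,kate,88\n76A,harry,433\n")
csv2 = io.StringIO("reference\n2348A\n76A\n")

# Set membership makes the row filter O(1) per row.
wanted = {row['reference'] for row in csv.DictReader(csv2)}
matched = [row for row in csv.DictReader(csv1) if row['reference'] in wanted]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=['reference', 'name', 'house'])
writer.writeheader()
writer.writerows(matched)
```

csvkit also ships a command-line `csvjoin` tool (`csvjoin -c reference CSV1 CSV2`) that produces the same inner join without writing any code.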
36,799,190
Update a Pyspark DF Column based on an Array in another column
<p>This is my pyspark dataframe schema:</p> <pre><code>root |-- user: string (nullable = true) |-- table: string (nullable = true) |-- changeDate: string (nullable = true) |-- fieldList: string (nullable = true) |-- id: string (nullable = true) |-- value2: integer (nullable = false) |-- value: double (nullable = false) |-- name: string (nullable = false) |-- temp: array (nullable = true) | |-- element: string (containsNull = true) |-- num_cols_changed: integer (nullable = true) </code></pre> <p>The data in the dataframe:</p> <pre><code>+--------+-----+--------------------+--------------------+------+------+-----+----+--------------------+----------------+ | user|table| changeDate| fieldList| id|value2|value|name| temp|num_cols_changed| +--------+-----+--------------------+--------------------+------+------+-----+----+--------------------+----------------+ | user11 | TAB1| 2016-01-24 19:10...| value2 = 100|555555| 200| 0.5| old| [value2 = 100]| 1| | user01 | TAB1| 2015-12-31 13:12...|value = 0.34,name=new| 1111| 200| 0.5| old|[value = 0.34, n...| 2| +--------+-----+--------------------+--------------------+------+------+-----+----+--------------------+----------------+ </code></pre> <p>I want to read the array in temp column and based on the values in that, I want to change the column in the dataframe. For example, first row has only one column being changed i.e. <code>value 2</code>, so I want to update the column <code>df.value2</code> with the new value 100. similarly, in the next row, 2 columns are changed, so I need to extract value and name with their values and update appropriate columns in the dataframe. 
So output should be like:</p> <pre><code>+--------+-----+--------------------+------+------+-----+----+ | user|table| changeDate| id|value2|value|name| +--------+-----+--------------------+------+------+-----+----+ | user11 | TAB1| 2016-01-24 19:10...|555555| 100| 0.5| old| | user01 | TAB1| 2015-12-31 13:12...| 1111| 200| 0.34| new| +--------+-----+--------------------+------+------+-----+----+ </code></pre> <p>I want to keep the performance of the program in mind, hence focussing on ways just using dataframes, but if there is no options I can go rdd route too. Basically, I do not know how to process multiple values in a row and then compare. I know that I can compare column names using <code>column in df.columns</code>, but doing this for each row using an array is confusing me. Any help or new idea is appreciated.</p>
howto
2016-04-22T16:39:33Z
36,804,141
Vectorized construction of DatetimeIndex in Pandas
<p>I'd like to create a <code>DateTimeIndex</code> in Pandas from broadcasted arrays of years, months, days, hours, etc. This is relatively straightforward to do via a list comprehension; e.g.</p> <pre><code>import numpy as np import pandas as pd def build_DatetimeIndex(*args): return pd.DatetimeIndex([pd.datetime(*tup) for tup in np.broadcast(*args)]) </code></pre> <p>For example:</p> <pre><code>&gt;&gt;&gt; year = 2012 &gt;&gt;&gt; months = [1, 2, 5, 6] &gt;&gt;&gt; days = [1, 15, 1, 15] &gt;&gt;&gt; build_DatetimeIndex(year, months, days) DatetimeIndex(['2012-01-01', '2012-02-15', '2012-05-01', '2012-06-15'], dtype='datetime64[ns]', freq=None) </code></pre> <p>But due to the list comprehension, this becomes rather slow as the size of the inputs grow. Is there a built-in way to do this in Pandas, or is there any way to define <code>build_DatetimeIndex</code> in terms of a fast, vectorized operation?</p>
howto
2016-04-22T22:16:45Z
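For the `DatetimeIndex` question above, one vectorized route (a sketch, not from the original post): broadcast the inputs with NumPy, then hand `pd.to_datetime` a frame with `year`/`month`/`day` columns so the whole index is assembled in one call instead of one `pd.datetime` per element:

```python
import numpy as np
import pandas as pd

def build_datetimeindex(years, months, days):
    years, months, days = np.broadcast_arrays(years, months, days)
    parts = pd.DataFrame({'year': years, 'month': months, 'day': days})
    # to_datetime on a year/month/day frame is a single vectorized operation.
    return pd.DatetimeIndex(pd.to_datetime(parts))

idx = build_datetimeindex(2012, [1, 2, 5, 6], [1, 15, 1, 15])
```

Extra components can be passed the same way by adding `'hour'`, `'minute'`, etc. columns to the frame, which covers the "hours, etc." part of the question.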
36,804,586
Reordering same characters such that the characters are at least distance d from each other
<p>Please help me in writing a python code for below problem:</p> <blockquote> <p>Given a string of lowercase characters, reorder them such that the same characters are at least distance d from each other.</p> <p><strong>Input:</strong> <code>{ a, b, b }, distance = 2</code></p> <p><strong>Output:</strong> <code>{ b, a, b }</code></p> </blockquote>
howto
2016-04-22T23:01:43Z
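For the reordering question above, one standard approach (a sketch, not from the original post): greedily place the most frequent remaining character, and park each just-used character in a cooldown queue until `d - 1` other positions have been filled; if the heap runs dry before the string is complete, no valid arrangement exists:

```python
import heapq
from collections import Counter

def rearrange(s, d):
    heap = [(-n, c) for c, n in Counter(s).items()]
    heapq.heapify(heap)
    cooldown = []   # (index at which the char may be used again, -count, char)
    out = []
    while heap:
        n, c = heapq.heappop(heap)
        out.append(c)
        if n + 1 < 0:                       # copies remaining: cool down
            cooldown.append((len(out) + d - 1, n + 1, c))
        if cooldown and cooldown[0][0] <= len(out):
            ready = cooldown.pop(0)
            heapq.heappush(heap, (ready[1], ready[2]))
    if len(out) != len(s):
        return None                         # impossible, e.g. 'aaa' with d=2
    return ''.join(out)
```

On the question's input, `rearrange('abb', 2)` gives `'bab'`. The run time is O(n log k) for k distinct characters.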
36,806,340
Python3 Rename files in a directory importing the new names from a txt file
<p>I have a directory containing multiple files. The names of the files follow this pattern: 4digits.1.4digits.[barcode]. The barcode identifies each file and is composed of 7 letters. I have a txt file where one column holds that barcode and the other column holds the real name of the file. What I would like to do is write a Python script that automatically renames each file, according to its barcode, to the new name written in the txt file.</p> <p>Is there anybody that could help me?</p> <p>Thanks a lot!</p>
howto
2016-04-23T03:33:40Z
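For the renaming question above, one possible shape for the script (a sketch, not from the original post; the demo directory, the sample filename, and the mapping format — one `barcode newname` pair per line — are illustrative assumptions):

```python
import os
import tempfile

def rename_by_barcode(directory, mapping_file):
    # Build {barcode: real_name} from the two-column text file.
    with open(mapping_file) as fh:
        new_names = dict(line.split() for line in fh if line.strip())
    renamed = []
    for fname in os.listdir(directory):
        barcode = fname.rsplit('.', 1)[-1]   # pattern 4digits.1.4digits.BARCODE
        if barcode in new_names:
            os.rename(os.path.join(directory, fname),
                      os.path.join(directory, new_names[barcode]))
            renamed.append((fname, new_names[barcode]))
    return renamed

# Throwaway demo directory with one matching file and the mapping file.
d = tempfile.mkdtemp()
open(os.path.join(d, '1234.1.5678.ABCDEFG'), 'w').close()
with open(os.path.join(d, 'barcodes.txt'), 'w') as fh:
    fh.write('ABCDEFG real_name.txt\n')
done = rename_by_barcode(d, os.path.join(d, 'barcodes.txt'))
```

The mapping file can live inside the directory without harm here, since its `.txt` suffix matches no barcode; if the new names may contain spaces, split on the tab character (`line.rstrip().split('\t')`) instead of whitespace.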
36,835,793
Pandas - group by consecutive ranges
<p>I have a dataframe with the following structure - Start, End and Height.</p> <p>Some properties of the dataframe:</p> <ul> <li>A row in the dataframe always starts from where the previous row ended i.e. if the end for row n is 100 then the start of line n+1 is 101. </li> <li>The height of row n+1 is always different then the height in row n+1 (this is the reason the data is in different rows).</li> </ul> <p>I'd like to group the dataframe in a way that heights will be grouped in buckets of 5 longs i.e. the buckets are <em>0, 1-5, 6-10, 11-15 and >15</em>. </p> <p>See code example below where what I'm looking for is the implemetation of <em>group_by_bucket</em> function.</p> <p>I tried looking at other questions but couldn't get exact answer to what I was looking for.</p> <p>Thanks in advance!</p> <pre><code>&gt;&gt;&gt; d = pd.DataFrame([[1,3,5], [4,10,7], [11,17,6], [18,26, 12], [27,30, 15], [31,40,6], [41, 42, 7]], columns=['start','end', 'height']) &gt;&gt;&gt; d start end height 0 1 3 8 1 4 10 7 2 11 17 6 3 18 26 12 4 27 30 15 5 31 40 6 6 41 42 7 &gt;&gt;&gt; d_gb = group_by_bucket(d) &gt;&gt;&gt; d_gb start end height_grouped 0 1 17 6_10 1 18 30 11_15 2 31 42 6_10 </code></pre>
howto
2016-04-25T08:48:30Z
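For the bucket-grouping question above, one possible implementation of `group_by_bucket` (a sketch, not from the original post; note the printed sample implies the first height is 8 even though the constructor shows 5, so 8 is used here): `pd.cut` labels each height with its bucket, a change-of-bucket flag turned into a cumulative sum gives consecutive run ids, and an aggregation takes the span of each run:

```python
import pandas as pd

d = pd.DataFrame([[1, 3, 8], [4, 10, 7], [11, 17, 6], [18, 26, 12],
                  [27, 30, 15], [31, 40, 6], [41, 42, 7]],
                 columns=['start', 'end', 'height'])

def group_by_bucket(df):
    # Buckets: 0, 1-5, 6-10, 11-15, >15.
    bucket = pd.cut(df['height'], bins=[-1, 0, 5, 10, 15, float('inf')],
                    labels=['0', '1_5', '6_10', '11_15', 'over_15']).astype(str)
    # A new run id starts whenever the bucket differs from the previous row.
    run_id = (bucket != bucket.shift()).cumsum()
    out = df.groupby(run_id).agg({'start': 'min', 'end': 'max'})
    out['height_grouped'] = bucket.groupby(run_id).first()
    return out.reset_index(drop=True)

g = group_by_bucket(d)
```

The `min`/`max` aggregation is valid because, per the stated properties, each row starts where the previous one ended, so a run's span is simply its first start and last end.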
36,849,151
Compare rows then take rows out if necessary
<p>I have an example dataframe that looks like the one below. </p> <pre><code>df = pd.DataFrame({ 'Area' : ['1', '2', '3', '4','5', '6', '7', '8', '9', '10'], 'Distance' : ['19626207', '20174412', '20175112', '19396352', '19391124', '19851396', '19221462', '20195112', '21127633', '19989793'], }) Area Distance 0 1 19626207 1 2 20174412 2 3 20175112 3 4 19396352 # smaller, take out 4 5 19391124 # 5 6 19851396 # 6 7 19221462 # 7 8 20195112 8 9 21127633 9 10 19989793 # </code></pre> <p>The 'Distance' column needs to be in ascending order. </p> <p>But the order of the dataframe is fixed (the order of 'Area' is not changeable), </p> <p>which means that if a row is smaller than a previous row, it needs to be taken out. For example, here is the result I'd like to see.</p> <pre><code> Area Distance 1 19626207 2 20174412 3 20175112 8 20195112 9 21127633 </code></pre> <p>I know I can try something like <code>for i in range(0, len(index), 1)</code>...</p> <p>But is there an easier way to achieve the goal using pandas?</p> <p>Any hints please?</p>
howto
2016-04-25T19:11:43Z
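For the running-maximum question above, one loop-free way (a sketch, not from the original post): cast `Distance` to integers and keep only the rows that beat the running maximum of everything before them:

```python
import pandas as pd

df = pd.DataFrame({
    'Area': ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10'],
    'Distance': ['19626207', '20174412', '20175112', '19396352', '19391124',
                 '19851396', '19221462', '20195112', '21127633', '19989793'],
})

dist = df['Distance'].astype(int)
# A row survives only if it is strictly above every earlier distance.
keep = dist > dist.cummax().shift().fillna(-1)
result = df[keep]
```

`cummax().shift()` is the maximum of all *previous* rows (NaN for the first row, hence the `fillna(-1)` so the first row always survives).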
36,875,258
copying one file's contents to another in python
<p>I've been taught the best way to read a file in python is to do something like:</p> <pre><code>with open('file.txt', 'r') as f1: for line in f1: do_something() </code></pre> <p>But I have been thinking. If my goal is to copy the contents of one file completely to another, are there any dangers of doing this:</p> <pre><code>with open('file2.txt', 'w+') as output, open('file.txt', 'r') as input: output.write(input.read()) </code></pre> <p>Is it possible for this to behave in some way I don't expect?</p> <p>Along the same lines, how would I handle the problem if the file is a binary file, rather than a text file. In this case, there would be no newline characters, so <code>readline()</code> or <code>for line in file</code> wouldn't work (right?).</p> <p><strong>EDIT</strong> Yes, I know about <code>shutil</code>. There are many better ways to copy a file if that is exactly what I want to do. I want to know about the potential risks, if any, of this approach specifically, because I may need to do more advanced things than simply copying one file to another (such as copying several files into a single one).</p>
howto
2016-04-26T20:38:23Z
36,921,573
Given two numpy arrays of same size, how to apply a function to each pair of elements at identical position?
<p>I have two numpy arrays of same size and want to apply a function (here <code>binom_test</code>) to each pair of elements that are at the same position.</p> <p>The following code does what I want, but I guess there exists a more elegant solution.</p> <pre><code>import numpy as np from scipy.stats import binom_test h, w = 3, 4 x=np.random.random_integers(4,9,(h,w)) y=np.random.random_integers(4,9,(h,w)) result = np.ones((h,w)) for row in range(h): result[row,:] = np.array([binom_test(x[row,_], x[row,_]+y[row,_]) for _ in range(w)]) print(result) </code></pre>
howto
2016-04-28T17:26:55Z
36,923,865
Uploading files using Django Admin
<p>I want to be able to upload .PDF files using the Django Admin interface and for those files to be reflected in a page on my website. Is this at all possible? If so, how do I do that? Otherwise, what could be a workaround for an admin user to privately upload files that will later be displayed in the site? </p> <p>What I'm getting from the Django documentation on <a href="https://docs.djangoproject.com/en/1.9/topics/http/file-uploads/" rel="nofollow">File Uploads</a> is that the upload happens on the website itself, but I want to do this using the Admin interface so I know only an admin user can upload these files. Found <a href="http://stackoverflow.com/questions/5467691/uploading-images-using-django-admin">this</a> question but this dealt with images. Any sort of help would be highly appreciated.</p>
howto
2016-04-28T19:31:41Z
36,928,577
How can I get a list of package locations from a PIP requirements file?
<p>Is there a way to generate a list from PIP, to the actual resources in the file? For instance, if I had a requirements file with</p> <pre><code>Flask Flask-Login </code></pre> <p>I would like to get output something like:</p> <pre><code>Name: Flask Version: 0.10.1 Summary: A microframework based on Werkzeug, Jinja2 and good intentions Home-page: http://github.com/mitsuhiko/flask/ Name: Flask-Login Version: 0.3.2 Summary: User session management for Flask Home-page: https://github.com/maxcountryman/flask-login </code></pre> <p>I found that information from pip show, but would like it to run on all the requirements that I have in the requirements.txt file.</p>
howto
2016-04-29T02:31:02Z
36,935,617
Remove following duplicates in a tuple
<p>I have tuples of arbitrary size. This is an example:</p> <pre><code>ax = ('0','1','1','1','2','2','2','3') </code></pre> <p>For x axis labeling I want to convert this tuple to:</p> <pre><code>ax = ('0','1','','','2','','','3') </code></pre> <p>So duplicates should be erased while the tuple size should stay the same. Is there an easy way to do that?</p>
howto
2016-04-29T10:13:45Z
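For the tick-label question above, one compact option (a sketch, not from the original post): compare each label with its predecessor and blank out repeats, which keeps the tuple length unchanged:

```python
ax = ('0', '1', '1', '1', '2', '2', '2', '3')

# Keep a label only when it differs from the one before it.
cleaned = tuple(v if i == 0 or v != ax[i - 1] else ''
                for i, v in enumerate(ax))
```

This only blanks *consecutive* duplicates, so a value that reappears later (e.g. `('1', '2', '1')`) is kept, which is the right behaviour for axis labels. `itertools.groupby` gives the same effect if you prefer run-based processing.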
36,939,122
Average of key values in a list of dictionaries
<p>I have this list of dictionaries: </p> <pre><code>[{'Eva': [4, 8, 2]}, {'Ana': [57, 45, 57]}, {'Ada': [12]}] </code></pre> <p>I need to get the average of each key value, so that the output is:</p> <pre><code>[{'Eva': [5]}, {'Ana': [53]}, {'Ada': [12]}] </code></pre> <p>The average must be rounded up or down by adding 0.5 and taking only the integer part. For example, if the average is 4.3, adding 0.5 equals 4.8, so the output is 4. If the average is 4.6, adding 0.5 equals 5.1, so the output is 5.</p> <p>I know how to use <code>iteritems()</code> to iterate over a dictionary, but since this is a list I don't know how to reach every value.</p> <p>Thanks in advance.</p>
howto
2016-04-29T13:03:33Z
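For the averaging question above, a sketch of one way to do it (not from the original post): loop over the list, and inside each single-key dictionary apply the add-0.5-then-truncate rounding with `int()`:

```python
data = [{'Eva': [4, 8, 2]}, {'Ana': [57, 45, 57]}, {'Ada': [12]}]

# int() truncates, so adding 0.5 first gives the rounding rule described.
averaged = [{name: [int(sum(vals) / len(vals) + 0.5)]
             for name, vals in d.items()}
            for d in data]
```

The outer `for d in data` is all that is needed to reach each dictionary; `items()` (or `iteritems()` on Python 2) then yields the single name/scores pair. Note that on Python 2, `sum(vals) / len(vals)` is integer division, so use `float(sum(vals)) / len(vals)` there.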
36,949,277
Ordering a nested dictionary by the frequency of the nested value
<p>I have this <code>list</code> made from a csv which is massive. For every item in <code>list</code>, I have broken it into it's <code>id</code> and <code>details</code>. <code>id</code> is always between 0-3 characters max length and <code>details</code> is variable. I created an empty dictionary, D...(rest of code below):</p> <pre><code>D={} for v in list: id = v[0:3] details = v[3:] if id not in D: D[id] = {} if details not in D[id]: D[id][details] = 0 D[id][details] += 1 </code></pre> <p>aside: Can you help me understand what the two <code>if</code> statements are doing? <em>Very new to python and programming.</em> </p> <p>Anyway, it produces something like this:</p> <pre><code>{'KEY1_1': {'key2_1' : value2_1, 'key2_2' : value2_2, 'key2_3' : value2_3}, 'KEY1_2': {'key2_1' : value2_1, 'key2_2' : value2_2, 'key2_3' : value2_3}, and many more KEY1's with variable numbers of key2's </code></pre> <p>Each 'KEY1' is unique but each 'key2' isn't necessarily. The <code>value2_ s</code> are all different.</p> <p>Ok so, right now I found a way to sort by the first KEY</p> <pre><code>for k, v in sorted(D.items()): print k, ':', v </code></pre> <p>I have done enough research to know that dictionaries can't really be sorted but I don't care about sorting, I care about ordering or more specifically frequencies of occurrence. In my code <code>value2_x</code> is the number of times its corresponding <code>key2_x</code> occurs for that particular <code>KEY1_x</code>. I am starting to think I should have used better variable names. </p> <p>Question: How do I order the top-level/overall dictionary by the number in <code>value2_x</code> which is in the nested dictionary? I want to do some statistics to those numbers like...</p> <ol> <li>How many times does the most frequent KEY1_x:key2_x pair show up? </li> <li>What are the 10, 20, 30 most frequent KEY1_x:key2_x pairs? </li> </ol> <p>Can I only do that by each <code>KEY1</code> or can I do it overall? 
Bonus: If I could order it that way for presentation/sharing that would be very helpful because it is such a large data set. So much thanks in advance and I hope I've made my question and intent clear.</p>
howto
2016-04-30T00:08:41Z
36,950,503
How to find the average of previous sales at each time in python
<p>I have a csv file with four columns: date, wholesaler, product, and sales. I am looking for finding average of previous sales for each Product and Wholesaler combination at each date. It means what is the average previous sales of product 'A' at wholesaler 'B' at time 'C'. </p> <p>For instance we know sales of product 'A' at wholesaler 'B' at Jan, Apr, May, Aug that are 100, 200, 300, 400 respectively. Let assume we do not have any record before Jan. So the average of previous sale of product 'A' in wholesaler 'B' at Apr is equal to 100/1, and at May is equal to (200+100)/2 and at Aug is (300+200+100)/3. </p> <p>The following table shows my data:</p> <pre><code>date wholesaler product sales 12/31/2012 53929 UPE54 4 12/31/2012 13131 UPE55 1 2/23/2013 13131 UPE55 1156 4/24/2013 13131 UPE55 1 12/1/2013 83389 UPE54 9 12/17/2013 83389 UPE54 1 12/18/2013 52237 UPE54 9 12/19/2013 53929 UME24 1 12/31/2013 82204 UPE55 9 12/31/2013 11209 UME24 4 12/31/2013 52237 UPE54 1 </code></pre> <p>Now I am using this code:</p> <pre><code>df = pd.read_csv('Sample.csv',index_col='date') df2 = df.groupby(['wholesaler','product'])['sales'].mean() </code></pre> <p>That gives an average sales for each wholesaler-product while I am looking for average of previous sales at each date. </p> <pre><code>wholesaler product avg sales 11209 UME24 4.00 13131 UPE55 713.00 22423 UME24 1.00 24302 U4E16 121.00 </code></pre> <p>Thank you for your help!</p>
howto
2016-04-30T03:52:46Z
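For the previous-sales question above, one way this could be expressed directly in pandas (a sketch, not from the original post, shown on the product-A/wholesaler-B example from the question): sort by date, then for each wholesaler/product pair take an expanding mean and shift it one row so each date only sees strictly earlier sales:

```python
import pandas as pd

df = pd.DataFrame({
    'date': pd.to_datetime(['2013-01-01', '2013-04-01',
                            '2013-05-01', '2013-08-01']),
    'wholesaler': ['B'] * 4,
    'product': ['A'] * 4,
    'sales': [100, 200, 300, 400],
}).sort_values('date')

# expanding().mean() is the running average; shift() excludes the current row.
df['avg_prev_sales'] = (
    df.groupby(['wholesaler', 'product'])['sales']
      .transform(lambda s: s.expanding().mean().shift())
)
```

The first record of each pair has no history, so it comes out as NaN; the remaining rows reproduce the 100, 150, 200 sequence worked out in the question.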
36,957,908
Removing white space from txt with python
<p>I have a .txt file (scraped as pre-formatted text from a website) where the data looks like this:</p> <pre><code>B, NICKOLAS CT144531X D1026 JUDGE ANNIE WHITE JOHNSON ANDREWS VS BALL JA-15-0050 D0015 JUDGE EDWARD A ROBERTS </code></pre> <p>I'd like to remove all extra spaces (they're actually different number of spaces, not tabs) in between the columns. I'd also then like to replace it with some delimiter (tab or pipe since there's commas within the data), like so:</p> <pre><code>ANDREWS VS BALL|JA-15-0050|D0015|JUDGE EDWARD A ROBERTS </code></pre> <p>Looked around and found that the best options are using regex or shlex to split. Two similar scenarios:</p> <ul> <li><a href="http://stackoverflow.com/questions/3609596/python-regular-expression-must-strip-whitespace-except-between-quotes">Python Regular expression must strip whitespace except between quotes</a>,</li> <li><a href="http://stackoverflow.com/questions/13152585/remove-white-spaces-from-dict-python">Remove white spaces from dict : Python</a>.</li> </ul>
howto
2016-04-30T17:20:32Z
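For the column-splitting question above, one way it could work (a sketch, not from the original post; the two strings reproduce the sample rows with multi-space gaps): treat runs of two or more spaces as column boundaries with `re.split`, so single spaces inside names and commas in the data survive, then rejoin with a pipe:

```python
import re

lines = [
    "B, NICKOLAS              CT144531X            D1026    JUDGE ANNIE WHITE JOHNSON",
    "ANDREWS VS BALL          JA-15-0050           D0015    JUDGE EDWARD A ROBERTS",
]

# Two or more spaces = column boundary; single spaces stay inside a field.
cleaned = ['|'.join(re.split(r' {2,}', line.strip())) for line in lines]
```

Since the gaps are variable numbers of spaces rather than tabs, the `{2,}` quantifier is what keeps `JUDGE ANNIE WHITE JOHNSON` together while still splitting the wider runs between columns.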
36,967,883
Sorting data from a csv alphabetically, highest to lowest and average
<p>This is the next step in my <a href="http://stackoverflow.com/questions/36957360/replacing-specific-data-in-a-csv-file">currently unresolved question</a> in which I am attempting to sort the scores from 3 different teams. I have very limited knowledge of python because I am new to programming so my problem solving of this current project is quite difficult. </p> <p>To begin I will need the example data (shown below) which are split over two cells to be sorted alphabetically according to the names, I will have this for 3 different teams in 3 different files. I am also trying to sort it out from highest to lowest depending on the score, this has proven of much difficulty to me so far. </p> <pre><code>Jake,5 Jake,3 Jake,7 Jeff,6 Jeff,4 Fred,5 </code></pre> <p>The third and final way to sort I am trying to do is by average. For this I had attempted to make it so if the user had there name 2 or 3 times (as the program will store the last 3 scores for each user, this is a <a href="http://stackoverflow.com/questions/36957360/replacing-specific-data-in-a-csv-file">currently unresolved</a> problem) then it would add their scores then divide by how many of them it had there. 
Unfortunately this was very difficult for me and i struggled to be able to get any output, though I had an idea that this will print their average scores to a separate file then re-read the scores.</p> <p>The current layout I have so far is shown below:</p> <pre><code>admin_data = [] team_choice = input("Choose a team to sort") if team_choice == 'Team 1': path = 'team1scores.csv' elif team_choice == 'Team 2': path = 'team2scores.csv' elif team_choice == 'Team 3': path = 'team3scores.csv' else: print("--Error Defining File Path--") print("As an admin you have access to sorting the data") print("1 - Alpahbetical") print("2 - Highest to Lowest") print("3 - Average Score") admin_int = int(input("Choose either 1, 2 or 3?")) if sort_int == 1 and team_choice == 'Team 1': do things elif sort_int == 2 and team_choice == 'Team 1': do things elif sort_int == 3 and team_choice == 'Team 1': do things </code></pre> <p>This part of the program will be used for each file, but have had no luck producing any solutions for each of the different sorting ways I need. I will also appreciate if the answer for the <a href="http://stackoverflow.com/questions/36957360/replacing-specific-data-in-a-csv-file">first part</a> of my project is answered too. </p> <p><strong>EDIT</strong> (16:43): I have managed to complete the highest to lowest part of the program but is printing:</p> <pre><code>[['Fred', '9'], ['George', '7'], ['Jake', '5'], ['Jake', '4'], ['Derek', '4'], ['Jake', '2']] </code></pre> <p>So if this is the formatting I read the data as, how will I be able to read the file for duplicate names and add the scores if they are in arrays like this?</p>
howto
2016-05-01T14:42:49Z
36,971,201
map array of numbers to rank efficiently in Python
<p>Hi I'm trying to map an array of numbers to their ranks. So for example [2,5,3] would become [0,2,1]. </p> <p>I'm currently using np.where to lookup the rank in an array, but this is proving to take a very long time as I have to do this for a very large array (over 2 million datapoints). </p> <p>If anyone has any suggestions on how I could achieve this, I'd greatly appreciate it! </p> <p>[EDIT] This is what the code to change a specific row currently looks like: </p> <pre><code>def change_nodes(row): a = row new_a = node_map[node_map[:,1] == a][0][0] return new_a </code></pre> <p>[EDIT 2] Duplicated numbers should additionally have the same rank</p> <p>[EDIT 3] Additionally, unique numbers should only count once towards the ranking. So for example, the rankings for this list [2,3,3,4,5,7,7,7,7,8,1], would be:</p> <p>{1:0, 2:1, 3:2, 4:3, 5:4, 7:5, 8:6 }</p>
howto
2016-05-01T19:51:11Z
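For the ranking question above, a vectorized option (a sketch, not from the original post): `np.unique` with `return_inverse=True` sorts the distinct values and returns, for every element, its index into that sorted list — which is exactly a dense rank where duplicates share a rank and each unique value counts once:

```python
import numpy as np

a = np.array([2, 3, 3, 4, 5, 7, 7, 7, 7, 8, 1])

# `ranks[i]` is the position of a[i] among the sorted unique values.
uniques, ranks = np.unique(a, return_inverse=True)

# The value -> rank mapping from the question's edit, if needed as a dict.
rank_of = dict(zip(uniques.tolist(), range(len(uniques))))
```

This is a single O(n log n) pass over the whole array, replacing the per-element `np.where` lookups.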
36,971,758
Python handling newline and tab characters when writing to file
<p>I am writing some text (which includes <code>\n</code> and <code>\t</code> characters) taken from one source file onto a (text) file ; for example:</p> <p>source file (test.cpp):</p> <pre><code>/* * test.cpp * * 2013.02.30 * */ </code></pre> <p>is taken from the source file and stored in a string variable like so</p> <p><code>test_str = "/*\n test.cpp\n *\n *\n *\n\t2013.02.30\n *\n */\n"</code></p> <p>which when I write onto a file using</p> <pre><code> with open(test.cpp, 'a') as out: print(test_str, file=out) </code></pre> <p>is being written with the newline and tab characters converted to new lines and tab spaces (exactly like <code>test.cpp</code> had them) <em>whereas</em> I want them <strong>to remain <code>\n</code> and <code>\t</code></strong> exactly like the <code>test_str</code> variable holds them in the first place.</p> <p>Is there a way to achieve that in Python when writing to a file these 'special characters' without them being translated?</p>
howto
2016-05-01T20:46:07Z
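For the escaping question above, one way to keep the backslash sequences literal (a sketch, not from the original post): escape the control characters before writing, e.g. with the `unicode_escape` codec, so the file receives `\n` and `\t` as two-character sequences instead of actual newlines and tabs:

```python
test_str = "/*\n test.cpp\n *\n *\n *\n\t2013.02.30\n *\n */\n"

# Real newline/tab characters become the two-character sequences \n and \t.
escaped = test_str.encode('unicode_escape').decode('ascii')
```

Writing `escaped` with `print(escaped, file=out)` then stores the sequences verbatim; `repr(test_str)` gives a similar result but adds surrounding quotes. The underlying point is that the file writer does no translation at all — `test_str` genuinely contains control characters, so they must be escaped before writing if the literal sequences are wanted.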
36,974,140
Scrapy xpath get text of an element that starts with <
<p>I am trying to get text "&lt;1 hour" from this html snippet.</p> <pre><code>&lt;div class="details_wrapper"&gt; &lt;div class="detail"&gt; &lt;b&gt;Recommended length of visit:&lt;/b&gt; &lt;1 hour &lt;/div&gt; &lt;div class="detail"&gt; &lt;b&gt;Fee:&lt;/b&gt; No &lt;/div&gt; &lt;/div&gt; </code></pre> <p>This is the xpath expression that I am using:</p> <pre><code>visit_length = response.xpath( "//div[@class='details_wrapper']/" "div[@class='detail']/b[contains(text(), " "'Recommended length of visit:')]/parent::div/text()" ).extract() </code></pre> <p>But it is not able to get the text. I think this is due to the "&lt;" in the text that I need, it is being considered as a html tag. How can I scrape the text "&lt;1 hour" ?</p>
howto
2016-05-02T02:13:55Z
36,988,306
Pandas check for future condition by group
<p>I'm trying to flag each row by whether a condition will occur at a future date in the data. Whether this condition has occurred in the past is irrelevant. Moreover, I'm trying to perform this labeling by group.</p> <p>An intuitive way to think about this is whether someone will buy pants at a future date.</p> <pre><code>id date item 1 2000-01-01 'foo' 1 2000-01-02 'pants' 1 2000-01-03 'bar' 2 2000-01-02 'organ' 2 2000-02-01 'beef' 3 2000-01-01 'pants' 3 2000-01-10 'oranges' 3 2000-02-20 'pants' </code></pre> <p>Would in turn become:</p> <pre><code>id date item will_buy_pants 1 2000-01-01 'foo' 1 1 2000-01-02 'pants' 0 1 2000-01-03 'bar' 0 2 2000-01-02 'organ' 0 2 2000-02-01 'beef' 0 3 2000-01-01 'pants' 1 3 2000-01-10 'oranges' 1 3 2000-02-20 'pants' 0 </code></pre> <p>Edit: This is not a prediction problem. Whether someone will buy pants is already expressed in the data. I just want a flag on each row.</p>
howto
2016-05-02T17:37:53Z
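For the future-condition question above, one vectorized way to produce the flag (a sketch, not from the original post): within each `id`, count `'pants'` rows from the bottom up with a reversed cumulative sum, then shift by one so a row only sees strictly later purchases:

```python
import pandas as pd

df = pd.DataFrame({
    'id':   [1, 1, 1, 2, 2, 3, 3, 3],
    'date': pd.to_datetime(['2000-01-01', '2000-01-02', '2000-01-03',
                            '2000-01-02', '2000-02-01',
                            '2000-01-01', '2000-01-10', '2000-02-20']),
    'item': ['foo', 'pants', 'bar', 'organ', 'beef',
             'pants', 'oranges', 'pants'],
}).sort_values(['id', 'date'])

def future_pants_flag(items):
    # Reversed cumsum counts pants at-or-after each row; shift(-1) drops
    # the current row so only strictly later purchases count.
    later = (items == 'pants')[::-1].cumsum()[::-1].shift(-1).fillna(0)
    return (later > 0).astype(int)

df['will_buy_pants'] = df.groupby('id')['item'].transform(future_pants_flag)
```

This reproduces the expected table: a row buying pants is flagged 0 unless the same id buys pants again later (see id 3's first row).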
37,042,635
How to make a test function using pytest
<p>I don't know how to write a test function. I have this function: </p> <pre><code>import pytest def added(a, b, c): d = b + c e = a + c f = a + b return (d, e, f) added(4,6,7) </code></pre> <p>How can I make a test function for this function? </p> <p>Thanks for any help in advance</p>
howto
2016-05-05T04:54:00Z
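For the pytest question above, a sketch of what the test file could look like (not from the original post; `test_added.py` is an illustrative filename). pytest collects any function whose name starts with `test_` from files named `test_*.py`, and plain `assert` statements are all that is needed — the module under test does not have to `import pytest` at all:

```python
# test_added.py
def added(a, b, c):
    d = b + c
    e = a + c
    f = a + b
    return (d, e, f)

def test_added():
    # added(4, 6, 7) -> (6+7, 4+7, 4+6)
    assert added(4, 6, 7) == (13, 11, 10)

def test_added_zeros():
    assert added(0, 0, 0) == (0, 0, 0)
```

Run it with `pytest test_added.py` (or just `pytest`, which discovers the file by name). In practice `added` would live in its own module and the test file would import it rather than redefine it.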
37,048,689
Abaqus: script to select elements on a surface
<p>I am trying to write an Abaqus/Python script that will select all the elements that "belong" to a certain face. I.e. taking all the elements that have a connection to one face of a meshed cube (I will calculate the total force acting on that face for force-displacement or stress-strain curves later).</p> <p>If I do it using the GUI I get:</p> <pre><code>mdb.models['Model-1'].rootAssembly.Set(elements= mdb.models['Model-1'].rootAssembly.instances['Part-1-1'].elements.getSequenceFromMask( mask=('[#0:5 #fff80000 #ff #f #ffe00000 #f000000f #3f', ' #0:6 #fffe #c0003f00 #3 #3fff8 #ffc00 ]', ), ), name='Set-1') </code></pre> <p>But, <code>getSequenceFromMask</code> does not work in a general case. I tried using <code>findAt</code> with no luck.</p> <p>Is there a way to do that? </p>
howto
2016-05-05T10:52:19Z
37,079,175
How to remove a column from a structured numpy array *without copying it*?
<p>Given a structured numpy array, I want to remove certain columns by name without copying the array. I know I can do this:</p> <pre><code>names = list(a.dtype.names) if name_to_remove in names: names.remove(name_to_remove) a = a[names] </code></pre> <p>But this creates a temporary copy of the array which I want to avoid because the array I am dealing with might be very large.</p> <p>Is there a good way to do this?</p>
howto
2016-05-06T18:30:59Z
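For the structured-array question above, one copy-free possibility (a sketch, not from the original post): build a new dtype that keeps the remaining fields at their original byte offsets with the original itemsize, then take a view — the result shares the same buffer, and the dropped field's bytes simply become invisible padding:

```python
import numpy as np

def view_without_field(a, name_to_remove):
    """Return a view of `a` without one field -- no data is copied."""
    dt = a.dtype
    names = [n for n in dt.names if n != name_to_remove]
    new_dt = np.dtype({'names': names,
                       'formats': [dt.fields[n][0] for n in names],
                       'offsets': [dt.fields[n][1] for n in names],
                       'itemsize': dt.itemsize})   # same record size: a view is legal
    return a.view(new_dt)

a = np.array([(1, 2.0, 3), (4, 5.0, 6)],
             dtype=[('x', 'i4'), ('y', 'f8'), ('z', 'i4')])
b = view_without_field(a, 'y')
```

Because nothing is copied, no memory is freed either (each record keeps its original size), and writes through the view are visible in the original array — which is exactly the trade-off of avoiding the temporary copy.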
37,083,117
How to Chage selection field automaticaly in odoo
<p>Hi I'm working on Odoo 8 and i am a beginner, I have a view that contains a set of combo-box type fields and a selection field. I want to make a test on the combo-box fields and if there are all checked then the selection field value should change. Here is what i have so far: </p> <pre><code>def get_etat_dossier(self,cr,uid,ids,args,fields,context=None): res = {} for rec in self.browse(cr,uid,ids): if rec.casier_judiciare==True: # test field if = true res[rec.id]= 02 # field etat_dos type selection = Dossier Complet else: res[rec.id] = 01 return res _columns= { 'casier_judiciare' : fields.boolean('Casier Judiciaire'), # field to test 'reference_pro' : fields.boolean('Réferences Professionnelles'), 'certificat_qual' : fields.boolean('Certificat de qualification'), 'extrait_role' : fields.boolean('Extrait de Role'), 'statut_entre' : fields.selection([('eurl','EURL'),('sarl','SARL')],'Statut Entreprise'), 'etat_dos': fields.selection([('complet','Dossier Complet'),('manquant','Dossier Manquant')],'Etat De Dossier'), # field ho change after test } </code></pre> <p><a href="http://i.stack.imgur.com/gAhdt.jpg" rel="nofollow">enter image description here</a></p> <p>Here is the code for my view</p> <pre><code>&lt;group col='4' name="doss_grp" string="Dossier de Soumission" colspan="4" &gt; &lt;field name="casier_judiciare"/&gt; &lt;field name="certificat_qual"/&gt; &lt;field name="extrait_role"/&gt; &lt;field name="reference_pro"/&gt; &lt;field name="statut_entre" style="width:20%%"/&gt; &lt;field name="etat_dos"/&gt; &lt;/group&gt; </code></pre>
howto
2016-05-07T00:12:47Z
37,084,812
How to remove decimal points in pandas
<p>I have a pandas data frame, df, which looks like this:</p> <pre><code>Cut-off &lt;=35 &gt;35 Calcium 0.0 1.0 Copper 1.0 0.0 Helium 0.0 8.0 Hydrogen 0.0 1.0 </code></pre> <p>How can I remove the decimal point so that the data frame looks like this:</p> <pre><code>Cut-off &lt;= 35 &gt; 35 Calcium 0 1 Copper 1 0 Helium 0 8 Hydrogen 0 1 </code></pre> <p>I have tried <code>df.round(0)</code> without success.</p>
howto
2016-05-07T05:21:51Z
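For the decimal-point question above: since every value is a whole number stored as a float, one possibility (a sketch, not from the original post; the frame construction is illustrative) is simply casting the frame to integers — `df.round(0)` keeps the float dtype, which is why the `.0` survived:

```python
import pandas as pd

df = pd.DataFrame({'<=35': [0.0, 1.0, 0.0, 0.0],
                   '>35': [1.0, 0.0, 8.0, 1.0]},
                  index=['Calcium', 'Copper', 'Helium', 'Hydrogen'])
df.index.name = 'Cut-off'

df = df.astype(int)   # the floats hold whole numbers, so the cast is exact
```

If any cell could be NaN, a plain `astype(int)` raises; newer pandas versions offer the nullable `'Int64'` dtype for that case.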
37,088,428
Python Matplotlib: plotting feet and inches
<p>I want to make a plot in pylab which displays feet on the y axis, and subdivides feet into inches rather than fractions of a foot. This is not an issue in the metric system because unit subdivisions align with the decimal system, but it does make plots difficult to read when using Imperial units.</p> <p>Is this possible?</p> <p>What I have now:</p> <pre><code>40 ft | 39.90 | 39.80 | 39.70 | 39.60 | 39.50 |------------&gt; </code></pre> <p>What I want:</p> <pre><code>40 ft | 39 11in | 39 10in | 39 9in | 39 8in | 39 7in | 39 6in |------------&gt; </code></pre>
howto
2016-05-07T12:17:16Z
37,091,273
Changing the color of a TabbedPanelHeader in Kivy
<p>I tried many different ways, but nothing solve this. When I change the color of a button, for (0,0,1,1) I have blue. If I use the same list for TabbedPannel, I have dark blue, and for (0, 0, 1, 0) I have white. Is like I have a black background and I always have a mix of it and any other color, but I'm not able to get the specific color. This happens on Spinner too, but not with Labels or Buttons. What should I do? I tried use default_tab_cls, but, as I could imagine it just changes the default tab. </p> <p>Edition after first answer:</p> <p>This is the part I having problem. I call a function that returns my TabbedPanelHeader. Everything is ok with this.font_padrao is a custom font and this is working well. This example returns me white background and blue color font. If I change background_color to (0,0,1,1) the blue is different from the blue I have when I do the same thing ins Button for example. There, the blue is "real blue", not a "dark blue" or something like this.</p> <p><pre></p> <code>return TabbedPanelHeader(text=nome, background_color = (0, 0, 1, 0), font_name=fonte_padrao, color = (0,0,1,1)) </code></pre> <p></p>
howto
2016-05-07T16:56:48Z
37,106,934
How to create a random multidimensional array from existing variables
<p>I'm trying to use a randomly constructed two dimensional array as a map for a text based game I'm building. If I define the world space as such:</p> <pre><code>class WorldSpace(object): # Map information: def __init__(self): self.inner = ['g'] * 60 # Creates 60 'grass' tiles. self.outer = [[].extend(self.inner) for x in self.inner] # 'inner' &amp; 'outer' variables define the world space as a 60 x 60 2-D array. self.userpos = self.outer[0][0] # Tracks player tile in world space. # The variables below are modifiers for the world space. self.town = 't' self.enemy = 'e' </code></pre> <p>I assume I would use a for loop to access the array data of self.outer -- using maybe the random module to determine what tiles will be replaced -- but how would I go about randomly replacing that data with specific modifier variables, and restricting how much data is being replaced? For example, if I wanted to replace about 25 of the original grass, or 'g', tiles with enemy, or 'e', tiles, how could I do that? Same with maybe 5 randomly placed towns?</p> <p>Thanks for any replies.</p>
howto
2016-05-09T02:17:59Z
37,113,173
Compare 2 excel files using Python
<p>I have two <code>xlsx</code> files as follows:</p> <pre><code>value1 value2 value3 0.456 3.456 0.4325436 6.24654 0.235435 6.376546 4.26545 4.264543 7.2564523 </code></pre> <p>and </p> <pre><code>value1 value2 value3 0.456 3.456 0.4325436 6.24654 0.23546 6.376546 4.26545 4.264543 7.2564523 </code></pre> <p>I need to compare all cells, and if a cell from <code>file1 !=</code> a cell from <code>file2</code> <code>print</code> that.</p> <pre><code>import xlrd rb = xlrd.open_workbook('file1.xlsx') rb1 = xlrd.open_workbook('file2.xlsx') sheet = rb.sheet_by_index(0) for rownum in range(sheet.nrows): row = sheet.row_values(rownum) for c_el in row: print c_el </code></pre> <p>How can I add the comparison cell of <code>file1</code> and <code>file2</code> ?</p>
howto
2016-05-09T10:14:43Z
37,116,967
Mongoengine filter query on list embedded field based on last index
<p>I'm using Mongoengine with Django.</p> <p>I have an embedded field in my model. that is a list field of embedded documents.</p> <pre><code>import mongoengine class OrderStatusLog(mongoengine.EmbeddedDocument): status_code = mongoengine.StringField() class Order(mongoengine.DynamicDocument): incr_id = mongoengine.SequenceField() status = mongoengine.ListField(mongoengine.EmbeddedDocumentField(OrderStatusLog)) </code></pre> <p>Now I want to filter the result on <code>Order</code> collection based on the last value in <code>status</code> field.</p> <p>e.g. <code>Order.objects.filter(status__last__status_code="scode")</code></p> <p>I guess there is no such thing <code>__last</code>. I tried the approach mentioned in the docs <a href="http://docs.mongoengine.org/guide/querying.html#querying-lists" rel="nofollow">http://docs.mongoengine.org/guide/querying.html#querying-lists</a> but didn't work.</p> <p>I can solve this by looping over all the documents in the collection but thats not efficient, how can we write this query efficiently.</p>
howto
2016-05-09T13:24:51Z
37,119,071
Scipy rotate and zoom an image without changing its dimensions
<p>For my neural network I want to augment my training data by adding small random rotations and zooms to my images. The issue I am having is that scipy is changing the size of my images when it applies the rotations and zooms. I need to to just clip the edges if part of the image goes out of bounds. All of my images must be the same size.</p> <pre><code>def loadImageData(img, distort = False): c, fn = img img = scipy.ndimage.imread(fn, True) if distort: img = scipy.ndimage.zoom(img, 1 + 0.05 * rnd(), mode = 'constant') img = scipy.ndimage.rotate(img, 10 * rnd(), mode = 'constant') print(img.shape) img = img - np.min(img) img = img / np.max(img) img = np.reshape(img, (1, *img.shape)) y = np.zeros(ncats) y[c] = 1 return (img, y) </code></pre>
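For the rotation half of this, `scipy.ndimage.rotate` takes a `reshape` argument; `reshape=False` keeps the output at the input's shape and clips whatever leaves the frame. A small sketch (the zoom case has no equivalent flag, so a zoomed image would still need a manual crop or pad back to the original size):

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(32, 32)

# reshape=False: output stays 32x32, rotated-out pixels are clipped
rotated = ndimage.rotate(img, 10, reshape=False, mode='constant')
print(rotated.shape)     # (32, 32)
```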
howto
2016-05-09T15:01:02Z
37,119,314
How do I generate a sequence of integer numbers in a uniform distribution?
<p>I want to generate 4 random integer numbers in the range <code>[1,4]</code> in a uniform distribution. For example, each number appears 3 times for a sequence of 12 elements.</p>
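If "uniform" here means *exactly equal counts* (each of 1–4 appearing three times across twelve draws), one simple sketch is to build the balanced list and shuffle it:

```python
import random

values = [1, 2, 3, 4] * 3     # each value exactly 3 times -> 12 elements
random.shuffle(values)        # random order, counts stay perfectly even
print(values)
```

For independent uniform draws, where counts only average out over many samples, `random.randint(1, 4)` per element would be the alternative.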
howto
2016-05-09T15:11:57Z
37,122,210
django object get two fields into a list from a model
<p>This is my model:</p> <pre><code>class pdUser(models.Model): Name = models.CharField(max_length=200) Mobile = models.CharField(max_length=200) PagerDutyID = models.CharField(max_length=200) PagerDutyPolicyID = models.CharField(max_length=200) PagerDutyPolicy = models.CharField(max_length=200) </code></pre> <p>What I want to be able to do is group by PagerDutyPolicy &amp; PagerDutyPolicyID and return that as an object of its own with unique values only.</p> <p>So, for example, </p> <pre><code>Name: bob PagerDutyPolicyID: 232 PagerDutyPolicy: Team 1 Name: Bill PagerDutyPolicyID: 232 PagerDutyPolicy: Team 1 Name: Fred PagerDutyPolicyID: 145 PagerDutyPolicy: Team 2 </code></pre> <p>What I need is an object that has only got </p> <pre><code>PolicyID: 145 Policy: Team 2 PolicyID: 232 Policy: Team 1 </code></pre> <p>in it. How would I do this?</p> <p>Thanks</p>
howto
2016-05-09T17:47:18Z
37,125,495
How to get the N maximum values per row in a numpy ndarray?
<p>We know how to do it when N = 1</p> <pre><code>import numpy as np m = np.arange(15).reshape(3, 5) m[xrange(len(m)), m.argmax(axis=1)] # array([ 4, 9, 14]) </code></pre> <p>What is the best way to get the top N, when N > 1? (say, 5)</p>
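One way that generalises the `argmax` trick: `np.argpartition` finds the N largest entries per row in O(n) time (within the selected block the order is not guaranteed sorted, so sort afterwards if that matters):

```python
import numpy as np

m = np.arange(15).reshape(3, 5)
N = 2

# column indices of the N largest entries in each row (order unsorted)
idx = np.argpartition(m, -N, axis=1)[:, -N:]
top = m[np.arange(len(m))[:, None], idx]

print(np.sort(top, axis=1))   # [[ 3  4] [ 8  9] [13 14]]
```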
howto
2016-05-09T21:08:48Z
37,136,697
Partial symbolic derivative in Python
<p>I need to partially differentiate my equation and form a matrix out of the derivatives. My equation is: <a href="http://i.stack.imgur.com/P4ERm.png" rel="nofollow"><img src="http://i.stack.imgur.com/P4ERm.png" alt="enter image description here"></a> While these conditions must be met: <a href="http://i.stack.imgur.com/kpMU8.png" rel="nofollow"><img src="http://i.stack.imgur.com/kpMU8.png" alt="enter image description here"></a> For doing this I've used the sympy module and its diff() function. My code so far is:</p> <pre><code>from sympy import* import numpy as np init_printing() #delete if you dont have LaTeX installed logt_r, logt_a, T, T_a, a_0, a_1, a_2, logS, Taa_0, Taa_1, Taa_2 = symbols('logt_r, logt_a, T, T_a, a_0, a_1, a_2, logS, Taa_0, Taa_1, Taa_2') A = (logt_r - logt_a - (T - T_a) * (a_0 + a_1 * logS + a_2 * logS**2) )**2 parametri = [logt_a, a_0, Taa_0, a_1, Taa_1, a_2, Taa_2] M = expand(A) M = M.subs(T_a*a_0, Taa_0) M = M.subs(T_a*a_1, Taa_1) M = M.subs(T_a*a_2, Taa_2) K = zeros(len(parametri), len(parametri)) O = [] def odv(par): for j in range(len(par)): for i in range(len(par)): P = diff(M, par[i])/2 B = P.coeff(par[j]) K[i,j] = B return K odv(parametri) </code></pre> <p>My result: <a href="http://i.stack.imgur.com/avIFV.png" rel="nofollow"><img src="http://i.stack.imgur.com/avIFV.png" alt="enter image description here"></a></p> <p><strong>My problem</strong></p> <p>The problem that I'm having is in the partial derivatives of the products (T_a<em>a_0, T_a</em>a_1 and T_a*a_2), because with the diff() function you cannot differentiate with respect to a product (obviously); you get an error:</p> <pre><code>ValueError: Can't calculate 1-th derivative wrt T_a*a_0. </code></pre> <p>To solve this I substituted these products with coefficients, like:</p> <pre><code>M = M.subs(T_a*a_0, Taa_0) M = M.subs(T_a*a_1, Taa_1) M = M.subs(T_a*a_2, Taa_2) </code></pre> <p>But as you can see in the final result, this works only in some cases. I would like to know if there is a better way of doing this, where I wouldn't need to substitute the products and which would work in all cases.</p> <h2><strong>ADDITIONAL INFORMATION</strong></h2> <p>Let me rephrase my question. Is it possible to symbolically differentiate an equation with respect to a function, using Python or, for that matter, the sympy module?</p>
howto
2016-05-10T11:04:57Z
37,154,201
Get the count of each date entry from one of the rows in a CSV file
<p>I am working on getting values and creating a chart from a CSV file using Python. How do I get the number of entries for each date? For example, a sample date row:</p> <pre><code>4/14/2016 11:05:15 AM 4/14/2016 09:06:15 PM 6/14/2016 11:05:15 AM </code></pre> <p>It should give output such as: </p> <pre><code>4/14/2016 entry 2 times 6/14/2016 entry 1 time </code></pre>
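Assuming the date is always the first whitespace-separated token of each row, `collections.Counter` does the tallying; a sketch on the sample rows:

```python
from collections import Counter

rows = ["4/14/2016 11:05:15 AM",
        "4/14/2016 09:06:15 PM",
        "6/14/2016 11:05:15 AM"]

counts = Counter(row.split()[0] for row in rows)   # date = first token
for date, n in counts.items():
    print("%s entry %d time%s" % (date, n, "s" if n > 1 else ""))
```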
howto
2016-05-11T06:01:12Z
37,177,688
Subsetting 2D array based on condition in numpy python
<p>I have a numpy 2D array of size 3600 * 7200. I have another array of same shape which I want to use as a mask.</p> <p>The problem is that when I do something like this:</p> <pre><code>import numpy as np N = 10 arr_a = np.random.random((N,N)) arr_b = np.random.random((N,N)) arr_a[arr_b &gt; 0.0] </code></pre> <p>The resulting array is no longer 2D, it is 1D. How do I get a 2D array in return?</p>
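Boolean indexing has to flatten: each row could select a different number of elements, so no rectangular 2-D result exists in general. If the goal is a 2-D array of the same shape, fill the rejected positions instead — with `np.where` or a masked array. A sketch:

```python
import numpy as np

N = 10
arr_a = np.random.random((N, N))
arr_b = np.random.random((N, N))

flat = arr_a[arr_b > 0.5]                          # 1-D: ragged selection
same_shape = np.where(arr_b > 0.5, arr_a, np.nan)  # 2-D, NaN where rejected
masked = np.ma.masked_where(arr_b <= 0.5, arr_a)   # 2-D masked array
print(flat.ndim, same_shape.shape, masked.shape)
```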
howto
2016-05-12T05:04:41Z
37,212,307
HTML data from Beautiful Soup needs formatting
<p>I have a Test Report file from Nose in html format. I would like to extract some parts of the text out of it in Python. I will be sending this in an email in the message part.</p> <p>I have the following sample:</p> <pre><code>&lt;table&gt; &lt;tr&gt; &lt;th&gt;Class&lt;/th&gt; &lt;th class="failed"&gt;Fail&lt;/th&gt; &lt;th class="failed"&gt;Error&lt;/th&gt; &lt;th&gt;Skip&lt;/th&gt; &lt;th&gt;Success&lt;/th&gt; &lt;th&gt;Total&lt;/th&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Regression_TestCase&lt;/td&gt; &lt;td class="failed"&gt;1&lt;/td&gt; &lt;td class="failed"&gt;9&lt;/td&gt; &lt;td&gt;0&lt;/td&gt; &lt;td&gt;219&lt;/td&gt; &lt;td&gt;229&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt; &lt;td class="failed"&gt;1&lt;/td&gt; &lt;td class="failed"&gt;9&lt;/td&gt; &lt;td&gt;0&lt;/td&gt; &lt;td&gt;219&lt;/td&gt; &lt;td&gt;229&lt;/td&gt; &lt;/tr&gt; &lt;/table&gt; </code></pre> <p>If I open the file in the browser, the formatting looks like this for the text I want. This is the text I would like to extract from the html file.</p> <pre><code> Class Fail Error Skip Success Total Regression_TestCase 1 9 0 219 229 </code></pre> <p>Using BeautifulSoup4 in Python27 I have managed to extract the following:</p> <pre><code>[&lt;th&gt;Class&lt;/th&gt;, &lt;th class="failed"&gt;Fail&lt;/th&gt;, &lt;th class="failed"&gt;Error&lt;/th&gt;, &lt;th&gt;Skip&lt;/th&gt;, &lt;th&gt;Success&lt;/th&gt;, &lt;th&gt;Total&lt;/th&gt;] [&lt;td&gt;Regression_TestCase.RegressionProject_TestCase2.RegressionProject_TestCase2&lt;/td&gt;, &lt;td class="failed"&gt;1&lt;/td&gt;, &lt;td class="failed"&gt;9&lt;/td&gt;, &lt;td&gt;0&lt;/td&gt;, &lt;td&gt;219&lt;/td&gt;, &lt;td&gt;229&lt;/td&gt;, &lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;, &lt;td class="failed"&gt;1&lt;/td&gt;, &lt;td class="failed"&gt;9&lt;/td&gt;, &lt;td&gt;0&lt;/td&gt;, &lt;td&gt;219&lt;/td&gt;, &lt;td&gt;229&lt;/td&gt;] </code></pre> <p>My code is as follows:</p> <pre><code>def extract_pass_summary_from_selenium_report(): html_report = open(r"C:\test_runners\selenium_regression_test_5_1_1\ClearCore 501 - Regression Test\TestReport\SeleniumTestReport.html",'r').read() soup = BeautifulSoup(html_report, "html.parser") print soup.find_all('th') print soup.find_all('td') </code></pre> <p>How can I just extract the text and keep the formatting like this?</p> <pre><code> Class Fail Error Skip Success Total Regression_TestCase 1 9 0 219 229 </code></pre> <p>Thanks, Riaz</p>
howto
2016-05-13T14:13:47Z
37,246,418
How to avoid getting imaginary/complex number python
<p>I am using a Python code where one of the equations takes the square root of a negative value. I use <code>cmath.sqrt</code> to solve it. All the answers I get from that equation are shown as imaginary/complex numbers (e.g. x.xxxxx<strong>j</strong>). I don't want to get those imaginary/complex numbers, as the code that I use subsequently cannot read them. How can I avoid getting imaginary numbers? Or, the other way around, how can I convert those imaginary numbers into real ones — how can I remove the "j"? Thanks. </p>
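Which fix is right depends on what a negative argument means in the model, so this is only a sketch of the usual options: take the absolute value before the square root, or keep one real component (real part or magnitude) of the complex result:

```python
import cmath
import math

x = -9.0
z = cmath.sqrt(x)        # 3j -- the "x.xxxxxj" value from the question

magnitude = abs(z)       # 3.0, a plain float (drops the j)
real_part = z.real       # 0.0, just the real component

# if negatives are noise and only the size matters, avoid cmath entirely:
r = math.sqrt(abs(x))    # 3.0
print(magnitude, real_part, r)
```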
howto
2016-05-16T03:47:07Z
37,256,540
Applying sqrt function on a column
<p>I have following data frame</p> <pre><code>data = {'year': [2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012], 'team': ['Bears', 'Bears', 'Bears', 'Packers', 'Packers', 'Lions', 'Lions', 'Lions'], 'wins': [11, 8, 10, 15, 11, 6, 10, 4], 'losses': [5, 8, 6, 1, 5, 10, 6, 12]} football = pd.DataFrame(data, columns=['year', 'team', 'wins', 'losses']) football.set_index(['team', 'year'], inplace=True) </code></pre> <p>How I can apply <code>sqrt</code> function after I do sum to the columns?</p> <pre><code>football[['wins', 'losses']].sum(axis=1) </code></pre>
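NumPy ufuncs apply element-wise to a pandas Series, so the sum can be piped straight into `np.sqrt` (or, equivalently, `.pow(0.5)`); a sketch on a trimmed version of the frame:

```python
import numpy as np
import pandas as pd

data = {'year': [2010, 2011], 'team': ['Bears', 'Bears'],
        'wins': [11, 8], 'losses': [5, 8]}
football = pd.DataFrame(data, columns=['year', 'team', 'wins', 'losses'])
football.set_index(['team', 'year'], inplace=True)

root = np.sqrt(football[['wins', 'losses']].sum(axis=1))
print(root)      # 11+5=16 -> 4.0, and 8+8=16 -> 4.0
```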
howto
2016-05-16T14:37:56Z
37,258,152
More efficient way to make unicode escape codes
<p>I am using python to automatically generate <code>qsf</code> files for Qualtrics online surveys. The <code>qsf</code> file requires unicode characters to be escaped using the <code>\u+hex</code> convention: 'слово' = '\u0441\u043b\u043e\u0432\u043e'. Currently, I am achieving this with the following expression:</p> <pre><code>'слово'.encode('ascii','backslashreplace').decode('ascii') </code></pre> <p>The output is exactly what I need, but since this is a two-step process, I wondered if there is a more efficient way to get the same result.</p>
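There is a single-step codec for this: `unicode_escape`. One caveat worth flagging: unlike `backslashreplace`, it also escapes backslashes and ASCII control characters, so check the behaviour against the qsf format before relying on it.

```python
s = 'слово'
escaped = s.encode('unicode_escape').decode('ascii')
print(escaped)    # \u0441\u043b\u043e\u0432\u043e
```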
howto
2016-05-16T15:56:44Z
37,262,062
PYTHON: How do I create a list of every possible letter mapping using a dictionary that stores every possible letter mapping combination?
<p>I am working on a program that breaks one-to-one mapping ciphers where the current state is stored in a dictionary that contains the possible mappings for each letter. Each letter key contains a list of the letters that it could possibly be mapped to. In the end, there should only be one letter in each letter's list. For this problem, the dictionary would look like this with the respective (key : value) pairs:</p> <pre><code>'A' : ['A'] 'B' : ['B', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'V', 'W', 'Z'] 'C' : ['C'] 'D' : ['D'] 'E' : ['E'] 'F' : ['B', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'V', 'W', 'Z'] 'G' : ['G', 'W'] 'H' : ['B', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'V', 'W', 'Z'] 'I' : ['I'] 'J' : ['B', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'V', 'W', 'Z'] 'K' : ['B', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'V', 'W', 'Z'] 'L' : ['B', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'V', 'W', 'Z'] 'M' : ['B', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'V', 'W', 'Z'] 'N' : ['N'] 'O' : ['O'] 'P' : ['P'] 'Q' : ['Q'] 'R' : ['R'] 'S' : ['S'] 'T' : ['T'] 'U' : ['U'] 'V' : ['B', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'V', 'W', 'Z'] 'W' : ['B', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'V', 'W', 'Z'] 'X' : ['X'] 'Y' : ['Y'] 'Z' : ['B', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'V', 'W', 'Z'] </code></pre> <p>How would I create a list that contains every possible mapping situation as an element? Such a list would contain each possible dictionary where every letter key has exactly one letter value in its list. This would serve to find all possible mappings with this current state. An example element would be the dictionary:</p> <pre><code>'A' : ['A'] 'B' : ['B'] 'C' : ['C'] 'D' : ['D'] 'E' : ['E'] 'F' : ['F'] 'G' : ['G'] 'H' : ['H'] 'I' : ['I'] 'J' : ['J'] 'K' : ['K'] 'L' : ['L'] 'M' : ['M'] 'N' : ['N'] 'O' : ['O'] 'P' : ['P'] 'Q' : ['Q'] 'R' : ['R'] 'S' : ['S'] 'T' : ['T'] 'U' : ['U'] 'V' : ['V'] 'W' : ['W'] 'X' : ['X'] 'Y' : ['Y'] 'Z' : ['Z'] </code></pre>
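`itertools.product` over the per-letter candidate lists enumerates every combination; for a one-to-one cipher you would also filter out combinations that reuse a letter. Fair warning: this is exponential on the full 26-letter table above, so the sketch uses a trimmed-down mapping:

```python
from itertools import product

mapping = {'A': ['A'],
           'B': ['B', 'F'],
           'G': ['G', 'W'],
           'W': ['G', 'W']}      # small stand-in for the 26-letter table

keys = sorted(mapping)
candidates = [
    dict(zip(keys, combo))
    for combo in product(*(mapping[k] for k in keys))
    if len(set(combo)) == len(combo)   # one-to-one: no letter used twice
]
print(len(candidates))    # 4 valid assignments for this small table
```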
howto
2016-05-16T19:46:49Z
37,316,698
Python binary conversion to hex
<p>I'm trying to convert a binary I have in python (a gzipped protocol buffer object) to an hexadecimal string in a string escape fashion (eg. \xFA\x1C ..).</p> <p>I have tried both </p> <pre><code>repr(&lt;mygzipfileobj&gt;.getvalue()) </code></pre> <p>as well as </p> <pre><code>&lt;mygzipfileobj&gt;.getvalue().encode('string-escape') </code></pre> <p>In both cases I end up with a string which is not made of HEX chars only.</p> <pre><code>\x86\xe3$T]\x0fPE\x1c\xaa\x1c8d\xb7\x9e\x127\xcd\x1a.\x88v ... </code></pre> <p>How can I achieve a consistent hexadecimal conversion where every single byte is actually translated to a \xHH format ? (where H represents a valid hex char 0-9A-F)</p>
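`repr` and `string-escape` only escape the bytes that need it; to force `\xHH` for every byte, format each one explicitly. A sketch (Python 3 iteration shown — on Python 2, iterate the string and use `ord(c)`):

```python
data = b'\x86\xe3$T'      # sample bytes; '$' and 'T' are printable

hexed = ''.join('\\x%02x' % b for b in data)
print(hexed)              # \x86\xe3\x24\x54 -- every byte, even printable ones
```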
howto
2016-05-19T07:32:28Z
37,340,568
Interactive shell program wrapper in python
<p>I'm trying to run a shell program through python. I need to run a command, then while it's still running and waiting for input to continue, I need to take the output received by the program, and process that data as a string. Then I need to parse some data into that program, and simulate an enter pressing. What would be the best way to achieve this?</p>
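For a one-shot exchange, `subprocess.Popen` with pipes works: collect what the child printed, send a reply (a trailing `\n` simulates pressing Enter). Note that `communicate()` only does a single round-trip; for genuinely interleaved read-process-respond dialogue, a tool like `pexpect` is the usual suggestion. A sketch with a stand-in child process:

```python
import subprocess
import sys

# stand-in for the real shell program: prompt, wait for input, respond
child = 'print("ready"); name = input(); print("hello " + name)'

proc = subprocess.Popen([sys.executable, "-c", child],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)
out, _ = proc.communicate("world\n")   # "\n" acts as the Enter key
print(out)
```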
howto
2016-05-20T07:29:26Z
37,348,050
Getting file path from command line arguments in python
<p>I would like to read a file path from command line arguments, using argparse. Is there any optimal way to check if the path is relative (file is in current directory) or the complete path is given? (Other than checking the input and adding current directory to file name if the path does not exist.)</p>
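One sketch: let argparse normalise the value itself via `type=os.path.abspath` — relative names are resolved against the current directory and absolute paths pass through unchanged; `os.path.isabs` answers the relative-vs-absolute question directly if you need the distinction.

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('path', type=os.path.abspath)  # normalise while parsing

args = parser.parse_args(['somefile.txt'])         # simulated command line
print(args.path)                                   # now an absolute path
```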
howto
2016-05-20T13:31:28Z
37,365,033
How to print framed strings
<p>I want to print a frame all around strings. I want to use ASCII chars from 185 to 188 and from 200 to 206. I want something like this, but I don't like the last row because of the wrong alignment on the bottom-right side. Is it possible to make it better?</p> <pre><code>retString = "\n╔══&gt; Stanza n. %03d &lt;══╗" % nRoom retString += "\n╠═&gt; Num letti: %-3d ║" % nBeds retString += "\n╠═&gt; Fumatori ║" retString += "\n╠═&gt; Televisione ║" retString += "\n╠═&gt; Aria Condizionata ║" retString += "\n╚══════════════╝" return retString </code></pre> <p><a href="http://i.stack.imgur.com/UcGcY.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/UcGcY.jpg" alt="Output"></a></p>
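The ragged right edge comes from hand-counting widths. A sketch of the usual fix: compute one common width from the longest row and pad every line to it with `str.ljust` before adding the border characters:

```python
def frame(lines):
    """Box a list of strings; padding every row to one shared
    width keeps the right-hand border aligned."""
    width = max(len(s) for s in lines)
    top = '╔═' + '═' * width + '═╗'
    bottom = '╚═' + '═' * width + '═╝'
    body = ['║ ' + s.ljust(width) + ' ║' for s in lines]
    return '\n'.join([top] + body + [bottom])

boxed = frame(['Stanza n. 001', 'Num letti: 3', 'Fumatori', 'Televisione'])
print(boxed)
```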
howto
2016-05-21T15:50:19Z
37,374,947
Elegant way to split list on particular values
<p>I am trying to think of an elegant way to do the following task: </p> <p>I have a list of mixed types, and would like to 'break' the list on one of the types. For example, I might have </p> <pre><code>['a', 1, 2, 3, 'b', 4, 5, 6] </code></pre> <p>and I'd like to return something like </p> <pre><code>{'a': [1, 2, 3], 'b': [4, 5, 6]} </code></pre> <p>The motivation for doing this is the following: I have some html data that is split up as follows</p> <pre><code>&lt;div ...&gt; ... &lt;/div&gt; &lt;table&gt; ... &lt;/table&gt; &lt;table&gt; ... &lt;/table&gt; &lt;div ...&gt; ... &lt;/div&gt; &lt;table&gt; ... &lt;/table&gt; &lt;table&gt; ... &lt;/table&gt; ... &lt;table&gt; ... &lt;/table&gt; </code></pre> <p>Which I would like to organize into blocks delimited by the divs. If anyone can think of a nicer approach than what I proposed above that would be great too! Thanks.</p>
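A short sketch of the dict-building loop, assuming the delimiters are exactly the string-typed elements (for the HTML case, the same pattern works with "is this a div?" as the test instead of `isinstance`):

```python
data = ['a', 1, 2, 3, 'b', 4, 5, 6]

result = {}
current = None
for item in data:
    if isinstance(item, str):   # a string opens a new group
        current = item
        result[current] = []
    else:
        result[current].append(item)

print(result)    # {'a': [1, 2, 3], 'b': [4, 5, 6]}
```

`itertools.groupby` keyed on the delimiter test is a terser alternative when order matters more than dict lookup.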
howto
2016-05-22T13:23:39Z
37,397,296
Summing similar elements within a tuple-of-tuples
<p>Following on from <a href="http://stackoverflow.com/questions/37341822/summing-a-tuple-of-tuples-and-a-nested-dict/37341998#37341998">this</a> question, I now need to <strong>sum</strong> similar entries (tuples) within an overall tuple.</p> <p>So given a tuple-of-tuples such as:</p> <pre><code>T = (('a', 'b', 2), ('a', 'c', 4), ('b', 'c', 1), ('a', 'b', 8),) </code></pre> <p>For all tuples where the <em>first and second element are identical</em>, I want to sum the <em>third</em> element, otherwise, leave the tuple in place. So I will end up with the following tuple-of-tuples:</p> <pre><code>(('a', 'b', 10), ('a', 'c', 4), ('b', 'c', 1),) </code></pre> <p>The order of the tuples within the enclosing tuple (and the summing) doesn't matter.</p> <p>We are dealing with tuples so we can't take advantage of something like <code>dict.get()</code>. If we go the <code>defaultdict</code> route :</p> <pre><code>In [1218]: d = defaultdict(lambda: defaultdict(int)) In [1220]: for t in T: d[t[0]][t[1]] += t[2] ......: In [1225]: d Out[1225]: defaultdict(&lt;function __main__.&lt;lambda&gt;&gt;, {'a': defaultdict(int, {'b': 10, 'c': 4}), 'b': defaultdict(int, {'c': 1})}) </code></pre> <p>I'm not quite sure how to reconstruct that into a tuple-of-tuples. Any anyway, although the order of the three elements within each tuple will be consistent, I'm not comfortable with my indexing of the tuples. Can this be done without any conversion to other data types?</p>
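It can be done with a flat `Counter` keyed on the first two elements, which avoids the nested `defaultdict` and makes the rebuild into a tuple-of-tuples trivial; tuple unpacking in the loop also avoids the index-based access. A sketch:

```python
from collections import Counter

T = (('a', 'b', 2), ('a', 'c', 4), ('b', 'c', 1), ('a', 'b', 8))

totals = Counter()
for first, second, value in T:       # unpack instead of t[0], t[1], t[2]
    totals[(first, second)] += value

result = tuple((a, b, v) for (a, b), v in totals.items())
print(result)
```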
howto
2016-05-23T17:41:59Z
37,399,461
vectorized implementation for pseudo pivot table in python
<p>I have the following dataframe, including some vehicles and the components for said vehicles:</p> <pre><code>df &gt;&gt;&gt;&gt; Vehicle Component 0 Ford Air conditioner 1 Ford airbag 2 Ford engine with 150 H/P 3 Toyota airbag 4 Toyota 1-year concierge assistance 5 Toyota ABS breaks 6 Chrysler ABS breaks 7 Chrysler airbag 8 Chrysler air conditioner 9 Chrysler engine with 250 H/P </code></pre> <p>I want to create a second dataframe with the following format, i.e., a pseudo dataframe where I add a 1 to every existing vehicle-component combination, and a 0 otherwise.</p> <pre><code>second_df &gt;&gt;&gt;&gt; Vehicle Air conditioner airbag engine with 150 H/P engine with 250 H/P ABS breaks 1-year concierge assistance 0 Ford 1 1 1 0 0 0 1 Toyota 0 1 0 0 1 1 2 Chrysler 1 1 0 1 1 0 </code></pre> <p>I implemented this with the solution posted below, but it is pretty inefficient. Appreciate your help.</p>
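A vectorized sketch using `pd.crosstab`, which pivots by counting in one call; `.clip(upper=1)` guards against a pair appearing twice, on the assumption that 0/1 flags rather than counts are wanted:

```python
import pandas as pd

df = pd.DataFrame({
    'Vehicle':   ['Ford', 'Ford', 'Toyota', 'Toyota'],
    'Component': ['airbag', 'Air conditioner',
                  'airbag', 'ABS breaks']})

second_df = pd.crosstab(df['Vehicle'], df['Component']).clip(upper=1)
print(second_df)
```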
howto
2016-05-23T19:56:45Z
37,417,157
Changing the columns in DataFrame with respect to values in other columns
<p>I have a data Frame which looks like this,</p> <pre><code>Head CHR Start End Trans Num A 1 29554 30039 ENST473358 1 A 1 30564 30667 ENST473358 2 A 1 30976 31097 ENST473358 3 B 1 36091 35267 ENST417324 1 B 1 35491 34544 ENST417324 2 B 1 35184 35711 ENST417324 3 B 1 36083 35235 ENST461467 1 B 1 35491 120765 ENST461467 2 </code></pre> <p>And I need to change the Column Start and End with respect to column Trans and Num. Means, the column Trans has values which are repeating which is mentioned in column Num. And so on. Means I want to change Start as -End+10 and End as- Start from next row (which has same Trans) -10 and so on for all rows .So what I am aiming is to get an output which looks like follows,</p> <pre><code> Head CHR Start End Trans Num A 1 30564 30667 ENST473358 1 A 1 30976 31097 ENST473358 2 A 1 30267 NA ENST473358 3 B 1 35277 35481 ENST417324 1 B 1 34554 35174 ENST417324 2 B 1 35721 NA ENST417324 3 B 1 35245 35481 ENST461467 1 B 1 120775 NA ENST461467 2 </code></pre> <p>Any help is much appreciated I could do it without considering the Trans with the following script, but I won't get my desired output.</p> <pre><code>start = df['Start'].copy() df['Start'] = df.End + 10 df['End'] = ((start.shift(-1) - 10)) df.iloc[-1, df.columns.get_loc('Start')] = '' df.iloc[-1, df.columns.get_loc('End')] = '' print (df) </code></pre>
howto
2016-05-24T14:56:15Z
37,423,445
Python prettytable Sort by Multiple Columns
<p>I'm using PrettyTable to print data to the terminal in a nice table format. It's pretty easy to print it ordered by a single column.</p> <pre><code>from prettytable import PrettyTable table = PrettyTable(["Name", "Grade"]) table.add_row(["Joe", 90]) table.add_row(["Sally", 100]) print table.get_string(sortby="Grade", reversesort=True) &gt;&gt; Table with Sally on top, because her score is highest. </code></pre> <p>My trouble is I want to sort on two columns. In this surrogate case, I would want to print by grade, and then alphabetically if there was a tie. </p> <pre><code>table = PrettyTable(["Name", "Grade"]) table.add_row(["Joe", 90]) table.add_row(["Sally", 100]) table.add_row(["Bill", 90]) print table.get_string(sortby=("Grade","Name"), reversesort=True) &gt;&gt; Doesn't work </code></pre> <p>The docs say that sort_key will allow me to write a function to accomplish this, but I haven't seen an actual implementation to work off.</p>
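One dependency-free sketch: sort the rows yourself with a composite key before adding them (grade descending via negation, name ascending for ties), then call `get_string()` with no `sortby` so the table keeps insertion order:

```python
rows = [["Joe", 90], ["Sally", 100], ["Bill", 90]]

# primary key: grade descending (negated); tie-break: name ascending
rows.sort(key=lambda r: (-r[1], r[0]))
print(rows)    # [['Sally', 100], ['Bill', 90], ['Joe', 90]]

# then (sketch): for r in rows: table.add_row(r)
# and print table.get_string() with no sortby argument
```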
howto
2016-05-24T20:40:01Z
37,425,477
remove newline and whitespace parse XML with python Xpath
<p>Here is the xml file <a href="http://www.diveintopython3.net/examples/feed.xml" rel="nofollow">http://www.diveintopython3.net/examples/feed.xml</a></p> <p>My code is <a href="http://i.stack.imgur.com/22GNA.png" rel="nofollow"><img src="http://i.stack.imgur.com/22GNA.png" alt="enter image description here"></a></p> <p>My result is <a href="http://i.stack.imgur.com/x6Gz4.png" rel="nofollow"><img src="http://i.stack.imgur.com/x6Gz4.png" alt="enter image description here"></a></p> <p>My questions are</p> <ol> <li><p>how to remove the <code>\n</code> and the following whitespace in the text</p></li> <li><p>how to get the node whose text is "dive into mark" — what is the syntax for searching by text?</p></li> </ol>
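For question 1, the idiomatic whitespace cleanup is `' '.join(text.split())`, which collapses newlines, tabs and runs of spaces into single spaces. For question 2, XPath's `normalize-space()` does the same trick inside the query, so a text match can ignore layout whitespace (the lxml line is an untested sketch):

```python
text = "\n        dive into mark\n    "

clean = ' '.join(text.split())     # split() eats all whitespace runs
print(repr(clean))                 # 'dive into mark'

# assumed lxml usage for the text search (sketch):
#   tree.xpath('//*[normalize-space(text())="dive into mark"]')
```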
howto
2016-05-24T23:24:04Z
37,482,313
Comparing List and get indices in python
<p>I have a dataframe <code>A['name', 'frequency']</code> and a list B of 'name'. Both are quite long; B is the smaller one, which I get on a daily basis. I have to check whether each element of B (a 'name') is present in <code>A['name']</code>. If it is there, I have to update the frequency of that 'name' in the dataframe every time it appears in B, and if B has some new element I have to add that as a new row in DataFrame A with frequency 1. I have to do it in Python 2.7. Thank you. A is my mac_list, like this:</p> <pre><code>mac_list.iloc[0:6] Out[59]: mac_address frequency 0 20c9d0892feb 2 1 28e34789c4c2 1 2 3480b3d51d5f 1 3 4480ebb4e28c 1 4 4c60de5dad72 1 5 4ca56dab4550 1 </code></pre> <p>and B is my new_mac_list, like this: </p> <pre><code>['20c9d0892feb' '3480b3d51d5f' '20c9d0892feb' '249cji39fj4g'] </code></pre> <p>I want an output for mac_list like</p> <pre><code>mac_address frequency 0 20c9d0892feb 4 1 28e34789c4c2 1 2 3480b3d51d5f 2 3 4480ebb4e28c 1 4 4c60de5dad72 1 5 4ca56dab4550 1 6 249cji39fj4g 1 </code></pre> <p>I have tried this </p> <pre><code>b = mac_list['mac_address'].isin(new_mac_list) b=list(b) for i in range(len(b)): if b[i]==True: mac_list['frequency'].iloc[i]+=1 </code></pre> <p>to update the frequency, but the problem is that the frequency increases by one even if a mac appears more than once in new_mac_list. </p> <p>And I have used this to insert new elements:</p> <pre><code>c = new_mac_list.isin(mac_list['mac_address']) c=list(c) for i in range(len(c)): if c[i]==False: mac_list.append(new_mac_list[i],1) </code></pre> <p>But it is a very inefficient way; I guess it can be done by comparing only once.</p>
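A sketch of a vectorized version: count the whole new batch once with `value_counts()`, then `Series.add` with `fill_value=0` both bumps existing macs by their full count and appends unseen ones starting from 0:

```python
import pandas as pd

mac_list = pd.DataFrame({'mac_address': ['20c9d0892feb', '3480b3d51d5f'],
                         'frequency': [2, 1]})
new_mac_list = ['20c9d0892feb', '3480b3d51d5f',
                '20c9d0892feb', '249cji39fj4g']

# align on mac_address and add the per-batch counts in one shot
counts = (mac_list.set_index('mac_address')['frequency']
          .add(pd.Series(new_mac_list).value_counts(), fill_value=0)
          .astype(int))
mac_list = counts.rename_axis('mac_address').reset_index(name='frequency')
print(mac_list)
```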
howto
2016-05-27T11:27:55Z
37,492,239
python plot distribution across mean
<p>I have 500 simulation for each day in 2015. So, my data looks as:</p> <pre><code>from datetime import date, timedelta as td, datetime d1 = datetime.strptime('1/1/2015', "%m/%d/%Y") d2 = datetime.strptime('12/31/2015', "%m/%d/%Y") AllDays = [] while(d1&lt;=d2): AllDays.append(d1) d1 = d1 + td(days=1) </code></pre> <p>For each day I have 500 points representing temperature for that day.</p> <pre><code>TempSims.shape (500,365) </code></pre> <p>I want to have a 2D plot with x-axis as dates and y-axis with a line showing mean of simulation for each day in 2015 and the 500 sims spread across the mean to show how mean stacks up against the distribution.</p> <p>This is my first plot in python so I am having a hard time implementing it.</p> <p>Edit: My arrays are numpy arrays and date is datetime.</p> <p>Edit2: I am looking for plot as in this example: <a href="http://i.stack.imgur.com/1dBtB.png" rel="nofollow"><img src="http://i.stack.imgur.com/1dBtB.png" alt="enter image description here"></a></p>
howto
2016-05-27T21:04:55Z
37,497,559
Python Pandas Identify Duplicated rows with Additional Column
<p>I have the following <code>Dataframe</code>:</p> <pre><code>df Out[23]: PplNum RoomNum Value 0 1 0 265 1 1 12 170 2 2 0 297 3 2 12 85 4 2 0 41 5 2 12 144 </code></pre> <p>Generally the <code>PplNum</code> and <code>RoomNum</code> is generated like this, and it will always follow this format:</p> <pre><code>for ppl in [1,2,2]: for room in [0, 12]: print(ppl, room) 1 0 1 12 2 0 2 12 2 0 2 12 </code></pre> <p>But now what I would like to achieve is to mark those duplicates combinations of <code>PplNum</code> and <code>RoomNum</code> so that I can know which combinationss are the first occurrence, which are the second occurrence and so on... So the expected output Dataframe will be like this:</p> <pre><code> PplNum RoomNum Value C 0 1 0 265 1 1 1 12 170 1 2 2 0 297 1 3 2 12 85 1 4 2 0 41 2 5 2 12 144 2 </code></pre>
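`groupby(...).cumcount()` numbers each repeat of a key combination starting at zero, which is exactly the occurrence counter wanted; a sketch:

```python
import pandas as pd

df = pd.DataFrame({'PplNum':  [1, 1, 2, 2, 2, 2],
                   'RoomNum': [0, 12, 0, 12, 0, 12],
                   'Value':   [265, 170, 297, 85, 41, 144]})

# 0 for the first occurrence of each (PplNum, RoomNum) pair, 1 for the
# second, ...; +1 shifts it to the 1-based numbering in the question
df['C'] = df.groupby(['PplNum', 'RoomNum']).cumcount() + 1
print(df)
```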
howto
2016-05-28T09:53:07Z
37,501,075
How to get parameter arguments from a frozen scipy.stats distribution?
<h1>Frozen Distribution</h1> <p>In <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html#statistics-scipy-stats" rel="nofollow"><code>scipy.stats</code></a> you can create a <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html#freezing-a-distribution" rel="nofollow"><em>frozen</em> distribution</a> that allows the parameterization (shape, location &amp; scale) of the distribution to be permanently set for that instance.</p> <p>For example, you can create an gamma distribution (<a href="http://docs.scipy.org/doc/scipy-0.17.0/reference/generated/scipy.stats.gamma.html#scipy-stats-gamma" rel="nofollow"><code>scipy.stats.gamma</code></a>) with <code>a</code>,<code>loc</code> and <code>scale</code> parameters and <em>freeze</em> them so they do not have to be passed around every time that distribution is needed. </p> <pre><code>import scipy.stats as stats # Parameters for this particular gamma distribution a, loc, scale = 3.14, 5.0, 2.0 # Do something with the general distribution parameterized print 'gamma stats:', stats.gamma(a, loc=loc, scale=scale).stats() # Create frozen distribution rv = stats.gamma(a, loc=loc, scale=scale) # Do something with the specific, already parameterized, distribution print 'rv stats :', rv.stats() </code></pre> <hr> <pre><code>gamma stats: (array(11.280000000000001), array(12.56)) rv stats : (array(11.280000000000001), array(12.56)) </code></pre> <h2>Accessible <code>rv</code> parameters?</h2> <p>Since the parameters will most likely not be passed around as a result of this feature, is there a way to get those values back from only the frozen distribution, <code>rv</code>, later on?</p>
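The frozen object keeps its parameters on the instance: positional shape arguments land in `rv.args` and keyword arguments in `rv.kwds`. These attributes exist on current scipy versions but are thinly documented, so treat this as a sketch rather than a stable API guarantee:

```python
import scipy.stats as stats

rv = stats.gamma(3.14, loc=5.0, scale=2.0)

print(rv.args)    # (3.14,)  -- positional shape parameter(s)
print(rv.kwds)    # {'loc': 5.0, 'scale': 2.0}
```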
howto
2016-05-28T15:55:08Z
37,546,552
Make a variable from what's in a text file
<p>So I want to make a variable from the contents of a text file.</p> <p>E.g The text file contains the sentence "Ask not what your country can do for you ask what you can do for your country"</p> <p>Now I want the variable to be like Sentence = ("Ask not what your country can do for you ask what you can do for your country")</p> <p>But from the text file, if that makes any sense?</p>
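A sketch — `read()` returns the whole file as one string; the file-writing lines at the top just make the example self-contained, and the filename is made up:

```python
# create a sample file so the sketch runs on its own
with open('speech.txt', 'w') as f:
    f.write("Ask not what your country can do for you "
            "ask what you can do for your country\n")

with open('speech.txt') as f:
    sentence = f.read().strip()   # strip() drops the trailing newline

print(sentence)
```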
howto
2016-05-31T13:02:48Z
37,584,492
Regular expression substitution in Python
<p>I have a string</p> <p><code>line = "haha (as jfeoiwf) avsrv arv (as qwefo) afneoifew"</code></p> <p>From this I want to remove all instances of <code>"(as...)"</code> using some regular expression. I want the output to look like</p> <p><code>line = "haha avsrv arv afneoifew"</code></p> <p>I tried:</p> <pre><code>line = re.sub(r'\(+as .*\)','',line) </code></pre> <p>But this yields:</p> <p><code>line = "haha afneoifew"</code></p>
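`.*` is greedy — it runs to the *last* `)` on the line, swallowing everything between the two groups, which is why the middle words vanish. Making the match stop at the first `)` (with `[^)]*`, or the non-greedy `.*?`) is the usual fix; a sketch:

```python
import re

line = "haha (as jfeoiwf) avsrv arv (as qwefo) afneoifew"

# [^)]* cannot cross a ")": each "(as ...)" now matches separately;
# the optional leading space avoids doubled spaces after removal
line = re.sub(r' ?\(+as [^)]*\)', '', line)
print(line)    # haha avsrv arv afneoifew
```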
howto
2016-06-02T06:42:09Z
37,605,612
PyImport_ImportModule, possible to load module from memory?
<p>I embedded python in my C++ program.</p> <p>I use PyImport_ImportModule to load my module written in a .py file. But how can I load it from memory? Let's say my .py file is encrypted, so I need to first decrypt it and feed the code to python to execute. </p> <p>Moreover, it'd be nice if I could bypass/intercept or modify the import mechanism, so that doesn't load modules from the filesystem but my own memory blocks, how/can I do that?</p>
howto
2016-06-03T03:51:46Z
37,619,348
Summing 2nd list items in a list of lists of lists
<p>My data is a list of lists of lists of varying size:</p> <pre><code>data = [[[1, 3],[2, 5],[3, 7]],[[1,11],[2,15]],.....]]] </code></pre> <p>What I want to do is return a list of lists with the values of the 2nd element of each list of lists summed - so, 3+5+7 is a list, so is 11+15, etc:</p> <pre><code>newdata = [[15],[26],...] </code></pre> <p>Or even just a list of the sums would be fine as I can take it from there:</p> <pre><code>newdata2 = [15,26,...] </code></pre> <p>I've tried accessing the items in the list through different forms and structures of list comprehensions, but I can't get seem to get it to the format I want. </p>
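A nested comprehension handles the varying sizes — the inner generator sums the second element of each pair, the outer loop walks the sublists:

```python
data = [[[1, 3], [2, 5], [3, 7]], [[1, 11], [2, 15]]]

newdata2 = [sum(pair[1] for pair in sub) for sub in data]
print(newdata2)    # [15, 26]

newdata = [[total] for total in newdata2]
print(newdata)     # [[15], [26]]
```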
howto
2016-06-03T16:28:34Z
37,630,714
Creating a slice object in python
<p>If I have an array <code>a</code>, I understand how to slice it in various ways. Specifically, to slice from an arbitrary first index to the end of the array I would do <code>a[2:]</code>.</p> <p>But how would I create a slice object to achieve the same thing? The two ways to create slice objects that are <a href="https://docs.python.org/3/library/functions.html#slice" rel="nofollow">documented</a> are <code>slice(start, stop, step)</code> and <code>slice(stop)</code>. </p> <p>So if I pass a single argument like I would in <code>a[2:]</code> the <code>slice</code> object would interpret it as the stopping index rather than the starting index.</p> <p><strong>Question:</strong> How do I pass an index to the <code>slice</code> object with a starting index and get a slice object that slices all the way to the end? I don't know the total size of the list. </p>
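Passing `None` as the stop argument gives the open-ended slice; a sketch:

```python
a = list(range(10))

s = slice(2, None)   # None for "stop" means "to the end", like a[2:]
print(a[s])          # [2, 3, 4, 5, 6, 7, 8, 9]
assert a[s] == a[2:]
```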
howto
2016-06-04T13:20:04Z
37,682,284
Mask a 3d array with a 2d mask in numpy
<p>I have a 3-dimensional array that I want to mask using a 2-dimensional array that has the same dimensions as the two rightmost of the 3-dimensional array. Is there a way to do this without writing the following loop?</p> <pre><code>import numpy as np nx = 2 nt = 4 field3d = np.random.rand(nt, nx, nx) field2d = np.random.rand(nx, nx) field3d_mask = np.zeros(field3d.shape, dtype=bool) for t in range(nt): field3d_mask[t,:,:] = field2d &gt; 0.3 field3d = np.ma.array(field3d, mask=field3d_mask) print field2d print field3d </code></pre>
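NumPy broadcasting can replace the loop; one sketch using `np.broadcast_to`:

```python
import numpy as np

nx, nt = 2, 4
field3d = np.random.rand(nt, nx, nx)
field2d = np.random.rand(nx, nx)

# Stretch the 2-d boolean mask across the leading time axis; broadcast_to
# returns a view, so the repeated mask costs no extra memory.
mask = np.broadcast_to(field2d > 0.3, field3d.shape)
field3d_masked = np.ma.array(field3d, mask=mask)
print(field3d_masked.shape)  # (4, 2, 2)
```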
howto
2016-06-07T14:35:48Z
37,685,718
Finding specific links with Beautiful Soup
<p>I'm using Beautiful Soup for Python to parse a webpage in order to download data from certain files and aggregate them into one file. The webpages I'm parsing contain tons of different download links, and I'm having trouble getting the specific links that I want.</p> <p>The HTML is essentially set up like this:</p> <pre><code>&lt;li&gt; &lt;b&gt;data I dont care about: &lt;/b&gt; &lt;a href ="/id#____dontcare2010"&gt;2010&lt;/a&gt; &lt;a href = "/id#____dontcare2011"&gt;2011&lt;/a&gt; (and so on) &lt;/li&gt; &lt;li&gt; &lt;b&gt;data I DO care about: &lt;/b&gt; &lt;a href ="/id#___data2010"&gt;2010&lt;/a&gt; &lt;a href= "/id#____data2011"&gt;2011&lt;/a&gt; .... &lt;/li&gt; </code></pre> <p>(the id#____ is just an id number for the specific object that the webpage contains information on, not too important for the question but I figured I'd be as accurate as possible)</p> <p>What I want is for BeautifulSoup to find the list ("<code>&lt;li&gt;</code>" tags) that contains the string "Links I DO care about: " and then store every link in that list into a Python list. I've tried,</p> <pre><code>soup.findAll('li', text = 'Links I DO care about: ') </code></pre> <p>but that doesn't seem to work...</p> <p>I would imagine there's some way to do these steps with Beautiful Soup: 1. find the specific list that I want 2. find all the <code>&lt;a href="..."&gt;&lt;/a&gt;</code> tags in that list 3. store all those links in a Python list.</p> <p>I can't seem to find a way to do this though.</p> <p>Any advice?</p>
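One way to express those three steps (a sketch against a simplified version of the markup, with made-up hrefs; note the search string must match the text inside the `<b>` tag exactly, including the trailing space):

```python
from bs4 import BeautifulSoup

html = """
<li><b>data I dont care about: </b>
    <a href="/id1dontcare2010">2010</a></li>
<li><b>data I DO care about: </b>
    <a href="/id1data2010">2010</a>
    <a href="/id1data2011">2011</a></li>
"""

soup = BeautifulSoup(html, "html.parser")
# string= matches a tag by its own text (older Beautiful Soup spells it text=);
# searching the <b> rather than the <li> works because the label text lives there.
label = soup.find("b", string="data I DO care about: ")
links = [a["href"] for a in label.find_parent("li").find_all("a")]
print(links)  # ['/id1data2010', '/id1data2011']
```

The reason `soup.findAll('li', text=...)` finds nothing is that an `<li>` with child tags has no single `.string` of its own, so the text filter never matches it; the label is the string of the inner `<b>`.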
howto
2016-06-07T17:31:51Z
37,760,124
How to input a line word by word in Python?
<p>I have multiple files, each with a line with, say ~10M numbers each. I want to check each file and print a 0 for each file that has numbers repeated and 1 for each that doesn't.</p> <p>I am using a list for counting frequency. Because of the large amount of numbers per line I want to update the frequency after accepting each number and break as soon as I find a repeated number. While this is simple in C, I have no idea how to do this in Python.</p> <p>How do I input a line in a word-by-word manner without storing (or taking as input) the whole line?</p> <p>EDIT: I also need a way for doing this from live input rather than a file.</p>
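A generator that reads fixed-size chunks gives word-at-a-time input without ever holding the full line; a sketch (the helper names are made up):

```python
def words(stream, chunk_size=4096):
    """Yield whitespace-separated tokens without loading a whole line."""
    leftover = ""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:                      # end of input
            if leftover:
                yield leftover
            return
        parts = (leftover + chunk).split()
        if chunk[-1].isspace():            # no token cut at the boundary
            leftover = ""
        else:                              # last token may be incomplete
            leftover = parts.pop() if parts else ""
        yield from parts

def has_no_repeats(stream):
    """Return 1 if every number is distinct, else 0; stop at the first repeat."""
    seen = set()
    for w in words(stream):
        if w in seen:
            return 0
        seen.add(w)
    return 1
```

For a file, pass the open file object; for live input, pass `sys.stdin`, since both expose the same `.read(n)` interface.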
howto
2016-06-11T05:04:39Z
37,855,490
adding new key inside a new key and assigning value in python dictionary
<p>I have a dictionary:</p> <p><code>dic = {'Test': {'cities':5}}</code></p> <p>It is easy to add a key-value pair inside</p> <p><code>dic['Test']</code></p> <p>by simply writing</p> <p><code>dic['Test']['random'] = 1</code></p> <p>which gives output like this:</p> <p><code>{'Test' : {'cities':5, 'random':1}}</code></p> <p>But what if I want to add a nested key:value pair, i.e. from the above step I want output like:</p> <p><code>{'Test' : {'cities':5, 'random':1, 'class':{ 'section' : 5}}}</code></p> <p>This, which I thought might work, doesn't: <code>dic['Test']['class']['section'] = 5</code></p> <p>It gives a KeyError: 'class'.</p> <p>For my specific case I am assigning a data frame's row as the key in iteration, something like this: <code>dic[df.iloc[i]['column1']][df.iloc[i]['column2']] = df.iloc[i]['column3']</code></p> <p>where the value of column1 is not yet a key. How do I do it? I am using Python 2.7.</p>
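`setdefault` (or a `collections.defaultdict`) creates the missing inner dictionary on first use; a sketch, with the `defaultdict` form matching the dataframe loop (the row/column key names are placeholders):

```python
from collections import defaultdict

dic = {'Test': {'cities': 5, 'random': 1}}

# setdefault returns the existing inner dict, or inserts {} first if absent.
dic['Test'].setdefault('class', {})['section'] = 5
print(dic)  # {'Test': {'cities': 5, 'random': 1, 'class': {'section': 5}}}

# For the loop over dataframe rows, defaultdict builds the outer level too:
d = defaultdict(dict)
d['row_key']['col_key'] = 42   # 'row_key' did not exist before this line
```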
howto
2016-06-16T09:41:57Z
37,878,946
Indexing one array by another in numpy
<p>Suppose I have a matrix <strong>A</strong> with some arbitrary values:</p> <pre><code>array([[ 2, 4, 5, 3], [ 1, 6, 8, 9], [ 8, 7, 0, 2]]) </code></pre> <p>And a matrix <strong>B</strong> which contains indices of elements in A:</p> <pre><code>array([[0, 0, 1, 2], [0, 3, 2, 1], [3, 2, 1, 0]]) </code></pre> <p>How do I select values from <strong>A</strong> pointed by <strong>B</strong>, i.e.:</p> <pre><code>A[B] = [[2, 2, 4, 5], [1, 9, 8, 6], [2, 0, 7, 8]] </code></pre>
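Since each row of B holds column indices within the corresponding row of A, this is indexing along axis 1; a sketch of two equivalent forms (`np.take_along_axis` exists from NumPy 1.15 onward):

```python
import numpy as np

A = np.array([[2, 4, 5, 3],
              [1, 6, 8, 9],
              [8, 7, 0, 2]])
B = np.array([[0, 0, 1, 2],
              [0, 3, 2, 1],
              [3, 2, 1, 0]])

result = np.take_along_axis(A, B, axis=1)
print(result)

# Equivalent fancy indexing: a column of row numbers broadcasts against B.
rows = np.arange(A.shape[0])[:, None]
assert (A[rows, B] == result).all()
```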
howto
2016-06-17T10:15:22Z
37,934,969
Creating a Pandas dataframe from elements of a dictionary
<p>I'm trying to create a pandas dataframe from a dictionary. The dictionary is set up as</p> <pre><code>nvalues = {"y1": [1, 2, 3, 4], "y2": [5, 6, 7, 8], "y3": [a, b, c, d]} </code></pre> <p>I would like the dataframe to include only <code>"y1"</code> and "<code>y2"</code>. So far I can accomplish this using </p> <pre><code>df = pd.DataFrame.from_dict(nvalues) df.drop("y3", axis=1, inplace=True) </code></pre> <p>I would like to know if it is possible to accomplish this without having <code>df.drop()</code></p>
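Selecting the wanted keys before building the frame avoids the drop; a sketch (string placeholders stand in for `a, b, c, d`):

```python
import pandas as pd

nvalues = {"y1": [1, 2, 3, 4], "y2": [5, 6, 7, 8], "y3": ["a", "b", "c", "d"]}

# Build the frame from only the keys we want to keep.
keep = ["y1", "y2"]
df = pd.DataFrame({k: nvalues[k] for k in keep})
print(df.columns.tolist())  # ['y1', 'y2']
```

Passing `columns=keep` directly, as in `pd.DataFrame(nvalues, columns=keep)`, expresses the same selection; restricting the dict first just keeps the intent explicit.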
howto
2016-06-21T03:14:10Z
38,147,259
How to work with surrogate pairs in Python?
<p>This is a follow-up to <a href="http://stackoverflow.com/questions/38106422/converting-to-emoji">Converting to Emoji</a>. In that question, the OP had a <code>json.dumps()</code>-encoded file with an emoji represented as a surrogate pair - <code>\ud83d\ude4f</code>. S/he was having problems reading the file and translating the emoji correctly, and the correct <a href="http://stackoverflow.com/a/38145581/1426065">answer</a> was to <code>json.loads()</code> each line from the file, and the <code>json</code> module would handle the conversion from surrogate pair back to (I'm assuming UTF8-encoded) emoji.</p> <p>So here is my situation: say I have just a regular Python 3 unicode string with a surrogate pair in it:</p> <pre><code>emoji = "This is \ud83d\ude4f, an emoji." </code></pre> <p>How do I process this string to get a representation of the <a href="http://apps.timwhitlock.info/unicode/inspect?s=%F0%9F%99%8F">emoji</a> out of it? I'm looking to get something like this:</p> <pre><code>"This is 🙏, an emoji." # or "This is \U0001f64f, an emoji." </code></pre> <p>I've tried:</p> <pre><code>print(emoji) print(emoji.encode("utf-8")) # also tried "ascii", "utf-16", and "utf-16-le" json.loads(emoji) # and `.encode()` with various codecs </code></pre> <p>Generally I get an error similar to <code>UnicodeEncodeError: XXX codec can't encode character '\ud83d' in position 8: surrogates not allowed</code>.</p> <p>I'm running Python 3.5.1 on Linux, with <code>$LANG</code> set to <code>en_US.UTF-8</code>. I've run these samples both in the Python interpreter on the command line, and within IPython running in Sublime Text - there don't appear to be any differences.</p>
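One stdlib-only route is to round-trip through UTF-16 with the `surrogatepass` error handler, which lets the lone surrogates through the encoder so the decoder can pair them back into a single code point; a sketch:

```python
emoji = "This is \ud83d\ude4f, an emoji."

# surrogatepass permits the unpaired surrogates during encoding; decoding
# UTF-16 then fuses the high/low pair into the single code point U+1F64F.
fixed = emoji.encode("utf-16", "surrogatepass").decode("utf-16")
print(ascii(fixed))  # 'This is \U0001f64f, an emoji.'
```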
howto
2016-07-01T13:55:31Z