38,147,447
How to remove square bracket from pandas dataframe
<p>I came up with values in square brackets (more like a <code>list</code>) after applying <code>str.findall()</code> to a column of a pandas dataframe. How can I remove the square brackets?</p> <pre><code>print df id value 1 [63] 2 [65] 3 [64] 4 [53] 5 [13] 6 [34] </code></pre>
howto
2016-07-01T14:03:28Z
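A minimal sketch of one way to answer this (not from the original post): since `str.findall` leaves a one-element list in each cell, the `.str[0]` accessor unwraps it. This assumes every row's list has exactly one match.

```python
import pandas as pd

# each cell holds a one-element list, as str.findall() produces
df = pd.DataFrame({'id': [1, 2, 3], 'value': [[63], [65], [64]]})
# .str[0] indexes into each list, unwrapping the single element
df['value'] = df['value'].str[0]
```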
38,231,591
Splitting dictionary/list inside a Pandas Column into Separate Columns
<p>I have data saved in a postgreSQL database. I am querying this data using Python2.7 and turning it into a Pandas DataFrame. However, the last column of this dataframe has a dictionary (or list?) of values within it. The DataFrame looks like this:</p> <pre><code>[1] df Station ID Pollutants 8809 {"a": "46", "b": "3", "c": "12"} 8810 {"a": "36", "b": "5", "c": "8"} 8811 {"b": "2", "c": "7"} 8812 {"c": "11"} 8813 {"a": "82", "c": "15"} </code></pre> <p>I need to split this column into separate columns so that the DataFrame looks like this:</p> <pre><code>[2] df2 Station ID a b c 8809 46 3 12 8810 36 5 8 8811 NaN 2 7 8812 NaN NaN 11 8813 82 NaN 15 </code></pre> <p>The major issue I'm having is that the lists are not the same lengths. But all of the lists only contain up to the same 3 values: a, b, and c. And they always appear in the same order (a first, b second, c third). </p> <p>The following code USED to work and return exactly what I wanted (df2). </p> <pre><code>[3] df [4] objs = [df, pandas.DataFrame(df['Pollutant Levels'].tolist()).iloc[:, :3]] [5] df2 = pandas.concat(objs, axis=1).drop('Pollutant Levels', axis=1) [6] print(df2) </code></pre> <p>I was running this code just last week and it was working fine. But now my code is broken and I get this error from line [4]: </p> <pre><code>IndexError: out-of-bounds on slice (end) </code></pre> <p>I made no changes to the code but am now getting the error. I feel this is due to my method not being robust or proper. </p> <p>Any suggestions or guidance on how to split this column of lists into separate columns would be super appreciated!</p> <p>EDIT: I think the .tolist() and .apply methods are not working on my code because it is one unicode string, i.e.:</p> <pre><code>#My data format u{'a': '1', 'b': '2', 'c': '3'} #and not {u'a': '1', u'b': '2', u'c': '3'} </code></pre> <p>The data is importing from the postgreSQL database in this format. Any help or ideas with this issue? is there a way to convert the unicode? 
</p>
howto
2016-07-06T18:47:56Z
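A sketch of one robust route (assuming the column holds JSON strings, which would also explain the unicode-string symptom): parse each cell with `json.loads`, then expand the resulting dicts into columns. Missing keys become NaN automatically.

```python
import json
import pandas as pd

df = pd.DataFrame({
    'Station ID': [8809, 8810, 8811],
    'Pollutants': ['{"a": "46", "b": "3", "c": "12"}',
                   '{"a": "36", "b": "5", "c": "8"}',
                   '{"b": "2", "c": "7"}'],
})
# parse each JSON string to a dict, then build a frame from the dicts;
# rows lacking a key get NaN in that column
expanded = pd.DataFrame(df['Pollutants'].apply(json.loads).tolist())
df2 = pd.concat([df.drop(columns='Pollutants'), expanded], axis=1)
```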
38,251,245
Create a list of tuples with adjacent list elements if a condition is true
<p>I am trying to create a list of tuples where the tuple contents are the number <code>9</code> and the number before it in the list. </p> <p><strong>Input List:</strong></p> <pre><code>myList = [1, 8, 9, 2, 4, 9, 6, 7, 9, 8] </code></pre> <p><strong>Desired Output:</strong></p> <pre><code>sets = [(8, 9), (4, 9), (7, 9)] </code></pre> <p><strong>Code:</strong></p> <pre><code>sets = [list(zip(myList[i:i], myList[-1:])) for i in myList if i==9] </code></pre> <p><strong>Current Result:</strong></p> <pre><code>[[], [], []] </code></pre>
howto
2016-07-07T16:57:49Z
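One way to sketch this (not the asker's approach): zip the list against itself shifted by one, so each element is paired with its predecessor, and keep the pairs that end in 9.

```python
myList = [1, 8, 9, 2, 4, 9, 6, 7, 9, 8]
# zip(myList, myList[1:]) yields (previous, current) pairs
sets = [(prev, cur) for prev, cur in zip(myList, myList[1:]) if cur == 9]
```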
38,273,353
How to repeat individual characters in strings in Python
<p>I know that </p> <pre><code>"123abc" * 2 </code></pre> <p>evaluates as <code>"123abc123abc"</code>, but is there an easy way to repeat individual letters N times, e.g. convert <code>"123abc"</code> to <code>"112233aabbcc"</code> or <code>"111222333aaabbbccc"</code>?</p>
howto
2016-07-08T18:36:47Z
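A minimal sketch: repeat each character individually with a generator expression and join the pieces back together.

```python
s = "123abc"
n = 2
# multiply each character, not the whole string
repeated = ''.join(ch * n for ch in s)
```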
38,331,568
Return the column name(s) for a specific value in a pandas dataframe
<p>I have found this option in other languages such as R or SQL, but I am not quite sure how to go about this in Pandas.</p> <p>So I have a file with 1262 columns and 1 row and need the column headers to return for every time that a specific value appears. </p> <p>Say for example this test dataframe:</p> <pre><code>Date col1 col2 col3 col4 col5 col6 col7 01/01/2016 00:00 37.04 36.57 35.77 37.56 36.79 35.90 38.15 </code></pre> <p>And I need to locate the column name for e.g. where value = 38.15. What is the best way of doing so?</p> <p>Thanks</p>
howto
2016-07-12T14:21:56Z
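A sketch of one approach (on a smaller hypothetical frame): build a boolean mask with `df == value` and select the column labels where any row matches.

```python
import pandas as pd

df = pd.DataFrame([[37.04, 36.57, 38.15]], columns=['col1', 'col2', 'col3'])
# (df == 38.15) is a boolean frame; any(axis=0) collapses it per column
cols = df.columns[(df == 38.15).any(axis=0)].tolist()
```

With floats read from a file, an exact `==` can miss; `np.isclose` would be the safer comparison in that case.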
38,379,453
How to read only part of a list of strings in python
<p>I need to find a way to be able to read x bytes of data from a list containing strings. Each item in the list is ~36MB. I need to be able to run through each item in the list, but only grabbing about ~1KB of that item at a time.</p> <p>Essentially it looks like this:</p> <pre><code>for item in list: #grab part of item #do something with that part #Move onto next part, until you've gone through the whole item </code></pre> <p>My current code (which kind of works, but seems to be rather slow and inefficient) is such:</p> <pre><code>for character in bucket: print character packet = "".join(character) if(len(packet.encode("utf8")) &gt;= packetSizeBytes): print "Bytes: " + str(len(packet.encode("utf8"))) return packet </code></pre> <p>I'm wondering if there exists anything like <code>f.read(bufSize)</code>, but for strings.</p> <p>Not sure if it's relevant, but for more context this is what I'm doing:</p> <p>I'm reading data from a very large file (several GB) into much smaller (and more manageable chunks). I chunk the file using <code>f.read(chunkSize)</code>, and store those as <code>buckets</code> However, even those buckets are still too large for what I ultimately need to do with the data, so I want to grab only parts of the bucket at a time. </p> <p>Originally, I bypassed the whole bucket thing, and just chunked the file into chunks that were small enough for my purposes. However, this led to me having to chunk the file hundreds of thousands of times, which got kind of slow. My hope now is to be able to have buckets queued up so that while I'm doing something with one bucket, I can begin reading from others. If any of this sounds confusing, let me know and I'll try to clarify.</p> <p>Thanks</p>
howto
2016-07-14T16:25:48Z
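A minimal generator sketch of the `f.read(bufSize)`-for-strings idea: slice the string in fixed steps. This counts characters; for a byte-exact budget one would encode first and slice the bytes instead.

```python
def chunks(s, size):
    # yield successive size-character slices; the last one may be shorter
    for start in range(0, len(s), size):
        yield s[start:start + size]

parts = list(chunks('abcdefghij', 4))
```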
38,426,168
How to remove multiple columns that end with same text in Pandas?
<p>I'm trying to remove a group of columns from a dataset. All of the variables to remove end with the text "prefix".</p> <p>I did manage to "collect' them into a group using the following: <a href="http://i.stack.imgur.com/w8AZ5.jpg"><img src="http://i.stack.imgur.com/w8AZ5.jpg" alt="enter image description here"></a></p> <p>and then tried a series of ways to drop that group that resulted in a variety of errors. Can anyone please, propose a way to remove these columns?</p>
howto
2016-07-17T21:35:43Z
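One way to sketch this without the intermediate "group": filter on the column names directly with the string accessor.

```python
import pandas as pd

df = pd.DataFrame(columns=['a', 'b_prefix', 'c', 'd_prefix'])
# keep only the columns whose name does not end with the suffix
df = df.loc[:, ~df.columns.str.endswith('prefix')]
```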
38,457,059
Pandas changing cell values based on another cell
<p>I am currently formatting data from two different data sets. One of the dataset reflects an observation count of people in room on hour basis, the second one is a count of people based on wifi logs generated in 5 minutes interval.</p> <p>After merging these two dataframes into one, I run into the issue where each hour (as "10:00:00") has the data from the original set, but the other data (every 5min like "10:47:14") does not include this data.</p> <p>Here is how the merge dataframe looks:</p> <pre><code> room time con auth capacity % Count module size 0 B002 Mon Nov 02 10:32:06 23 23 90 NaN NaN NaN NaN` 1 B002 Mon Nov 02 10:37:10 25 25 90 NaN NaN NaN NaN` 12527 B002 Mon Nov 02 10:00:00 NaN NaN 90 50% 45.0 COMP30520 60` 12528 B002 Mon Nov 02 11:00:00 NaN NaN 90 0% 0.0 COMP30520 60` </code></pre> <p>Is there a way for me to go through the dataframe and find all the information regarding the "occupancy", "occupancyCount", "module" and "size" from 11:00:00 and write it to all the cells that are of the same day and where the hour is between 10:00:00 and 10:59:59?</p> <p>That would allow me to have all the information on each row and then allow me to gather the <code>min()</code>, <code>max()</code> and <code>median()</code> based on 'day' and 'hour'.</p> <p>To answer the comment for the original dataframes, here there are:<br> <strong>first dataframe:</strong></p> <pre><code> time room module size 0 Mon Nov 02 09:00:00 B002 COMP30190 29 1 Mon Nov 02 10:00:00 B002 COMP40660 53 </code></pre> <p><strong>second dataframe:</strong></p> <pre><code> room time con auth capacity % Count 0 B002 Mon Nov 02 20:32:06 0 0 NaN NaN NaN 1 B002 Mon Nov 02 20:37:10 0 0 NaN NaN NaN 2 B002 Mon Nov 02 20:42:12 0 0 NaN NaN NaN 12797 B008 Wed Nov 11 13:00:00 NaN NaN 40 25 10.0 12798 B008 Wed Nov 11 14:00:00 NaN NaN 40 50 20.0 12799 B008 Wed Nov 11 15:00:00 NaN NaN 40 25 10.0 </code></pre> <p>this is how these two dataframes were merged together:</p> <pre><code>DFinal = pd.merge(DF, d3, 
left_on=["room", "time"], right_on=["room", "time"], how="outer", left_index=False, right_index=False) </code></pre> <p>Any help with this would be greatly appreciated.</p> <p>Thanks a lot,</p> <p>-Romain</p>
howto
2016-07-19T11:19:54Z
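A simplified sketch of one approach (on hypothetical parsed timestamps, not the original merged frame): key every row by room plus the hour it falls in, then spread the hourly metadata across the 5-minute rows of that hour with a group-wise fill.

```python
import pandas as pd

df = pd.DataFrame({
    'room': ['B002', 'B002', 'B002'],
    'time': pd.to_datetime(['2015-11-02 10:32:06',
                            '2015-11-02 10:37:10',
                            '2015-11-02 10:00:00']),
    'module': [None, None, 'COMP30520'],  # only the hourly row carries it
})
# floor each timestamp to its hour so 10:32 and 10:37 share a key with 10:00
df['hour'] = df['time'].dt.floor('h')
# within each (room, hour) group, propagate the known value both ways
df['module'] = df.groupby(['room', 'hour'])['module'] \
                 .transform(lambda s: s.ffill().bfill())
```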
38,549,915
Merging data frame columns of strings into one single column in Pandas
<p>I have columns in a dataframe (imported from a CSV) containing text like this.</p> <pre><code>"New york", "Atlanta", "Mumbai" "Beijing", "Paris", "Budapest" "Brussels", "Oslo", "Singapore" </code></pre> <p>I want to collapse/merge all the columns into one single column, like this</p> <pre><code>New york Atlanta Beijing Paris Budapest Brussels Oslo Singapore </code></pre> <p>How to do it in pandas?</p>
howto
2016-07-24T07:57:26Z
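A minimal sketch: flatten the values row by row into a single Series, discarding the original column labels.

```python
import pandas as pd

df = pd.DataFrame([['New york', 'Atlanta', 'Mumbai'],
                   ['Beijing', 'Paris', 'Budapest']])
# ravel() walks the values row-major, giving one flat column
merged = pd.Series(df.values.ravel())
```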
38,704,545
How to binarize the values in a pandas DataFrame?
<p>I have the following DataFrame:</p> <pre><code>df = pd.DataFrame(['Male','Female', 'Female', 'Unknown', 'Male'], columns = ['Gender']) </code></pre> <p>I want to convert this to a DataFrame with columns 'Male','Female' and 'Unknown' the values 0 and 1 indicated the Gender. </p> <pre><code>Gender Male Female Male 1 0 Female 0 1 . . . . </code></pre> <p>To do this, I wrote a function and called the function using map.</p> <pre><code>def isValue(x , value): if(x == value): return 1 else: return 0 for value in df['Gender'].unique(): df[str(value)] = df['Gender'].map( lambda x: isValue(str(x) , str(value))) </code></pre> <p>Which works perfectly. But is there a better way to do this? Is there an inbuilt function in any of sklearn package that I can use? </p>
howto
2016-08-01T17:15:57Z
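A one-line sketch of the built-in route: `pd.get_dummies` one-hot encodes a column, creating one indicator column per category (scikit-learn's `LabelBinarizer` does the same job outside pandas).

```python
import pandas as pd

df = pd.DataFrame(['Male', 'Female', 'Female', 'Unknown', 'Male'],
                  columns=['Gender'])
# one column per unique value, truthy where the row matches
dummies = pd.get_dummies(df['Gender'])
```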
38,708,621
How to calculate percentage of sparsity for a numpy array/matrix?
<p>I have the following 10 by 5 numpy array/matrix, which has a number of <code>NaN</code> values:</p> <pre><code>array([[ 0., 0., 0., 0., 1.], [ 1., 1., 0., nan, nan], [ 0., nan, 1., nan, nan], [ 1., 1., 1., 1., 0.], [ 0., 0., 0., 1., 0.], [ 0., 0., 0., 0., nan], [ nan, nan, 1., 1., 1.], [ 0., 1., 0., 1., 0.], [ 1., 0., 1., 0., 0.], [ 0., 1., 0., 0., 0.]]) </code></pre> <p>How does one measure exactly how sparse this array is? Is there a simple function in numpy for measuring the percentage of missing values? </p>
howto
2016-08-01T21:44:34Z
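A minimal sketch (on a smaller array): a boolean NaN mask has mean equal to the fraction of missing entries, so one expression gives the percentage.

```python
import numpy as np

a = np.array([[0., 1., np.nan],
              [np.nan, 0., 1.]])
# isnan gives True/False per cell; the mean of a boolean array is the
# fraction of True values
pct_missing = np.isnan(a).mean() * 100
```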
38,831,808
Reading hex to double-precision float python
<p>I am trying to <code>unpack</code> a hex string to a double in Python. </p> <p>When I try to unpack the following: </p> <pre><code>unpack('d', "4081637ef7d0424a"); </code></pre> <p>I get the following error: </p> <pre><code>struct.error: unpack requires a string argument of length 8 </code></pre> <p>This doesn't make very much sense to me because a double is 8 bytes long, and</p> <p>2 character <strong>=</strong> 1 hex value <strong>=</strong> 1 byte </p> <p>So in essence, a double of 8 bytes long would be a 16 character hex string.</p> <p>Any pointers of unpacking this hex to a double would be super appreciated. </p>
howto
2016-08-08T14:25:41Z
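A sketch of the fix: `unpack` wants 8 raw bytes, not the 16-character hex text, so decode the hex first and pick an endianness explicitly (in Python 2, where this `struct.error` wording comes from, the decode step would be `hex_str.decode('hex')`).

```python
import struct

hex_str = "4081637ef7d0424a"
raw = bytes.fromhex(hex_str)         # 8 raw bytes, not 16 hex characters
value = struct.unpack('>d', raw)[0]  # '>' = big-endian; use '<' if the data is little-endian
```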
38,862,349
Regex: How to match words without consecutive vowels?
<p>I'm really new to regex and I've been able to find regex which can match this quite easily, but I am unsure how to only match words without it.</p> <p>I have a .txt file with words like</p> <pre><code>sheep fleece eggs meat potato </code></pre> <p>I want to make a regular expression that matches words in which vowels are not repeated consecutively, so it would return <code>eggs meat potato</code>.</p> <p>I'm not very experienced with regex and I've been unable to find anything about how to do this online, so it'd be awesome if someone with more experience could help me out. Thanks!</p> <p>I'm using python and have been testing my regex with <a href="http://regex101.com" rel="nofollow">https://regex101.com</a>.</p> <p>Thanks!</p> <p>EDIT: provided incorrect examples of results for the regular expression. Fixed.</p>
howto
2016-08-10T00:15:54Z
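Reading "vowels are not repeated consecutively" as "the same vowel twice in a row" (which matches the expected output, since <code>meat</code> keeps its <code>ea</code>), a backreference does it: match words containing a doubled vowel and keep the rest.

```python
import re

words = ['sheep', 'fleece', 'eggs', 'meat', 'potato']
# ([aeiou])\1 matches a vowel immediately followed by the same vowel
no_double = [w for w in words if not re.search(r'([aeiou])\1', w)]
```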
39,129,846
Sort list of mixed strings based on digits
<p>How do I sort this list via the numerical values? Is a regex required to remove the numbers or is there a more Pythonic way to do this?</p> <pre><code>to_sort ['12-foo', '1-bar', '2-bar', 'foo-11', 'bar-3', 'foo-4', 'foobar-5', '6-foo', '7-bar'] </code></pre> <p>Desired output is as follows:</p> <pre><code>1-bar 2-bar bar-3 foo-4 foobar-5 6-foo 7-bar foo-11 12-foo </code></pre>
howto
2016-08-24T17:43:34Z
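A sketch of the usual idiom: extract the first run of digits from each string with a regex and sort on its integer value, wherever the digits sit.

```python
import re

to_sort = ['12-foo', '1-bar', '2-bar', 'foo-11', 'bar-3',
           'foo-4', 'foobar-5', '6-foo', '7-bar']
# key pulls out the first digit run and compares it numerically
result = sorted(to_sort, key=lambda s: int(re.search(r'\d+', s).group()))
```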
39,159,475
pandas: how to do multiple groupby-apply operations
<p>I have more experience with R’s <code>data.table</code>, but am trying to learn <code>pandas</code>. In <code>data.table</code>, I can do something like this:</p> <pre><code>&gt; head(dt_m) event_id device_id longitude latitude time_ category 1: 1004583 -100015673884079572 NA NA 1970-01-01 06:34:52 1 free 2: 1004583 -100015673884079572 NA NA 1970-01-01 06:34:52 1 free 3: 1004583 -100015673884079572 NA NA 1970-01-01 06:34:52 1 free 4: 1004583 -100015673884079572 NA NA 1970-01-01 06:34:52 1 free 5: 1004583 -100015673884079572 NA NA 1970-01-01 06:34:52 1 free 6: 1004583 -100015673884079572 NA NA 1970-01-01 06:34:52 1 free app_id is_active 1: -5305696816021977482 0 2: -7164737313972860089 0 3: -8504475857937456387 0 4: -8807740666788515175 0 5: 5302560163370202064 0 6: 5521284031585796822 0 dt_m_summary &lt;- dt_m[, .( mean_active = mean(is_active, na.rm = TRUE) , median_lat = median(latitude, na.rm = TRUE) , median_lon = median(longitude, na.rm = TRUE) , mean_time = mean(time_) , new_col = your_function(latitude, longitude, time_) ) , by = list(device_id, category) ] </code></pre> <p>The new columns (<code>mean_active</code> through <code>new_col</code>), as well as <code>device_id</code> and <code>category</code>, will appear in <code>dt_m_summary</code>. I could also do a similar <code>by</code> transformation in the original table if I want a new column that has the results of the groupby-apply:</p> <p><code>dt_m[, mean_active := mean(is_active, na.rm = TRUE), by = list(device_id, category)]</code></p> <p>(in case I wanted, e.g., to select rows where <code>mean_active</code> is greater than some threshold, or do something else).</p> <p>I know there is <code>groupby</code> in <code>pandas</code>, but I haven’t found a way of doing the sort of easy transformations as above. The best I could think of was doing a series of groupby-apply’s and then merging the results into one <code>dataframe</code>, but that seems very clunky. Is there a better way of doing that?</p>
howto
2016-08-26T06:14:58Z
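A sketch with made-up data of both `data.table` patterns in pandas: named aggregation for the one-row-per-group summary, and `transform` for the `:=`-style broadcast back onto the original rows.

```python
import pandas as pd

df = pd.DataFrame({
    'device_id': [1, 1, 2, 2],
    'category':  ['free', 'free', 'paid', 'paid'],
    'is_active': [0, 1, 1, 1],
    'latitude':  [10.0, 20.0, 30.0, 40.0],
})
# summary table: several aggregations in one groupby, like a j-expression
summary = df.groupby(['device_id', 'category'], as_index=False).agg(
    mean_active=('is_active', 'mean'),
    median_lat=('latitude', 'median'),
)
# := equivalent: per-group statistic aligned back to every original row
df['mean_active'] = df.groupby(['device_id', 'category'])['is_active'] \
                      .transform('mean')
```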
39,187,788
Find rows with non zero values in a subset of columns in pandas dataframe
<p>I have a dataframe with 4 columns of strings and the others as integers. Now I need to find those rows of data where at least one of the columns is a non-zero value (or > 0).</p> <pre><code>manwra,sahAyaH,T7,0,0,0,0,T manwra, akriti,T5,0,0,1,0,K awma, prabrtih,B6, 0,1,1,0,S </code></pre> <p>My output should be</p> <pre><code>manwra, akriti,T5,0,0,1,0,K awma, prabrtih,B6, 0,1,1,0,S </code></pre> <p>I have tried the following to obtain the answer. The string values are in columns 0,1,2 and -1 (last column).</p> <pre><code>KT[KT.ix[:,3:-2] != 0] </code></pre> <p>What I am receiving as output is </p> <pre><code>NaN,NaNNaN,NaN,NaN,NaN,NaN,NaN NaN,NaN,NaN,NaN,NaN,1,NaN,NaN NaN,NaN,NaN,NaN,1,1,NaN,NaN </code></pre> <p>How can I obtain the desired output?</p>
howto
2016-08-28T03:33:08Z
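A sketch of one fix: indexing a frame with a boolean *frame* (as in the attempt) masks cells to NaN; to keep or drop whole rows, collapse the mask with `any(axis=1)` first. Note `iloc[:, 3:-1]` covers the numeric columns (the `3:-2` slice stops one column early).

```python
import pandas as pd

df = pd.DataFrame([
    ['manwra', 'sahAyaH', 'T7', 0, 0, 0, 0, 'T'],
    ['manwra', 'akriti',  'T5', 0, 0, 1, 0, 'K'],
    ['awma',   'prabrtih','B6', 0, 1, 1, 0, 'S'],
])
# columns 3..-2 inclusive are the integer ones; any(axis=1) is True for a
# row if at least one of them is non-zero
mask = (df.iloc[:, 3:-1] != 0).any(axis=1)
result = df[mask]
```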
39,268,928
Python: how to get rid of spaces in str(dict)?
<p>For example, if you use str() on a dict, you get:</p> <pre><code>&gt;&gt;&gt; str({'a': 1, 'b': 'as df'}) "{'a': 1, 'b': 'as df'}" </code></pre> <p>However, I want the string to be like:</p> <pre><code>"{'a':1,'b':'as df'}" </code></pre> <p>How can I accomplish this?</p>
howto
2016-09-01T10:20:57Z
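A sketch using the stdlib: `json.dumps` with compact separators drops the spaces without touching spaces inside values (a plain `.replace(' ', '')` would corrupt `'as df'`). The trade-off is JSON's double quotes instead of `str()`'s single quotes.

```python
import json

d = {'a': 1, 'b': 'as df'}
# separators=(',', ':') removes the default space after ',' and ':'
compact = json.dumps(d, separators=(',', ':'))
```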
39,277,638
Element-wise minimum of multiple vectors in numpy
<p>I know that in numpy I can compute the element-wise minimum of two vectors with</p> <pre><code>numpy.minimum(v1, v2) </code></pre> <p>What if I have a list of vectors of equal dimension, <code>V = [v1, v2, v3, v4]</code> (but a list, not an array)? Taking <code>numpy.minimum(*V)</code> doesn't work. What's the preferred thing to do instead?</p>
howto
2016-09-01T17:30:49Z
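A sketch of the idiomatic answer: `np.minimum` is a binary ufunc, so `reduce` folds it across the whole list (`np.min(np.array(V), axis=0)` is an equivalent route).

```python
import numpy as np

V = [np.array([1, 5, 3]),
     np.array([2, 2, 2]),
     np.array([0, 9, 9])]
# reduce applies the element-wise minimum pairwise across the list
m = np.minimum.reduce(V)
```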
39,299,703
How to check if character exists in DataFrame cell
<p>After creating the three-rows DataFrame:</p> <pre><code>import pandas as pd df = pd.DataFrame({'a': ['1-2', '3-4', '5-6']}) </code></pre> <p>I check if there is any cell equal to '3-4':</p> <pre><code>df['a']=='3-4' </code></pre> <p><a href="http://i.stack.imgur.com/7UYJY.png" rel="nofollow"><img src="http://i.stack.imgur.com/7UYJY.png" alt="enter image description here"></a></p> <p>Since <code>df['a']=='3-4'</code> command results to <code>pandas.core.series.Series</code> object I can use it to create a "filtered" version of the original DataFrame like so:</p> <pre><code>filtered = df[ df['a']=='3-4' ] </code></pre> <p><a href="http://i.stack.imgur.com/AGU3O.png" rel="nofollow"><img src="http://i.stack.imgur.com/AGU3O.png" alt="enter image description here"></a></p> <p>In Python I can check for the occurrence of the string character in another string using:</p> <pre><code>string_value = '3-4' print('-' in string_value) </code></pre> <p>What would be a way to accomplish the same while working with DataFrames?</p> <p>So, I could create the filtered version of the original DataFrame by checking if '-' character in every row's cell, like:</p> <pre><code>filtered = df['-' in df['a']] </code></pre> <p>But this syntax above is invalid and throws <code>KeyError: False</code> error message. </p>
howto
2016-09-02T19:44:11Z
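A sketch of the vectorized equivalent of `'-' in string_value`: `str.contains` returns a boolean Series that works as a row filter.

```python
import pandas as pd

df = pd.DataFrame({'a': ['1-2', '34', '5-6']})
# regex=False treats '-' as a literal character, not a regex token
filtered = df[df['a'].str.contains('-', regex=False)]
```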
39,353,758
pandas pivot table of sales
<p>I have a list like below:</p> <pre><code> saleid upc 0 155_02127453_20090616_135212_0021 02317639000000 1 155_02127453_20090616_135212_0021 00000000000888 2 155_01605733_20090616_135221_0016 00264850000000 3 155_01072401_20090616_135224_0010 02316877000000 4 155_01072401_20090616_135224_0010 05051969277205 </code></pre> <p>It represents one customer (saleid) and the items he/she got (upc of the item)</p> <p>What I want is to pivot this table to a form like below:</p> <pre><code> 02317639000000 00000000000888 00264850000000 02316877000000 155_02127453_20090616_135212_0021 1 1 0 0 155_01605733_20090616_135221_0016 0 0 1 0 155_01072401_20090616_135224_0010 0 0 0 0 </code></pre> <p>So, columns are unique UPCs and rows are unique SALEIDs.</p> <p>i read it like this:</p> <pre><code>tbl = pd.read_csv('tbl_sale_items.csv',sep=';',dtype={'saleid': np.str, 'upc': np.str}) tbl.info() &lt;class 'pandas.core.frame.DataFrame'&gt; RangeIndex: 18570726 entries, 0 to 18570725 Data columns (total 2 columns): saleid object upc object dtypes: object(2) memory usage: 283.4+ MB </code></pre> <p>I have done some steps but not the correct ones!</p> <pre><code>tbl.pivot_table(columns=['upc'],aggfunc=pd.Series.nunique) upc 00000000000000 00000000000109 00000000000116 00000000000123 00000000000130 00000000000147 00000000000154 00000000000161 00000000000178 00000000000185 ... saleid 44950 287 26180 4881 1839 623 3347 7 </code></pre> <p>EDIT: Im using the solution variation below:</p> <pre><code>chunksize = 1000000 f = 0 for chunk in pd.read_csv('tbl_sale_items.csv',sep=';',dtype={'saleid': np.str, 'upc': np.str}, chunksize=chunksize): print(f) t = pd.crosstab(chunk.saleid, chunk.upc) t.head(3) t.to_csv('tbl_sales_index_converted_' + str(f) + '.csv.bz2',header=True,sep=';',compression='bz2') f = f+1 </code></pre> <p>the original file is extremely big to fit to memory after conversion. 
The above solution has the problem on not having all the columns on all the files as I'm reading chunks from the original file.</p> <p>Question 2: is there a way to force all chunks to have the same columns?</p>
howto
2016-09-06T16:28:14Z
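A sketch addressing both parts on tiny made-up data: `pd.crosstab` builds the 0/1 matrix, and `reindex` forces every chunk onto one fixed column set (which assumes the full universe of UPCs is known up front, e.g. from one cheap pass over just that column).

```python
import pandas as pd

df = pd.DataFrame({'saleid': ['s1', 's1', 's2'],
                   'upc':    ['u1', 'u2', 'u1']})
all_upcs = ['u1', 'u2', 'u3']   # assumed full universe of UPCs
t = pd.crosstab(df['saleid'], df['upc'])
# identical columns for every chunk; UPCs absent from this chunk become 0
t = t.reindex(columns=all_upcs, fill_value=0)
```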
39,373,620
How to get a max string length in nested lists
<p>This is a follow-up question to my earlier post (re printing a table from a list of lists)</p> <p>I'm trying to get the maximum string length in the following nested list:</p> <pre><code>tableData = [['apples', 'oranges', 'cherries', 'banana'], ['Alice', 'Bob', 'Carol', 'David'], ['dogs', 'cats', 'moose', 'goose']] for i in tableData: print(len(max(i))) </code></pre> <p>which gives me 7, 5, 5. But "cherries" is 8 </p> <p>What am I missing here? Thanks.</p>
howto
2016-09-07T15:13:49Z
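A sketch of the fix: plain `max(i)` compares strings alphabetically (so `'oranges'`, length 7, beats `'cherries'`); comparing by length needs `key=len`.

```python
tableData = [['apples', 'oranges', 'cherries', 'banana'],
             ['Alice', 'Bob', 'Carol', 'David'],
             ['dogs', 'cats', 'moose', 'goose']]
# max(..., key=len) picks the longest string, not the alphabetically last
lengths = [len(max(row, key=len)) for row in tableData]
```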
39,381,222
How to print/show an expression in rational number form in python
<p>I've been developing a Tkinter app and at some label I need to put a formula that should look like a rational number (expression1/expression2, like a numerator and denominator and a bar between them). I did some digging and couldn't find anything related to it. Any suggestions on how this can be done?</p> <p>I even couldn't find anything on printing a fraction in rational-number format on the console. I only care about the looks and no calculation will be made with it; it's just a label.</p>
howto
2016-09-08T01:22:22Z
39,414,085
How to convert the following string in python?
<p>Input : UserID/ContactNumber </p> <p>Output: user-id/contact-number</p> <p>I have tried the following code:</p> <pre><code>s ="UserID/ContactNumber" list = [x for x in s] for char in list: if char != list[0] and char.isupper(): list[list.index(char)] = '-' + char fin_list=''.join((list)) print(fin_list.lower()) </code></pre> <p>but the output i got is:</p> <pre><code> user-i-d/-contact-number </code></pre>
howto
2016-09-09T14:36:55Z
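A regex sketch of the fix: the manual loop inserts a dash before *every* uppercase letter, splitting the `ID` run. Inserting only at a lowercase/digit-to-uppercase boundary keeps runs of capitals together.

```python
import re

s = "UserID/ContactNumber"
# zero-width pattern: a dash goes only where a lowercase letter or digit
# is immediately followed by an uppercase letter, so "ID" stays intact
out = re.sub(r'(?<=[a-z0-9])(?=[A-Z])', '-', s).lower()
```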
39,532,974
Remove final characters from string recursively - What's the best way to do this?
<p>I am reading lines from a file one at a time and before i store each, i wanna modify them according to the following simple rule:</p> <ul> <li>if the last character is not any of, e.g., <code>{'a', 'b', 'c'}</code> store the line.</li> <li>if that is not the case, remove the character (pop-like) and check again.</li> </ul> <p>What i currently have (felt like the obvious thing to do) is this:</p> <pre><code>bad_chars = {'a', 'b', 'c'} def remove_end_del(line_string, chars_to_remove): while any(line_string[-1] == x for x in chars_to_remove): line_string = line_string[:-1] return line_string example_line = 'jkhasdkjashdasjkd|abbbabbababcbccc' modified_line = remove_end_del(example_line, bad_chars) print(modified_line) # prints -&gt; jkhasdkjashdasjkd| </code></pre> <p>Which of course works, but the string slicing\reconstruction seems a bit too excessive to my untrained eyes. So i was wondering a couple of things:</p> <ol> <li>is there a better way to do this? like a <code>pop</code> type of function for strings?</li> <li>how is <code>rstrip()</code> or <code>strip()</code> in general implemented? is it also with a <strong>while</strong>?</li> <li>would it be worthwhile making <code>rstrip()</code> recursive for this example?</li> <li>Finally, how much better is the following:</li> </ol> <hr> <pre><code>def remove_end_del_2(line_string, chars_to_remove): i = 1 while line_string[-i] in chars_to_remove: i += 1 return line_string[:-i+1] </code></pre> <p>Any comment on any of the points made above would be appreciated ☺.</p> <p><strong>Note: the separator ("|") is only there for visualization.</strong></p>
howto
2016-09-16T13:40:23Z
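A sketch of the built-in answer to question 1: `str.rstrip` takes a *set* of characters (passed as a string) and removes any trailing run of them, which is exactly the while-loop above. It is implemented as an iterative scan in C, not recursion, so there is no gain in making it recursive.

```python
bad_chars = {'a', 'b', 'c'}
line = 'jkhasdkjashdasjkd|abbbabbababcbccc'
# rstrip strips characters from the given set, in any order, off the end
stripped = line.rstrip(''.join(bad_chars))
```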
39,538,010
How can I execute Python code in a virtualenv from Matlab
<p>I am creating a Matlab toolbox for research and I need to execute Matlab code but also Python code. </p> <p>I want to allow the user to execute Python code from Matlab. The problem is that if I do it right away, I would have to install everything in the global Python environment, and I want to avoid this using virtualenv. The problem is that I don't know how to tell Matlab to use the virtual environment created.</p>
howto
2016-09-16T18:30:22Z
39,600,161
Regular expression matching all but a string
<p>I need to find all the strings matching a pattern with the exception of two given strings. </p> <p>For example, find all groups of letters with the exception of <code>aa</code> and <code>bb</code>. Starting from this string:</p> <pre><code>-a-bc-aa-def-bb-ghij- </code></pre> <p>Should return:</p> <pre><code>('a', 'bc', 'def', 'ghij') </code></pre> <p>I tried with <a href="http://pythex.org/?regex=-(%5Cw.*%3F)(%3F%3D-)&amp;test_string=-a-bc-def-ghij-&amp;ignorecase=0&amp;multiline=0&amp;dotall=0&amp;verbose=0" rel="nofollow">this regular</a> expression that captures 4 strings. I thought I was getting close, but (1) it doesn't work in Python and (2) I can't figure out how to exclude a few strings from the search. (Yes, I could remove them later, but my real regular expression does everything in one shot and I would like to include this last step in it.)</p> <p>I said it doesn't work in Python because I tried this, expecting the exact same result, but instead I get only the first group:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; re.search('-(\w.*?)(?=-)', '-a-bc-def-ghij-').groups() ('a',) </code></pre> <p>I tried with negative look ahead, but I couldn't find a working solution for this case.</p>
howto
2016-09-20T17:17:43Z
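A sketch of both fixes: `re.search` stops at the first match (hence the single group), while `re.findall` returns them all; and a negative lookahead can veto the exact groups `aa` and `bb` inside the same pattern.

```python
import re

s = '-a-bc-aa-def-bb-ghij-'
# -(?!(?:aa|bb)-) refuses to start a match where the group would be
# exactly 'aa' or 'bb'; (?=-) keeps the trailing dash unconsumed so
# adjacent groups can still match
result = re.findall(r'-(?!(?:aa|bb)-)(\w+)(?=-)', s)
```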
39,602,824
pandas: replace string with another string
<p>I have the following data frame</p> <pre><code> prod_type 0 responsive 1 responsive 2 respon 3 r 4 respon 5 r 6 responsive </code></pre> <p>I would like to replace <code>respon</code> and <code>r</code> with <code>responsive</code>, so the final data frame is</p> <pre><code> prod_type 0 responsive 1 responsive 2 responsive 3 responsive 4 responsive 5 responsive 6 responsive </code></pre> <p>I tried the following but it did not work:</p> <pre><code>df['prod_type'] = df['prod_type'].replace({'respon' : 'responsvie'}, regex=True) df['prod_type'] = df['prod_type'].replace({'r' : 'responsive'}, regex=True) </code></pre>
howto
2016-09-20T20:00:39Z
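A sketch of why the attempt misbehaves and the fix: with `regex=True`, `'r'` matches the `r` inside every string (and note the attempt also misspells the replacement as `responsvie`). Dropping `regex=True` makes `replace` match whole cell values only.

```python
import pandas as pd

s = pd.Series(['responsive', 'responsive', 'respon', 'r', 'respon', 'r'])
# a plain dict replace compares entire values, not substrings
s = s.replace({'respon': 'responsive', 'r': 'responsive'})
```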
39,604,780
How can I print the Truth value of a variable?
<p>In Python, variables have truthy values based on their content. For example:</p> <pre><code>&gt;&gt;&gt; def a(x): ... if x: ... print (True) ... &gt;&gt;&gt; a('') &gt;&gt;&gt; a(0) &gt;&gt;&gt; a('a') True &gt;&gt;&gt; &gt;&gt;&gt; a([]) &gt;&gt;&gt; a([1]) True &gt;&gt;&gt; a([None]) True &gt;&gt;&gt; a([0]) True </code></pre> <p>I also know I can print the truthy value of a comparison without the if operator at all:</p> <pre><code>&gt;&gt;&gt; print (1==1) True &gt;&gt;&gt; print (1&lt;5) True &gt;&gt;&gt; print (5&lt;1) False </code></pre> <p>But how can I print the <code>True</code> / <code>False</code> value of a variable? Currently, I'm doing this:</p> <pre><code>print (not not a) </code></pre> <p>but that looks a little inelegant. Is there a preferred way?</p>
howto
2016-09-20T22:36:06Z
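A sketch of the preferred spelling: `bool()` converts any value to its truth value directly, equivalent to `not not x` but explicit.

```python
a = 'a'
b = ''
# bool() applies the same truthiness rules `if` uses
print(bool(a), bool(b))
```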
39,605,640
How do I pull a recurring key from a JSON?
<p>I'm new to python (and coding in general), I've gotten this far but I'm having trouble. I'm querying against a web service that returns a json file with information on every employee. I would like to pull just a couple of attributes for each employee, but I'm having some trouble.</p> <p>I have this script so far:</p> <pre><code>import json import urllib2 req = urllib2.Request('http://server.company.com/api') response = urllib2.urlopen(req) the_page = response.read() j = json.loads(the_page) print j[1]['name'] </code></pre> <p>The JSON that it returns looks like this...</p> <pre><code>{ "name": bill jones, "address": "123 something st", "city": "somewhere", "state": "somestate", "zip": "12345", "phone_number": "800-555-1234", }, { "name": jane doe, "address": "456 another ave", "city": "metropolis", "state": "ny", "zip": "10001", "phone_number": "555-555-5554", }, </code></pre> <p>You can see that with the script I can return the name of employee in index 1. But I would like to have something more along the lines of: <code>print j[**0 through len(j)**]['name']</code> so it will print out the name (and preferably the phone number too) of every employee in the json list.</p> <p>I'm fairly sure I'm approaching something wrong, but I need some feedback and direction.</p>
howto
2016-09-21T00:19:21Z
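A sketch of the idiom (assuming the service returns a JSON array of objects, here mocked as a plain list): iterate the parsed list directly instead of indexing `0..len(j)-1`.

```python
# stand-in for j = json.loads(the_page)
employees = [
    {'name': 'bill jones', 'phone_number': '800-555-1234'},
    {'name': 'jane doe',   'phone_number': '555-555-5554'},
]
# each iteration yields one employee dict; pull the fields you need
pairs = [(emp['name'], emp['phone_number']) for emp in employees]
```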
39,607,540
Count the number of Occurrence of Values based on another column
<p>I have a question regarding creating a pandas dataframe according to the sum of another column.</p> <p>For example, I have this dataframe</p> <pre><code> Country | Accident England Car England Car England Car USA Car USA Bike USA Plane Germany Car Thailand Plane </code></pre> <p>I want to make another dataframe based on the sum value of all accidents based on the country. We will disregard the type of the accident, while summing them all based on the country.</p> <p>My desired dataframe would look like this</p> <pre><code> Country | Sum of Accidents England 3 USA 3 Germany 1 Thailand 1 </code></pre>
howto
2016-09-21T04:25:44Z
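A sketch of one way: since every row is one accident, counting rows per country with `groupby(...).size()` gives the desired table.

```python
import pandas as pd

df = pd.DataFrame({
    'Country':  ['England'] * 3 + ['USA'] * 3 + ['Germany', 'Thailand'],
    'Accident': ['Car', 'Car', 'Car', 'Car', 'Bike', 'Plane', 'Car', 'Plane'],
})
# size() counts rows per group regardless of the accident type
counts = df.groupby('Country').size().reset_index(name='Sum of Accidents')
```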
39,646,401
How to merge the elements in a list sequentially in python
<p>I have a list <code>[ 'a' , 'b' , 'c' , 'd']</code>. How do I get the list which joins two letters sequentially i.e the ouptut should be <code>[ 'ab', 'bc' , 'cd']</code> in python easily instead of manually looping and joining</p>
howto
2016-09-22T18:29:43Z
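A one-line sketch: zip the list against itself shifted by one and join each pair.

```python
letters = ['a', 'b', 'c', 'd']
# zip(letters, letters[1:]) pairs each element with its successor
merged = [x + y for x, y in zip(letters, letters[1:])]
```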
39,719,140
Python - Best way to find the 1d center of mass in a binary numpy array
<p>Suppose I have the following Numpy array, in which I have one and only one continuous slice of <code>1</code>s:</p> <pre><code>import numpy as np x = np.array([0,0,0,0,1,1,1,0,0,0], dtype=int) </code></pre> <p>and I want to find the index of the 1D center of mass of the <code>1</code> elements. I could type the following:</p> <pre><code>idx = np.where( x )[0] idx_center_of_mass = int(0.5*(idx.max() + idx.min())) # this would give 5 </code></pre> <p>(Of course this would lead to rough approximation when the number of elements of the <code>1</code>s slice is even.) Is there any better way to do this, like a computationally more efficient oneliner?</p>
howto
2016-09-27T07:55:13Z
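A one-liner sketch: for a binary array the center of mass is just the mean of the indices of the 1s (`scipy.ndimage.center_of_mass` computes the same thing for general weights).

```python
import numpy as np

x = np.array([0, 0, 0, 0, 1, 1, 1, 0, 0, 0], dtype=int)
# flatnonzero returns the indices of the nonzero entries; their mean is
# the 1D center of mass
com = int(np.flatnonzero(x).mean())
```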
39,804,375
Python - Sort a list of dicts by a value in a nested dict
<p>I have a list that looks like this:</p> <pre><code>persons = [{'id': 11, 'passport': {'id': 11, 'birth_info':{'date': 10/10/2016...}}},{'id': 22, 'passport': {'id': 22, 'birth_info':{'date': 11/11/2016...}}}] </code></pre> <p>I need to sort the list of persons by their sub key of sub key - their birth_info date.</p> <p>How should I do it ? Thanks</p>
howto
2016-10-01T08:07:57Z
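A sketch with simplified data: `sort` with a `key` that drills through the nested dicts. Here the dates are written as ISO strings, which sort correctly as plain strings; `dd/mm/yyyy` strings like those in the question would need parsing with `datetime.strptime` inside the key first.

```python
persons = [
    {'id': 11, 'passport': {'id': 11, 'birth_info': {'date': '2016-10-10'}}},
    {'id': 22, 'passport': {'id': 22, 'birth_info': {'date': '2015-11-11'}}},
]
# the key function reaches two levels down to the date
persons.sort(key=lambda p: p['passport']['birth_info']['date'])
```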
39,816,795
How to add a specific number of characters to the end of string in Pandas?
<p>I am using the Pandas library within Python and I am trying to increase the length of a column with text in it to all be the same length. I am trying to do this by adding a specific character (this will be white space normally, in this example I will use "_") a number of times until it reaches the maximum length of that column. </p> <p>For example:</p> <p><strong>Col1_Before</strong></p> <pre><code>A B A1R B2 AABB4 </code></pre> <p><strong>Col1_After</strong></p> <pre><code>A____ B____ A1R__ B2___ AABB4 </code></pre> <p>So far I have got this far (using the above table as the example). It is the next part (and the part that does it that I am stuck on).</p> <pre><code>df['Col1_Max'] = df.Col1.map(lambda x: len(x)).max() df['Col1_Len'] = df.Col1.map(lambda x: len(x)) df['Difference_Len'] = df ['Col1_Max'] - df ['Col1_Len'] </code></pre> <p>I may have not explained myself well as I am still learning. If this is confusing let me know and I will clarify. </p>
howto
2016-10-02T12:03:14Z
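A sketch that replaces the three helper columns with one call: `str.ljust` pads each string on the right with a fill character up to a target width.

```python
import pandas as pd

df = pd.DataFrame({'Col1': ['A', 'B', 'A1R', 'B2', 'AABB4']})
# width = longest value in the column; '_' stands in for the space used
# in practice
df['Col1'] = df['Col1'].str.ljust(df['Col1'].str.len().max(), '_')
```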
39,821,166
How to reverse the elements in a sublist?
<p>I'm trying to create a function that reverses the order of the elements in a list, and also reverses the elements in a sublist.</p> <p>For example, if L = [[1, 2], [3, 4], [5, 6, 7]] then deep_reverse(L) mutates L to be [[7, 6, 5], [4, 3], [2, 1]]</p> <p>I figured out how to reverse the order of one list, but I am having trouble with reversing the order of elements in a sublist. This is what I have so far:</p> <pre><code>def deep_reverse(L): """ assumes L is a list of lists whose elements are ints Mutates L such that it reverses its elements and also reverses the order of the int elements in every element of L. It does not return anything. """ for i in reversed(L): print(i) </code></pre> <p>In the example above, my code would just print <code>[5,6,7], [3,4], [1,2]</code>, which is not what I'm trying to accomplish. It's just reversing the order of the lists, not the actual elements in the lists.</p> <p>What should I add to the code so that it also reverses the order of the elements in a sublist?</p> <p>[<strong>EDIT</strong>: my code <strong>needs</strong> to mutate the list; I don't want it just to print it, it actually needs to change the list.]</p>
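A minimal mutating sketch: reverse the outer list in place, then reverse each sublist in place. `list.reverse()` mutates and returns `None`, which matches the "does not return anything" requirement.

```python
def deep_reverse(L):
    """Mutates L: reverses L and each of its sublists; returns None."""
    L.reverse()              # reverse the outer list in place
    for sub in L:
        sub.reverse()        # reverse each sublist in place

L = [[1, 2], [3, 4], [5, 6, 7]]
deep_reverse(L)
```

Note this only goes one level deep, which is all the docstring requires; arbitrarily nested lists would need recursion.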
howto
2016-10-02T20:06:54Z
39,870,642
Matplotlib - How to plot a high resolution graph?
<p>I've used matplotlib for plotting some experimental results (discussed here: <a href="http://stackoverflow.com/questions/39676294/looping-over-files-and-plotting-python/" title="Looping over files and plotting &#40;Python&#41;">Looping over files and plotting</a>). However, right-clicking the image and saving it gives very bad quality / low-resolution images.</p> <pre><code>from glob import glob import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl # loop over all files in the current directory ending with .txt for fname in glob("./*.txt"): # read file, skip header (1 line) and unpack into 3 variables WL, ABS, T = np.genfromtxt(fname, skip_header=1, unpack=True) # first plot plt.plot(WL, T, label='BN', color='blue') plt.xlabel('Wavelength (nm)') plt.xlim(200,1000) plt.ylim(0,100) plt.ylabel('Transmittance, %') mpl.rcParams.update({'font.size': 14}) #plt.legend(loc='lower center') plt.title('') plt.show() plt.clf() # second plot plt.plot(WL, ABS, label='BN', color='red') plt.xlabel('Wavelength (nm)') plt.xlim(200,1000) plt.ylabel('Absorbance, A') mpl.rcParams.update({'font.size': 14}) #plt.legend() plt.title('') plt.show() plt.clf() </code></pre> <p>Example graph of what I'm looking for: <a href="http://i.stack.imgur.com/CNSoO.png" rel="nofollow">example graph</a></p>
howto
2016-10-05T09:45:40Z
39,988,589
How to pass through a list of queries to a pandas dataframe, and output the list of results?
<p>When selecting rows whose column value <code>column_name</code> equals a scalar, <code>some_value</code>, we use <code>==</code>:</p> <pre><code>df.loc[df['column_name'] == some_value] </code></pre> <p>or use <code>.query()</code> </p> <pre><code>df.query('column_name == some_value') </code></pre> <p>In a concrete example:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'Col1': 'what are men to rocks and mountains'.split(), 'Col2': 'the curves of your lips rewrite history.'.split(), 'Col3': np.arange(7), 'Col4': np.arange(7) * 8}) print(df) Col1 Col2 Col3 Col4 0 what the 0 0 1 are curves 1 8 2 men of 2 16 3 to your 3 24 4 rocks lips 4 32 5 and rewrite 5 40 6 mountains history 6 48 </code></pre> <p>A query could be</p> <pre><code>rocks_row = df.loc[df['Col1'] == "rocks"] </code></pre> <p>which outputs</p> <pre><code>print(rocks_row) Col1 Col2 Col3 Col4 4 rocks lips 4 32 </code></pre> <p>I would like to pass through a list of values to query against a dataframe, which outputs a list of "correct queries". </p> <p>The queries to execute would be in a list, e.g.</p> <pre><code>list_match = ['men', 'curves', 'history'] </code></pre> <p>which would output all rows which meet this condition, i.e. </p> <pre><code>matches = pd.concat([df1, df2, df3]) </code></pre> <p>where </p> <pre><code>df1 = df.loc[df['Col1'] == "men"] df2 = df.loc[df['Col1'] == "curves"] df3 = df.loc[df['Col1'] == "history"] </code></pre> <p>My idea would be to create a function that takes in a dataframe, a column name, and a list of values:</p> <pre><code>output = [] def find_queries(dataframe, column, value, output): for scalar in value: query = dataframe.loc[dataframe[column] == scalar] output.append(query) # append all query results to a list return pd.concat(output) # return concatenated list of dataframes </code></pre> <p>However, this appears to be exceptionally slow, and doesn't actually take advantage of the pandas data structure. 
What is the "standard" way to pass through a list of queries through a pandas dataframe? </p> <p>EDIT: How does this translate into "more complex" queries in pandas? e.g. <code>where</code> with an HDF5 document? </p> <pre><code>df.to_hdf('test.h5','df',mode='w',format='table',data_columns=['A','B']) pd.read_hdf('test.h5','df') pd.read_hdf('test.h5','df',where='A=["foo","bar"] &amp; B=1') </code></pre>
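The idiomatic answer is `Series.isin`, which builds one boolean mask for the whole list instead of looping (a sketch; note that with this sample data only <code>'men'</code> actually occurs in <code>Col1</code> — <code>'curves'</code> and <code>'history'</code> live in <code>Col2</code> — so the mask keeps a single row):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'Col1': 'what are men to rocks and mountains'.split(),
                   'Col2': 'the curves of your lips rewrite history.'.split(),
                   'Col3': np.arange(7),
                   'Col4': np.arange(7) * 8})

list_match = ['men', 'curves', 'history']

# One vectorised membership test over the whole column.
matches = df[df['Col1'].isin(list_match)]
```

The equivalent `.query()` spelling is `df.query('Col1 in @list_match')`, which mirrors the `where='A=["foo","bar"]'` syntax accepted by `pd.read_hdf`.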
howto
2016-10-12T00:06:20Z
39,998,424
How to delete a file without an extension?
<p>I have made a function for deleting files:</p> <pre><code>def deleteFile(deleteFile): if os.path.isfile(deleteFile): os.remove(deleteFile) </code></pre> <p>However, when passing a FIFO-filename (without file-extension), this is not accepted by the os-module. Specifically I have a subprocess create a FIFO-file named 'Testpipe'. When calling:</p> <pre><code>os.path.isfile('Testpipe') </code></pre> <p>It results to <code>False</code>. The file is not in use/open or anything like that. Python runs under Linux.</p> <p>How can you correctly delete a file like that?</p>
howto
2016-10-12T12:17:51Z
40,016,359
Index the first and the last n elements of a list
<p>The first n and the last n elements of the Python list</p> <pre><code>l=[1,2,3,4,5,6,7,8,9,10] </code></pre> <p>can be indexed by the expressions</p> <pre><code>print l[:3] [1, 2, 3] </code></pre> <p>and</p> <pre><code>print l[-3:] [8, 9, 10] </code></pre> <p>Is there a way to combine both in a single expression, i.e. index the first n and the last n elements using one indexing expression?</p>
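For a plain list, concatenating the two slices already gives a single expression (a sketch):

```python
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
n = 3

# One expression combining both ends.
both = l[:n] + l[-n:]
```

Two caveats: when `2*n > len(l)` the slices overlap and elements are duplicated, and for `n = 0` the slice `l[-0:]` is the whole list, so guard that case if `n` can be zero.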
howto
2016-10-13T08:50:17Z
40,055,835
Removing elements from an array that are in another array
<p>Say I have these 2D arrays A and B.</p> <p>How can I remove elements from A that are in B.</p> <pre><code>A=np.asarray([[1,1,1], [1,1,2], [1,1,3], [1,1,4]]) B=np.asarray([[0,0,0], [1,0,2], [1,0,3], [1,0,4], [1,1,0], [1,1,1], [1,1,4]]) #output = [[1,1,2], [1,1,3]] </code></pre> <hr> <p>To be more precise, I would like to do something like this.</p> <pre><code>data = some numpy array label = some numpy array A = np.argwhere(label==0) #[[1 1 1], [1 1 2], [1 1 3], [1 1 4]] B = np.argwhere(data&gt;1.5) #[[0 0 0], [1 0 2], [1 0 3], [1 0 4], [1 1 0], [1 1 1], [1 1 4]] out = np.argwhere(label==0 and data&gt;1.5) #[[1 1 2], [1 1 3]] </code></pre>
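One way to do this with pure NumPy (a sketch): broadcast a comparison of every row of <code>A</code> against every row of <code>B</code>, then keep the rows of <code>A</code> with no full match.

```python
import numpy as np

A = np.asarray([[1,1,1], [1,1,2], [1,1,3], [1,1,4]])
B = np.asarray([[0,0,0], [1,0,2], [1,0,3], [1,0,4],
                [1,1,0], [1,1,1], [1,1,4]])

# Shape (len(A), len(B), 3): rows match where all 3 entries are equal.
mask = (A[:, None, :] == B[None, :, :]).all(-1).any(1)
out = A[~mask]
```

This materialises a `(len(A), len(B))` comparison in memory, so for very large arrays a structured/void-view trick (treating each row as a single element and using `np.in1d`) scales better.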
howto
2016-10-15T06:36:16Z
40,076,861
How to merge two DataFrames into single matching the column values
<p>Two DataFrames have matching values stored in their corresponding 'names' and 'flights' columns. While the first DataFrame stores the distances the other stores the dates:</p> <pre><code>import pandas as pd distances = {'names': ['A', 'B','C'] ,'distances':[100, 200, 300]} dates = {'flights': ['C', 'B', 'A'] ,'dates':['1/1/16', '1/2/16', '1/3/16']} distancesDF = pd.DataFrame(distances) datesDF = pd.DataFrame(dates) </code></pre> <p>distancesDF:</p> <pre><code> distances names 0 100 A 1 200 B 2 300 C </code></pre> <p>datesDF:</p> <pre><code> dates flights 0 1/1/16 A 1 1/2/16 B 2 1/3/16 C </code></pre> <p>I would like to merge them into single Dataframe in a such a way that the matching entities are synced with the corresponding distances and dates. So the resulted DataFame would look like this:</p> <p>resultDF:</p> <pre><code> distances names dates 0 100 A 1/1/16 1 200 B 1/2/16 2 300 C 1/3/16 </code></pre> <p>What would be the way of accomplishing it?</p>
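<code>pd.merge</code> joins on the matching key columns directly (a sketch; note that with the dictionaries exactly as given, <code>'A'</code> pairs with <code>'1/3/16'</code> and <code>'C'</code> with <code>'1/1/16'</code> — the tables displayed in the question differ from the dicts that build them):

```python
import pandas as pd

distancesDF = pd.DataFrame({'names': ['A', 'B', 'C'],
                            'distances': [100, 200, 300]})
datesDF = pd.DataFrame({'flights': ['C', 'B', 'A'],
                        'dates': ['1/1/16', '1/2/16', '1/3/16']})

# Join on the two key columns, then drop the duplicate key.
resultDF = (distancesDF
            .merge(datesDF, left_on='names', right_on='flights')
            .drop('flights', axis=1))
```

Renaming <code>flights</code> to <code>names</code> first and merging `on='names'` is an equivalent route that avoids the drop.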
howto
2016-10-17T00:09:22Z
40,079,728
Get models ordered by an attribute that belongs to its OneToOne model
<p>Let's say there is one model named <code>User</code> and the other named <code>Pet</code> which has a <code>OneToOne</code> relationship with <code>User</code>, the <code>Pet</code> model has an attribute <code>age</code>, how to get the ten <code>User</code> that owns the top ten <code>oldest</code> dog?</p> <pre><code>class User(models.Model): name = models.CharField(max_length=50, null=False, blank=False) class Pet(models.Model): name = models.CharField(max_length=50, null=False, blank=False) owner = models.OneToOneField(User, on_delete=models.CASCADE) age = models.IntegerField(null=False) </code></pre> <hr> <p>In <code>User</code>, there is an attribute <code>friends</code> that has a <code>ManyToMany</code> relationship with <code>User</code>, how to get the ten <code>friends</code> of <code>User Tom</code> that owns the top ten <code>oldest</code> dog?</p> <pre><code>class User(models.Model): name = models.CharField(max_length=50, null=False, blank=False) friends = models.ManyToManyField(self, ...) class Pet(models.Model): name = models.CharField(max_length=50, null=False, blank=False) owner = models.OneToOneField(User, on_delete=models.CASCADE) age = models.IntegerField(null=False) </code></pre>
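A sketch of both queries (assuming a configured Django project with these models; the reverse lookup name for the <code>OneToOneField</code> is the lowercased model name, here <code>pet</code>):

```
# Ten users owning the oldest pets:
oldest_owners = User.objects.order_by('-pet__age')[:10]

# Ten of Tom's friends owning the oldest pets
# (assumes there is exactly one user named 'Tom'):
tom = User.objects.get(name='Tom')
oldest_friends = tom.friends.order_by('-pet__age')[:10]
```

Users without a pet sort according to the database's NULL ordering; chain `.filter(pet__isnull=False)` first to exclude them.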
howto
2016-10-17T06:24:09Z
40,094,588
How to get a list of matchable characters from a regex class
<p>Given a regex character class/set, how can i get a list of all matchable characters (in python 3). E.g.:</p> <pre><code>[\dA-C] </code></pre> <p>should give</p> <pre><code>['0','1','2','3','4','5','6','7','8','9','A','B','C'] </code></pre>
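A simple brute-force sketch: test every character in a code-point range against the class. The range is limited to ASCII here as an assumption; widen it to `range(0x110000)` for full Unicode at some cost.

```python
import re

pattern = re.compile(r'[\dA-C]')

# Keep every character the class matches in full.
chars = [chr(i) for i in range(128) if pattern.fullmatch(chr(i))]
```

The result naturally reflects any flags passed to `re.compile` (e.g. `re.IGNORECASE` would also admit `a`-`c`). Parsing the class with the undocumented `sre_parse` module avoids the scan but ties the code to a CPython internal.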
howto
2016-10-17T19:54:32Z
40,101,094
Deploying Python Flask App on Apache with Python version installed in Virtual Environment Only
<p>I am working on a CentOS7 development environment. The machine came with Python 2.7.5 pre-installed. I have developed a web application using Python 3.5.1 which along with it's dependencies was installed in the virtual environment only. Python 3 is not installed machine-wide. I am now trying to deploy the application on an Apache server but have run into trouble. Here is what I have done.</p> <p>I installed mod_wsgi using yum.</p> <p>I configured the virtualhost as shown below:</p> <pre><code>&lt;VirtualHost *:80&gt; ServerName myapp.myserver.com WSGIDaemonProcess myapp user=myuser group=mygroup threads=5 python-path=/var/www/myapp.myserver.com/html:/var/www/myapp.myserver.com/venv/lib:/var/www/myapp.myserver.com/venv/lib/python3.5/site-packages python-home=/var/www/myapp.myserver.com/html/venv WSGIScriptAlias / /var/www/myapp.myserver.com/html/myapp.wsgi &lt;Directory /var/www/myapp.myserver.com/html&gt; WSGIProcessGroup smex WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all &lt;/Directory&gt; &lt;/VirtualHost&gt; </code></pre> <p>My wsgi file is configured as shown below:</p> <pre><code>import sys sys.path.insert(0, '/var/www/myapp.myserver.com/html') activate_this = '/var/www/myapp.myserver.com/html/venv/bin/activate_this.py' with open(activate_this) as file_: exec(file_.read(), dict(__file__=activate_this)) from myapp import app as application </code></pre> <p>However, I am getting an internal server error when I try to open the site. 
The error log reveals the following:</p> <pre><code>[Tue Oct 18 14:24:50.174740 2016] [mpm_prefork:notice] [pid 27810] AH00163: Apache/2.4.6 (CentOS) PHP/5.4.16 mod_wsgi/3.4 Python/2.7.5 configured -- resuming normal operations [Tue Oct 18 14:24:50.174784 2016] [core:notice] [pid 27810] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND' ImportError: No module named site ImportError: No module named site ImportError: No module named site ImportError: No module named site </code></pre> <p>The last error keeps repeating for most of the log file. The first thing that catches my eye is the Python version, which seems to be 2.7.5. This brings me to my questions:</p> <ol> <li>Do I need to have Python 3.5.1 installed in /usr/local, or can I just have it in the virtual environment?</li> <li>Do I need to install a specific mod_wsgi version for this version of Python? If so, should I install it via pip instead of yum?</li> <li>What else am I missing to get this to work?</li> </ol> <p>Thanks in advance for your help.</p>
other
2016-10-18T06:32:17Z
40,101,371
How to replace each array element by 4 copies in Python?
<p>How do I use numpy / python array routines to do this?</p> <p>E.g. if I have the array <code>[[1,2],[3,4]]</code>, the output should be </p> <pre><code>[[1,1,2,2,], [1,1,2,2,], [3,3,4,4,], [3,3,4,4]] </code></pre> <p>Thus, the output is an array with double the row and column dimensions, and each element from the original array appears as a 2&times;2 block (four copies in total). </p> <p>What I have so far is this</p> <pre><code>def operation(mat,step=2): result = np.array(mat,copy=True) result[::2,::2] = mat return result </code></pre> <p>This gives me array </p> <pre><code>[[ 98.+0.j 0.+0.j 40.+0.j 0.+0.j] [ 0.+0.j 0.+0.j 0.+0.j 0.+0.j] [ 29.+0.j 0.+0.j 54.+0.j 0.+0.j] [ 0.+0.j 0.+0.j 0.+0.j 0.+0.j]] </code></pre> <p>for the input</p> <pre><code>[[98 40] [29 54]] </code></pre> <p>The array will always be of even dimensions.</p>
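A sketch with `np.repeat`, applied once per axis: repeating the rows and then the columns turns each element into a 2&times;2 block.

```python
import numpy as np

mat = np.array([[1, 2],
                [3, 4]])

# Duplicate each row (axis=0), then each column (axis=1).
out = np.repeat(np.repeat(mat, 2, axis=0), 2, axis=1)
```

`np.kron(mat, np.ones((2, 2), dtype=mat.dtype))` produces the same result via the Kronecker product, and generalises to other block shapes.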
howto
2016-10-18T06:49:08Z
40,101,372
Test setup and teardown for each testcase in a test suite in robot frame work using python
<p>Hi I'm new to robot frame-work. Can someone help me to find if it's possible to have to a test setup and a teardown for each test case in test suite containing around 20 testcases. </p> <p>Can someone explain this with a example?</p>
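Yes — a setup and teardown that run before and after every test case in a suite go in Robot Framework's Settings table as <code>Test Setup</code> and <code>Test Teardown</code>; an individual test can override them with <code>[Setup]</code> and <code>[Teardown]</code>. A sketch using the built-in <code>Log</code> keyword (replace with your own keywords):

```
*** Settings ***
Test Setup        Log    Suite-wide setup, runs before every test
Test Teardown     Log    Suite-wide teardown, runs after every test

*** Test Cases ***
Example Test
    Log    Test body

Override Example
    [Setup]       Log    Custom setup for this test only
    Log    Test body
    [Teardown]    Log    Custom teardown for this test only
```

<code>Suite Setup</code> / <code>Suite Teardown</code> exist as well, but those run once per suite rather than once per test case.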
needtoknow
2016-10-18T06:49:12Z
40,101,544
Python error with flattening list of lists
<p>Hi, I'm trying to flatten the following list of lists, but I always get the following error:</p> <p>'int' object is not iterable </p> <p>I also tried chain from itertools but it's still not working. I guess the solution is easy but I really cannot see it! Can anybody help?</p> <p>Thanks</p> <pre><code> from itertools import chain import operator lista = [[1,2,3],[4,5,6],[7,8,9],[10,11,12]] listone = lista[0][0],[-x[0] for x in lista[:2]] #sumlistone = chain.from_iterable(listone) sumlistone = [x for sublist in listone for x in sublist] print listone print sumlistone </code></pre>
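The error comes from <code>listone</code>, not the flattening itself: <code>lista[0][0],[...]</code> builds a tuple whose first element is the int <code>1</code>, and the flattening comprehension then tries to iterate over that int. Flattening <code>lista</code> itself works with either standard approach (a sketch):

```python
from itertools import chain

lista = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]

# Every element of `lista` is itself a list, so both forms succeed:
flat = list(chain.from_iterable(lista))
# equivalently: flat = [x for sublist in lista for x in sublist]
```

If a structure can mix ints with lists, each element must be checked (or wrapped in a one-element list) before flattening.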
debug
2016-10-18T06:58:03Z
40,101,598
Python combinations with list and items in other lists
<p>I tried the code below; is there a more efficient way to do this?</p> <pre><code>c = [] l = [['A1','A2'], ['B1','B2'], ['C1','C2'] ] for i in range(0, len(l) - 1): for j in range(i+1, len(l)): c.append(sorted([l[i][0],l[i][1],l[j][0]])) c.append(sorted([l[i][0],l[i][1],l[j][1]])) c.append(sorted([l[i][0],l[j][0],l[j][1]])) c.append(sorted([l[i][1],l[j][0],l[j][1]])) print(c) </code></pre> <p>Output:</p> <pre><code>[['A1', 'A2', 'B1'], ['A1', 'A2', 'B2'], ['A1', 'B1', 'B2'], ['A2', 'B1', 'B2'], ['A1', 'A2', 'C1'], ['A1', 'A2', 'C2'], ['A1', 'C1', 'C2'], ['A2', 'C1', 'C2'], ['B1', 'B2', 'C1'], ['B1', 'B2', 'C2'], ['B1', 'C1', 'C2'], ['B2', 'C1', 'C2']] </code></pre>
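The pattern here — every 3-element subset drawn from the union of each pair of sublists — maps directly onto <code>itertools.combinations</code> (a sketch reproducing the output above):

```python
from itertools import combinations

l = [['A1', 'A2'], ['B1', 'B2'], ['C1', 'C2']]

# For every pair of sublists, take all 3-combinations of their 4 items.
c = [sorted(t) for a, b in combinations(l, 2)
               for t in combinations(a + b, 3)]
```

Because `combinations` preserves input order, the `sorted(t)` call is only needed if the sublists themselves are unsorted.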
seeking
2016-10-18T07:01:18Z
40,101,662
Python (win10): python35.dll and VCRUNTIME140.dll missing
<p>I have installed <code>WinPython-64bit-3.5.2.2Qt5</code> and I try to access Python from the command line. It accesses the <code>symlink</code>: </p> <pre><code>C:\Users\usr&gt;where python C:\Windows\System32\python.exe </code></pre> <p>When I execute</p> <pre><code>C:\Users\usr&gt;python </code></pre> <p>I get the error:</p> <blockquote> <p>The program can't start because VCRUNTIME140.dll is missing from your computer. Try reinstalling to fix this problem The program can't start because python35.dll is missing from your computer. Try reinstalling to fix this problem</p> </blockquote> <p>If I execute </p> <pre><code>C:\Users\usr\Documents\MyExes\WinPython-64bit-3.5.2.2Qt5\python-3.5.2.amd64\python.exe </code></pre> <p>everything runs smoothly.</p> <p>What can I do to call Python simply with <code>python</code> instead of <code>python_path\python</code>?</p>
other
2016-10-18T07:04:50Z