Dataset columns: title (string, length 10 to 172), question_id (int64, 469 to 40.1M), question_body (string, length 22 to 48.2k), question_score (int64, -44 to 5.52k), question_date (string, length 20), answer_id (int64, 497 to 40.1M), answer_body (string, length 18 to 33.9k), answer_score (int64, -38 to 8.38k), answer_date (string, length 20), tags (sequence)
How to solve "No module named 'cStringIO'" when importing the logging module in Python 3
40,134,421
<p>I'm trying to run the following script, named <code>msgpack_checker.py</code>, in Python 3:</p> <pre><code>import msgpack from faker import Faker import logging from logging.handlers import RotatingFileHandler fake = Faker() fake.seed(0) data_file = "my_log.log" logger = logging.getLogger('my_logger') logger.setLevel(logging.DEBUG) handler = RotatingFileHandler(data_file, maxBytes=2000, backupCount=10) handler.terminator = "" # Suppress the newline character (only works in Python 3) logger.addHandler(handler) fake_dicts = [{'name': fake.name()} for _ in range(100)] for item in fake_dicts: dump_string = msgpack.packb(item) # print dump_string logger.debug(dump_string) unpacker = msgpack.Unpacker(open(data_file)) print("Printing unpacked contents:") for unpacked in unpacker: print(unpacked) </code></pre> <p>when I run it with Python 2, it prints the following output:</p> <pre><code>Printing unpacked contents: {'name': 'Joshua Carter'} 10 {'name': 'David Williams'} 10 {'name': 'Joseph Jones'} 10 {'name': 'Gary Perry'} 10 {'name': 'Terry Wells'} 10 {'name': 'Vanessa Cooper'} 10 {'name': 'Michael Simmons'} 10 {'name': 'Nicholas Kline'} 10 {'name': 'Lori Bennett'} 10 </code></pre> <p>The numbers "10" I believe come from the logger, and should be removed in Python 3 by the <code>handler.terminator = ""</code> command. However, if I try to run the script using <code>python3 msgpack_checker.py</code>, I get the following error:</p> <pre><code>Traceback (most recent call last): File "msgpack_checker.py", line 3, in &lt;module&gt; import logging File "/home/kurt/Documents/Scratch/logging/__init__.py", line 26, in &lt;module&gt; import sys, os, time, cStringIO, traceback, warnings, weakref ImportError: No module named 'cStringIO' </code></pre> <p>Apparently the <code>logging</code> module tries to import <code>cStringIO</code> directly, which no longer exists in Python 3. I've seen fixes which involve importing <code>StringIO</code> from <code>io</code> instead of <code>StringIO</code>, but I not sure they would work here. Any suggestions on how to get this script to work in Python 3?</p>
0
2016-10-19T14:33:30Z
40,134,620
<p>As pointed out in several comments, I had accidentally left a directory named <code>logging</code> in the same directory as the script, which is what the error message refers to. After removing that directory, I get a different error message,</p> <pre><code>Printing unpacked contents: Traceback (most recent call last): File "msgpack_checker.py", line 27, in &lt;module&gt; for unpacked in unpacker: File "msgpack/_unpacker.pyx", line 459, in msgpack._unpacker.Unpacker.__next__ (msgpack/_unpacker.cpp:459) File "msgpack/_unpacker.pyx", line 380, in msgpack._unpacker.Unpacker._unpack (msgpack/_unpacker.cpp:380) File "msgpack/_unpacker.pyx", line 370, in msgpack._unpacker.Unpacker.read_from_file (msgpack/_unpacker.cpp:370) TypeError: expected bytes, str found </code></pre> <p>but that is a separate issue; at least the import of <code>logging</code> now succeeds.</p>
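<p>A hedged note on that follow-up error, in case it is useful: msgpack unpacks bytes, so under Python 3 the log file most likely needs to be opened in binary mode. A minimal sketch, assuming the <code>my_log.log</code> produced by the script in the question:</p> <pre><code>import msgpack

# msgpack works on bytes, so open the log file in binary mode on Python 3
with open("my_log.log", "rb") as f:
    unpacker = msgpack.Unpacker(f)
    print("Printing unpacked contents:")
    for unpacked in unpacker:
        # depending on the msgpack version, string keys/values may come back as bytes
        print(unpacked)
</code></pre>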
0
2016-10-19T14:41:28Z
[ "python", "python-3.x" ]
Most efficient way to set value in column based on prefix of the index
40,134,453
<p>I have a dataframe like this:</p> <pre><code>df = pd.DataFrame(index=['pre1_xyz', 'pre1_foo', 'pre3_bar', 'pre3_foo', 'pre10_foo', 'pre10_bar', 'pre10_xyz']) </code></pre> <p>to which I want to add a column <code>values</code> whereby the value is determined based on the prefix of the index of the respective row using a function <code>return_something(pref)</code>. Right now I implement that as follows:</p> <pre><code>import pandas as pd import numpy as np # this just returns a random value for the sake of simplicity def return_something(pref): return np.random.choice(len(pref)+10) df = pd.DataFrame(index=['pre1_xyz', 'pre1_foo', 'pre3_bar', 'pre3_foo', 'pre10_foo', 'pre10_bar', 'pre10_xyz']) # get all the unique prefixes unique_pref = set([pi.partition('_')[0] for pi in df.index]) # determine the value for each prefix val_pref = {pref: return_something(pref) for pref in unique_pref} # add the values to the dataframe for prefi, vali in val_pref.items(): # determine all rows with the same prefix rows = [rowi for rowi in df.index if rowi.startswith(prefi+'_')] df.loc[rows, 'values'] = vali </code></pre> <p>That then gives me the desired output:</p> <pre><code> values pre1_xyz 0 pre1_foo 0 pre3_bar 7 pre3_foo 7 pre10_foo 13 pre10_bar 13 pre10_xyz 13 </code></pre> <p>Question is whether there is anything smarter than this e.g. a solution which avoids creating <code>unique_pref</code> and/or <code>val_pref</code> and/or makes use of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_value.html" rel="nofollow"><code>set_value</code></a> which seems to be the fastest solution to add values to a dataframe as discussed in <a href="http://stackoverflow.com/questions/13842088/set-value-for-particular-cell-in-pandas-dataframe">this question</a>.</p>
1
2016-10-19T14:34:33Z
40,134,970
<p>Because prefixes repeat, you want to separate out the prefix first so that you don't generate a new random number for the same prefix, which means duplicates have to be removed from the prefix list. A more condensed way to do this is to make a new column for the prefix and then use <code>df.prefix.unique()</code>:</p> <pre><code>df['prefix'] = [i.split('_')[0] for i in df.index] df['values'] = df.prefix.map(dict(zip(df.prefix.unique(),[return_something(i) for i in df.prefix.unique()]))) </code></pre>
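<p>For reference, a self-contained version of the same approach (the dict comprehension below is equivalent to the <code>dict(zip(...))</code> above, and <code>return_something</code> is the placeholder from the question):</p> <pre><code>import numpy as np
import pandas as pd

def return_something(pref):
    # stand-in for the real lookup; returns a random value as in the question
    return np.random.choice(len(pref) + 10)

df = pd.DataFrame(index=['pre1_xyz', 'pre1_foo', 'pre3_bar', 'pre3_foo',
                         'pre10_foo', 'pre10_bar', 'pre10_xyz'])

# split off the prefix once, then map every unique prefix to a single value
df['prefix'] = [i.split('_')[0] for i in df.index]
prefix_values = {p: return_something(p) for p in df.prefix.unique()}
df['values'] = df.prefix.map(prefix_values)
print(df)
</code></pre>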
3
2016-10-19T14:56:00Z
[ "python", "pandas", "optimization" ]
How to copy content of a numpy matrix to another?
40,134,604
<p>I have a simple question about the basics of <code>python</code> and the <code>numpy</code> module. I have a function like the following:</p> <pre><code>def update_x_last(self, x): self.x_last = x </code></pre> <p>The class attribute x_last and the function argument x are both initialized as type <code>numpy.matrix</code> and have the same shape (<code>x.shape = x_last.shape = (4,1)</code>).</p> <p>I have noticed that the code above does not copy the content of the argument <code>x</code> to <code>x_last</code>, but it makes the object <code>x_last</code> point to the address of <code>x</code>.</p> <p>However, what I want to do is the following:</p> <ul> <li>Don't change the address of <code>self.x_last</code></li> <li>Copy only the content of <code>x</code> to <code>self.x_last</code></li> </ul> <p>What is the best way to do this?</p> <p><strong>Edit:</strong> the requirement 'Don't change the address of <code>self.x_last</code>' was unimportant for me. The only required behaviour is the second requirement, to copy only the content.</p>
0
2016-10-19T14:40:51Z
40,134,704
<pre><code>import numpy as np self.x_last = np.copy(x) </code></pre>
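<p>A quick check, under the assumption that decoupling from the original is the goal, that the copy really is independent; note that <code>np.copy</code> returns a plain <code>ndarray</code> rather than a <code>matrix</code> (use <code>x.copy()</code> if the subclass matters):</p> <pre><code>import numpy as np

x = np.matrix([[1.0], [2.0], [3.0], [4.0]])
x_last = np.copy(x)    # independent copy of the contents (as an ndarray)

x[0, 0] = 99.0         # later changes to x ...
print(x_last[0, 0])    # ... do not affect x_last: prints 1.0
</code></pre>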
3
2016-10-19T14:44:41Z
[ "python", "numpy", "matrix" ]
How to copy content of a numpy matrix to another?
40,134,604
<p>I have a simple question about the basics of <code>python</code> and the <code>numpy</code> module. I have a function like the following:</p> <pre><code>def update_x_last(self, x): self.x_last = x </code></pre> <p>The class attribute x_last and the function argument x are both initialized as type <code>numpy.matrix</code> and have the same shape (<code>x.shape = x_last.shape = (4,1)</code>).</p> <p>I have noticed that the code above does not copy the content of the argument <code>x</code> to <code>x_last</code>, but it makes the object <code>x_last</code> point to the address of <code>x</code>.</p> <p>However, what I want to do is the following:</p> <ul> <li>Don't change the address of <code>self.x_last</code></li> <li>Copy only the content of <code>x</code> to <code>self.x_last</code></li> </ul> <p>What is the best way to do this?</p> <p><strong>Edit:</strong> the requirement 'Don't change the address of <code>self.x_last</code>' was unimportant for me. The only required behaviour is the second requirement, to copy only the content.</p>
0
2016-10-19T14:40:51Z
40,136,875
<p>If the shapes are the same, then any of these meet both of your requirements:</p> <pre><code>self.x_last[...] = x # or self.x_last[()] = x # or self.x_last[:] = x </code></pre> <p>I'd argue that the first one is probably most clear</p> <hr> <p>Let's take a look at your requirements quickly:</p> <blockquote> <p>Copy only the content of x to self.x_last</p> </blockquote> <p>Seems reasonable. This means if that if <code>x</code> continues to change, then <code>x_last</code> won't change with it</p> <blockquote> <p>Don't change the address of <code>self.x_last</code></p> </blockquote> <p>This doesn't buy you anything. IMO, this is actively worse, because functions using <code>x_last</code> in another thread will see it change underneath them unexpectedly, and worse still, could work with the data when it is incompletely copied from <code>x</code></p>
1
2016-10-19T16:25:07Z
[ "python", "numpy", "matrix" ]
Getting column values from multi index data frame pandas
40,134,637
<p>I have a multi index data frame shown below:</p> <pre><code> 1 2 panning sec panning sec None 5.0 None 0.0 None 6.0 None 1.0 Panning 7.0 None 2.0 None 8.0 Panning 3.0 None 9.0 None 4.0 Panning 10.0 None 5.0 </code></pre> <p>I am iterating over the rows and getting the index wherever there is a value 'panning' in the panning column by</p> <pre><code> ide=[] for index,row in dfs.iterrows(): if [row[:, 'Panning'][row[:, 'Panning'] == 'Panning']]: ide.append(row[:, 'Panning'][row[:, 'Panning'] == 'Panning'].index.tolist()) print ide </code></pre> <p>If I execute the above code I get the output </p> <pre><code>[[],[],[1],[2],[],[1]] </code></pre> <p>which represents the index where the value is panning</p> <p>Now, I also want to get the corresponding sec value also like, for example for row 3 for value panning I would like to get sec value 7.0 along with index 1. I would like O\P to be</p> <pre><code>[[],[],[1,7.0],[2,3.0],[],[1,10]] </code></pre> <p>Basically I need the O/P as combination of the index where the value is panning and the subsequent value in the seconds column.</p>
2
2016-10-19T14:41:57Z
40,135,270
<p><code>df.iterrows()</code> returns each row as a <code>Series</code>; if you want the original <code>index</code>, you need to access the <code>name</code> attribute of that <code>Series</code>, like so:</p> <pre><code>for index,row in df.iterrows(): print row.name </code></pre>
0
2016-10-19T15:07:20Z
[ "python", "pandas" ]
Getting column values from multi index data frame pandas
40,134,637
<p>I have a multi index data frame shown below:</p> <pre><code> 1 2 panning sec panning sec None 5.0 None 0.0 None 6.0 None 1.0 Panning 7.0 None 2.0 None 8.0 Panning 3.0 None 9.0 None 4.0 Panning 10.0 None 5.0 </code></pre> <p>I am iterating over the rows and getting the index wherever there is a value 'panning' in the panning column by</p> <pre><code> ide=[] for index,row in dfs.iterrows(): if [row[:, 'Panning'][row[:, 'Panning'] == 'Panning']]: ide.append(row[:, 'Panning'][row[:, 'Panning'] == 'Panning'].index.tolist()) print ide </code></pre> <p>If I execute the above code I get the output </p> <pre><code>[[],[],[1],[2],[],[1]] </code></pre> <p>which represents the index where the value is panning</p> <p>Now, I also want to get the corresponding sec value also like, for example for row 3 for value panning I would like to get sec value 7.0 along with index 1. I would like O\P to be</p> <pre><code>[[],[],[1,7.0],[2,3.0],[],[1,10]] </code></pre> <p>Basically I need the O/P as combination of the index where the value is panning and the subsequent value in the seconds column.</p>
2
2016-10-19T14:41:57Z
40,135,849
<p>consider the <code>pd.DataFrame</code> <code>df</code> in the setup reference below</p> <p><strong><em>method 1</em></strong> </p> <ul> <li><code>xs</code> for cross section</li> <li><code>any(1)</code> to check if any in row</li> </ul> <hr> <pre><code>df.loc[df.xs('Panning', axis=1, level=1).eq('Panning').any(1)] </code></pre> <p><a href="https://i.stack.imgur.com/61CAw.png" rel="nofollow"><img src="https://i.stack.imgur.com/61CAw.png" alt="enter image description here"></a></p> <p><strong><em>method 2</em></strong> </p> <ul> <li><code>stack</code></li> <li><code>query</code></li> <li><code>unstack</code></li> </ul> <hr> <pre><code>df.stack(0).query('Panning == "Panning"').stack().unstack([-2, -1]) </code></pre> <p><a href="https://i.stack.imgur.com/w9lYo.png" rel="nofollow"><img src="https://i.stack.imgur.com/w9lYo.png" alt="enter image description here"></a></p> <hr> <p>To return just the <code>sec</code> columns</p> <pre><code>df.xs('sec', axis=1, level=1)[df.xs('Panning', axis=1, level=1).eq('Panning').any(1)] </code></pre> <p><a href="https://i.stack.imgur.com/xUmiy.png" rel="nofollow"><img src="https://i.stack.imgur.com/xUmiy.png" alt="enter image description here"></a></p> <p><strong><em>setup</em></strong><br> Reference</p> <pre><code>from StringIO import StringIO import pandas as pd txt = """None 5.0 None 0.0 None 6.0 None 1.0 Panning 7.0 None 2.0 None 8.0 Panning 3.0 None 9.0 None 4.0 Panning 10.0 None 5.0""" df = pd.read_csv(StringIO(txt), delim_whitespace=True, header=None) df.columns = pd.MultiIndex.from_product([[1, 2], ['Panning', 'sec']]) df </code></pre> <p><a href="https://i.stack.imgur.com/6dowL.png" rel="nofollow"><img src="https://i.stack.imgur.com/6dowL.png" alt="enter image description here"></a></p>
3
2016-10-19T15:33:35Z
[ "python", "pandas" ]
problems dealing with pandas read csv
40,134,664
<p>I've got a problem with pandas read_csv. I had a many txt files that associate with stock market.It's like this:</p> <pre><code>SecCode,SecName,Tdate,Ttime,LastClose,OP,CP,Tq,Tm,Tt,Cq,Cm,Ct,HiP,LoP,SYL1,SYL2,Rf1,Rf2,bs,s5,s4,s3,s2,s1,b1,b2,b3,b4,b5,sv5,sv4,sv3,sv2,sv1,bv1,bv2,bv3,bv4,bv5,bsratio,spd,rpd,depth1,depth2 600000,浦发银行,20120104,091501,8.490,.000,.000,0,.000,0,0,.000,0,.000,.000,.000,.000,.000,.000, ,.000,.000,.000,.000,8.600,8.600,.000,.000,.000,.000,0,0,0,0,1100,1100,38900,0,0,0,.00,.000,.00,.00,.00 600000,浦发银行,20120104,091506,8.490,.000,.000,0,.000,0,0,.000,0,.000,.000,.000,.000,.000,.000, ,.000,.000,.000,.000,8.520,8.520,.000,.000,.000,.000,0,0,0,0,56795,56795,33605,0,0,0,.00,.000,.00,.00,.00 600000,浦发银行,20120104,091511,8.490,.000,.000,0,.000,0,0,.000,0,.000,.000,.000,.000,.000,.000, ,.000,.000,.000,.000,8.520,8.520,.000,.000,.000,.000,0,0,0,0,56795,56795,34605,0,0,0,.00,.000,.00,.00,.00 600000,浦发银行,20120104,091551,8.490,.000,.000,0,.000,0,0,.000,0,.000,.000,.000,.000,.000,.000, ,.000,.000,.000,.000,8.520,8.520,.000,.000,.000,.000,0,0,0,0,56795,56795,35205,0,0,0,.00,.000,.00,.00,.00 600000,浦发银行,20120104,091621,8.490,.000,.000,0,.000,0,0,.000,0,.000,.000,.000,.000,.000,.000, ,.000,.000,.000,.000,8.520,8.520,.000,.000,.000,.000,0,0,0,0,57795,57795,34205,0,0,0,.00,.000,.00,.00,.00 </code></pre> <p>while I use this code to read it :</p> <pre><code>fields = ['SecCode', 'Tdate','Ttime','LastClose','OP','CP','Rf1','Rf2'] df = pd.read_csv('SHL1_TAQ_600000_201201.txt',usecols=fields) </code></pre> <p>But I got a problem: </p> <pre><code>Traceback (most recent call last): File "E:/workspace/Senti/highlevel/highlevel.py", line 8, in &lt;module&gt; df = pd.read_csv('SHL1_TAQ_600000_201201.txt',usecols=fields,header=1) File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 562, in parser_f return _read(filepath_or_buffer, kwds) File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 315, in _read parser = TextFileReader(filepath_or_buffer, **kwds) File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 645, in __init__ self._make_engine(self.engine) File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 799, in _make_engine self._engine = CParserWrapper(self.f, **self.options) File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 1257, in __init__ raise ValueError("Usecols do not match names.") ValueError: Usecols do not match names. </code></pre> <p>I can't find any problem similar to mine.And also it's wired when I copy the txt file into another one ,the code runs well,but the original one cause the above problem.How can I solve it ?</p>
5
2016-10-19T14:43:04Z
40,136,108
<p>Use <code>names</code> instead of <code>usecols</code> when specifying the parameter.</p>
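<p>A hedged sketch of what that could look like for the file in the question: pass the full header explicitly via <code>names</code> (with <code>header=0</code> so the file's own header row is replaced) and then select with <code>usecols</code>. The column list is copied from the file's first line; whether this avoids the original error depends on why the header in the file did not match in the first place (for example a BOM or stray whitespace).</p> <pre><code>import pandas as pd

# full header, copied from the first line of the file
all_cols = ['SecCode', 'SecName', 'Tdate', 'Ttime', 'LastClose', 'OP', 'CP', 'Tq', 'Tm', 'Tt',
            'Cq', 'Cm', 'Ct', 'HiP', 'LoP', 'SYL1', 'SYL2', 'Rf1', 'Rf2', 'bs',
            's5', 's4', 's3', 's2', 's1', 'b1', 'b2', 'b3', 'b4', 'b5',
            'sv5', 'sv4', 'sv3', 'sv2', 'sv1', 'bv1', 'bv2', 'bv3', 'bv4', 'bv5',
            'bsratio', 'spd', 'rpd', 'depth1', 'depth2']
fields = ['SecCode', 'Tdate', 'Ttime', 'LastClose', 'OP', 'CP', 'Rf1', 'Rf2']

# header=0 discards the file's header row and uses `names` instead, so
# `usecols` is matched against names we control rather than the raw header text
df = pd.read_csv('SHL1_TAQ_600000_201201.txt', header=0, names=all_cols, usecols=fields)
</code></pre>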
0
2016-10-19T15:45:53Z
[ "python", "pandas" ]
problems dealing with pandas read csv
40,134,664
<p>I've got a problem with pandas read_csv. I had a many txt files that associate with stock market.It's like this:</p> <pre><code>SecCode,SecName,Tdate,Ttime,LastClose,OP,CP,Tq,Tm,Tt,Cq,Cm,Ct,HiP,LoP,SYL1,SYL2,Rf1,Rf2,bs,s5,s4,s3,s2,s1,b1,b2,b3,b4,b5,sv5,sv4,sv3,sv2,sv1,bv1,bv2,bv3,bv4,bv5,bsratio,spd,rpd,depth1,depth2 600000,浦发银行,20120104,091501,8.490,.000,.000,0,.000,0,0,.000,0,.000,.000,.000,.000,.000,.000, ,.000,.000,.000,.000,8.600,8.600,.000,.000,.000,.000,0,0,0,0,1100,1100,38900,0,0,0,.00,.000,.00,.00,.00 600000,浦发银行,20120104,091506,8.490,.000,.000,0,.000,0,0,.000,0,.000,.000,.000,.000,.000,.000, ,.000,.000,.000,.000,8.520,8.520,.000,.000,.000,.000,0,0,0,0,56795,56795,33605,0,0,0,.00,.000,.00,.00,.00 600000,浦发银行,20120104,091511,8.490,.000,.000,0,.000,0,0,.000,0,.000,.000,.000,.000,.000,.000, ,.000,.000,.000,.000,8.520,8.520,.000,.000,.000,.000,0,0,0,0,56795,56795,34605,0,0,0,.00,.000,.00,.00,.00 600000,浦发银行,20120104,091551,8.490,.000,.000,0,.000,0,0,.000,0,.000,.000,.000,.000,.000,.000, ,.000,.000,.000,.000,8.520,8.520,.000,.000,.000,.000,0,0,0,0,56795,56795,35205,0,0,0,.00,.000,.00,.00,.00 600000,浦发银行,20120104,091621,8.490,.000,.000,0,.000,0,0,.000,0,.000,.000,.000,.000,.000,.000, ,.000,.000,.000,.000,8.520,8.520,.000,.000,.000,.000,0,0,0,0,57795,57795,34205,0,0,0,.00,.000,.00,.00,.00 </code></pre> <p>while I use this code to read it :</p> <pre><code>fields = ['SecCode', 'Tdate','Ttime','LastClose','OP','CP','Rf1','Rf2'] df = pd.read_csv('SHL1_TAQ_600000_201201.txt',usecols=fields) </code></pre> <p>But I got a problem: </p> <pre><code>Traceback (most recent call last): File "E:/workspace/Senti/highlevel/highlevel.py", line 8, in &lt;module&gt; df = pd.read_csv('SHL1_TAQ_600000_201201.txt',usecols=fields,header=1) File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 562, in parser_f return _read(filepath_or_buffer, kwds) File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 315, in _read parser = TextFileReader(filepath_or_buffer, **kwds) File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 645, in __init__ self._make_engine(self.engine) File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 799, in _make_engine self._engine = CParserWrapper(self.f, **self.options) File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 1257, in __init__ raise ValueError("Usecols do not match names.") ValueError: Usecols do not match names. </code></pre> <p>I can't find any problem similar to mine.And also it's wired when I copy the txt file into another one ,the code runs well,but the original one cause the above problem.How can I solve it ?</p>
5
2016-10-19T14:43:04Z
40,138,275
<p>In your message, you said that you're running:</p> <pre><code>df = pd.read_csv('SHL1_TAQ_600000_201201.txt',usecols=fields) </code></pre> <p>which did not throw an error for me or for @Anil_M. But your traceback shows that the command actually used is a different one:</p> <pre><code>df = pd.read_csv('SHL1_TAQ_600000_201201.txt',usecols=fields, header=1) </code></pre> <p>which includes <code>header=1</code>, and that is what throws the error mentioned.</p> <p>So I would guess that the error comes from some confusion in your code.</p>
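<p>To make the difference concrete, a short hedged sketch (assuming the file's first line is the header shown in the question):</p> <pre><code>import pandas as pd

fields = ['SecCode', 'Tdate', 'Ttime', 'LastClose', 'OP', 'CP', 'Rf1', 'Rf2']

# header=0 (the default) takes the column names from the file's first line,
# so the names in usecols can be matched against them
df = pd.read_csv('SHL1_TAQ_600000_201201.txt', usecols=fields)

# header=1 would instead treat the *second* line (a data row) as the header,
# so none of the names in usecols would match and pandas raises
# "ValueError: Usecols do not match names."
</code></pre>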
1
2016-10-19T17:47:52Z
[ "python", "pandas" ]
Python/Flask: UnicodeDecodeError/ UnicodeEncodeError: 'ascii' codec can't decode/encode
40,134,690
<p>Sorry for the millionth question about this, but I've read so much about the topic and still don't get this error fixed (newbie to all of this). I'm trying to display the content of a postgres table on a website with flask (using Ubuntu 16.04/python 2.7.12). There are non-ascii characters in the table ('ü' in this case) and the result is a UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in range(128).</p> <p>This is what my <strong>init</strong>.py looks like:</p> <pre><code> #-*- coding: utf-8 -*- from flask import Blueprint, render_template import psycopg2 from .forms import Form from datetime import datetime from .table import Item, ItemTable test = Blueprint('test', __name__) def init_test(app): app.register_blueprint(test) def createTable(cur): cmd = "select * from table1 order by start desc;" cur.execute(cmd) queryResult = cur.fetchall() items = [] table = 'table could not be read' if queryResult is not None: for row in range(0, len(queryResult)): items.append(Item(queryResult[row][0], queryResult[row][1].strftime("%d.%m.%Y"), queryResult[row][2].strftime("%d.%m.%Y"), \ queryResult[row][1].strftime("%H:%M"), queryResult[row][2].strftime("%H:%M"), \ queryResult[row][3], queryResult[row][4], queryResult[row][5], queryResult[row][6])) table = ItemTable(items) return table @test.route('/test') def index(): dbcon = psycopg2.connect("dbname=testdb user=postgres host=localhost") cur = dbcon.cursor() table = createTable(cur) cur.close() return render_template('test_index.html', table=table) </code></pre> <p>And part of the html-file:</p> <pre><code>{% extends "layout.html" %} {% block head %}Title{% endblock %} {% block body %} &lt;script type="text/javascript" src="{{ url_for('static', filename='js/bootstrap.js') }}"&gt;&lt;/script&gt; &lt;link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='css/custom.css') }}"&gt; &lt;div class="row" id="testid"&gt; {{table}} &lt;/div&gt; {% endblock %}{# Local Variables: coding: utf-8 End: #} </code></pre> <p>The problem is in queryResult[row][6] which is the only row in the table with strings, the rest is integers. The encoding of the postgres database is utf-8. The type of queryResult[row][6] returns type 'str'. What I read <a href="http://initd.org/psycopg/docs/usage.html#unicode-handling" rel="nofollow">here</a> is that the string should be encoded in utf-8, as that is the encoding of the database client. Well, that doesn't seem to work!? Then I added the line</p> <pre><code>psycopg2.extensions.register_type(psycopg2.extensions.UNICODE) </code></pre> <p>to force the result to be unicode (type of queryResult[row][6] returned type 'unicode'), because as was recommended <a href="http://stackoverflow.com/questions/5120302/avoiding-python-unicodedecodeerror-in-jinjas-nl2br-filter">here</a>, I tried to stick to unicode everywhere. Well that resulted in a UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 2: ordinal not in range(128). Then I thought, maybe something went wrong with converting to string (bytes) before and I tried to do it myself then with writing</p> <pre><code>queryResult[row][6].encode('utf-8', 'replace') </code></pre> <p>which led to an UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in range(128). Didn't even work with 'ignore' instead of 'replace'. What is going on here? 
I checked if the render_template() has a problem with unicode by creating and passing a variable v=u'ü', but that was no problem and was displayed correctly. Yeah, I read the usual recommended stuff like nedbatchelder.com/text/unipain.html and Unicode Demystified, but that didn't help me solve my problem here, I'm obviously missing something.</p> <p>Here is a traceback of the first UnicodeDecodeError:</p> <pre><code>File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 2000, in __call__ return self.wsgi_app(environ, start_response) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1991, in wsgi_app response = self.make_response(self.handle_exception(e)) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1567, in handle_exception reraise(exc_type, exc_value, tb) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1988, in wsgi_app response = self.full_dispatch_request() File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1641, in full_dispatch_request rv = self.handle_user_exception(e) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1544, in handle_user_exception reraise(exc_type, exc_value, tb) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1639, in full_dispatch_request rv = self.dispatch_request() File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1625, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/home/name/Desktop/testFlask/app/test/__init__.py", line 95, in index return render_template('test_index.html', table=table) #, var=var File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/templating.py", line 134, in render_template context, ctx.app) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/templating.py", line 116, in _render rv = template.render(context) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/jinja2/environment.py", line 989, in render return self.environment.handle_exception(exc_info, True) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/jinja2/environment.py", line 754, in handle_exception reraise(exc_type, exc_value, tb) File "/home/name/Desktop/testFlask/app/templates/test_index.html", line 1, in top-level template code {% extends "layout.html" %} File "/home/name/Desktop/testFlask/app/templates/layout.html", line 40, in top-level template code {% block body %}{% endblock %} File "/home/name/Desktop/testFlask/app/templates/test_index.html", line 7, in block "body" {{table}} File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/table.py", line 86, in __html__ tbody = self.tbody() File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/table.py", line 103, in tbody out = [self.tr(item) for item in self.items] File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/table.py", line 120, in tr ''.join(c.td(item, attr) for attr, c in self._cols.items() File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/table.py", line 121, in &lt;genexpr&gt; if c.show)) File "/home/name/Desktop/testFlask/app/test/table.py", line 7, in td self.td_contents(item, self.get_attr_list(attr))) File 
"/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/columns.py", line 99, in td_contents return self.td_format(self.from_attr_list(item, attr_list)) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/columns.py", line 114, in td_format return Markup.escape(content) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/markupsafe/__init__.py", line 165, in escape rv = escape(s) </code></pre> <p>Any help is greatly appreciated...</p>
1
2016-10-19T14:44:06Z
40,134,981
<p>Since Python 2 does not enforce the distinction between byte strings and unicode strings, it is easy to get confused by them. Encoding and decoding convert, as far as I know, between unicode strings and byte strings and back. So if your result set is already a byte string, there should be no need to encode it again. If you get wrong representations for special characters like "§", I would try something like this:</p> <p><code>repr(queryResult[row][6])</code></p> <p>Does that work?</p>
0
2016-10-19T14:56:24Z
[ "python", "unicode", "utf-8", "flask", "psycopg2" ]
Python/Flask: UnicodeDecodeError/ UnicodeEncodeError: 'ascii' codec can't decode/encode
40,134,690
<p>Sorry for the millionth question about this, but I've read so much about the topic and still don't get this error fixed (newbie to all of this). I'm trying to display the content of a postgres table on a website with flask (using Ubuntu 16.04/python 2.7.12). There are non-ascii characters in the table ('ü' in this case) and the result is a UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in range(128).</p> <p>This is what my <strong>init</strong>.py looks like:</p> <pre><code> #-*- coding: utf-8 -*- from flask import Blueprint, render_template import psycopg2 from .forms import Form from datetime import datetime from .table import Item, ItemTable test = Blueprint('test', __name__) def init_test(app): app.register_blueprint(test) def createTable(cur): cmd = "select * from table1 order by start desc;" cur.execute(cmd) queryResult = cur.fetchall() items = [] table = 'table could not be read' if queryResult is not None: for row in range(0, len(queryResult)): items.append(Item(queryResult[row][0], queryResult[row][1].strftime("%d.%m.%Y"), queryResult[row][2].strftime("%d.%m.%Y"), \ queryResult[row][1].strftime("%H:%M"), queryResult[row][2].strftime("%H:%M"), \ queryResult[row][3], queryResult[row][4], queryResult[row][5], queryResult[row][6])) table = ItemTable(items) return table @test.route('/test') def index(): dbcon = psycopg2.connect("dbname=testdb user=postgres host=localhost") cur = dbcon.cursor() table = createTable(cur) cur.close() return render_template('test_index.html', table=table) </code></pre> <p>And part of the html-file:</p> <pre><code>{% extends "layout.html" %} {% block head %}Title{% endblock %} {% block body %} &lt;script type="text/javascript" src="{{ url_for('static', filename='js/bootstrap.js') }}"&gt;&lt;/script&gt; &lt;link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='css/custom.css') }}"&gt; &lt;div class="row" id="testid"&gt; {{table}} &lt;/div&gt; {% endblock %}{# Local Variables: coding: utf-8 End: #} </code></pre> <p>The problem is in queryResult[row][6] which is the only row in the table with strings, the rest is integers. The encoding of the postgres database is utf-8. The type of queryResult[row][6] returns type 'str'. What I read <a href="http://initd.org/psycopg/docs/usage.html#unicode-handling" rel="nofollow">here</a> is that the string should be encoded in utf-8, as that is the encoding of the database client. Well, that doesn't seem to work!? Then I added the line</p> <pre><code>psycopg2.extensions.register_type(psycopg2.extensions.UNICODE) </code></pre> <p>to force the result to be unicode (type of queryResult[row][6] returned type 'unicode'), because as was recommended <a href="http://stackoverflow.com/questions/5120302/avoiding-python-unicodedecodeerror-in-jinjas-nl2br-filter">here</a>, I tried to stick to unicode everywhere. Well that resulted in a UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 2: ordinal not in range(128). Then I thought, maybe something went wrong with converting to string (bytes) before and I tried to do it myself then with writing</p> <pre><code>queryResult[row][6].encode('utf-8', 'replace') </code></pre> <p>which led to an UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in range(128). Didn't even work with 'ignore' instead of 'replace'. What is going on here? 
I checked if the render_template() has a problem with unicode by creating and passing a variable v=u'ü', but that was no problem and was displayed correctly. Yeah, I read the usual recommended stuff like nedbatchelder.com/text/unipain.html and Unicode Demystified, but that didn't help me solve my problem here, I'm obviously missing something.</p> <p>Here is a traceback of the first UnicodeDecodeError:</p> <pre><code>File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 2000, in __call__ return self.wsgi_app(environ, start_response) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1991, in wsgi_app response = self.make_response(self.handle_exception(e)) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1567, in handle_exception reraise(exc_type, exc_value, tb) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1988, in wsgi_app response = self.full_dispatch_request() File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1641, in full_dispatch_request rv = self.handle_user_exception(e) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1544, in handle_user_exception reraise(exc_type, exc_value, tb) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1639, in full_dispatch_request rv = self.dispatch_request() File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1625, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/home/name/Desktop/testFlask/app/test/__init__.py", line 95, in index return render_template('test_index.html', table=table) #, var=var File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/templating.py", line 134, in render_template context, ctx.app) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/templating.py", line 116, in _render rv = template.render(context) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/jinja2/environment.py", line 989, in render return self.environment.handle_exception(exc_info, True) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/jinja2/environment.py", line 754, in handle_exception reraise(exc_type, exc_value, tb) File "/home/name/Desktop/testFlask/app/templates/test_index.html", line 1, in top-level template code {% extends "layout.html" %} File "/home/name/Desktop/testFlask/app/templates/layout.html", line 40, in top-level template code {% block body %}{% endblock %} File "/home/name/Desktop/testFlask/app/templates/test_index.html", line 7, in block "body" {{table}} File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/table.py", line 86, in __html__ tbody = self.tbody() File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/table.py", line 103, in tbody out = [self.tr(item) for item in self.items] File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/table.py", line 120, in tr ''.join(c.td(item, attr) for attr, c in self._cols.items() File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/table.py", line 121, in &lt;genexpr&gt; if c.show)) File "/home/name/Desktop/testFlask/app/test/table.py", line 7, in td self.td_contents(item, self.get_attr_list(attr))) File 
"/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/columns.py", line 99, in td_contents return self.td_format(self.from_attr_list(item, attr_list)) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/columns.py", line 114, in td_format return Markup.escape(content) File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/markupsafe/__init__.py", line 165, in escape rv = escape(s) </code></pre> <p>Any help is greatly appreciated...</p>
1
2016-10-19T14:44:06Z
40,135,563
<p>See: <a href="https://wiki.python.org/moin/UnicodeEncodeError" rel="nofollow">https://wiki.python.org/moin/UnicodeEncodeError</a></p> <blockquote> <p>The encoding of the postgres database is utf-8. The type of queryResult[row][6] returns type 'str'. </p> </blockquote> <p>You've got it right so far. Remember, in Python 2.7, a <code>str</code> is a string of bytes. So you've got a string of bytes from the database, that probably looks like <code>'gl\xc3\xbce'</code> (<code>'glüe'</code>).</p> <p>What happens next is that some part of the program is calling <code>.decode</code> on your string, but using the default 'ascii' codec. It's probably some part of the Item() API that needs the string as a unicode object, or maybe Flask itself. Either way, you need to call <code>.decode</code> yourself on your string, since you know that it's actually in utf-8:</p> <pre><code>col_6 = queryResult[row][6].decode('utf-8') Item(..., ..., col_6, ...) </code></pre> <p>Then you will provide all the downstream APIs with a <code>unicode</code> which is apparently what they want.</p> <p>The way I remember it is this: Unicode is a an abstraction, where everything is represented as "code points". If we want to create real bytes that we can print on a screen or send as an HTML file, we need to ENcode to bytes. If you have some bytes, they could mean any letters, who knows? You need to DEcode the mysterious bytes in order to get Unicode.</p> <p>Hope this helps.</p>
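<p>A quick illustration of the decode step in a Python 2 shell, using the example byte string from this answer:</p> <pre><code># Python 2.7
raw = 'gl\xc3\xbce'           # utf-8 encoded bytes as returned by the database driver
text = raw.decode('utf-8')    # u'gl\xfce', i.e. the unicode string with the u-umlaut
print type(raw), type(text)   # str and unicode respectively
</code></pre>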
0
2016-10-19T15:20:32Z
[ "python", "unicode", "utf-8", "flask", "psycopg2" ]
Python Global Variable Not Defined - Declared inside Class
40,134,743
<p>I've seen a lot of questions on global variables, but for some reason I still can't get mine to work.</p> <p>Here is my scenario - I have my individual test cases and a separate python script that includes different functions for the various error messages you can get in the application I'm testing. If one of the validations fails, I want the function to increment a failure variable and then the main test script will check to see if it's a pass or fail.</p> <pre><code>class ErrorValidations: failures = 0 def CheckforError1(driver): global failures try: if error1.is_displayed(): failures += 1 def CheckforError2(driver): global failures try: if error2.is_displayed(): failures += 1 def CheckforError3(driver): global failures try: if error3.is_displayed(): failures += 1 </code></pre> <p>This is a heavily edited example of where the validations get used:</p> <pre><code>from functionslist import ErrorValidations def test(driver, browser, test_state): _modules = driver.find_elements_by_xpath('//div[@class="navlink"]') for i in _modules: i.click() ErrorValidations.CheckforError1(driver) ErrorValidations.CheckforError2(driver) ErrorValidations.CheckforError3(driver) if ErrorValidations.failures &gt; 0: driver.report.AppendToReport( i.text, "The " + i.text + "page was not able to load without errors.", "fail", "") else: driver.report.AppendToReport( i.text, "The " + i.text + "page was able to load without errors.", "pass", "") </code></pre> <p>The test is not properly incrementing the failures variable and I get the error: name 'failures' is not defined, but I'm not sure where else to define it.</p>
1
2016-10-19T14:46:18Z
40,134,932
<p>You're declaring a class attribute 'failures', not a global, within the ErrorValidations class.</p> <p>Instead of using <code>global failures</code>, try:</p> <pre><code>class ErrorValidations: failures = 0 def CheckforError1(driver): try: if error1.is_displayed(): ErrorValidations.failures += 1 except Exception: pass </code></pre> <p>A true global would be declared outside of the class.</p>
1
2016-10-19T14:54:40Z
[ "python", "selenium", "testing", "qa" ]
Making a quiver plot from .dat files
40,134,745
<p>Hi I am trying to make a quiver (vector field) plot from data that is stored in .dat files. I have 4 .dat files which are 1D arrays, one for the x axis, y axis, f(x,y) along x and f(x,y) along y.</p> <p>Note, I am able to construct a quiver plot without importing data from .dat files, I just followed this basic example <a href="http://www.scipy-lectures.org/intro/matplotlib/auto_examples/plot_quiver_ex.html" rel="nofollow">here</a>. </p> <p>However, I am unable to apply this basic example to my example in which I need to import the data from .dat files. My code is below, I am not getting any error messages but I am getting a blank quiver plot. Any help/suggestions would be greatly appreciated, thanks!</p> <pre><code>import numpy as np import matplotlib.pyplot as plt n=12 data0 = np.genfromtxt('xaxis.dat') data1 = np.genfromtxt('yaxis.dat') data2 = np.genfromtxt('fx.dat') data3 = np.genfromtxt('fy.dat') x = data0[0] y = data1[0] fx = data2[0] fy = data3[0] plt.axes([0.025, 0.025, 0.95, 0.95]) plt.quiver(x,y,fx,fy, alpha=.5) plt.quiver(x,y,fx,fy,edgecolor='k',facecolor='none', linewidth=.5) plt.xlim(-1,n) plt.xticks(()) plt.ylim(-1,n) plt.yticks(()) plt.show() </code></pre>
1
2016-10-19T14:46:33Z
40,135,501
<p>In the <a href="http://www.scipy-lectures.org/intro/matplotlib/auto_examples/plot_quiver_ex.html" rel="nofollow">example for the quiver plot</a> you provided all <code>X</code>, <code>Y</code>, <code>U</code> and <code>V</code> are 2D arrays, with shape <code>(n,n)</code>.</p> <p>In your example you are importing an array of values for <code>x</code>, <code>y</code>, <code>fx</code> and <code>fy</code>, and then selecting only the first line with <code>[0]</code>.</p> <p>When using the code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt n=3 # number of points, changed it data0 = np.genfromtxt('xaxis.dat') data1 = np.genfromtxt('yaxis.dat') data2 = np.genfromtxt('fx.dat') data3 = np.genfromtxt('fy.dat') x = data0[0] y = data1[0] fx = data2[0] fy = data3[0] plt.axes([0.025, 0.025, 0.95, 0.95]) # position of bottom left point of graph inside window and its size plt.quiver(x,y,fx,fy, alpha=.5) # draw inside of arrows, half transparent plt.quiver(x,y,fx,fy,edgecolor='k',facecolor='none', linewidth=.5) # draw contours of arrows plt.xlim(-1,n) # left and right most values in the x axis plt.xticks(()) # remove the numbers from the x axis plt.ylim(-1,n) # ... plt.yticks(()) # ... plt.show() </code></pre> <p>I get: <a href="https://i.stack.imgur.com/W9HoF.png" rel="nofollow"><img src="https://i.stack.imgur.com/W9HoF.png" alt="only one point"></a> With <code>0 1 2 0 1 2 0 1 2</code> in xaxis.dat and fx.dat, <code>0 0 0 1 1 1 2 2 2</code> in yaxis.dat and <code>1 1 1 2 2 2 3 3 3</code> in fy.dat. If I just remove the <code>[0]</code> from the arrays assignment, I get: <a href="https://i.stack.imgur.com/Vcbvv.png" rel="nofollow"><img src="https://i.stack.imgur.com/Vcbvv.png" alt="all points"></a> with all points shown.</p> <p>One change I would make is to use <code>plt.xlim(min(x)-1,max(x)+1)</code> and <code>plt.ylim(min(y)-1,max(y)+1)</code>, to ensure you get to view the right area of the graph. For instance, if I make all four arrays equal to <code>np.random.rand(10)</code> (a 1D array with 10 random elements between 0 and 1), I get: <a href="https://i.stack.imgur.com/NQkAv.png" rel="nofollow"><img src="https://i.stack.imgur.com/NQkAv.png" alt="random points"></a></p> <h2>Notes on array format</h2> <p>The <code>plt.quiver</code> will also accept the arrays in the format:</p> <pre><code>x = [0, 1, 2] # 1D array (list, actually...) 
y = [0, 1, 2] fx = [[0, 1, 2], [0, 1, 2], [0, 1, 2]] # 2D array fy = [[0, 0, 0], [1, 1, 1], [2, 2, 2]] </code></pre> <p><a href="https://i.stack.imgur.com/q95SE.png" rel="nofollow"><img src="https://i.stack.imgur.com/q95SE.png" alt="enter image description here"></a> But not if all arrays are 1D:</p> <pre><code>fx = np.array(fx).flatten() fy = np.array(fy).flatten() </code></pre> <p><a href="https://i.stack.imgur.com/yiqCj.png" rel="nofollow"><img src="https://i.stack.imgur.com/yiqCj.png" alt="enter image description here"></a></p> <h2>Previous answer (wrong)</h2> <p>[first two paragraphs]...</p> <p>This means you probably noticed <code>genfromtxt</code> returns a 2D array (as it is able to import several columns from a single file, so the returned array will mimic the 2D structure of your file if nothing else is told), making <code>data0[0]</code> the first line on your document xaxis.dat.</p> <p><strong>EDIT:</strong> the sentence below is erroneous, <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.quiver" rel="nofollow">plt.quiver</a> can receive 1D arrays, just in the right shape.</p> <p>However the <code>quiver</code> expects 2D arrays, from where it will retrieve the values for each point: for point <code>i,j</code> the position will be <code>(X[i,j], Y[i,j])</code> and the arrow will be <code>(U[i,j], V[i,j])</code>.</p> <p>If you have the repeated values for x and y in the file like this:</p> <ul> <li><p>xaxis.dat:</p> <p>0, 1, 2, 0, 1, 2, 0, 1, 2</p></li> <li><p>yaxix.dat:</p> <p>0, 0, 0, 1, 1, 1, 2, 2, 2</p></li> </ul> <p>You can just <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html" rel="nofollow">reshape</a> all four of your arrays to (# points in x, # points in y) and it should work out.</p> <p>If you don't you will have to use something similar to <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.mgrid.html" rel="nofollow"><code>np.mgrid</code></a> (or <a href="http://louistiao.me/posts/numpy-mgrid-vs-meshgrid/" rel="nofollow"><code>np.meshgrid</code></a>) to make a valid combination of <code>X</code> and <code>Y</code> arrays, and format <code>fx</code> and <code>fy</code> accordingly.</p>
1
2016-10-19T15:17:22Z
[ "python", "matplotlib", "plot" ]
Identify drive letter of USB composite device using Python
40,134,760
<p>I have a USB composite device that has an SD card. Using Python, I need a way to find the drive letter of the SD card when the device is connected. Does anyone have experience with this? Initially it needs to work in Windows, but I'll eventually need to port it to Mac and Linux.</p>
-1
2016-10-19T14:47:27Z
40,137,725
<p>I don't have an SD card attached to a USB port. To get you started, you could <em>try</em> this on Windows. Install <a href="http://timgolden.me.uk/python/wmi/index.html" rel="nofollow">Golden's WMI</a>. I found that the Windows .zip wouldn't install but the pip version works fine, or at least it does on Win7. Then you can list logical disks with code like this.</p> <pre><code>&gt;&gt;&gt; import wmi &gt;&gt;&gt; c=wmi.WMI() ... &gt;&gt;&gt; for disk in c.Win32_LogicalDisk(): ... print(disk) </code></pre> <p>This code provided a listing that included mention of a NAS which is why I have hopes for your SD card. Various refinements are possible.</p>
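<p>If it helps as a starting point, a hedged sketch of narrowing that listing down to removable drives; DriveType 2 is the WMI code for a removable disk and DeviceID holds the drive letter, but I have not been able to test this against an actual SD card reader:</p> <pre><code>import wmi

c = wmi.WMI()
# DriveType 2 = removable disk; DeviceID is the drive letter, e.g. "E:"
removable = [d.DeviceID for d in c.Win32_LogicalDisk() if d.DriveType == 2]
print(removable)
</code></pre>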
0
2016-10-19T17:16:04Z
[ "python" ]
Convert .fbx to .obj with Python FBX SDK
40,134,800
<p>I have a ten frame .fbx file of an animal walking. This file includes a rigged model with textures, but I am only interested in the mesh of the model at each frame. </p> <p>How can I use Python FBX SDK or Python Blender SDK to export each frame of the fbx file into an obj file?</p> <p>Am I approaching this the wrong way? Should I try to find a way to do this manually in Maya/Blender first?</p>
1
2016-10-19T14:49:20Z
40,135,741
<p>Here is an example of converting an .fbx file to .obj with the Python FBX SDK:</p> <pre><code>import fbx # Create an SDK manager manager = fbx.FbxManager.Create() # Create a scene scene = fbx.FbxScene.Create(manager, "") # Create an importer object importer = fbx.FbxImporter.Create(manager, "") # Path to the source .fbx file milfalcon = "samples/millenium-falcon/millenium-falcon.fbx" # Specify the path and name of the file to be imported importstat = importer.Initialize(milfalcon, -1) importstat = importer.Import(scene) # Create an exporter object exporter = fbx.FbxExporter.Create(manager, "") save_path = "samples/millenium-falcon/millenium-falcon.obj" # Specify the path and name of the file to be exported exportstat = exporter.Initialize(save_path, -1) exportstat = exporter.Export(scene) </code></pre>
2
2016-10-19T15:28:07Z
[ "python", "blender", "maya", ".obj", "fbx" ]
cast numpy array into memmap
40,134,810
<p>I generate some data in my memory and I want to cast it into numpy.memmap to save up RAM. What should I do? my data is in:</p> <pre><code> X_list_total_standardized=np.array(X_list_total_standardized) </code></pre> <p>I know that I could initialize an empty numpy.memmap:</p> <pre><code>X_list_total_standardized_memmap=np.memmap(self._prepared_data_location_npmemmap_X,dtype='float32',mode='w+') </code></pre> <p>What is the most convenient way to store X_list_total_standardized into the memmap? Thank you</p> <p>PS: would the following command be ok?</p> <pre><code> X_list_total_standardized_memmap[:]=X_list_total_standardized[:] </code></pre>
0
2016-10-19T14:49:32Z
40,136,066
<p>I found the following example in the numpy documentation:</p> <pre><code>data = np.arange(12, dtype='float32') data.resize((3,4)) fp = np.memmap(filename, dtype='float32', mode='w+', shape=(3,4)) fp[:] = data[:] </code></pre> <p>So your last command is OK.</p>
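<p>Applied to the situation in the question, a minimal sketch; the array and file path below are placeholders. Two details worth noting: <code>shape</code> is required when creating a memmap with <code>mode='w+'</code>, and <code>flush()</code> makes sure the data actually ends up on disk.</p> <pre><code>import numpy as np

# stand-ins for the array and the target path from the question
X_list_total_standardized = np.random.rand(1000, 20).astype('float32')
path = 'X_standardized.float32.memmap'

# shape is required for mode='w+'; make it match the in-memory array
mm = np.memmap(path, dtype='float32', mode='w+', shape=X_list_total_standardized.shape)
mm[:] = X_list_total_standardized[:]   # copy the contents into the mapped file
mm.flush()                             # push the data out to disk
</code></pre>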
1
2016-10-19T15:44:09Z
[ "python", "arrays", "numpy", "numpy-memmap" ]
Issues with try/except, attempting to convert strings to integers in pandas data frame where possible
40,134,811
<p>I made a function to clean up any HTML code/tags from strings in my dataframe. The function takes every value from the data frame, cleans it with the remove_html function, and returns a clean df. After converting the data frame to string values and cleaning it up I'm attempting to convert where possible the values in the data frame back to integers. I have tried try/except but don't get the result that I want. This is what I have at the moment:</p> <pre><code>def clean_df(df): df = df.astype(str) list_of_columns = list(df.columns) for col in list_of_columns: column = [] for row in list(df[col]): column.append(remove_html(row)) try: return int(row) except ValueError: pass del df[col] df[col] = column return df </code></pre> <p>Without the try/except statements the function returns a clean df where the integers are strings. So its just the try/except statement that seems to be an issue. I've tried the try/except statements in multiple ways and none of them return a df. The current code for example returns an 'int' object.</p>
4
2016-10-19T14:49:33Z
40,134,973
<p>Insert the <code>column.append</code> into the <code>try</code>:</p> <pre><code>for col in list_of_columns: column = [] for row in list(df[col]): try: column.append(remove_html(row)) except ValueError: pass del df[col] df[col] = column return df </code></pre>
2
2016-10-19T14:56:04Z
[ "python", "pandas", "try-except" ]
Issues with try/except, attempting to convert strings to integers in pandas data frame where possible
40,134,811
<p>I made a function to clean up any HTML code/tags from strings in my dataframe. The function takes every value from the data frame, cleans it with the remove_html function, and returns a clean df. After converting the data frame to string values and cleaning it up I'm attempting to convert where possible the values in the data frame back to integers. I have tried try/except but don't get the result that I want. This is what I have at the moment:</p> <pre><code>def clean_df(df): df = df.astype(str) list_of_columns = list(df.columns) for col in list_of_columns: column = [] for row in list(df[col]): column.append(remove_html(row)) try: return int(row) except ValueError: pass del df[col] df[col] = column return df </code></pre> <p>Without the try/except statements the function returns a clean df where the integers are strings. So its just the try/except statement that seems to be an issue. I've tried the try/except statements in multiple ways and none of them return a df. The current code for example returns an 'int' object.</p>
4
2016-10-19T14:49:33Z
40,135,527
<p>consider the <code>pd.DataFrame</code> <code>df</code></p> <pre><code>df = pd.DataFrame(dict(A=[1, '2', '_', '4'])) </code></pre> <p><a href="https://i.stack.imgur.com/m05NY.png" rel="nofollow"><img src="https://i.stack.imgur.com/m05NY.png" alt="enter image description here"></a></p> <p>You want to use the function <code>pd.to_numeric</code>...<br> <strong><em>Note</em></strong><br> <code>pd.to_numeric</code> operates on scalars and <code>pd.Series</code>. It doesn't operate on a <code>pd.DataFrame</code><br> <strong><em>Also</em></strong><br> Use the parameter <code>errors='coerce'</code> to get numbers where you can and <code>NaN</code> elsewhere.</p> <pre><code>pd.to_numeric(df['A'], 'coerce') 0 1.0 1 2.0 2 NaN 3 4.0 Name: A, dtype: float6 </code></pre> <p>Or, to get numbers where you can, and what you already had elsewhere</p> <pre><code>pd.to_numeric(df['A'], 'coerce').combine_first(df['A']) 0 1 1 2 2 _ 3 4 Name: A, dtype: object </code></pre> <p>you can then assign it back to your <code>df</code></p> <pre><code>df['A'] = pd.to_numeric(df['A'], 'coerce').combine_first(df['A']) </code></pre>
0
2016-10-19T15:18:32Z
[ "python", "pandas", "try-except" ]
Issues with try/except, attempting to convert strings to integers in pandas data frame where possible
40,134,811
<p>I made a function to clean up any HTML code/tags from strings in my dataframe. The function takes every value from the data frame, cleans it with the remove_html function, and returns a clean df. After converting the data frame to string values and cleaning it up I'm attempting to convert where possible the values in the data frame back to integers. I have tried try/except but don't get the result that I want. This is what I have at the moment:</p> <pre><code>def clean_df(df): df = df.astype(str) list_of_columns = list(df.columns) for col in list_of_columns: column = [] for row in list(df[col]): column.append(remove_html(row)) try: return int(row) except ValueError: pass del df[col] df[col] = column return df </code></pre> <p>Without the try/except statements the function returns a clean df where the integers are strings. So its just the try/except statement that seems to be an issue. I've tried the try/except statements in multiple ways and none of them return a df. The current code for example returns an 'int' object.</p>
4
2016-10-19T14:49:33Z
40,135,720
<p>Works like this:</p> <pre><code>def clean_df(df): df = df.astype(str) list_of_columns = list(df.columns) for col in list_of_columns: column = [] for row in list(df[col]): try: column.append(int(remove_html(row))) except ValueError: column.append(remove_html(row)) del df[col] df[col] = column return df </code></pre>
0
2016-10-19T15:27:24Z
[ "python", "pandas", "try-except" ]
Issues with try/except, attempting to convert strings to integers in pandas data frame where possible
40,134,811
<p>I made a function to clean up any HTML code/tags from strings in my dataframe. The function takes every value from the data frame, cleans it with the remove_html function, and returns a clean df. After converting the data frame to string values and cleaning it up I'm attempting to convert where possible the values in the data frame back to integers. I have tried try/except but don't get the result that I want. This is what I have at the moment:</p> <pre><code>def clean_df(df): df = df.astype(str) list_of_columns = list(df.columns) for col in list_of_columns: column = [] for row in list(df[col]): column.append(remove_html(row)) try: return int(row) except ValueError: pass del df[col] df[col] = column return df </code></pre> <p>Without the try/except statements the function returns a clean df where the integers are strings. So its just the try/except statement that seems to be an issue. I've tried the try/except statements in multiple ways and none of them return a df. The current code for example returns an 'int' object.</p>
4
2016-10-19T14:49:33Z
40,135,964
<p>Use the try/except in a function and use that function with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.applymap.html" rel="nofollow"><code>DataFrame.applymap()</code></a></p> <pre><code>df = pd.DataFrame([['a','b','1'], ['2','c','d'], ['e','3','f']]) def foo(thing): try: return int(thing) except ValueError as e: return thing &gt;&gt;&gt; df[0][2] 'e' &gt;&gt;&gt; df[0][1] '2' &gt;&gt;&gt; df = df.applymap(foo) &gt;&gt;&gt; df[0][2] 'e' &gt;&gt;&gt; df[0][1] 2 &gt;&gt;&gt; </code></pre>
0
2016-10-19T15:38:50Z
[ "python", "pandas", "try-except" ]
Django says there are no changes to be made when I migrate
40,134,859
<p>I'm attempted to create a database for a fictional school. Unfortunatley when I try to migrate the tables this happens:</p> <p>C:\Python34\Scripts\schoolDatabase>manage.py makemigrations school</p> <p>C:\Python34\Scripts\schoolDatabase>python manage.py makemigrations school No changes detected in app 'school'</p> <p>This is the model I am referring to:</p> <pre><code>TYPE_OF_PERSON = ( ('T', 'Teacher'), ('S', 'Student'),) DETENTION_COMPLETED = ( ('Y', 'Yes'), 'N', 'No' OUTCOME = ( ('P', 'Pass'), ('F', 'Fail') ) class Person: first_name = models.CharField(max_length = 25) surname = models.CharField(max_length = 25) address = models.CharField(max_length = 45) year_group = models.CharField(max_length = 10) form = models.CharField(max_length = 15) type_of_person = models.CharField(choices = TYPE_OF_PERSON) person_id = models.CharField(primary_key = True) class Subject: name = models.CharField(max_length = 25) class SchoolClass: class_id = models.IntegerField(primary_key = True) person_id = models.ForeignKey('Person') subject = models.ForeignKey('Subject') year_group = models.ForeignKey('Person') class Attendance: school_class = models.ForeignKey('SchoolClass') date = models.DateField() start_time = models.TimeField() end_time = models.TimeField() person_id = models.ForeignKey('Person') class Assignment: assignment_id = models.IntegerField(primary_key = True) subject = models.ForeignKey('Subject') school_class = models.ForeignKey('SchoolClass') teacher = models.ForeignKey('Person') description =models.TextField() date_set = models.DateField() due_date = models.DateField() mark = models.CharField(max_length = 20) comments = models.TextField() class Detention: detention_date = models.DateField() student_id = models.ForeignKey('Person') reason = models.CharField(max_length = 30) completed = models.CharField(choices = DETENTION_COMPLETED) class Exam: exam_id = models.IntegerField(primary_key = True) subject = models.ForeignKey('Subject') paper = models.CharField(max_length = 30) score = models.CharField(max_length = 20) outcome = models.CharField(choices = OUTCOME) </code></pre> <p>I heard that if managed was set to False then Django won't create tables when you migrate, but I don't know how to set it to True. </p> <p>When I typed in:</p> <p>manage.py inspectdb it showed me that managed was set to False but how do I change it to True so that my database will be migrated?</p> <p>Here is the traceback:</p> <p>C:\Python34\Scripts\schoolDatabase>manage.py makemigrations school</p> <p>C:\Python34\Scripts\schoolDatabase>python manage.py makemigrations school No changes detected in app 'school'</p> <pre><code> class DjangoMigrations(models.Model): id = models.IntegerField(primary_key=True) # AutoField? app = models.CharField(max_length=255) name = models.CharField(max_length=255) applied = models.DateTimeField() class Meta: managed = False db_table = 'django_migrations' </code></pre> <p>Here is the tree:</p> <p><a href="https://i.stack.imgur.com/yUQqU.png" rel="nofollow"><img src="https://i.stack.imgur.com/yUQqU.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/ZAevZ.png" rel="nofollow"><img src="https://i.stack.imgur.com/ZAevZ.png" alt="enter image description here"></a></p> <p>Sorry about the poor formatting but stackoverflow won't let me put it in the code block. </p> <p>Any help would be appreciated. 
</p> <p>Settings.py:</p> <pre><code>INSTALLED_APPS =[ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages.', 'django.contribe.staticfiles', 'school' ] </code></pre>
0
2016-10-19T14:51:39Z
40,136,214
<p>Your models should derive from models.Model:</p> <pre><code> class Person(models.Model): ... class Subject(models.Model): ... ... </code></pre>
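<p>For instance, a corrected version of the question's <code>Person</code> class might look like the sketch below (field names are taken from the question; the <code>max_length</code> values on the last two fields are assumed, since <code>CharField</code> needs a <code>max_length</code> even when used with <code>choices</code> or as a primary key):</p>
<pre><code>from django.db import models

class Person(models.Model):
    first_name = models.CharField(max_length=25)
    surname = models.CharField(max_length=25)
    type_of_person = models.CharField(max_length=1, choices=TYPE_OF_PERSON)
    person_id = models.CharField(max_length=10, primary_key=True)  # assumed length
</code></pre>
<p>Once the classes inherit from <code>models.Model</code> and the app is listed in <code>INSTALLED_APPS</code>, <code>makemigrations school</code> should pick them up.</p>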
0
2016-10-19T15:50:13Z
[ "python", "django", "migration" ]
getting an average of values from dictionaries with keys with a list of values
40,135,001
<p>For my final python assignment at my university I need to create functions within Jupyter Notebook to conduct a small research. I need to create dictionaries and lists from .csv files and build functions for the dictionaries that I get from my read_csv() function. For this assignment I am allowed to ask and google because the functions I have to make are fairly common problems people walk into.</p> <p>The way these dictionaries look like after my read_csv() returns them is as follows:</p> <pre><code>data_dict = { "abc" : [1, 2, 3, 4], "def" : [4, 5, 6, 7], "ghi" : [8, 9, 10, 11] } </code></pre> <p>So basically a dictionary with a large amount of keys with each a list of values. What I need to do is sum up all the numbers of the first index of each list and get the average from the sum, then the second index, third index and so on, returning a list of all averages. With the result being something like:</p> <pre><code>averages = [4.333, 5.333, 6.333, 7.333] </code></pre> <p>How would one go about this without importing anything? In the past weeks we haven't really talked about working with dictionaries and I've tried looking for solutions on the internet but couldn't find any dealing with summing up integers or floats at specific indexes from different lists.</p>
-1
2016-10-19T14:57:21Z
40,135,158
<p><code>zip</code> the values to get the columns, and divide each column's <code>sum</code> by its <code>len</code>.</p>
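<p>A minimal sketch of that idea, using the <code>data_dict</code> from the question (the <code>float()</code> call is only needed on Python 2, which the tags suggest):</p>
<pre><code>columns = zip(*data_dict.values())  # the i-th items of every list, grouped together
averages = [sum(col) / float(len(col)) for col in columns]
print(averages)  # [4.333..., 5.333..., 6.333..., 7.333...]
</code></pre>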
0
2016-10-19T15:03:33Z
[ "python", "list", "python-2.7", "dictionary", "jupyter-notebook" ]
getting an average of values from dictionaries with keys with a list of values
40,135,001
<p>For my final python assignment at my university I need to create functions within Jupyter Notebook to conduct a small research. I need to create dictionaries and lists from .csv files and build functions for the dictionaries that I get from my read_csv() function. For this assignment I am allowed to ask and google because the functions I have to make are fairly common problems people walk into.</p> <p>The way these dictionaries look like after my read_csv() returns them is as follows:</p> <pre><code>data_dict = { "abc" : [1, 2, 3, 4], "def" : [4, 5, 6, 7], "ghi" : [8, 9, 10, 11] } </code></pre> <p>So basically a dictionary with a large amount of keys with each a list of values. What I need to do is sum up all the numbers of the first index of each list and get the average from the sum, then the second index, third index and so on, returning a list of all averages. With the result being something like:</p> <pre><code>averages = [4.333, 5.333, 6.333, 7.333] </code></pre> <p>How would one go about this without importing anything? In the past weeks we haven't really talked about working with dictionaries and I've tried looking for solutions on the internet but couldn't find any dealing with summing up integers or floats at specific indexes from different lists.</p>
-1
2016-10-19T14:57:21Z
40,135,182
<p>First collect the values, transpose them and then its easy:</p> <pre><code># values of the dict values = data_dict.values() # transposed average averages = [sum(x)/float(len(x)) for x in zip(*values)] print (averages) </code></pre> <p>returns:</p> <pre><code>[4.333333333333333, 5.333333333333333, 6.333333333333333, 7.333333333333333] </code></pre> <p>A shorter <em>'less-explanatory'</em> one-liner would be:</p> <pre><code>averages = [sum(x)/float(len(x)) for x in zip(*data_dict.values())] </code></pre>
3
2016-10-19T15:04:31Z
[ "python", "list", "python-2.7", "dictionary", "jupyter-notebook" ]
getting an average of values from dictionaries with keys with a list of values
40,135,001
<p>For my final python assignment at my university I need to create functions within Jupyter Notebook to conduct a small research. I need to create dictionaries and lists from .csv files and build functions for the dictionaries that I get from my read_csv() function. For this assignment I am allowed to ask and google because the functions I have to make are fairly common problems people walk into.</p> <p>The way these dictionaries look like after my read_csv() returns them is as follows:</p> <pre><code>data_dict = { "abc" : [1, 2, 3, 4], "def" : [4, 5, 6, 7], "ghi" : [8, 9, 10, 11] } </code></pre> <p>So basically a dictionary with a large amount of keys with each a list of values. What I need to do is sum up all the numbers of the first index of each list and get the average from the sum, then the second index, third index and so on, returning a list of all averages. With the result being something like:</p> <pre><code>averages = [4.333, 5.333, 6.333, 7.333] </code></pre> <p>How would one go about this without importing anything? In the past weeks we haven't really talked about working with dictionaries and I've tried looking for solutions on the internet but couldn't find any dealing with summing up integers or floats at specific indexes from different lists.</p>
-1
2016-10-19T14:57:21Z
40,135,200
<p>One approach could be:</p> <pre><code>data_dict = { "abc" : [1, 2, 3, 4], "def" : [4, 5, 6, 7], "ghi" : [8, 9, 10, 11] } print data_dict for i in data_dict: sum_items = 0 num_items = 0 for j in data_dict[i]: num_items += 1 sum_items += j print data_dict[i] print sum_items print sum_items/num_items </code></pre>
0
2016-10-19T15:05:06Z
[ "python", "list", "python-2.7", "dictionary", "jupyter-notebook" ]
String formatting of floats
40,135,080
<p>I would like to convert a number to a string in such a way that it only shows a certain number of significant digits, without superfluous zeros. The following is an example of some desired in/outputs, given that I want 5 significant digits:</p> <pre><code>0.0000123456789 &gt; 1.2346e-5 0.00123456789 &gt; 1.2346e-3 0.123456789 &gt; 1.2346e-1 1.23456789 &gt; 1.2346 1234.56789 &gt; 1234.6 1234567.89 &gt; 1.2346e6 </code></pre> <p>The <code>g</code> option of string formatting (<a href="https://docs.python.org/2/library/string.html#format-specification-mini-language" rel="nofollow">https://docs.python.org/2/library/string.html#format-specification-mini-language</a>) comes pretty close, but its behaviour isn't quite what I'm looking for for numbers smaller than 1, but not <em>much</em> smaller than 1:</p> <pre><code>"{:.5g}".format(0.000123456789) # returns '0.00012346', I want '1.2346e-4' </code></pre> <p>Is it possible to manipulate the behaviour of one of the existing formatters to do this?</p>
0
2016-10-19T15:00:47Z
40,135,232
<pre><code>'{:.4g}'.format(x) if .0001 &lt;= x &lt;= 1000 else '{:.4e}'.format(x) </code></pre>
2
2016-10-19T15:06:33Z
[ "python" ]
String formatting of floats
40,135,080
<p>I would like to convert a number to a string in such a way that it only shows a certain number of significant digits, without superfluous zeros. The following is an example of some desired in/outputs, given that I want 5 significant digits:</p> <pre><code>0.0000123456789 &gt; 1.2346e-5 0.00123456789 &gt; 1.2346e-3 0.123456789 &gt; 1.2346e-1 1.23456789 &gt; 1.2346 1234.56789 &gt; 1234.6 1234567.89 &gt; 1.2346e6 </code></pre> <p>The <code>g</code> option of string formatting (<a href="https://docs.python.org/2/library/string.html#format-specification-mini-language" rel="nofollow">https://docs.python.org/2/library/string.html#format-specification-mini-language</a>) comes pretty close, but its behaviour isn't quite what I'm looking for for numbers smaller than 1, but not <em>much</em> smaller than 1:</p> <pre><code>"{:.5g}".format(0.000123456789) # returns '0.00012346', I want '1.2346e-4' </code></pre> <p>Is it possible to manipulate the behaviour of one of the existing formatters to do this?</p>
0
2016-10-19T15:00:47Z
40,135,307
<p>You're almost there, you just need <code>e</code>, not <code>g</code>:</p> <pre><code>"{:.5e}".format(0.000123456789) # '1.23457e-04' </code></pre> <p>Though the number in the format string indicates the number of decimal places, so you'll want 4 (plus the one digit to the left of the decimal point):</p> <pre><code>"{:.4e}".format(0.000123456789) '1.2346e-04' </code></pre>
1
2016-10-19T15:08:39Z
[ "python" ]
String formatting of floats
40,135,080
<p>I would like to convert a number to a string in such a way that it only shows a certain number of significant digits, without superfluous zeros. The following is an example of some desired in/outputs, given that I want 5 significant digits:</p> <pre><code>0.0000123456789 &gt; 1.2346e-5 0.00123456789 &gt; 1.2346e-3 0.123456789 &gt; 1.2346e-1 1.23456789 &gt; 1.2346 1234.56789 &gt; 1234.6 1234567.89 &gt; 1.2346e6 </code></pre> <p>The <code>g</code> option of string formatting (<a href="https://docs.python.org/2/library/string.html#format-specification-mini-language" rel="nofollow">https://docs.python.org/2/library/string.html#format-specification-mini-language</a>) comes pretty close, but its behaviour isn't quite what I'm looking for for numbers smaller than 1, but not <em>much</em> smaller than 1:</p> <pre><code>"{:.5g}".format(0.000123456789) # returns '0.00012346', I want '1.2346e-4' </code></pre> <p>Is it possible to manipulate the behaviour of one of the existing formatters to do this?</p>
0
2016-10-19T15:00:47Z
40,136,096
<pre><code>if 1 &lt;= x &lt;10000: print '{:.5g}'.format(x) elif 1 &gt; x or x &gt;= 10000: print '{:.4e}'.format(x) </code></pre> <p>Similar to A.Kot's answer but not a one liner and outputs what you want given your sample. </p>
0
2016-10-19T15:45:31Z
[ "python" ]
pylint: getting it to understand decorators
40,135,129
<p>pylint doesn't seem to take into account decorators.</p> <p>I have a decorator such that</p> <pre><code>@decorator def foo(arg1, arg2): pass </code></pre> <p>becomes</p> <pre><code>def foo(arg2): pass </code></pre> <p>but pylint keeps complaining that when I call foo I'm missing an argument. I'd rather not disable this warning as it's quite useful even for those decorated functions. Is there a way to <strong>just make it understand, man</strong>?</p>
0
2016-10-19T15:02:28Z
40,135,962
<p>If you have something like this:</p> <pre><code>def decorator(f): def wrapper(*args, **kwargs): return f(1, *args, **kwargs) return wrapper @decorator def z(a, b): return a + b print( z(5) ) </code></pre> <p>A simple solution that doesn't require too much change in your code is to just drop the @, which is only syntactic sugar. It works for me.</p> <pre><code>def z(a, b): return a + b z = decorator(z) print( z(5) ) </code></pre>
0
2016-10-19T15:38:49Z
[ "python", "python-decorators", "pylint" ]
Django: create database tables programmatically/dynamically
40,135,179
<p>I've been working on a Django app for some time now and have encountered a need for dynamic model and database table generation. I've searched far and wide and it seems as though the Django API does not include this function. From what I have gathered, <a href="http://south.readthedocs.io/en/latest/databaseapi.html#db-create-table" rel="nofollow">South</a> has a function to do this (i.e. south.db.create_table). However, from what I have gathered from South's release notes, South is not compatible with Django 1.7 and higher and my project was built using Django 1.9. </p> <p>I have already written a script that creates model instances of the schema I would like to migrate to my database using the following method:</p> <pre><code>attrs = {'__module__':model_location, 'Meta':Meta} model = type(table_name, (models.Model,), attrs) </code></pre> <p>p.s. please note that this is not the entirety of the mentioned script. If you think this would be useful for me to provide I can add it upon request.</p> <p>Has anyone found a workaround for using South 1.0.2 with Django 1.9? If not does anyone have any ideas on how I could achieve this functionality without South? I have tried to think of alternative methods (rather than dynamically generating tables) but this really seems like it would provide the most concise and clean results given the scope of my project.</p> <p>Thank you!</p>
1
2016-10-19T15:04:21Z
40,135,599
<p>The reason South is incompatible with recent Django versions is that it has been <a href="http://south.readthedocs.io/en/latest/releasenotes/1.0.html" rel="nofollow">rolled into Django</a> as of Django 1.7, under the name "migrations". If you are looking for similar functionality the starting point would be the <a href="https://docs.djangoproject.com/en/dev/topics/migrations/" rel="nofollow">documentation on migrations</a>. In particular you may be interested in the section on <a href="https://docs.djangoproject.com/en/dev/ref/migration-operations/#runsql" rel="nofollow">RunSQL</a>.</p> <p>If you wish to avoid the migrations module you can also <a href="https://docs.djangoproject.com/en/1.10/topics/db/sql/" rel="nofollow">perform raw SQL queries</a>.</p>
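<p>As a rough illustration of the raw-SQL route (this is not from the original answer; the table layout is made up, and <code>table_name</code> is assumed to come from trusted code, since identifiers cannot be passed as query parameters):</p>
<pre><code>from django.db import connection

def create_table(table_name):
    # table_name must be validated/trusted -- it is interpolated, not parameterised
    with connection.cursor() as cursor:
        cursor.execute(
            "CREATE TABLE {} (id INTEGER PRIMARY KEY, value VARCHAR(100))".format(table_name)
        )
</code></pre>
<p>Django's <code>connection.schema_editor()</code>, whose <code>create_model()</code> method builds the table for a model class, is another option if you already have the dynamically generated model.</p>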
2
2016-10-19T15:22:07Z
[ "python", "mysql", "django", "database", "django-south" ]
Executing C++ code from python
40,135,225
<p>I am a beginner to python, and I have no idea if this seems to be a doable thing.</p> <p>I have a simple loop in python that gives me all the files in the current directory. What I want to do is to execute a C++ code I wrote before on all those files in the directory from python</p> <p>The proposed python loop should be something like this</p> <pre><code>import os for filename in os.listdir(os.getcwd()): print filename (Execute the code.cpp on each file with each iteration) </code></pre> <p>Is there any chance to do this? </p>
1
2016-10-19T15:06:20Z
40,136,244
<p>Fairly easy to execute an external program from Python - regardless of the language:</p> <pre><code>import os import subprocess for filename in os.listdir(os.getcwd()): print filename proc = subprocess.Popen(["./myprog", filename]) proc.wait() </code></pre> <p>The list used for arguments is platform specific, but it should work OK. You should alter <code>"./myprog"</code> to your own program (it doesn't have to be in the current directory, it will use the PATH environment variable to find it).</p>
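<p>If you don't need the <code>Popen</code> object itself, <code>subprocess.call()</code> is a shorthand for the create-and-wait pair above (same assumed <code>./myprog</code> placeholder):</p>
<pre><code>import subprocess
subprocess.call(["./myprog", filename])  # starts the program and waits for it to exit
</code></pre>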
1
2016-10-19T15:51:28Z
[ "python", "c++", "file", "directory" ]
How to create a subset of document using lxml?
40,135,280
<p>Suppose you have an lxml.etree element with contents like:</p> <pre><code>&lt;root&gt; &lt;element1&gt; &lt;subelement1&gt;blabla&lt;/subelement1&gt; &lt;/element1&gt; &lt;element2&gt; &lt;subelement2&gt;blibli&lt;/subelement2&gt; &lt;/element2&gt; &lt;/root&gt; </code></pre> <p>I can use the find or xpath methods to get an element rendering something like:</p> <pre><code>&lt;element1&gt; &lt;subelement1&gt;blabla&lt;/subelement1&gt; &lt;/element1&gt; </code></pre> <p>Is there a <em>simple</em> way to get:</p> <pre><code>&lt;root&gt; &lt;element1&gt; &lt;subelement1&gt;blabla&lt;/subelement1&gt; &lt;/element1&gt; &lt;/root&gt; </code></pre> <p>i.e. the element of interest plus all its ancestors up to the document root?</p>
1
2016-10-19T15:07:39Z
40,136,567
<p>I am not sure there is something built-in for it, but here is a terrible, "don't ever use it in real life" type of a workaround using the <a href="http://lxml.de/api/lxml.etree._Element-class.html#iterancestors" rel="nofollow"><code>iterancestors()</code> parent iterator</a>:</p> <pre><code>from lxml import etree as ET data = """&lt;root&gt; &lt;element1&gt; &lt;subelement1&gt;blabla&lt;/subelement1&gt; &lt;/element1&gt; &lt;element2&gt; &lt;subelement2&gt;blibli&lt;/subelement2&gt; &lt;/element2&gt; &lt;/root&gt;""" root = ET.fromstring(data) element = root.find(".//subelement1") result = ET.tostring(element) for node in element.iterancestors(): result = "&lt;{name}&gt;{text}&lt;/{name}&gt;".format(name=node.tag, text=result) print(ET.tostring(ET.fromstring(result), pretty_print=True)) </code></pre> <p>Prints:</p> <pre><code>&lt;root&gt; &lt;element1&gt; &lt;subelement1&gt;blabla&lt;/subelement1&gt; &lt;/element1&gt; &lt;/root&gt; </code></pre>
2
2016-10-19T16:07:50Z
[ "python", "python-2.7", "lxml" ]
How to create a subset of document using lxml?
40,135,280
<p>Suppose you have an lxml.etree element with contents like:</p> <pre><code>&lt;root&gt; &lt;element1&gt; &lt;subelement1&gt;blabla&lt;/subelement1&gt; &lt;/element1&gt; &lt;element2&gt; &lt;subelement2&gt;blibli&lt;/subelement2&gt; &lt;/element2&gt; &lt;/root&gt; </code></pre> <p>I can use the find or xpath methods to get an element rendering something like:</p> <pre><code>&lt;element1&gt; &lt;subelement1&gt;blabla&lt;/subelement1&gt; &lt;/element1&gt; </code></pre> <p>Is there a <em>simple</em> way to get:</p> <pre><code>&lt;root&gt; &lt;element1&gt; &lt;subelement1&gt;blabla&lt;/subelement1&gt; &lt;/element1&gt; &lt;/root&gt; </code></pre> <p>i.e. the element of interest plus all its ancestors up to the document root?</p>
1
2016-10-19T15:07:39Z
40,137,106
<p>The following code removes elements that don't have any <code>subelement1</code> descendants and are not named <code>subelement1</code>.</p> <pre><code>from lxml import etree tree = etree.parse("input.xml") # First XML document in question for elem in tree.iter(): if elem.xpath("not(.//subelement1)") and not(elem.tag == "subelement1"): if elem.getparent() is not None: elem.getparent().remove(elem) print etree.tostring(tree) </code></pre> <p>Output:</p> <pre><code>&lt;root&gt; &lt;element1&gt; &lt;subelement1&gt;blabla&lt;/subelement1&gt; &lt;/element1&gt; &lt;/root&gt; </code></pre>
1
2016-10-19T16:39:42Z
[ "python", "python-2.7", "lxml" ]
Find a certain difference between members of a list (or set) of numbers
40,135,439
<p>my first question, so please be gentle, I hope I get the formatting right :) I think the question is self explaining. I am looking for a better/faster way to find a difference in a set of numbers... maybe I want a tolerance with it. All I came up with is:</p> <pre><code> def difference(numbers,diff,tol): '''diff is the searched difference,numbers is a list \ of numbers and tol the tolerance''' numbers.sort() match=set() for i in numbers: low = i+diff-tol high= i+diff+tol for k in numbers: if k &gt; high: break if k &lt; low: continue match.add(i) match.add(k) return match </code></pre> <p>But I bet there are way better ways to achieve the result.</p> <p>Any idea is welcome,</p> <p>Christian</p>
-1
2016-10-19T15:14:40Z
40,135,660
<p>You could avoid running over the lowest part of the numbers in the second loop by only scanning the entries ahead of the current one (the <code>low</code> check is still needed, though, to skip values below <code>n+diff-tol</code>).</p> <p>With that you can drop the <code>set</code> and use a <code>list</code> instead: less hashing, less processing. Also, don't change the <code>numbers</code> input by sorting it, the caller may not expect it. Use a locally sorted list instead (the other advantage is that <code>numbers</code> can now be a <code>set</code>, a <code>deque</code> ...):</p> <pre><code>def difference(numbers,diff,tol): '''diff is the searched difference, numbers is a list of numbers and tol the tolerance''' snum = sorted(numbers) match = list() for i,n in enumerate(snum): low = n+diff-tol high = n+diff+tol for j in range(i+1,len(snum)): k = snum[j] if k &gt; high: break if k &lt; low: continue match.append(n) match.append(k) return match </code></pre> <p>(maybe that would be a better question for code review, the boundary is thin)</p>
0
2016-10-19T15:24:44Z
[ "python" ]
Find a certain difference between members of a list (or set) of numbers
40,135,439
<p>my first question, so please be gentle, I hope I get the formatting right :) I think the question is self explaining. I am looking for a better/faster way to find a difference in a set of numbers... maybe I want a tolerance with it. All I came up with is:</p> <pre><code> def difference(numbers,diff,tol): '''diff is the searched difference,numbers is a list \ of numbers and tol the tolerance''' numbers.sort() match=set() for i in numbers: low = i+diff-tol high= i+diff+tol for k in numbers: if k &gt; high: break if k &lt; low: continue match.add(i) match.add(k) return match </code></pre> <p>But I bet there are way better ways to achieve the result.</p> <p>Any idea is welcome,</p> <p>Christian</p>
-1
2016-10-19T15:14:40Z
40,136,323
<pre><code> match = set() count = len(numbers) numbers1 = numbers[:count - 1] numbers2 = numbers[1:] for i in range(0, count - 1): dif = numbers2[i] - numbers1[i] if abs(dif - diff) &lt;= tol: match.add(numbers1[i]) match.add(numbers2[i]) </code></pre>
0
2016-10-19T15:54:59Z
[ "python" ]
Reading a text file using Pandas where some rows have empty elements?
40,135,459
<p>I have a dataset in a textfile that looks like this.</p> <pre><code> 0 0CF00400 X 8 66 7D 91 6E 22 03 0F 7D 0.021650 R 0 18EA0080 X 3 E9 FE 00 0.022550 R 0 00000003 X 8 D5 64 22 E1 FF FF FF F0 0.023120 R </code></pre> <p>I read this using</p> <pre><code>file_pandas = pd.read_csv(fileName, delim_whitespace = True, header = None, engine = 'python') </code></pre> <p>And got the output</p> <pre><code> 0 0 0CF00400 X 8 66 7D 91 6E 22 03 0F 7D 0.02165 1 0 18EA0080 X 3 E9 FE 0 0.022550 R None None None NaN 2 0 00000003 X 8 D5 64 22 E1 FF FF FF F0 0.02312 </code></pre> <p>But I want this read as</p> <pre><code> 0 0 0CF00400 X 8 66 7D 91 6E 22 03 0F 7D 0.021650 R 1 0 18EA0080 X 3 E9 FE 00 0.022550 R 2 0 00000003 X 8 D5 64 22 E1 FF FF FF F0 0.023120 R </code></pre> <p>I've tried removing <code>delim_whitespace = True</code> and replacing it with <code>delimiter = " "</code> but that just combined the first four columns in the output shown above, but it did parse the rest of the data correctly, meaning that the rest of the columns were like the origin txt file (barring the NaN values in whitespaces).</p> <p>I'm not sure how to proceed from here. </p> <p>Side note: the <code>00</code> is being parsed as only <code>0</code>. Is there a way to display <code>00</code> instead?</p>
5
2016-10-19T15:15:38Z
40,135,692
<p>It seems like your data is fixed width columns, you can try <code>pandas.read_fwf()</code>:</p> <pre><code>from io import StringIO import pandas as pd df = pd.read_fwf(StringIO("""0 0CF00400 X 8 66 7D 91 6E 22 03 0F 7D 0.021650 R 0 18EA0080 X 3 E9 FE 00 0.022550 R 0 00000003 X 8 D5 64 22 E1 FF FF FF F0 0.023120 R"""), header = None, widths = [1,12,2,8,4,4,4,4,4,4,4,4,16,2]) </code></pre> <p><a href="https://i.stack.imgur.com/G3bLR.png"><img src="https://i.stack.imgur.com/G3bLR.png" alt="enter image description here"></a></p>
6
2016-10-19T15:26:28Z
[ "python", "pandas" ]
Django – remove trailing zeroes for a Decimal in a template
40,135,464
<p>Is there a way to remove trailing zeros from a <code>Decimal</code> field in a django template? </p> <p>This is what I have: <code>0.0002559000</code> and this is what I need: <code>0.0002559</code>. </p> <p>There are answers suggesting to do this using <code>floatformat</code> filter:</p> <pre><code>{{ balance.bitcoins|floatformat:3 }} </code></pre> <p>However, <code>floatformat</code> performs rounding (either down or up), which is unwanted in my case, as I only need to remove trailing zeros without any rounding at all.</p>
0
2016-10-19T15:15:47Z
40,135,465
<p>The solution is to use <code>normalize()</code> method of a <code>Decimal</code> field:</p> <pre><code>{{ balance.bitcoins.normalize }} </code></pre>
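<p>The template call maps straight onto <code>Decimal.normalize()</code>, which strips the trailing zeros; a quick check in the shell with the value from the question:</p>
<pre><code>&gt;&gt;&gt; from decimal import Decimal
&gt;&gt;&gt; Decimal('0.0002559000').normalize()
Decimal('0.0002559')
</code></pre>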
0
2016-10-19T15:15:47Z
[ "python", "django", "django-templates" ]
How to tell Python to save files in this folder?
40,135,670
<p>I am new to Python and have been assigned the task to clean up the files in Slack. I have to backup the files and save them to the designated folder Z drive Slack Files and I am using the open syntax below but it is producing the permission denied error for it. This script has been prepared by my senior to finish up this job.</p> <pre><code>from slacker import * import sys import time import os from datetime import timedelta, datetime root = 'Z:\Slack_Files' def main(token, weeks=4): slack = Slacker(token) total = slack.files.list(count=1).body['paging']['total'] num_pages = int(total/1000.00 + 1) print("{} files to be processed, across {} pages".format(total, num_pages)) files_to_delete = [] ids = [] count = 1 for page in range(num_pages): print ("Pulling page number {}".format(page + 1)) files = slack.files.list(count=1000, page=page+1).body['files'] for file in files: print("Checking file number {}".format(count)) if file['id'] not in ids: ids.append(file['id']) if datetime.fromtimestamp(file['timestamp']) &lt; datetime.now() - timedelta(weeks=weeks): files_to_delete.append(file) print("File No. {} will be deleted".format(count)) else: print ("File No. {} will not be deleted".format(count)) count+=1 print("All files checked\nProceeding to delete files") print("{} files will be deleted!".format(len(files_to_delete))) count = 1 for file in files_to_delete: # print open('Z:\Slack_Files') print("Deleting file {} of {} - {}".format(count, len(files_to_delete), file["name"])) print(file["name"]) count+=1 return count-1 for fn in os.listdir(r'Z:\Slack_Files'): if os.path.isfile(fn): open(fn,'r') if __name__ == "__main__": try: token = '****' except IndexError: print("Usage: python file_deleter.py api_token\nPlease provide a value for the API Token") sys.exit(2) main(token) </code></pre> <p>The error it displays is:</p> <pre><code>Traceback (most recent call last): File "C:\Users\Slacker.py", line 55, in &lt;module&gt; main(token) File "C:\Users\Slacker.py", line 39, in main print open('Z:\Slack_Files') IOError: [Errno 13] Permission denied: 'Z:\\Slack_Files' </code></pre>
-1
2016-10-19T15:25:17Z
40,136,124
<p>To iterate over the files in a particular folder, we can simply use os.listdir(). Note that it returns bare file names, so they need to be joined back onto the folder path:</p> <pre><code>import os folder = r'Z:\Slack_Files' for fn in os.listdir(folder): path = os.path.join(folder, fn) if os.path.isfile(path): open(path, 'r') # mode 'r' means read mode </code></pre>
-1
2016-10-19T15:46:32Z
[ "python" ]
Python- request.post login credentials for website
40,135,835
<p>So I am trying to write this python script and add it to my Windows Task Scheduler to be executed every time I log on my Work Machine. The script should open a webpage and post my login info.</p> <pre><code>import webbrowser import os url = 'www.example.com' webbrowser.open(url) import requests url = 'www.example.com' values = ["'username': username","'password': 'somepass'"] r = requests.post(url, data=values) print r.content </code></pre> <p>When I run the script it opens my browser and lands on the page I want it to however Nothing is posted and I get these errors on my IDE;</p> <pre><code>`Traceback (most recent call last): File "C:\Users\user\Desktop\Scripts\myscript.py", line 20, in &lt;module&gt; r = requests.post(url, data=values) File "C:\Python27\lib\requests\api.py", line 110, in post return request('post', url, data=data, json=json, **kwargs) File "C:\Python27\lib\requests\api.py", line 56, in request return session.request(method=method, url=url, **kwargs) File "C:\Python27\lib\requests\sessions.py", line 462, in request prep = self.prepare_request(req) File "C:\Python27\lib\requests\sessions.py", line 395, in prepare_request hooks=merge_hooks(request.hooks, self.hooks), File "C:\Python27\lib\requests\models.py", line 302, in prepare self.prepare_body(data, files, json) File "C:\Python27\lib\requests\models.py", line 462, in prepare_body body = self._encode_params(data) File "C:\Python27\lib\requests\models.py", line 95, in _encode_params for k, vs in to_key_val_list(data): ValueError: too many values to unpack*` </code></pre>
1
2016-10-19T15:33:04Z
40,140,052
<p>That is what your dict should look like</p> <pre><code>values = {'username': 'username','password': 'somepass'} </code></pre>
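<p>Plugged into the question's script (the URL and credentials are still the placeholders from the question), the call becomes:</p>
<pre><code>values = {'username': 'username', 'password': 'somepass'}
r = requests.post(url, data=values)
print r.content
</code></pre>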
0
2016-10-19T19:33:06Z
[ "python", "request" ]
Continue on exception in Python
40,136,016
<p>I'm working on a series of scripts that pulls URLs from a database and uses the <a href="https://pypi.org/project/textstat/#description" rel="nofollow">textstat package</a> to calculate the readability of the page based on a set of predefined calculations. The function below takes a url (from a CouchDB), calculates the defined readability scores, and then saves the scores back to the same CouchDB document. </p> <p>The issue I'm having is with error handling. As an example, the Flesch Reading Ease score calculation requires a count of the total number of sentences on the page. If this returns as zero, an exception is thrown. Is there a way to catch this exception, save a note of the exception in the database, and move on to the next URL in the list? Can I do this in the function below (preferred), or will I need to edit the package itself?</p> <p>I know variations of this question have been asked before. If you know of one that might answer my question, please point me in that direction. My search has been fruitless thus far. Thanks in advance.</p> <pre><code>def get_readability_data(db, url, doc_id, rank, index): readability_data = {} readability_data['url'] = url readability_data['rank'] = rank user_agent = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)' headers = { 'User-Agent' : user_agent } try: req = urllib.request.Request(url) response = urllib.request.urlopen(req) content = response.read() readable_article = Document(content).summary() soup = BeautifulSoup(readable_article, "lxml") text = soup.body.get_text() try: readability_data['flesch_reading_ease'] = textstat.flesch_reading_ease(text) readability_data['smog_index'] = textstat.smog_index(text) readability_data['flesch_kincaid_grade'] = textstat.flesch_kincaid_grade(text) readability_data['coleman_liau'] = textstat.coleman_liau_index(text) readability_data['automated_readability_index'] = textstat.automated_readability_index(text) readability_data['dale_chall_score'] = textstat.dale_chall_readability_score(text) readability_data['linear_write_formula'] = textstat.linsear_write_formula(text) readability_data['gunning_fog'] = textstat.gunning_fog(text) readability_data['total_words'] = textstat.lexicon_count(text) readability_data['difficult_words'] = textstat.difficult_words(text) readability_data['syllables'] = textstat.syllable_count(text) readability_data['sentences'] = textstat.sentence_count(text) readability_data['readability_consensus'] = textstat.text_standard(text) readability_data['readability_scores_date'] = time.strftime("%a %b %d %H:%M:%S %Y") # use the doc_id to make sure we're saving this in the appropriate place readability = json.dumps(readability_data, sort_keys=True, indent=4 * ' ') doc = db.get(doc_id) data = json.loads(readability) doc['search_details']['search_details'][index]['readability'] = data #print(doc['search_details']['search_details'][index]) db.save(doc) time.sleep(.5) except: # catch *all* exceptions e = sys.exc_info()[0] write_to_page( "&lt;p&gt;---ERROR---: %s&lt;/p&gt;" % e ) except urllib.error.HTTPError as err: print(err.code) </code></pre> <p>This is the error I receive:</p> <pre><code>Error(ASL): Sentence Count is Zero, Cannot Divide Error(ASyPW): Number of words are zero, cannot divide Traceback (most recent call last): File "new_get_readability.py", line 114, in get_readability_data readability_data['flesch_reading_ease'] = textstat.flesch_reading_ease(text) File "/Users/jrs/anaconda/lib/python3.5/site-packages/textstat/textstat.py", line 118, in flesch_reading_ease FRE = 206.835 - 
float(1.015 * ASL) - float(84.6 * ASW) TypeError: unsupported operand type(s) for *: 'float' and 'NoneType' </code></pre> <p>This is the code that calls the function:</p> <pre><code>if __name__ == '__main__': db = connect_to_db(parse_args()) print("~~~~~~~~~~" + " GETTING IDs " + "~~~~~~~~~~") ids = get_ids(db) for i in ids: details = get_urls(db, i) for d in details: get_readability_data(db, d['url'], d['id'], d['rank'], d['index']) </code></pre>
0
2016-10-19T15:41:43Z
40,136,717
<p>It is generally good practice to keep <code>try: except:</code> blocks as small as possible. I would wrap your <code>textstat</code> functions in some sort of decorator that catches the exception you expect, and returns the function output and the exception caught.</p> <p>for example:</p> <pre><code>def catchExceptions(exception): #decorator with args (sorta boilerplate) def decorator(func): def wrapper(*args, **kwargs): try: retval = func(*args, **kwargs) except exception as e: return None, e else: return retval, None return wrapper return decorator @catchExceptions(ZeroDivisionError) def testfunc(x): return 11/x print testfunc(0) print '-----' print testfunc(3) </code></pre> <p>prints:</p> <pre><code>(None, ZeroDivisionError('integer division or modulo by zero',)) ----- (3, None) </code></pre>
0
2016-10-19T16:16:30Z
[ "python", "exception-handling" ]
Ipython cv2.imwrite() not saving image
40,136,070
<p>I have written a code in python opencv. I am trying to write the processed image back to disk but the image is not getting saved and it is not showing any error(runtime and compilation) The code is</p> <pre><code>""" Created on Wed Oct 19 18:07:34 2016 @author: Niladri """ import numpy as np import cv2 if __name__ == '__main__': import sys img = cv2.imread('C:\Users\Niladri\Desktop\TexturesCom_LandscapeTropical0080_2_S.jpg') if img is None: print 'Failed to load image file:' sys.exit(1) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) h, w = img.shape[:2] eigen = cv2.cornerEigenValsAndVecs(gray, 15, 3) eigen = eigen.reshape(h, w, 3, 2) # [[e1, e2], v1, v2] #flow = eigen[:,:,2] iter_n = 10 sigma = 5 str_sigma = 3*sigma blend = 0.5 img2 = img for i in xrange(iter_n): print i, gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) eigen = cv2.cornerEigenValsAndVecs(gray, str_sigma, 3) eigen = eigen.reshape(h, w, 3, 2) # [[e1, e2], v1, v2] x, y = eigen[:,:,1,0], eigen[:,:,1,1] print eigen gxx = cv2.Sobel(gray, cv2.CV_32F, 2, 0, ksize=sigma) gxy = cv2.Sobel(gray, cv2.CV_32F, 1, 1, ksize=sigma) gyy = cv2.Sobel(gray, cv2.CV_32F, 0, 2, ksize=sigma) gvv = x*x*gxx + 2*x*y*gxy + y*y*gyy m = gvv &lt; 0 ero = cv2.erode(img, None) dil = cv2.dilate(img, None) img1 = ero img1[m] = dil[m] img2 = np.uint8(img2*(1.0 - blend) + img1*blend) #print 'done' cv2.imshow('dst_rt', img2) cv2.waitKey(0) cv2.destroyAllWindows() #cv2.imwrite('C:\Users\Niladri\Desktop\leaf_image_shock_filtered.jpg', img2) for i in xrange(iter_n): print i, gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) eigen = cv2.cornerEigenValsAndVecs(gray, str_sigma, 3) eigen = eigen.reshape(h, w, 3, 2) # [[e1, e2], v1, v2] x, y = eigen[:,:,1,0], eigen[:,:,1,1] print eigen gxx = cv2.Sobel(gray, cv2.CV_32F, 2, 0, ksize=sigma) gxy = cv2.Sobel(gray, cv2.CV_32F, 1, 1, ksize=sigma) gyy = cv2.Sobel(gray, cv2.CV_32F, 0, 2, ksize=sigma) gvv = x*x*gxx + 2*x*y*gxy + y*y*gyy m = gvv &lt; 0 ero = cv2.erode(img, None) dil = cv2.dilate(img, None) img1 = dil img1[m] = ero[m] img2 = np.uint8(img2*(1.0 - blend) + img1*blend) print 'done' #cv2.imwrite('D:\IP\tropical_image_sig5.bmp', img2) cv2.imshow('dst_rt', img2) cv2.waitKey(0) cv2.destroyAllWindows() #cv2.imshow('dst_rt', img2) cv2.imwrite('C:\Users\Niladri\Desktop\tropical_image_sig5.bmp', img2) </code></pre> <p>Can anyone please tell me why it is not working. cv2.imshow is working properly(as it is showing the correct image). Thanks and Regards Niladri</p>
0
2016-10-19T15:44:23Z
40,136,150
<p>As a general and absolute rule, you <em>have</em> to protect your windows path strings (containing backslashes) with <code>r</code> prefix or some characters are interpreted (ex: <code>\n,\b,\v,\x</code> aaaaand <code>\t</code> !):</p> <p>so when doing this:</p> <pre><code>cv2.imwrite('C:\Users\Niladri\Desktop\tropical_image_sig5.bmp', img2) </code></pre> <p>you're trying to save to <code>C:\Users\Niladri\Desktop&lt;TAB&gt;ropical_image_sig5.bmp</code></p> <p>(and I really don't know what it does :))</p> <p>Do this:</p> <pre><code>cv2.imwrite(r'C:\Users\Niladri\Desktop\tropical_image_sig5.bmp', img2) </code></pre> <p>Note: the read works fine because "escaped" uppercase letters have no particular meaning in python 2 (<code>\U</code> has a meaning in python 3)</p>
1
2016-10-19T15:47:37Z
[ "python", "opencv" ]
Ipython cv2.imwrite() not saving image
40,136,070
<p>I have written a code in python opencv. I am trying to write the processed image back to disk but the image is not getting saved and it is not showing any error(runtime and compilation) The code is</p> <pre><code>""" Created on Wed Oct 19 18:07:34 2016 @author: Niladri """ import numpy as np import cv2 if __name__ == '__main__': import sys img = cv2.imread('C:\Users\Niladri\Desktop\TexturesCom_LandscapeTropical0080_2_S.jpg') if img is None: print 'Failed to load image file:' sys.exit(1) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) h, w = img.shape[:2] eigen = cv2.cornerEigenValsAndVecs(gray, 15, 3) eigen = eigen.reshape(h, w, 3, 2) # [[e1, e2], v1, v2] #flow = eigen[:,:,2] iter_n = 10 sigma = 5 str_sigma = 3*sigma blend = 0.5 img2 = img for i in xrange(iter_n): print i, gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) eigen = cv2.cornerEigenValsAndVecs(gray, str_sigma, 3) eigen = eigen.reshape(h, w, 3, 2) # [[e1, e2], v1, v2] x, y = eigen[:,:,1,0], eigen[:,:,1,1] print eigen gxx = cv2.Sobel(gray, cv2.CV_32F, 2, 0, ksize=sigma) gxy = cv2.Sobel(gray, cv2.CV_32F, 1, 1, ksize=sigma) gyy = cv2.Sobel(gray, cv2.CV_32F, 0, 2, ksize=sigma) gvv = x*x*gxx + 2*x*y*gxy + y*y*gyy m = gvv &lt; 0 ero = cv2.erode(img, None) dil = cv2.dilate(img, None) img1 = ero img1[m] = dil[m] img2 = np.uint8(img2*(1.0 - blend) + img1*blend) #print 'done' cv2.imshow('dst_rt', img2) cv2.waitKey(0) cv2.destroyAllWindows() #cv2.imwrite('C:\Users\Niladri\Desktop\leaf_image_shock_filtered.jpg', img2) for i in xrange(iter_n): print i, gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) eigen = cv2.cornerEigenValsAndVecs(gray, str_sigma, 3) eigen = eigen.reshape(h, w, 3, 2) # [[e1, e2], v1, v2] x, y = eigen[:,:,1,0], eigen[:,:,1,1] print eigen gxx = cv2.Sobel(gray, cv2.CV_32F, 2, 0, ksize=sigma) gxy = cv2.Sobel(gray, cv2.CV_32F, 1, 1, ksize=sigma) gyy = cv2.Sobel(gray, cv2.CV_32F, 0, 2, ksize=sigma) gvv = x*x*gxx + 2*x*y*gxy + y*y*gyy m = gvv &lt; 0 ero = cv2.erode(img, None) dil = cv2.dilate(img, None) img1 = dil img1[m] = ero[m] img2 = np.uint8(img2*(1.0 - blend) + img1*blend) print 'done' #cv2.imwrite('D:\IP\tropical_image_sig5.bmp', img2) cv2.imshow('dst_rt', img2) cv2.waitKey(0) cv2.destroyAllWindows() #cv2.imshow('dst_rt', img2) cv2.imwrite('C:\Users\Niladri\Desktop\tropical_image_sig5.bmp', img2) </code></pre> <p>Can anyone please tell me why it is not working. cv2.imshow is working properly(as it is showing the correct image). Thanks and Regards Niladri</p>
0
2016-10-19T15:44:23Z
40,138,289
<p>As Jean suggested, the error is due to the \ being interpreted as an escape sequence. It is hence always safer to use <code>os.path.join()</code>, as it is more cross-platform and you need not worry about the escape-sequence problem. In your case you also need not spell out the first few path components, since they are just your home directory:</p> <pre><code>import os cv2.imwrite(os.path.join(os.path.expanduser('~'),'Desktop','tropical_image_sig5.bmp'), img2) </code></pre> <p><code>os.path.expanduser('~')</code> will directly return your home directory.</p>
0
2016-10-19T17:48:41Z
[ "python", "opencv" ]
Read a CSV to insert data into Postgres SQL with Python
40,136,162
<p>I want to read a csv file to insert data into postgres SQL with Python but I get this error:</p> <pre><code> cursor.execute(passdata) psycopg2.IntegrityError: duplicate key value violates unique constraint "prk_constraint_project" DETAIL: Key (project_code)=(%s) already exists. </code></pre> <p>My code is:</p> <pre><code> clinicalCSVINSERT = open(clinicalname, 'r') reader = csv.reader(clinicalCSVINSERT, delimiter='\t') passdata = "INSERT INTO project (project_code, program_name ) VALUES ('%s', '%s')"; cursor.execute(passdata) conn.commit() </code></pre> <p>What does this error mean? Is it possible to have a working script?</p>
0
2016-10-19T15:47:57Z
40,136,634
<p>The immediate problem with your code is that you are trying to include the literal <code>%s</code>. Since you probably did run it more than once you already have a literal <code>%s</code> in that unique column hence the exception.</p> <p>It is necessary to pass the values wrapped in an iterable as parameters to the <code>execute</code> method. The <code>%s</code> is just a value place holder.</p> <pre><code>passdata = """ INSERT INTO project (project_code, program_name ) VALUES (%s, %s) """ cursor.execute(passdata, (the_project_code, the_program_name)) </code></pre> <p>Do not quote the <code>%s</code>. Psycopg will do it if necessary.</p> <p>As your code does not include a loop it will only insert one row from the csv. There are some patterns to insert the whole file. If the requirements allow just use <a href="http://initd.org/psycopg/docs/usage.html#using-copy-to-and-copy-from" rel="nofollow"><code>copy_from</code></a> which is simpler and faster.</p>
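<p>A rough sketch of the <code>copy_from</code> route (the table and column names are taken from the question's INSERT statement, and the file is assumed to contain exactly those two tab-separated columns with no header row):</p>
<pre><code>with open(clinicalname) as f:
    cursor.copy_from(f, 'project', sep='\t',
                     columns=('project_code', 'program_name'))
conn.commit()
</code></pre>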
0
2016-10-19T16:11:43Z
[ "python", "postgresql", "csv", "psycopg2" ]
Writing rows in a csv using dictionaries in a loop (python 3)
40,136,283
<p>I'm writing to a csv file by adding each row in a loop and using dictionaries. The following is the code:</p> <pre><code>fieldnames = ['id', 'variable1', 'variable2'] f = open('file.csv', 'w') my_writer = csv.DictWriter(f, fieldnames) my_writer.writeheader() f.close() for i in something: something where I get data for mydict with open('file.csv', 'a+b') as f: header = next(csv.reader(f)) dict_writer = csv.DictWriter(f, header) dict_writer.writerow(mydict) </code></pre> <p>I was sure this code worked for me some years ago but probably I was using python2. Now I'm using python 3 and it shows the following error:</p> <pre><code>header = next(csv.reader(f)) StopIteration </code></pre> <p>What may be the problem? Thanks</p>
0
2016-10-19T15:53:14Z
40,138,021
<p>My solution was:</p> <pre><code>fieldnames = ['id', 'variable1', 'variable2'] f = open('file.csv', 'w', newline='') my_writer = csv.DictWriter(f, fieldnames) my_writer.writeheader() for i in something: something where I get data for mydict my_writer.writerow(mydict) f.close() </code></pre>
0
2016-10-19T17:34:45Z
[ "python", "python-3.x", "csv", "for-loop", "dictionary" ]
How to expose user passwords in the most "secure" way in django?
40,136,285
<p>I am working on a Django 1.9 project and I have been asked to enable some users to print a page with a list of a set of users and their passwords. Of course the passwords are encrypted and there are no out-of-the-box ways of doing this. I know this would imply a security breach so my question is kind of contradictory, but is there any logical way of doing this that doesn't imply a huge security breach in the software?</p>
1
2016-10-19T15:53:22Z
40,136,359
<p>No, there is no logical way of doing this that doesn't imply a huge security breach in the software. </p> <p>If the passwords are stored correctly (salted and hashed), then even site admins with unrestricted access on the database can not tell you what the passwords are in plain text. </p> <p>You should push back against this unreasonable request. If you have a working "password reset" functionality, then nobody but the user ever needs to know a user's password. <strong>If you don't have a reliable "password reset" feature, then try and steer the conversation and development effort in this direction</strong>. There is rarely any real business need for knowing/printing user passwords, and these kind of feature requests may be coming from non-technical people who have misunderstandings (or no understanding) about the implementation detail of authentication and authorization. </p>
3
2016-10-19T15:56:51Z
[ "python", "django", "passwords", "password-encryption" ]
Invalid Syntax from except
40,136,295
<p>I am receiving an invalid syntax from the line that says except: from this code...</p> <pre><code>from .utils.dataIO import fileIO from .utils import checks from __main__ import send_cmd_help from __main__ import settings as bot_settings import discord from discord.ext import commands import aiohttp import asyncio import json import os class Transformice: """Transformice""" def __init__(self, bot): self.bot = bot @checks.is_owner() @commands.group(name="tfm", pass_context=True, invoke_without_command=True) async def tfm(self, ctx): """Get Transformice Stats""" await send_cmd_help(ctx) @checks.is_owner() @tfm.command(pass_context=True) async def mouse(self, ctx, *name): """Get mouse info""" if name == (): mouse = "+".join(name) link = "http://api.micetigri.fr/json/player/" + mouse async with aiohttp.get(link) as r: result = await r.json() name = result['name'] msg = "**Mouse:** {}".format(name) await self.bot.say(msg) except: await self.bot.say("Invalid username!") def setup(bot): n = Transformice(bot) bot.add_cog(n) </code></pre> <p>Can someone explain why I am getting this error and how to fix it. I am confuse about some errors in python and how to fix that.</p>
-2
2016-10-19T15:53:41Z
40,136,379
<p>An <code>except</code> clause only makes sense after a <code>try</code> block, and there isn't one. It seems you're not looking for exception handling but simply an <code>else</code> clause.</p> <p>Either</p> <pre><code>try: code_that_might_fail() except ValueError: print("ouch.") </code></pre> <p>or</p> <pre><code>if condition: do_this() else: do_that() </code></pre>
2
2016-10-19T15:58:15Z
[ "python" ]
Invalid Syntax from except
40,136,295
<p>I am receiving an invalid syntax from the line that says except: from this code...</p> <pre><code>from .utils.dataIO import fileIO from .utils import checks from __main__ import send_cmd_help from __main__ import settings as bot_settings import discord from discord.ext import commands import aiohttp import asyncio import json import os class Transformice: """Transformice""" def __init__(self, bot): self.bot = bot @checks.is_owner() @commands.group(name="tfm", pass_context=True, invoke_without_command=True) async def tfm(self, ctx): """Get Transformice Stats""" await send_cmd_help(ctx) @checks.is_owner() @tfm.command(pass_context=True) async def mouse(self, ctx, *name): """Get mouse info""" if name == (): mouse = "+".join(name) link = "http://api.micetigri.fr/json/player/" + mouse async with aiohttp.get(link) as r: result = await r.json() name = result['name'] msg = "**Mouse:** {}".format(name) await self.bot.say(msg) except: await self.bot.say("Invalid username!") def setup(bot): n = Transformice(bot) bot.add_cog(n) </code></pre> <p>Can someone explain why I am getting this error and how to fix it. I am confuse about some errors in python and how to fix that.</p>
-2
2016-10-19T15:53:41Z
40,136,383
<p>You should put a <code>try</code> and an <code>except</code> block together, but in your code you have used only the <code>except</code> block; there is no <code>try</code> statement.</p> <pre><code> try: if name == (): mouse = "+".join(name) link = "http://api.micetigri.fr/json/player/" + mouse async with aiohttp.get(link) as r: result = await r.json() name = result['name'] msg = "**Mouse:** {}".format(name) await self.bot.say(msg) except: await self.bot.say("Invalid username!") </code></pre> <p>You could use something like the above if the error is only due to the <code>except</code> syntax, or perhaps you meant to use <code>else:</code> where you wrote <code>except</code>.</p>
0
2016-10-19T15:58:27Z
[ "python" ]
how do i find my ipv4 using python?
40,136,310
<p>my server copy it if you want! :) how do i find my ipv4 using python? can i you try to keep it real short?</p> <pre><code>import socket def Main(): host = '127.0.0.1' port = 5000 s = socket.socket() s.bind((host,port)) s.listen(1) c1, addr1 = s.accept() sending = "Connection:" + str(addr1) connection = (sending) print(connection) s.listen(1) c2, addr2 = s.accept() sending = "Connection:" + str(addr2) connection = (sending) print(connection) while True: data1 = c1.recv(1024).decode('utf-8') data2 = c2.recv(1024).decode('utf-8') if not data1: break if not data2: break if data2: c1.send(data2.encode('utf-8')) if data1: c2.send(data1.encode('utf-8')) s.close() if __name__== '__main__': Main() </code></pre> <p>thx for the help i appreciate it!</p>
0
2016-10-19T15:54:07Z
40,136,358
<p>That's all you need for the local address (returns a string):</p> <pre><code>socket.gethostbyname(socket.gethostname()) </code></pre>
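<p>Dropped into the question's <code>Main()</code>, only the <code>host</code> line changes (the exact address returned depends on how the machine's hostname resolves, so on some setups it may still come back as <code>127.0.0.1</code>):</p>
<pre><code>host = socket.gethostbyname(socket.gethostname())  # e.g. '192.168.1.5' instead of the hard-coded '127.0.0.1'
port = 5000
s = socket.socket()
s.bind((host, port))
</code></pre>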
0
2016-10-19T15:56:47Z
[ "python", "python-3.x", "ipv4" ]
Python: How to filter a DataFrame of dates in Pandas by a particular date within a window of some days?
40,136,428
<p>I have a DataFrame of dates and would like to filter for a particular date +- some days.</p> <pre><code>import pandas as pd import numpy as np import datetime dates = pd.date_range(start="08/01/2009",end="08/01/2012",freq="D") df = pd.DataFrame(np.random.rand(len(dates), 1)*1500, index=dates, columns=['Power']) </code></pre> <p>If I select lets say date <code>2009-08-03</code> and a window of <code>5</code> days, output would be similar to:</p> <pre><code>&gt;&gt;&gt; Power 2010-07-29 713.108020 2010-07-30 1055.109543 2010-07-31 951.159099 2010-08-01 1350.638983 2010-08-02 453.166697 2010-08-03 1066.859386 2010-08-04 1381.900717 2010-08-05 107.489179 2010-08-06 1195.945723 2010-08-07 1209.762910 2010-08-08 349.554492 </code></pre> <p>N.B.: The original problem I am trying to accomplish is under <a href="http://stackoverflow.com/questions/40117702/python-filter-dataframe-in-pandas-by-hour-day-and-month-grouped-by-year">Python: Filter DataFrame in Pandas by hour, day and month grouped by year</a></p>
0
2016-10-19T16:00:50Z
40,136,429
<p>The function I created to accomplish this is <code>filterDaysWindow</code> and can be used as follows:</p> <pre><code>import pandas as pd import numpy as np import datetime dates = pd.date_range(start="08/01/2009",end="08/01/2012",freq="D") df = pd.DataFrame(np.random.rand(len(dates), 1)*1500, index=dates, columns=['Power']) def filterDaysWindow(df, date, daysWindow): """ Filter a Dataframe by a date within a window of days @type df: DataFrame @param df: DataFrame of dates @type date: datetime.date @param date: date to focus on @type daysWindow: int @param daysWindow: Number of days to perform the days window selection @rtype: DataFrame @return: Returns a DataFrame with dates within date+-daysWindow """ dateStart = date - datetime.timedelta(days=daysWindow) dateEnd = date + datetime.timedelta(days=daysWindow) return df [dateStart:dateEnd] df_filtered = filterDaysWindow(df, datetime.date(2010,8,3), 5) print df_filtered </code></pre>
1
2016-10-19T16:00:50Z
[ "python", "date", "datetime", "pandas", "dataframe" ]
Method like argument in function
40,136,496
<p>I want use method in python / pandas like argument in a function. For example rolling statistics for dataframe:</p> <pre><code>def rolling (df, prefix = 'r', window = 3, method = 'here I wanna choose a method' ): for name in df.columns: df[prefix + name] = df[name].rolling(window).'here this method been called' return df </code></pre> <p>'mean()' or 'sum()' or whatever ... like </p> <pre><code>df.rolling(2).sum() </code></pre> <p>I worked 95% time in R, and in R it's simple (put function as an argument or return any function ). But in python I noob. So I creating package to make things easier for me. Like:</p> <pre><code>def head(x,k = 3): return x.head(k) </code></pre> <p>What function in python help me to use method like argument in a function?</p> <pre><code>#some data import numpy as np import pandas as pd from pandas_datareader.data import DataReader from datetime import datetime ibm = DataReader('IBM', 'yahoo', datetime(2000,1,1), datetime(2016,1,1)) ibm2 = rolling(ibm,'rr', 5, 'sum') # something like this </code></pre>
2
2016-10-19T16:04:20Z
40,136,558
<p>You can use <a href="https://docs.python.org/3.6/library/functions.html#getattr" rel="nofollow"><code>getattr</code></a> with a str of the name of the method. This gets the attribute with that name from the object (in this case, a method).</p> <pre><code>def rolling (df, prefix='r', window=3, method='sum'): for name in df.columns: df[prefix + name] = getattr(df[name].rolling(window), method)() return df </code></pre> <p>Or you could just pass in the method itself, as an unbound method of the <code>Rolling</code> class. When calling it, the first argument will be <code>self</code>.</p> <pre><code>def rolling (df, prefix='r', window=3, method=pd.core.window.Rolling.sum): for name in df.columns: df[prefix + name] = method(df[name].rolling(window)) return df </code></pre>
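<p>With the data from the question, the string-based variant can then be called like this:</p>
<pre><code>ibm2 = rolling(ibm, prefix='rr', window=5, method='sum')   # or method='mean', 'std', ...
</code></pre>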
3
2016-10-19T16:07:10Z
[ "python", "function", "pandas", "methods" ]
Method like argument in function
40,136,496
<p>I want use method in python / pandas like argument in a function. For example rolling statistics for dataframe:</p> <pre><code>def rolling (df, prefix = 'r', window = 3, method = 'here I wanna choose a method' ): for name in df.columns: df[prefix + name] = df[name].rolling(window).'here this method been called' return df </code></pre> <p>'mean()' or 'sum()' or whatever ... like </p> <pre><code>df.rolling(2).sum() </code></pre> <p>I worked 95% time in R, and in R it's simple (put function as an argument or return any function ). But in python I noob. So I creating package to make things easier for me. Like:</p> <pre><code>def head(x,k = 3): return x.head(k) </code></pre> <p>What function in python help me to use method like argument in a function?</p> <pre><code>#some data import numpy as np import pandas as pd from pandas_datareader.data import DataReader from datetime import datetime ibm = DataReader('IBM', 'yahoo', datetime(2000,1,1), datetime(2016,1,1)) ibm2 = rolling(ibm,'rr', 5, 'sum') # something like this </code></pre>
2
2016-10-19T16:04:20Z
40,136,571
<p>I do this</p> <pre><code>def rolling (df, prefix='r', window=3, method='method_name'): for name in df.columns: df[prefix + name] = df[name].rolling(window).__getattribute__(method)() return df </code></pre>
1
2016-10-19T16:08:05Z
[ "python", "function", "pandas", "methods" ]
Method like argument in function
40,136,496
<p>I want use method in python / pandas like argument in a function. For example rolling statistics for dataframe:</p> <pre><code>def rolling (df, prefix = 'r', window = 3, method = 'here I wanna choose a method' ): for name in df.columns: df[prefix + name] = df[name].rolling(window).'here this method been called' return df </code></pre> <p>'mean()' or 'sum()' or whatever ... like </p> <pre><code>df.rolling(2).sum() </code></pre> <p>I worked 95% time in R, and in R it's simple (put function as an argument or return any function ). But in python I noob. So I creating package to make things easier for me. Like:</p> <pre><code>def head(x,k = 3): return x.head(k) </code></pre> <p>What function in python help me to use method like argument in a function?</p> <pre><code>#some data import numpy as np import pandas as pd from pandas_datareader.data import DataReader from datetime import datetime ibm = DataReader('IBM', 'yahoo', datetime(2000,1,1), datetime(2016,1,1)) ibm2 = rolling(ibm,'rr', 5, 'sum') # something like this </code></pre>
2
2016-10-19T16:04:20Z
40,136,587
<p>A method is an attribute like any other (it just happens to be callable when bound to an object), so you can use <code>getattr</code>. (A default value of <code>None</code> is nonsense, of course, but I didn't want to reorder your signature to make <code>method</code> occur earlier without a default value.)</p> <pre><code>def rolling (df, prefix='r', window=3, method=None): for name in df.columns: obj = df[name].rolling(window) m = getattr(obj, method) df[prefix + name] = m() return df </code></pre>
0
2016-10-19T16:09:07Z
[ "python", "function", "pandas", "methods" ]
Append to dictionary in defaultdict
40,136,544
<p>I have a defaultdict, I want to create a dictionary as the value part, can I append to this dictionary? At the moment I am appending to a list which means that the dictionaries are separated. </p> <pre><code>dates = [datetime.date(2016, 10, 17), datetime.date(2016, 10, 18), datetime.date(2016, 10, 19), datetime.date(2016, 10, 20), datetime.date(2016, 10, 21), datetime.date(2016, 10, 22), datetime.date(2016, 10, 23)] e = defaultdict(list) for key, value in d.iteritems(): value = (sorted(value, key=itemgetter('date'), reverse=False)) for date in dates: for i in value: if i['date'] == str(date) and i['time'] == 'morning': value1 = float(i['value1']) temp = {'val_morning': value1 } e[str(date)].append(temp) elif ii['date'] == str(date) and i['time'] == 'evening': value2 = float(i['value2']) temp = {'val_evening': value2 } e[str(date)].append(temp) </code></pre> <p>which results in:</p> <pre><code>{'2016-10-20': [{'val_morning': 0.0}, {'val_evening': 0.0}], '2016-10-21': [{'val_morning': 0.0}, {'val_evening': 0.0}]} </code></pre> <p><strong>Edit</strong> desired output:</p> <pre><code>{ '2016-10-20': {'val_morning': 0.0, 'val_evening': 0.0}, '2016-10-21': {'val_morning': 0.0, 'val_evening': 0.0} } </code></pre>
0
2016-10-19T16:06:22Z
40,137,056
<p>If I understood you correctly, you want to replace the list with a dict that you can later add values to.</p> <p>If so, you can do this:</p> <pre><code>dates = [datetime.date(2016, 10, 17), datetime.date(2016, 10, 18), datetime.date(2016, 10, 19), datetime.date(2016, 10, 20), datetime.date(2016, 10, 21), datetime.date(2016, 10, 22), datetime.date(2016, 10, 23)] e = defaultdict(dict) for key, value in d.iteritems(): value = (sorted(value, key=itemgetter('date'), reverse=False)) for date in dates: for i in value: if i['date'] == str(date) and i['time'] == 'morning': value1 = float(i['value1']) temp = {'val_morning': value1 } e[str(date)].update(temp) #### HERE I replaced append with update! elif i['date'] == str(date) and i['time'] == 'evening': # note: the 'ii' in the question looks like a typo for 'i' value2 = float(i['value2']) temp = {'val_evening': value2 } e[str(date)].update(temp) #### HERE I replaced append with update! </code></pre> <p>I simply replaced the append with <a href="https://docs.python.org/2/library/stdtypes.html#dict.update" rel="nofollow" title="update">update</a> (and of course made the defaultdict use dict instead of list).</p>
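<p>A minimal, self-contained illustration of the difference (just a sketch with a made-up key):</p> <pre><code>from collections import defaultdict

e = defaultdict(dict)
e['2016-10-20'].update({'val_morning': 0.0})   # creates the inner dict on first access
e['2016-10-20'].update({'val_evening': 0.0})   # merges into the same inner dict
print(dict(e))
# {'2016-10-20': {'val_morning': 0.0, 'val_evening': 0.0}}
</code></pre>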
0
2016-10-19T16:36:49Z
[ "python", "defaultdict" ]
Append to dictionary in defaultdict
40,136,544
<p>I have a defaultdict, I want to create a dictionary as the value part, can I append to this dictionary? At the moment I am appending to a list which means that the dictionaries are separated. </p> <pre><code>dates = [datetime.date(2016, 10, 17), datetime.date(2016, 10, 18), datetime.date(2016, 10, 19), datetime.date(2016, 10, 20), datetime.date(2016, 10, 21), datetime.date(2016, 10, 22), datetime.date(2016, 10, 23)] e = defaultdict(list) for key, value in d.iteritems(): value = (sorted(value, key=itemgetter('date'), reverse=False)) for date in dates: for i in value: if i['date'] == str(date) and i['time'] == 'morning': value1 = float(i['value1']) temp = {'val_morning': value1 } e[str(date)].append(temp) elif ii['date'] == str(date) and i['time'] == 'evening': value2 = float(i['value2']) temp = {'val_evening': value2 } e[str(date)].append(temp) </code></pre> <p>which results in:</p> <pre><code>{'2016-10-20': [{'val_morning': 0.0}, {'val_evening': 0.0}], '2016-10-21': [{'val_morning': 0.0}, {'val_evening': 0.0}]} </code></pre> <p><strong>Edit</strong> desired output:</p> <pre><code>{ '2016-10-20': {'val_morning': 0.0, 'val_evening': 0.0}, '2016-10-21': {'val_morning': 0.0, 'val_evening': 0.0} } </code></pre>
0
2016-10-19T16:06:22Z
40,137,304
<p>If I understand correctly:</p> <pre><code>import datetime dates = [datetime.date(2016, 10, 17), datetime.date(2016, 10, 18), datetime.date(2016, 10, 19), datetime.date(2016, 10, 20), datetime.date(2016, 10, 21), datetime.date(2016, 10, 22), datetime.date(2016, 10, 23)] dict_x = {} for i in map(str,set(dates)): dict_x[i] = {'val_morning': 0.0, 'val_evening': 0.0} dict_x </code></pre> <p>Output:</p> <pre><code>{'2016-10-17': {'val_evening': 0.0, 'val_morning': 0.0}, '2016-10-18': {'val_evening': 0.0, 'val_morning': 0.0}, '2016-10-19': {'val_evening': 0.0, 'val_morning': 0.0}, '2016-10-20': {'val_evening': 0.0, 'val_morning': 0.0}, '2016-10-21': {'val_evening': 0.0, 'val_morning': 0.0}, '2016-10-22': {'val_evening': 0.0, 'val_morning': 0.0}, '2016-10-23': {'val_evening': 0.0, 'val_morning': 0.0}} </code></pre>
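<p>If you prefer, the same initialisation can be written as a dict comprehension (an equivalent sketch, Python 2.7+):</p> <pre><code>dict_x = {d: {'val_morning': 0.0, 'val_evening': 0.0} for d in map(str, set(dates))}
</code></pre>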
0
2016-10-19T16:51:20Z
[ "python", "defaultdict" ]
Learn Python the Hard way ex25 - Want to check my understanding
40,136,550
<p>total noob here confused all to hell about something in "Learn Python the Hard Way." Apologies if this has been covered; I searched and could only find posts about not getting the desired results from the code.</p> <p>My question relates to the interaction of two functions in <a href="https://learnpythonthehardway.org/book/ex25.html" rel="nofollow">exercise 25</a>:</p> <pre><code>def break_words(stuff): words = stuff.split(' ') return words </code></pre> <p>and</p> <pre><code>def sort_sentence(sentence): words = break_words(sentence) return sort_words(words) </code></pre> <p>So, near the end of the exercise Zed has you run this in the terminal:</p> <pre><code>&gt;&gt;&gt; sorted_words = ex25.sort_sentence(sentence) &gt;&gt;&gt; sorted_words ['All', 'come', ’good’, ’things’, ’those’, ’to’, ’wait.’, ’who’] </code></pre> <p>Now I assume the argument in 'sort_sentence' comes from the following, entered in the terminal at the start of the exercise:</p> <pre><code>&gt;&gt;&gt; sentence = "All good things come to those who wait." </code></pre> <p>But although we now know the above is the argument for 'sort_sentence,' 'sort_sentence' can't complete without running 'break_words', with 'sentence' again as <em>its</em> argument. Here's where I get confused: The argument for 'break_words' is labeled 'stuff.' Does this matter? Can 'sentence' just be passed into 'break_words' from 'sorted_words' no matter what the argument for 'break_words' is labeled?</p> <p>So assuming what I assumed - that the argument label doesn't matter - 'break_words' ought to run with 'sentence' as its argument and return 'words', which is the output of the function 'stuff.split' contained therein. This is where I get <em>really</em> confused - what does the 'words' returned from 'break_words' have to do with the variable 'words' defined as a part of 'sort_sentence'? I simply can't figure out how these functions work together. Thank you in advance for your help!</p>
1
2016-10-19T16:06:40Z
40,137,282
<p>This is more or less how Python functions work:</p> <pre><code>def function_name(parameter_name_used_locally_within_function_name): #do stuff with parameter_name_used_locally_within_function_name some_new_value = parameter_name_used_locally_within_function_name return some_new_value </code></pre> <p>Notice how the parameter is only within the scope of the function <code>function_name</code>. That variable will only be used in that function and not outside of it. When we return a variable from a function, we can assign it to another variable by calling the function:</p> <pre><code>my_variable = function_name("hello") </code></pre> <p><code>my_variable</code> now has <code>"hello"</code> as its value, since we called the function passing in the value <code>"hello"</code>. Notice I didn't call the function with a specific variable name? We don't care what the parameter name is; all we know is that the function takes one input. That parameter name is only used in the function. Notice how we receive the value of <code>some_new_value</code> without knowing the name of that variable when we called the function?</p> <p>Let me give you a broader example of what's going on. Functions can be thought of as a task you give someone to do. Let's say the function or task is to ask someone to cook something for us. The chef or task needs ingredients to cook with (that's our input), and we wish to get food back (our returned output). Let's say I want an omelette. I know I have to give the chef eggs to make me one; I don't care how he makes it or what he does to it, as long as I get my output/omelette back. He can call the eggs what he wants, he can break the eggs how he wants, he can fry it in the pan how he likes, but as long as I get my omelette, I'm happy.</p> <p>Back in our programming world, the function would be something like:</p> <pre><code>def cook_me_something(ingredients): #I don't know how the chef makes things for us nor do I care if ingredients == "eggs": food = "omelette" elif ingredients == "water": food = "boiled water" return food </code></pre> <p>We call it like this:</p> <pre><code>my_food_to_eat = cook_me_something("eggs") </code></pre> <p>Notice I gave him "eggs" and I got some "omelette" back. I didn't say the eggs are the ingredients, nor did I know what he called the food that he gave me. He just returned food that contained an omelette.</p> <p>Now let's talk about chaining functions together.</p> <p>So we have the basics down: I give something to the chef and he gives me food back based on what I gave him. So what if we gave him something that he needs to process before cooking with it? Let's say he doesn't know how to grind coffee beans, but his co-chef-worker knows how to. He would pass the beans to that person to grind the coffee beans down and then cook with the returned result.</p> <pre><code>def cook_me_something(ingredients): #I don't know how the chef makes things for us nor do I care if ingredients == "eggs": food = "omelette" elif ingredients == "water": food = "boiled water" elif ingredients == "coffee beans": co_worker_finished_product = help_me_co_worker(ingredients) #makes coffee with the co_worker_finished_product which would be coffee grindings food = "coffee" return food #we have to define that function of the co worker helping: def help_me_co_worker(chef_passed_ingredients): if chef_passed_ingredients == "coffee beans": ingredients = "coffee grinding" return ingredients </code></pre> <p>Notice how the co-worker has a local variable <code>ingredients</code>? It's different from what the chef has, since the chef has his own ingredients and the co-worker has his own. Notice how the chef didn't care what the co-worker called his ingredients or how he handled the items. The chef gave something to the co-worker and expected the finished product.</p> <p>That's more or less how it works. As long as functions get their input, they will do work and maybe give an output. We don't care what they call their variables inside their functions, because those are their own items.</p> <p>So let's go back to your example:</p> <pre><code>def break_words(stuff): words = stuff.split(' ') return words def sort_sentence(sentence): words = break_words(sentence) return sort_words(words) &gt;&gt;&gt; sentence = "All good things come to those who wait." &gt;&gt;&gt; sorted_words = ex25.sort_sentence(sentence) &gt;&gt;&gt; sorted_words ['All', 'come', ’good’, ’things’, ’those’, ’to’, ’wait.’, ’who’] </code></pre> <p>Let's see if we can break it down for you to understand.</p> <p>You called <code>sorted_words = ex25.sort_sentence(sentence)</code> and set <code>sorted_words</code> to the output of the function <code>sort_sentence()</code>, which is <code>['All', 'come', ’good’, ’things’, ’those’, ’to’, ’wait.’, ’who’]</code>. You passed in the input <code>sentence</code>.</p> <p><code>sort_sentence(sentence)</code> gets executed. The string you passed in is now called <code>sentence</code> inside the function. Note that you could have called the function like this and it would still work:</p> <pre><code>sorted_words = ex25.sort_sentence("All good things come to those who wait.") </code></pre> <p>And the function <code>sort_sentence()</code> will still call that string <code>sentence</code>. The function basically says: whatever my input is, I'm calling it sentence. You can pass me your object named sentence, and I'm going to call it sentence while I'm working with it.</p> <p>Next on the stack is:</p> <pre><code>words = break_words(sentence) </code></pre> <p>which calls the function break_words with what the function <code>sort_sentence</code> called <code>sentence</code>. So if you follow the trace, it's basically doing:</p> <pre><code>words = break_words("All good things come to those who wait.") </code></pre> <p>Next on the stack is:</p> <pre><code>words = stuff.split(' ') return words </code></pre> <p>Note that the function calls its input <code>stuff</code>. So it took sort_sentence's input, which sort_sentence called <code>sentence</code>, and the function <code>break_words</code> is now calling it <code>stuff</code>.</p> <p>It splits the "sentence" up into words, stores them in a list, and returns the list "words".</p> <p>Notice how the function <code>sort_sentence</code> is storing the output of <code>break_words</code> in the variable <code>words</code>. Notice how the function <code>break_words</code> is returning a variable named <code>words</code>? They are the same in this case, but it doesn't matter if one called it differently. <code>sort_sentence</code> could store the output as <code>foo</code> and it would still work. We are talking about different scopes of variables. Outside of the function <code>break_words</code> the variable <code>words</code> can be anything, and <code>break_words</code> would not care. But inside <code>break_words</code> that variable is the output of the function.</p> <p>It's a "my house, my rules" type of thing: outside of my house you can do whatever you want.</p> <p>Same deal with <code>sort_sentence</code>'s return value and how we store what we got back from it. It doesn't matter how we store it or what we call it.</p> <p>If you wanted, you could rename it as:</p> <pre><code>def break_words(stuff): break_words_words = stuff.split(' ') return break_words_words def sort_sentence(sentence): words = break_words(sentence) return sort_words(words) #not sure where this function sort_words is coming from. #return words would work normally. &gt;&gt;&gt; sentence = "All good things come to those who wait." &gt;&gt;&gt; sorted_words = ex25.sort_sentence(sentence) &gt;&gt;&gt; sorted_words ['All', 'come', ’good’, ’things’, ’those’, ’to’, ’wait.’, ’who’] </code></pre> <p>You just have to think of local variables and parameters as names for the things you work with. Like in our example with the chef: the chef might call the eggs ingredients, but I called them whatever I wanted and just passed in "eggs". It's all about scope. Think of functions as a house: while you are in the house, you can name the objects in the house whatever you want; outside of the house those same names could refer to different things, but inside the house they are what you want them to be. And when you throw something out, your name for that item has nothing to do with the outside world, since the outside world will name it something else (it might even name it the same thing).</p> <p>If I rambled too much, ask questions and I will try to clear it up for you.</p> <p>Edited</p> <p>Coming back from lunch, I thought of variables as containers. They hold the values, but you don't care what other people's containers are named. You only care about yours, and when someone gives you something you put it in a container and name it something that will help you know what's inside it. When you give away an item, you don't give away the container, because you need it to store other things.</p>
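<p>For completeness: <code>sort_words</code>, which the code above calls but never defines, is defined earlier in the exercise; if you don't have it handy, it is essentially just a wrapper around the built-in <code>sorted()</code>, something like:</p> <pre><code>def sort_words(words):
    # in the book this just hands the list to sorted()
    return sorted(words)
</code></pre>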
1
2016-10-19T16:49:58Z
[ "python" ]
Learn Python the Hard way ex25 - Want to check my understanding
40,136,550
<p>total noob here confused all to hell about something in "Learn Python the Hard Way." Apologies if this has been covered; I searched and could only find posts about not getting the desired results from the code.</p> <p>My question relates to the interaction of two functions in <a href="https://learnpythonthehardway.org/book/ex25.html" rel="nofollow">exercise 25</a>:</p> <pre><code>def break_words(stuff): words = stuff.split(' ') return words </code></pre> <p>and</p> <pre><code>def sort_sentence(sentence): words = break_words(sentence) return sort_words(words) </code></pre> <p>So, near the end of the exercise Zed has you run this in the terminal:</p> <pre><code>&gt;&gt;&gt; sorted_words = ex25.sort_sentence(sentence) &gt;&gt;&gt; sorted_words ['All', 'come', ’good’, ’things’, ’those’, ’to’, ’wait.’, ’who’] </code></pre> <p>Now I assume the argument in 'sort_sentence' comes from the following, entered in the terminal at the start of the exercise:</p> <pre><code>&gt;&gt;&gt; sentence = "All good things come to those who wait." </code></pre> <p>But although we now know the above is the argument for 'sort_sentence,' 'sort_sentence' can't complete without running 'break_words', with 'sentence' again as <em>its</em> argument. Here's where I get confused: The argument for 'break_words' is labeled 'stuff.' Does this matter? Can 'sentence' just be passed into 'break_words' from 'sorted_words' no matter what the argument for 'break_words' is labeled?</p> <p>So assuming what I assumed - that the argument label doesn't matter - 'break_words' ought to run with 'sentence' as its argument and return 'words', which is the output of the function 'stuff.split' contained therein. This is where I get <em>really</em> confused - what does the 'words' returned from 'break_words' have to do with the variable 'words' defined as a part of 'sort_sentence'? I simply can't figure out how these functions work together. Thank you in advance for your help!</p>
1
2016-10-19T16:06:40Z
40,137,638
<blockquote> <p>that the argument label doesn't matter</p> </blockquote> <p>It matters in the sense that it's used "locally" within the function definition. Basically think of it as another local variable you define in the function definition but the values of the arguments are given to the function.</p> <p>Keeping this in mind, your next question is easy to answer:</p> <blockquote> <p>what does the 'words' returned from 'break_words' have to do with the variable 'words' defined as a part of 'sort_sentence'?</p> </blockquote> <p>Nothing. As stated previously, <code>words</code> is a local variable of <code>sort_sentence</code> and so is basically trashed when you leave the function ("falls out of scope" is the lingo). Of course, you can use <code>words</code> as the name of variables elsewhere, such as in another function definition, and that's what's happening here.</p>
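<p>A stripped-down sketch that makes the scoping point concrete (here <code>sorted()</code> stands in for the book's <code>sort_words</code>):</p> <pre><code>def break_words(stuff):
    words = stuff.split(' ')       # this 'words' exists only inside break_words
    return words

def sort_sentence(sentence):
    words = break_words(sentence)  # a separate 'words', local to sort_sentence
    return sorted(words)

print(sort_sentence("All good things"))   # ['All', 'good', 'things']
</code></pre>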
0
2016-10-19T17:10:37Z
[ "python" ]
Django: authenticate the user
40,136,636
<p>I have the following code:</p> <p><strong># creating user:</strong></p> <pre><code>def create_user(request): if request.method == 'POST': user_info = forms.UserInfoForm(request.POST) if user_info.is_valid(): cleaned_info = user_info.cleaned_data User.objects.create_user(username=cleaned_info['username'], password=cleaned_info['password']) render(.......) </code></pre> <p>This works. I can check the auth_user table and I see the username and password along with all the other fields created and added.</p> <p>Now, after creating a user with <strong>username='testcase' and password='test'</strong> using the code above, I try to authenticate the user with the following code.</p> <p><strong># Authenticate User</strong></p> <pre><code>def get_entry(request): if request.method == 'POST': user = authenticate(username='testcase', password='test') if user: ......... </code></pre> <p>The user is always returned as None. What is going on? I am running Django 1.10.2.</p> <p><strong>Update:</strong></p> <p>I can see the user created by the create_user function when I log into the admin. The status was not staff (as it was supposed to be). I changed that to staff to see if that was causing the problem, but the get_entry method still yields None for the user. It is frustrating. I don't really know what I am doing wrong.</p>
0
2016-10-19T16:11:55Z
40,137,099
<p>Store the user returned by <code>create_user()</code> in a variable and call <code>user.save()</code> on it (<code>create_user()</code> should already persist the user, but an explicit <code>save()</code> does no harm). Try it:</p> <pre><code>def create_user(request): if request.method == 'POST': user_info = forms.UserInfoForm(request.POST) if user_info.is_valid(): cleaned_info = user_info.cleaned_data user = User.objects.create_user(username=cleaned_info['username'], password=cleaned_info['password']) user.save() render(.......) </code></pre> <p>Then you need to call <code>auth.authenticate</code> (i.e. <code>django.contrib.auth.authenticate</code>) in your <code>get_entry</code> function:</p> <pre><code>def get_entry(request): if request.method == 'POST': user = auth.authenticate(username='testcase', password='test') if user: ......... </code></pre>
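<p>For completeness, a sketch of the imports this assumes (adjust the last one to wherever <code>UserInfoForm</code> actually lives in your project):</p> <pre><code>from django.contrib import auth                  # for auth.authenticate(...)
from django.contrib.auth.models import User      # for User.objects.create_user(...)
from django.shortcuts import render
from myapp import forms                          # placeholder for your own forms module
</code></pre>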
0
2016-10-19T16:39:21Z
[ "python", "django" ]
Django: authenticate the user
40,136,636
<p>I have the following code:</p> <p><strong># creating user:</strong></p> <pre><code>def create_user(request): if request.method == 'POST': user_info = forms.UserInfoForm(request.POST) if user_info.is_valid(): cleaned_info = user_info.cleaned_data User.objects.create_user(username=cleaned_info['username'], password=cleaned_info['password']) render(.......) </code></pre> <p>This works. I can check the auth_user table and I see the username and password along with all the other fields created and added.</p> <p>Now, after creating a user with <strong>username='testcase' and password='test'</strong> using the code above, I try to authenticate the user with the following code.</p> <p><strong># Authenticate User</strong></p> <pre><code>def get_entry(request): if request.method == 'POST': user = authenticate(username='testcase', password='test') if user: ......... </code></pre> <p>The user is always returned as None. What is going on? I am running Django 1.10.2.</p> <p><strong>Update:</strong></p> <p>I can see the user created by the create_user function when I log into the admin. The status was not staff (as it was supposed to be). I changed that to staff to see if that was causing the problem, but the get_entry method still yields None for the user. It is frustrating. I don't really know what I am doing wrong.</p>
0
2016-10-19T16:11:55Z
40,137,547
<p>Your code seems to be correct.</p> <p>The problem might be in the way the params are being passed to your <code>create_user</code> view (param passing in the <code>get_entry</code> view is highly unlikely to be a problem, since the params <code>username</code> and <code>password</code> are hard-coded).</p> <p>Try printing out <code>username</code> and <code>password</code> before passing them to <code>User.objects.create_user()</code>, since it's possible that the <code>password</code> field is not being saved properly and/or an empty <code>password</code> is being passed, and Django might be creating a hash for the empty password.</p> <p>P.S.: This is just speculation; I need your response to this for further diagnosis of the issue.</p>
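<p>A rough sketch of what that temporary debugging could look like inside <code>create_user</code> (remove it once you've seen the values; never log real passwords):</p> <pre><code>if user_info.is_valid():
    cleaned_info = user_info.cleaned_data
    # temporary debug output only
    print('username=%r password=%r' % (cleaned_info.get('username'),
                                       cleaned_info.get('password')))
    User.objects.create_user(username=cleaned_info['username'],
                             password=cleaned_info['password'])
</code></pre>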
0
2016-10-19T17:05:32Z
[ "python", "django" ]
'Stack()' output with all Individual index's filled in Pandas DataFrame
40,136,651
<p>I have the following DataFrame:</p> <pre><code>import pandas as pd import numpy as np dates = pd.date_range('20130101',periods=6) df = pd.DataFrame(np.random.randn(6,4),index=dates,columns=list('ABCD')) </code></pre> <p>which is:</p> <pre><code>out[]:df A B C D 2013-01-01 0.849638 0.163683 -0.422279 -0.981363 2013-01-02 -0.828562 -0.726762 -0.154431 1.695164 2013-01-03 1.668989 1.057559 -0.958682 -1.443136 2013-01-04 -3.386432 0.115499 -2.095343 -1.887334 2013-01-05 1.595712 0.270327 -0.532860 -0.690501 2013-01-06 -1.734169 0.574431 -0.982097 1.092113 </code></pre> <p>I stacked the dataframe with purpose and it appears as below:</p> <pre><code>2013-01-01 A 0.849638 B 0.163683 C -0.422279 D -0.981363 2013-01-02 A -0.828562 B -0.726762 C -0.154431 D 1.695164 2013-01-03 A 1.668989 B 1.057559 C -0.958682 D -1.443136 2013-01-04 A -3.386432 B 0.115499 C -2.095343 D -1.887334 2013-01-05 A 1.595712 B 0.270327 C -0.532860 D -0.690501 2013-01-06 A -1.734169 B 0.574431 C -0.982097 D 1.092113 dtype: float64 </code></pre> <p>I wish to have the dates printed in all the rows instead of having merged together. I want to have something like this:</p> <pre><code>2013-01-01 A 0.849638 2013-01-01 B 0.163683 2013-01-01 C -0.422279 2013-01-01 D -0.981363 ....... ....... 2013-01-06 A -1.734169 2013-01-06 B 0.574431 2013-01-06 C -0.982097 2013-01-06 D 1.092113 dtype: float64 </code></pre> <p>Can anyone please help me to achieve this goal. Thank you.</p>
1
2016-10-19T16:12:41Z
40,137,007
<p>the relevant pandas option is <code>'display.multi_sparse'</code><br> you can set it yourself with</p> <pre><code>pd.set_option('display.multi_sparse', False) </code></pre> <p>or use <code>pd.option_context</code> to temporarily set it in a <code>with</code> block</p> <pre><code>with pd.option_context('display.multi_sparse', False): dates = pd.date_range('20130101',periods=6) print(pd.DataFrame(np.random.randn(6,4),index=dates,columns=list('ABCD')).stack()) 2013-01-01 A 0.074056 2013-01-01 B 0.565971 2013-01-01 C 0.312375 2013-01-01 D 0.000926 2013-01-02 A 0.669702 2013-01-02 B 0.458241 2013-01-02 C 0.854965 2013-01-02 D 1.608542 2013-01-03 A 0.358990 2013-01-03 B 0.194446 2013-01-03 C -0.988489 2013-01-03 D -0.967467 2013-01-04 A -0.768605 2013-01-04 B 0.791746 2013-01-04 C 0.073552 2013-01-04 D -0.604505 2013-01-05 A 0.254031 2013-01-05 B 0.143891 2013-01-05 C -0.351159 2013-01-05 D 0.642623 2013-01-06 A 0.499416 2013-01-06 B -0.588694 2013-01-06 C 1.418078 2013-01-06 D -0.071737 dtype: float64 </code></pre>
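<p>If you do set the option globally, you can later restore the default with (a one-line sketch):</p> <pre><code>pd.reset_option('display.multi_sparse')   # back to the default sparse display
</code></pre>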
1
2016-10-19T16:33:44Z
[ "python", "pandas", "dataframe", "data-munging" ]
Using Google API for Python- where do I get the client_secrets.json file from?
40,136,699
<p>I am looking into using the Google API to allow users to create/ edit calendar entries in a company calendar (Google calendar) from within iCal.</p> <p>I'm following the instructions at: <a href="https://developers.google.com/api-client-library/python/auth/web-app" rel="nofollow">https://developers.google.com/api-client-library/python/auth/web-app</a></p> <p>Step 2 says that I will need the application's <code>client ID</code> and <code>client secret</code>. I can see the <code>client ID</code> in the 'Credentials' page for my app, but I have no idea what the <code>client secret</code> is or where I get that from- anyone know what this is? How do I download it? Where can I get the value from to update the field?</p>
0
2016-10-19T16:15:37Z
40,136,814
<p>If you go to your <a href="https://console.developers.google.com/apis/credentials" rel="nofollow">Google developers console</a> you should see a section titled <strong>OAuth 2.0 client IDs</strong>. Click on an entry in that list, and you will see a number of fields, including <strong>Client secret</strong>. </p> <p>If you have not yet created credentials, click the <strong>Create credentials</strong> button, and follow the instructions to create new credentials, and then follow the steps outlined above to find the <strong>Client secret</strong>.</p>
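<p>Once you have the JSON file (there is a download icon next to each OAuth 2.0 client ID on the Credentials page), the guide you linked consumes it roughly like this; treat it as a sketch, with the scope and redirect URI as placeholders you would replace:</p> <pre><code>from oauth2client import client

flow = client.flow_from_clientsecrets(
    'client_secrets.json',                                      # the downloaded file
    scope='https://www.googleapis.com/auth/calendar',           # example scope
    redirect_uri='https://yourapp.example.com/oauth2callback')  # placeholder
</code></pre>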
0
2016-10-19T16:22:24Z
[ "python", "json", "google-api" ]
AWS Lambda sending HTTP request
40,136,746
<p>This is likely a question with an easy answer, but I can't seem to figure it out.</p> <p>Background: I have a Python Lambda function that picks up changes in a DB and then POSTs the changes as JSON to a URL over HTTP. I'm using urllib2 sort of like this:</p> <pre><code># this runs inside a loop, in reality my error handling is much better request = urllib2.Request(url) request.add_header('Content-type', 'application/json') try: response = urllib2.urlopen(request, json_message) except: response = "Failed!" </code></pre> <p>It seems from the logs that the call to send the messages is either skipped entirely or times out while waiting for a response.</p> <p>Is there a permission setting I'm missing? The outbound rules in AWS appear to be right. [Edit] - The VPC applied to this Lambda does have internet access, and the security groups applied appear to allow internet access. [/Edit]</p> <p>I've tested the code locally (connected to the same data source) and it works flawlessly.</p> <p>It appears the other questions related to posting from a Lambda are about node.js, and usually the problem is that the URL is wrong. In this case, I'm using a requestb.in URL that I know is working, since it works when running locally.</p>
0
2016-10-19T16:18:27Z
40,139,889
<p>If you've deployed your Lambda function inside your VPC, it does not obtain a public IP address, even if it's deployed into a subnet with a route to an Internet Gateway. It only obtains a private IP address, and thus can not communicate to the public Internet by itself.</p> <p>To communicate to the public Internet, Lambda functions deployed inside your VPC need to be done so in a private subnet which has a <a href="http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html#route-tables-nat" rel="nofollow">route</a> to either a <a href="https://aws.amazon.com/blogs/aws/new-managed-nat-network-address-translation-gateway-for-aws/" rel="nofollow">NAT Gateway</a> or a self-managed <a href="http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html" rel="nofollow">NAT instance</a>.</p>
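<p>For reference, a rough boto3 sketch of the pieces involved (all IDs are placeholders; the NAT gateway must live in a <em>public</em> subnet, and the route is added to the <em>private</em> subnet's route table that the Lambda uses):</p> <pre><code>import boto3

ec2 = boto3.client('ec2')

eip = ec2.allocate_address(Domain='vpc')
nat = ec2.create_nat_gateway(SubnetId='subnet-PUBLIC123',        # a public subnet
                             AllocationId=eip['AllocationId'])

ec2.create_route(RouteTableId='rtb-PRIVATE123',                  # the private subnet's route table
                 DestinationCidrBlock='0.0.0.0/0',
                 NatGatewayId=nat['NatGateway']['NatGatewayId'])
</code></pre>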
2
2016-10-19T19:22:55Z
[ "python", "python-2.7", "amazon-web-services", "aws-lambda" ]
How to assert call order and parameters when mocking multiple calls to the same method?
40,136,811
<p>I have multiple calls to the same mock and I want to check each calls parameters and order in which it was called.</p> <p>E.g. if I needed to check just the last call, I would use this:</p> <pre><code>mock.assert_called_once_with( 'GET', 'https://www.foobar.com', params=OrderedDict([ ('email', 'email'), ]), headers=None, data=None) </code></pre> <p>However I want to do this for each call.</p> <p>I've managed to do that, like this:</p> <p>mycode.py</p> <pre><code>from requests import Session class Foo(object): def req(method, url, data, params=None, headers=None): self.session = Session() r = self.session.request(method, url, data=data, params=params, headers=headers) return r </code></pre> <p>test_mycode.py</p> <pre><code>@patch('myapp.mycode.Session') def test_foobar(self, Session): # Set mock. self.request_mock = Session.return_value.request self.request_mock.return_value = MagicMock() data = {'foo': 'bar'} f = Foo() f.req('POST', 'https://www.foobar.com/', data=data) f.req('GET', 'https://www.foobar.com/', data=None) self.assertEqual(self.request_mock.call_count, 2) call1 = self.request_mock._mock_call_args_list[0] call2 = self.request_mock._mock_call_args_list[1] call_params = ( ('POST', 'https://www.foobar.com'), { 'headers': None, 'allow_redirects': False, 'params': None, 'data': json.dumps(data) } ) self.assertEqual(tuple(call1), call_params) call_params = ( ('GET', 'https://www.foobar.com'), { 'headers': None, 'allow_redirects': False, 'params': None, 'data': None } ) self.assertEqual(tuple(call2), call_params) </code></pre> <p>This works, but I'm a little concerned about my assertEqual methods on call parameters. I feel like there's a better way of doing this. I'm still fairly new to mocking so any suggestions would be appreciated.</p>
0
2016-10-19T16:22:12Z
40,136,908
<p>You probably want to use the <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.assert_has_calls" rel="nofollow"><code>Mock.assert_has_calls</code></a> method.</p> <pre><code>self.assertEqual(self.request_mock.call_count, 2) self.request_mock.assert_has_calls([ mock.call( 'POST', 'https://www.foobar.com', headers=None, allow_redirects=False, params=None, data=json.dumps(data)), mock.call( 'GET', 'https://www.foobar.com', headers=None, allow_redirects=False, params=None, data=None) ]) </code></pre> <p>By default, <code>assert_has_calls</code> will check that the calls happen in the proper order. If you don't care about the order, you can use the <code>any_order</code> keyword argument (set to <code>True</code>).</p>
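<p>And if the order really doesn't matter, the same check with the keyword mentioned above is just (sketch; <code>expected_calls</code> is a name introduced here for the list of <code>mock.call</code> objects shown above):</p> <pre><code>self.request_mock.assert_has_calls(expected_calls, any_order=True)
</code></pre>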
3
2016-10-19T16:27:41Z
[ "python", "unit-testing", "mocking" ]
Filter by a Reference Property in Appnengine
40,136,910
<p>I am writing a blog in App Engine. I want to make a query to get the number of posts by category, so I need to filter by a ReferenceProperty in App Engine. Here is my actual code.</p> <p>These are my models:</p> <pre><code>class Comment(db.Model) : user = db.ReferenceProperty(User) post = db.ReferenceProperty(Blog) subject = db.StringProperty(required = True) content = db.TextProperty(required = True) date = db.DateProperty(auto_now_add = True) last_modified = db.DateProperty() status = db.BooleanProperty(default = True) class Category(db.Model): name = db.StringProperty() date = db.DateProperty(auto_now_add=True) class Blog(db.Model) : subject = db.StringProperty(required = True) content = db.TextProperty(required = True) date = db.DateProperty(auto_now_add = True) category = db.ReferenceProperty(Category) user = db.ReferenceProperty(User) last_modified = db.DateProperty(auto_now = True) status = db.BooleanProperty() likes = db.IntegerProperty(default = 0) users_liked = db.ListProperty(db.Key, default = []) dislikes = db.IntegerProperty(default = 0) users_disliked = db.ListProperty(db.Key, default = []) </code></pre> <p>And this is my query:</p> <pre><code>def numcomments_all_category() : dic = {} category = get_category() for cat in category : dic[cat.key().id()] = Comment.all().filter("post.category =", cat.key()).ancestor(ancestor_key).count() return dic </code></pre> <p>But it seems that filter("post.category =", cat.key()) is not the correct way to do this.</p>
0
2016-10-19T16:28:02Z
40,141,393
<p>I haven't used <code>db</code> in a while, but I think something like this will work:</p> <pre><code>count = 0 # Get all blogs of the desired category blogs = Blog.all().filter("category =", cat.key()) for blog in blogs: # For each blog, count all the comments. count += Comment.all().filter("post =", blog.key()).count() </code></pre>
0
2016-10-19T20:57:18Z
[ "python", "google-app-engine", "gae-datastore" ]
Call a Python function with arguments based on user input
40,136,965
<p>I would like to call a function from a user input, but include arguments in the parenthesis. For example, if I have a function that takes one argument:</p> <pre><code>def var(value): print(value) </code></pre> <p>I would like to ask the user for a command and arguments, then call the function with the arguments:</p> <pre><code>Input Command: var("Test") Test </code></pre>
-2
2016-10-19T16:30:52Z
40,137,267
<p>Split the function name from the arguments. Look up the function by name using a predefined map. Parse the arguments with <code>literal_eval</code>. Call the function with the arguments.</p> <pre><code>available = {} def register_func(f): available[f.__name__] = f @register_func def var(value): print(value) from ast import literal_eval def do_user_func(user_input): name, args = user_input.split('(', 1) return available[name](*literal_eval('(' + args[:-1] + ',)')) do_user_func("var('test')") # prints "test" </code></pre> <p>This is still incredibly brittle, any invalid input will fail (such as forgetting parentheses, or an invalid function name). It's up to you to make this more robust.</p> <p><code>literal_eval</code> is still somewhat unsafe on untrusted input, as it's possible to construct small strings that evaluate to large amounts of memory. <code>'[' * 10 + ']' * 10</code>, for a safe but demonstrative example.</p> <p>Finally, <strong>do not use <code>eval</code> on untrusted user input</strong>. <a href="http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html" rel="nofollow">There is no practical way to secure it from malicious input.</a> While it will evaluate the nice input you expect, it will also evaluate code that, for example, will delete all your files.</p>
6
2016-10-19T16:49:04Z
[ "python" ]
Call a Python function with arguments based on user input
40,136,965
<p>I would like to call a function from a user input, but include arguments in the parenthesis. For example, if I have a function that takes one argument:</p> <pre><code>def var(value): print(value) </code></pre> <p>I would like to ask the user for a command and arguments, then call the function with the arguments:</p> <pre><code>Input Command: var("Test") Test </code></pre>
-2
2016-10-19T16:30:52Z
40,137,349
<p>I am going to post this solution as an alternative, under the assumption that you are dealing with <em>simple</em> inputs such as: </p> <pre><code>var(arg) </code></pre> <p>Or, a single function call that can take a list of positional arguments. </p> <p>By using <code>eval</code> it would be a horrible un-recommended idea, as already mentioned. I think that is the security risk you were reading about.</p> <p>The ideal way to perform this approach is to have a dictionary, mapping the string to the method you want to execute.</p> <p>Furthermore, you can consider an alternative way to do this. Have a space separated input to know how to call your function with arguments. Consider an input like this: </p> <pre><code>"var arg1 arg2" </code></pre> <p>So when you input that: </p> <pre><code>call = input().split() </code></pre> <p>You will now have: </p> <pre><code>['var', 'arg1', 'arg2'] </code></pre> <p>You can now consider your first argument the function, and everything else the arguments you are passing to the function. So, as a functional example: </p> <pre><code>def var(some_arg, other_arg): print(some_arg) print(other_arg) d = {"var": var} call = input().split() d[call[0]](*call[1:]) </code></pre> <p>Demo: </p> <pre><code>var foo bar foo bar </code></pre>
1
2016-10-19T16:53:15Z
[ "python" ]
Call a Python function with arguments based on user input
40,136,965
<p>I would like to call a function from a user input, but include arguments in the parenthesis. For example, if I have a function that takes one argument:</p> <pre><code>def var(value): print(value) </code></pre> <p>I would like to ask the user for a command and arguments, then call the function with the arguments:</p> <pre><code>Input Command: var("Test") Test </code></pre>
-2
2016-10-19T16:30:52Z
40,137,449
<p>Instead of using eval, you can parse it yourself. This way, you have control over how each function should parse/deserialize the user input's arguments.</p> <pre class="lang-python prettyprint-override"><code>import sys, re def custom_print(value): print value def custom_add(addends): print sum(addends) def deserialize_print(args): # just print it as is custom_print(args) def deserialize_add(args): # remove all whitespace, split on commas, parse as floats addends = [float(x) for x in re.sub(r"\s", "", args).split(",")] # send to custom_add function custom_add(addends) def get_command(): cmd_input = raw_input("Command: ") # -- check that the command is formatted properly # and capture command groups match = re.match(r"^([a-zA-Z0-9]+)(\(.*\))?$", cmd_input) if match: # extract matched groups to separate variables (cmd, argstring) = match.groups() # strip parenthesis off of argstring if argstring: args = argstring[1:-1] # send the whole argument string to its corresponding function if cmd == "print": deserialize_print(args) elif cmd == "add": deserialize_add(args) elif cmd == "exit": sys.exit() else: print "Command doesn't exist." else: print "Invalid command." # recurse until exit get_command() # -- begin fetching commands get_command() </code></pre> <hr> <p>This is a pretty rough setup, although you can get by with some more error checking and improving the deserializing functions and modularizing function additions.</p> <p>If the decoupled deserialize functions seem too much, you can also just move the deserialization into the custom functions themselves.</p>
0
2016-10-19T16:59:36Z
[ "python" ]
Call a Python function with arguments based on user input
40,136,965
<p>I would like to call a function from a user input, but include arguments in the parenthesis. For example, if I have a function that takes one argument:</p> <pre><code>def var(value): print(value) </code></pre> <p>I would like to ask the user for a command and arguments, then call the function with the arguments:</p> <pre><code>Input Command: var("Test") Test </code></pre>
-2
2016-10-19T16:30:52Z
40,138,089
<p>You should investigate the <a href="https://docs.python.org/2/library/cmd.html?highlight=cmd#module-cmd" rel="nofollow">cmd</a> module. This allows you to parse input similar to shell commands, but I believe you can get tricky and change the delimiters if the parentheses are an important part of the specification.</p>
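<p>A minimal sketch of what that could look like with the default space-separated syntax (so the user would type <code>var Test</code> rather than <code>var("Test")</code>; the class and command names are just examples):</p> <pre><code>import cmd

class MyShell(cmd.Cmd):
    prompt = 'Input Command: '

    def do_var(self, arg):
        """var &lt;value&gt; -- print the value"""
        print(arg)

    def do_quit(self, arg):
        """quit -- exit the shell"""
        return True   # returning True stops cmdloop()

if __name__ == '__main__':
    MyShell().cmdloop()
</code></pre>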
1
2016-10-19T17:38:03Z
[ "python" ]
Validating input with inquirer
40,137,035
<p>I'm trying to check if the length of my input is valid like this:</p> <pre><code>questions = [ inquirer.Text('b_file', message='.GBK File', validate=lambda file: len(str(file))), inquirer.Text('e_file', message='.XLS File', validate=lambda file: len(str(file)))] </code></pre> <p>But it isn't working. It says that it is not a valid input:</p> <pre><code>&gt;&gt;&gt; import inquirer &gt;&gt;&gt; questions = [ ... inquirer.Text('b_file', message='.GBK File', ... validate=lambda file: len(str(file))), ... inquirer.Text('e_file', message='.XLS File', ... validate=lambda file: len(str(file)))] &gt;&gt;&gt; answers = inquirer.prompt(questions) [?] .GBK File: foo &gt;&gt; "foo" is not a valid b_file. </code></pre>
1
2016-10-19T16:35:09Z
40,137,206
<p>The function used for <code>validate</code> must take <strong>two</strong> arguments; the first is a dictionary with previously given answers, and the second is the current answer.</p> <p>The <a href="https://github.com/magmax/python-inquirer/blob/master/inquirer/questions.py#L115-L121" rel="nofollow">code to handle validation</a> catches <em>all</em> exceptions and turns those into validation errors, so using a lambda with just one argument will always result in validation failing.</p> <p>Make your lambda accept the answers dictionary too; you can ignore the value given:</p> <pre><code>questions = [ inquirer.Text('b_file', message='.GBK File', validate=lambda answers, file: len(str(file))), inquirer.Text('e_file', message='.XLS File', validate=lambda answers, file: len(str(file)))] </code></pre> <p>With that change, the questions work:</p> <pre><code>&gt;&gt;&gt; import inquirer &gt;&gt;&gt; questions = [ ... inquirer.Text('b_file', message='.GBK File', ... validate=lambda answers, file: len(str(file))), ... inquirer.Text('e_file', message='.XLS File', ... validate=lambda answers, file: len(str(file)))] &gt;&gt;&gt; answers = inquirer.prompt(questions) [?] .GBK File: foo [?] .XLS File: bar &gt;&gt;&gt; pprint(answers) {'b_file': 'foo', 'e_file': 'bar'} </code></pre>
0
2016-10-19T16:44:47Z
[ "python" ]
How to find the longest sub-array within a threshold?
40,137,051
<p>Let's say you have a sorted array of numbers <code>sorted_array</code> and a threshold <code>threshold</code>. What is the fastest way to find the longest sub-array in which all the values are within the threshold? In other words, find indices <code>i</code> and <code>j</code> such that:</p> <ol> <li><code>sorted_array[j] - sorted_array[i] &lt;= threshold</code></li> <li><code>j - i</code> is maximal</li> </ol> <p>In case of a tie, return the pair with the smallest <code>i</code>.</p> <p>I already have a loop-based solution, which I will post as an answer, but I'm curious to see if there's a better way, or a way that can avoid the loop using a vector-capable language or library like NumPy, for example.</p> <p>Example input and output:</p> <pre><code>&gt;&gt;&gt; sorted_array = [0, 0.7, 1, 2, 2.5] &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 0.2) (0, 0) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 0.4) (1, 2) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 0.8) (0, 1) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 1) (0, 2) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 1.9) (1, 4) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 2) (0, 3) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 2.6) (0, 4) </code></pre>
1
2016-10-19T16:36:27Z
40,137,068
<p>Here's a simple loop-based solution in Python:</p> <pre><code>def longest_subarray_within_threshold(sorted_array, threshold): result = (0, 0) longest = 0 i = j = 0 end = len(sorted_array) while i &lt; end: if j &lt; end and sorted_array[j] - sorted_array[i] &lt;= threshold: current_distance = j - i if current_distance &gt; longest: longest = current_distance result = (i, j) j += 1 else: i += 1 return result </code></pre>
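<p>Running it against the sample input from the question, as a quick sanity check:</p> <pre><code>sorted_array = [0, 0.7, 1, 2, 2.5]
print(longest_subarray_within_threshold(sorted_array, 0.8))   # (0, 1)
print(longest_subarray_within_threshold(sorted_array, 1.9))   # (1, 4)
</code></pre>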
2
2016-10-19T16:37:28Z
[ "python", "arrays", "algorithm", "numpy" ]
How to find the longest sub-array within a threshold?
40,137,051
<p>Let's say you have a sorted array of numbers <code>sorted_array</code> and a threshold <code>threshold</code>. What is the fastest way to find the longest sub-array in which all the values are within the threshold? In other words, find indices <code>i</code> and <code>j</code> such that:</p> <ol> <li><code>sorted_array[j] - sorted_array[i] &lt;= threshold</code></li> <li><code>j - i</code> is maximal</li> </ol> <p>In case of a tie, return the pair with the smallest <code>i</code>.</p> <p>I already have a loop-based solution, which I will post as an answer, but I'm curious to see if there's a better way, or a way that can avoid the loop using a vector-capable language or library like NumPy, for example.</p> <p>Example input and output:</p> <pre><code>&gt;&gt;&gt; sorted_array = [0, 0.7, 1, 2, 2.5] &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 0.2) (0, 0) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 0.4) (1, 2) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 0.8) (0, 1) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 1) (0, 2) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 1.9) (1, 4) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 2) (0, 3) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 2.6) (0, 4) </code></pre>
1
2016-10-19T16:36:27Z
40,137,780
<p>Here's a vectorized approach using <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> -</p> <pre><code>def longest_thresh_subarray(sorted_array,thresh): diffs = (sorted_array[:,None] - sorted_array) r = np.arange(sorted_array.size) valid_mask = r[:,None] &gt; r mask = (diffs &lt;= thresh) &amp; valid_mask bestcolID = (mask).sum(0).argmax() idx = np.nonzero(mask[:,bestcolID])[0] if len(idx)==0: out = (0,0) else: out = idx[0]-1, idx[-1] return out </code></pre> <p>Sample runs -</p> <pre><code>In [137]: sorted_array = np.array([0, 0.7, 1, 2, 2.5]) In [138]: longest_thresh_subarray(sorted_array,0.2) Out[138]: (0, 0) In [139]: longest_thresh_subarray(sorted_array,0.4) Out[139]: (1, 2) In [140]: longest_thresh_subarray(sorted_array,0.8) Out[140]: (0, 1) In [141]: longest_thresh_subarray(sorted_array,1) Out[141]: (0, 2) In [142]: longest_thresh_subarray(sorted_array,1.9) Out[142]: (1, 4) In [143]: longest_thresh_subarray(sorted_array,2) Out[143]: (0, 3) In [144]: longest_thresh_subarray(sorted_array,2.6) Out[144]: (0, 4) </code></pre>
0
2016-10-19T17:19:16Z
[ "python", "arrays", "algorithm", "numpy" ]
How to find the longest sub-array within a threshold?
40,137,051
<p>Let's say you have a sorted array of numbers <code>sorted_array</code> and a threshold <code>threshold</code>. What is the fastest way to find the longest sub-array in which all the values are within the threshold? In other words, find indices <code>i</code> and <code>j</code> such that:</p> <ol> <li><code>sorted_array[j] - sorted_array[i] &lt;= threshold</code></li> <li><code>j - i</code> is maximal</li> </ol> <p>In case of a tie, return the pair with the smallest <code>i</code>.</p> <p>I already have a loop-based solution, which I will post as an answer, but I'm curious to see if there's a better way, or a way that can avoid the loop using a vector-capable language or library like NumPy, for example.</p> <p>Example input and output:</p> <pre><code>&gt;&gt;&gt; sorted_array = [0, 0.7, 1, 2, 2.5] &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 0.2) (0, 0) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 0.4) (1, 2) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 0.8) (0, 1) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 1) (0, 2) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 1.9) (1, 4) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 2) (0, 3) &gt;&gt;&gt; longest_subarray_within_threshold(sorted_array, 2.6) (0, 4) </code></pre>
1
2016-10-19T16:36:27Z
40,138,678
<p>Most likely, the OP's own answer is the best possible algorithm, as it is O(n). However, the pure-python overhead makes it very slow. However, this overhead can easily be reduced by compiling the algorithm using <a href="http://numba.pydata.org/" rel="nofollow" title="numba">numba</a>, with the current version (0.28.1 as of this writing), there is no need for any manual typing, simply decorating your function with <code>@numba.njit()</code> is enough.</p> <p>However, if you do not want to depend on <a href="http://numba.pydata.org/" rel="nofollow" title="numba">numba</a>, there is a numpy algorithm in O(n log n):</p> <pre><code>def algo_burnpanck(sorted_array,thresh): limit = np.searchsorted(sorted_array,sorted_array+thresh,'right') distance = limit - np.arange(limit.size) best = np.argmax(distance) return best, limit[best]-1 </code></pre> <p>I did run a quick profiling on my own machine of the two previous answers (OP's and Divakar's), as well as my numpy algorithm and the numba version of the OP's algorithm.</p> <pre><code>thresh = 1 for n in [100, 10000]: sorted_array = np.sort(np.random.randn(n,)) for f in [algo_user1475412,algo_Divakar,algo_burnpanck,algo_user1475412_numba]: a,b = f(sorted_array, thresh) d = b-a diff = sorted_array[b]-sorted_array[a] closestlonger = np.min(sorted_array[d+1:]-sorted_array[:-d-1]) assert sorted_array[b]-sorted_array[a]&lt;=thresh assert closestlonger&gt;thresh print('f=%s, n=%d thresh=%s:'%(f.__name__,n,thresh))#,d,a,b,diff,closestlonger) %timeit f(sorted_array, thresh) </code></pre> <p>Here are the results:</p> <pre><code>f=algo_user1475412, n=100 thresh=1: 10000 loops, best of 3: 111 µs per loop f=algo_Divakar, n=100 thresh=1: 10000 loops, best of 3: 74.6 µs per loop f=algo_burnpanck, n=100 thresh=1: 100000 loops, best of 3: 9.38 µs per loop f=algo_user1475412_numba, n=100 thresh=1: 1000000 loops, best of 3: 764 ns per loop f=algo_user1475412, n=10000 thresh=1: 100 loops, best of 3: 12.1 ms per loop f=algo_Divakar, n=10000 thresh=1: 1 loop, best of 3: 1.76 s per loop f=algo_burnpanck, n=10000 thresh=1: 1000 loops, best of 3: 308 µs per loop f=algo_user1475412_numba, n=10000 thresh=1: 10000 loops, best of 3: 82.9 µs per loop </code></pre> <p>At 100 numbers, O(n^2) solution using numpy just barely beats the O(n) python solution, but quickly after, the scaling makes that algorithm useless. The O(n log n) keeps up even at 10000 numbers, but the numba approach is unbeaten everywhere.</p>
2
2016-10-19T18:10:35Z
[ "python", "arrays", "algorithm", "numpy" ]
Why is hash() slower under python3.4 vs python2.7
40,137,072
<p>I was doing some performance evaluation using timeit and discovered a performance degredation between python 2.7.10 and python 3.4.3. I narrowed it down to the <code>hash()</code> function:</p> <p>python 2.7.10:</p> <pre><code>&gt;&gt;&gt; import timeit &gt;&gt;&gt; timeit.timeit('for x in xrange(100): hash(x)', number=100000) 0.4529099464416504 &gt;&gt;&gt; timeit.timeit('hash(1000)') 0.044638872146606445 </code></pre> <p>python 3.4.3:</p> <pre><code>&gt;&gt;&gt; import timeit &gt;&gt;&gt; timeit.timeit('for x in range(100): hash(x)', number=100000) 0.6459149940637872 &gt;&gt;&gt; timeit.timeit('hash(1000)') 0.07708719989750534 </code></pre> <p>That's an approx. 40% degradation! It doesn't seem to matter if integers, floats, strings(unicodes or bytearrays), etc, are being hashed; the degradation is about the same. In both cases the hash is returning a 64-bit integer. The above was run on my Mac, and got a smaller degradation (20%) on an Ubuntu box.</p> <p>I've also used PYTHONHASHSEED=random for the python2.7 tests and in <em>some</em> cases, restarting python for each "case", I saw the <code>hash()</code> performance get a bit worse, but never as slow as python3.4</p> <p>Anyone know what's going on here? Was a more-secure, but slower, hash function chosen for python3 ?</p>
4
2016-10-19T16:37:45Z
40,137,700
<p>There are two changes in the <code>hash()</code> function between Python 2.7 and Python 3.4:</p> <ol> <li>Adoption of <em>SipHash</em></li> <li>Default enabling of <em>hash randomization</em></li> </ol> <hr> <p><em>References:</em></p> <ul> <li>Since Python 3.4, it uses <a href="https://131002.net/siphash/" rel="nofollow">SipHash</a> as its hashing function. Read: <a href="https://lwn.net/Articles/574761/" rel="nofollow">Python adopts SipHash</a></li> <li>Since Python 3.3, <em>hash randomization is enabled by default.</em> Reference: <a href="https://docs.python.org/3/reference/datamodel.html#object.__hash__" rel="nofollow"><code>object.__hash__</code></a> (last line of that section). Setting <a href="https://docs.python.org/3/using/cmdline.html#envvar-PYTHONHASHSEED" rel="nofollow"><code>PYTHONHASHSEED</code></a> to the value 0 will disable hash randomization.</li> </ul>
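<p>One rough way to see how much of the difference is due to randomization (as opposed to SipHash itself) is to rerun the timing with randomization disabled in a child interpreter; this sketch assumes both <code>python2</code> and <code>python3</code> are on your PATH:</p> <pre><code>import os
import subprocess

env = dict(os.environ, PYTHONHASHSEED='0')   # 0 disables hash randomization
for exe in ('python2', 'python3'):
    out = subprocess.check_output(
        [exe, '-m', 'timeit', 'for x in range(100): hash(x)'], env=env)
    print(exe + ': ' + out.decode().strip())
</code></pre>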
2
2016-10-19T17:14:42Z
[ "python", "python-3.4" ]
How to make multiple file from different folder same name in one file in python
40,137,134
<p>I want to combine data from multiple files in different folders into one file, but only for files that have the same name in all folders.</p> <p>Script:</p> <pre><code>import os filenames = [os.path.join('C:/Users/Vishnu/Desktop/Test_folder/Input/','*.txt'), os.path.join('C:/Users/Vishnu/Desktop/Test_folder/Output/','*.txt')] f = open(r'C:/Users/Vishnu/Desktop/Test_output/', 'wb') for fname in filenames: with open(fname) as infile: for line in infile: f.write(line) </code></pre> <p>Error I'm getting:</p> <pre><code>f = open(r"C:/Users/Vishnu/Desktop/Test_output/", "wb") IOError: [Errno 13] Permission denied: 'C:/Users/Vishnu/Desktop/Test_output/' &gt;&gt;&gt; </code></pre>
0
2016-10-19T16:41:13Z
40,137,208
<p>Firstly, you are trying to open the folder itself. Secondly, we have to close the file every time we write to it to avoid permission issues.</p> <p>I tried this code. It should work now:</p> <pre><code>import os import glob #So that * in directory listing can be interpreted as all filenames filenames = [glob.glob(os.path.join(os.path.expanduser('~'),'Desktop','Test_folder','Input','*.txt')), glob.glob(os.path.join(os.path.expanduser('~'),'Desktop','Test_folder','Output','*.txt'))] filenames[0].extend(filenames[1]) filenames=filenames[0] if( not os.path.isdir(os.path.join(os.path.expanduser('~'), 'Desktop', 'Test_output'))): os.mkdir(os.path.join(os.path.expanduser('~'), 'Desktop', 'Test_output')) for fname in filenames: with open(fname) as file: for line in file.readlines(): f = open(os.path.join(os.path.expanduser('~'), 'Desktop', 'Test_output','{:}.txt'.format(os.path.split(fname)[-1] )), 'a+') f.write(line) f.close() #This should take care of the permissions issue </code></pre>
1
2016-10-19T16:44:50Z
[ "python", "python-2.7", "python-3.x" ]
draw horizontal bars on the same line
40,137,137
<p>I have to draw a Gantt resource type of chart. The idea is to draw several horizontal bars on the same line (corresponding to a resource), each with its length represented by a start date and an end date.</p> <p>This is the expected result: <a href="https://i.stack.imgur.com/aqj97.png" rel="nofollow"><img src="https://i.stack.imgur.com/aqj97.png" alt="enter image description here"></a></p> <p>Altogether we have around 200 resources and a maximum of 50 tasks for each to display, so performance is important.</p> <p>Any idea?</p> <p>In addition, the tasks should be draggable by mouse. Any solution (fat GUI (PyQt, wxWidgets, tkinter, ...) or web based (Flask, web2py, etc.)) is OK.</p>
-2
2016-10-19T16:41:15Z
40,137,659
<p>Actually, I'm gonna cheat and post you something straight from the <a href="http://matplotlib.org/users/event_handling.html#draggable-rectangle-exercise" rel="nofollow">Matplotlib Documentation</a>. This should get you started with draggable objects in mpl. you'll have to come up with your own dynamic object creation code...</p> <p>full credit to the guys over at mpl:</p> <pre><code># draggable rectangle with the animation blit techniques; see # http://www.scipy.org/Cookbook/Matplotlib/Animations import numpy as np import matplotlib.pyplot as plt class DraggableRectangle: lock = None # only one can be animated at a time def __init__(self, rect): self.rect = rect self.press = None self.background = None def connect(self): 'connect to all the events we need' self.cidpress = self.rect.figure.canvas.mpl_connect( 'button_press_event', self.on_press) self.cidrelease = self.rect.figure.canvas.mpl_connect( 'button_release_event', self.on_release) self.cidmotion = self.rect.figure.canvas.mpl_connect( 'motion_notify_event', self.on_motion) def on_press(self, event): 'on button press we will see if the mouse is over us and store some data' if event.inaxes != self.rect.axes: return if DraggableRectangle.lock is not None: return contains, attrd = self.rect.contains(event) if not contains: return print('event contains', self.rect.xy) x0, y0 = self.rect.xy self.press = x0, y0, event.xdata, event.ydata DraggableRectangle.lock = self # draw everything but the selected rectangle and store the pixel buffer canvas = self.rect.figure.canvas axes = self.rect.axes self.rect.set_animated(True) canvas.draw() self.background = canvas.copy_from_bbox(self.rect.axes.bbox) # now redraw just the rectangle axes.draw_artist(self.rect) # and blit just the redrawn area canvas.blit(axes.bbox) def on_motion(self, event): 'on motion we will move the rect if the mouse is over us' if DraggableRectangle.lock is not self: return if event.inaxes != self.rect.axes: return x0, y0, xpress, ypress = self.press dx = event.xdata - xpress dy = event.ydata - ypress self.rect.set_x(x0+dx) self.rect.set_y(y0+dy) canvas = self.rect.figure.canvas axes = self.rect.axes # restore the background region canvas.restore_region(self.background) # redraw just the current rectangle axes.draw_artist(self.rect) # blit just the redrawn area canvas.blit(axes.bbox) def on_release(self, event): 'on release we reset the press data' if DraggableRectangle.lock is not self: return self.press = None DraggableRectangle.lock = None # turn off the rect animation property and reset the background self.rect.set_animated(False) self.background = None # redraw the full figure self.rect.figure.canvas.draw() def disconnect(self): 'disconnect all the stored connection ids' self.rect.figure.canvas.mpl_disconnect(self.cidpress) self.rect.figure.canvas.mpl_disconnect(self.cidrelease) self.rect.figure.canvas.mpl_disconnect(self.cidmotion) fig = plt.figure() ax = fig.add_subplot(111) rects = ax.bar(range(10), 20*np.random.rand(10)) drs = [] for rect in rects: dr = DraggableRectangle(rect) dr.connect() drs.append(dr) plt.show() </code></pre>
1
2016-10-19T17:11:54Z
[ "python", "bar-chart" ]
read data in specific column and row of a text file
40,137,219
<p>I have a text file which contains 3 columns and 20000 rows. I would like to know what I should do to get a specific piece of data, for example the value in row 1000 and column 2. My first column is formatted like AAAA and my second column is a number like 1234. I tried the solution below, but I got an error because my first column contains letters. I would like to end up with a variable which contains my data:</p> <pre><code>with open('my_file') as f:
    for x, line in enumerate(f):
        if x = 1000:
            numfloat = map(float, line.split())
            print numfloat[1]
</code></pre>
1
2016-10-19T16:45:29Z
40,137,335
<p>You are trying to use <code>float()</code> on something that contains letters. This happens when you call:</p> <pre><code>numfloat = map(float, line.split())
</code></pre> <p>You need to tell us the exact output that you are looking for, but here is one possible solution:</p> <pre><code>num_float = map(float, line.split()[1])
</code></pre> <p>or, better yet (the <code>map</code> version above applies <code>float</code> to every character of that one field):</p> <pre><code>num_float = float(line.split()[1])
</code></pre> <p>This will only get you the middle column; I'm not certain whether you need the entire row or not.</p> <p>Additionally, as noted below, you need to change <code>=</code> to <code>==</code> in your if statement. <code>=</code> is for assignment, <code>==</code> is for comparison.</p>
2
2016-10-19T16:52:36Z
[ "python", "text", "row", "line" ]
read data in specific column and row of a text file
40,137,219
<p>I have a text file which contains 3 columns and 20000 rows. I would like to know what I should do to get a specific piece of data, for example the value in row 1000 and column 2. My first column is formatted like AAAA and my second column is a number like 1234. I tried the solution below, but I got an error because my first column contains letters. I would like to end up with a variable which contains my data:</p> <pre><code>with open('my_file') as f:
    for x, line in enumerate(f):
        if x = 1000:
            numfloat = map(float, line.split())
            print numfloat[1]
</code></pre>
1
2016-10-19T16:45:29Z
40,137,486
<p>Change '=' to '==' on line 3 of your code:</p> <pre><code>with open('my_file', 'r') as f:
    for x, line in enumerate(f):
        if x == 1000:
            print float(line.split()[1])
</code></pre>
0
2016-10-19T17:01:33Z
[ "python", "text", "row", "line" ]
read data in specific column and row of a text file
40,137,219
<p>I have a text file which contains 3 columns and 20000 rows. I would like to know what I should do to get a specific piece of data, for example the value in row 1000 and column 2. My first column is formatted like AAAA and my second column is a number like 1234. I tried the solution below, but I got an error because my first column contains letters. I would like to end up with a variable which contains my data:</p> <pre><code>with open('my_file') as f:
    for x, line in enumerate(f):
        if x = 1000:
            numfloat = map(float, line.split())
            print numfloat[1]
</code></pre>
1
2016-10-19T16:45:29Z
40,137,562
<pre><code>import re

with open('my_file', 'r') as f:
    for x, line in enumerate(f):
        if x == 1000:
            array = re.split(r'(\d+)', line)  # array[1] is the first run of digits in the line
            print array[1]                    # your required second column
</code></pre>
0
2016-10-19T17:06:12Z
[ "python", "text", "row", "line" ]
Exponential values in Python Pandas
40,137,232
<p>I have a case of quite huge numbers in Python pandas; the dataframe looks like this:</p> <pre><code>trades
4.536115e+07
3.889124e+07
2.757327e+07
</code></pre> <p>How can these numbers be displayed as "normal" values instead of exponential notation in pandas?</p> <p>Thanks!</p>
0
2016-10-19T16:46:38Z
40,137,374
<pre><code>&gt;&gt;&gt; float(4.536115e+07) 45361150.0 </code></pre> <p>or</p> <pre><code>&gt;&gt;&gt; f = 4.536115e+07 &gt;&gt;&gt; "%.16f" % f '45361150.0000000000000000' </code></pre>
0
2016-10-19T16:54:58Z
[ "python", "pandas", "dataframe", "exponential" ]
Exponential values in Python Pandas
40,137,232
<p>I have a case of quite huge numbers in Python pandas; the dataframe looks like this:</p> <pre><code>trades
4.536115e+07
3.889124e+07
2.757327e+07
</code></pre> <p>How can these numbers be displayed as "normal" values instead of exponential notation in pandas?</p> <p>Thanks!</p>
0
2016-10-19T16:46:38Z
40,137,528
<p>You could change the pandas options as such:</p> <pre><code>&gt;&gt;&gt; data = np.array([4.536115e+07, 3.889124e+07, 2.757327e+07]) &gt;&gt;&gt; pd.set_option('display.float_format', lambda x: '%.f' % x) &gt;&gt;&gt; pd.DataFrame(data, columns=['trades']) trades 0 45361150 1 38891240 2 27573270 </code></pre>
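<p>If you would rather not change the global display option, formatting just the one column also works; a small sketch, assuming the same <code>trades</code> column (this returns strings, so keep it for display only):</p> <pre><code>df['trades'].map('{:.0f}'.format)
</code></pre>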
1
2016-10-19T17:04:03Z
[ "python", "pandas", "dataframe", "exponential" ]
Error when passing parameter to form
40,137,243
<p>I'm trying to pass a parameter to a form, in this case an object_id. The form is used only on the <em>change_view</em>; this code works:</p> <p>My form:</p> <pre><code>class MyForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        self.my_id = kwargs.pop('my_id', None)
        super(MyForm, self).__init__(*args, **kwargs)

    class Meta:
        model = MyModel
        fields = ('thing_to_show_a',)
</code></pre> <p>My admin model:</p> <pre><code>class MyModelAdmin(admin.ModelAdmin):
    def change_view(self, request, obj_id):
        self.form = MyForm
        return super(MyModelAdmin, self).change_view(request, obj_id)
    ...
</code></pre> <p>But if I try to pass the id as a parameter to the form:</p> <pre><code>class MyModelAdmin(admin.ModelAdmin):
    def change_view(self, request, obj_id):
        self.form = MyForm(my_id=obj_id)
        return super(MyModelAdmin, self).change_view(request, obj_id)
    ...
</code></pre> <p>I get:</p> <pre><code>'MyForm' object has no attribute '__name__'
</code></pre>
0
2016-10-19T16:47:39Z
40,143,003
<p>In <code>self.form = MyForm</code> you assign a class object to self.form. In <code>self.form = MyForm(my_id=obj_id)</code> you instantiate an object of class MyForm and assign it to self.form.</p> <p>Django expects to find a class in <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/admin/#django.contrib.admin.ModelAdmin.form" rel="nofollow">self.form</a>, not an instance.</p>
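<p>A minimal sketch of one way around it, assuming the form is only ever used on the change view: since this is a <code>ModelForm</code>, the object's id is already available inside <code>__init__</code> as <code>self.instance.pk</code>, so nothing needs to be passed from the admin at all:</p> <pre><code>class MyForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        super(MyForm, self).__init__(*args, **kwargs)
        # self.instance is the object being edited; pk is None on the add view
        self.my_id = self.instance.pk

    class Meta:
        model = MyModel
        fields = ('thing_to_show_a',)
</code></pre> <p>If you really do need to inject extra keyword arguments, override <code>ModelAdmin.get_form()</code> so that it still returns a form <em>class</em> (for example a small subclass that fills in <code>my_id</code>), since the admin expects a class, not an instance.</p>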
0
2016-10-19T23:15:34Z
[ "python", "django", "django-forms" ]
Concatinating multiple Data frames of different length
40,137,372
<p>I have 88 different DataFrames of different lengths which I need to concatenate. They are all located in one directory, and I used the following Python script to produce a single data frame.</p> <p>Here is what I tried:</p> <pre><code>path = 'GTFS/'
files = os.listdir(path)
files_txt = [os.path.join(path, i) for i in files if i.endswith('.tsv')]
## Change it into dataframes
dfs = [pd.DataFrame.from_csv(x, sep='\t')[[6]] for x in files_txt]
## Concatenate them
merged = pd.concat(dfs, axis=1)
</code></pre> <p>Since each of those data frames has a different length or shape, it throws me the following error message:</p> <pre><code>ValueError: Shape of passed values is (88, 57914), indices imply (88, 57905)
</code></pre> <p>My aim is to concatenate column-wise into a single data frame with 88 columns, as my input is 88 separate data frames from which I need to use the 7th column, as in my script. Any solutions or suggestions for concatenating the data frames would be great. Thank you.</p>
0
2016-10-19T16:54:51Z
40,139,190
<p>The key is to make a <code>list</code> of the different data-frames and then concatenate the list instead of concatenating them individually.</p> <p>I created 10 <code>df</code>s filled with random-length data of one column and saved them to <code>csv</code> files to simulate your data.</p> <pre><code>import pandas as pd
import numpy as np
from random import randint

# generate 10 df and save to separate csv files
for i in range(1, 11):
    dfi = pd.DataFrame({'a': np.arange(randint(2, 11))})
    csv_file = "file{0}.csv".format(i)
    dfi.to_csv(csv_file, sep='\t')
    print "saving file", csv_file
</code></pre> <p>Then we read those 10 <code>csv</code> files into separate data-frames and save them to a <code>list</code>:</p> <pre><code># read previously saved csv files into 10 separate df
# and add to list
frames = []
for x in range(1, 11):
    csv_file = "file{0}.csv".format(x)
    newdf = pd.DataFrame.from_csv(csv_file, sep='\t')
    frames.append(newdf)
</code></pre> <p>Finally, we concatenate the <code>list</code>:</p> <pre><code># concatenate frames list
result = pd.concat(frames, axis=1)
print result
</code></pre> <p>The result is the frames of variable length concatenated column-wise into a single <code>df</code>.</p> <pre><code>saving file file1.csv
saving file file2.csv
saving file file3.csv
saving file file4.csv
saving file file5.csv
saving file file6.csv
saving file file7.csv
saving file file8.csv
saving file file9.csv
saving file file10.csv
      a    a    a    a    a    a    a   a    a
0   0.0  0.0  0.0  0.0  0.0  0.0  0.0   0  0.0
1   1.0  1.0  1.0  1.0  1.0  1.0  1.0   1  1.0
2   2.0  2.0  2.0  2.0  2.0  2.0  2.0   2  2.0
3   3.0  3.0  3.0  3.0  3.0  NaN  3.0   3  NaN
4   4.0  4.0  4.0  4.0  4.0  NaN  NaN   4  NaN
5   5.0  5.0  5.0  5.0  5.0  NaN  NaN   5  NaN
6   6.0  6.0  6.0  6.0  6.0  NaN  NaN   6  NaN
7   NaN  7.0  7.0  7.0  7.0  NaN  NaN   7  NaN
8   NaN  8.0  NaN  NaN  8.0  NaN  NaN   8  NaN
9   NaN  NaN  NaN  NaN  9.0  NaN  NaN   9  NaN
10  NaN  NaN  NaN  NaN  NaN  NaN  NaN  10  NaN
</code></pre> <p>Hope this is what you are looking for. A good example on merge, join and concatenate can be found <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html" rel="nofollow">here</a>.</p>
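<p>Applied to the snippet in the question, the same idea is one list comprehension plus one <code>concat</code>; a sketch reusing <code>files_txt</code> from the question, where <code>reset_index(drop=True)</code> is what avoids the "indices imply" mismatch when the files repeat index values:</p> <pre><code># select the 7th column from each file and give every frame a clean 0..n index
dfs = [pd.DataFrame.from_csv(x, sep='\t')[[6]].reset_index(drop=True) for x in files_txt]
merged = pd.concat(dfs, axis=1)
</code></pre>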
2
2016-10-19T18:39:22Z
[ "python", "pandas", "numpy", "dataframe" ]
Calc value count in few columns of DataFrame (Pandas Python)
40,137,389
<p>I have a DataFrame:</p> <pre><code>   id  code_1 code_2
0  11    1451    ffx
1  15    2233    ffx
2  24    1451    mmg
3  15    1451    ffx
</code></pre> <p>I need to get the number of occurrences of each code value (over all code_1 values and all code_2 values) for each unique id. For example:</p> <pre><code>   id  1451  2233 ...  ffx  mmg ...
0  11     1     0 ...    1    0 ...
1  15     1     1 ...    2    0 ...
2  24     1     0 ...    0    1 ...
</code></pre> <p>I use this code:</p> <pre><code>y = data.groupby('id')
        .apply(lambda x: x[['code_1', 'code_2']].unstack().value_counts())
        .unstack()
</code></pre> <p>But I think something is wrong, because the number of columns in the result table is less than the number of distinct code_1 and code_2 values.</p>
0
2016-10-19T16:55:59Z
40,140,491
<p>Consider merging pivot_tables using the aggfunc <em>len</em> for counts.</p> <pre><code>from io import StringIO import pandas as pd data = ''' id code_1 code_2 11 1451 ffx 15 2233 ffx 24 1451 mmg 15 1451 ffx''' df = pd.read_table(StringIO(data), sep="\s+") df = pd.merge(df[['id', 'code_1']].pivot_table(index='id', columns='code_1', aggfunc=len).\ reset_index(drop=True), df[['id', 'code_2']].pivot_table(index='id', columns='code_2', aggfunc=len).\ reset_index(drop=True), left_index=True, right_index=True).fillna(0) # 1451 2233 ffx mmg # 0 1.0 0.0 1.0 0.0 # 1 1.0 1.0 2.0 0.0 # 2 1.0 0.0 0.0 1.0 </code></pre>
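<p>As an alternative sketch (assuming, as in the example, that code_1 and code_2 never share a value): <code>pd.get_dummies</code> on both code columns followed by a groupby-sum keeps the <code>id</code> and counts every code in one pass:</p> <pre><code># one dummy column per distinct code value, indexed by id
dummies = pd.get_dummies(df.set_index('id')[['code_1', 'code_2']].astype(str),
                         prefix='', prefix_sep='')
# sum the dummies per id to get the counts, then bring id back as a column
counts = dummies.groupby(level='id').sum().reset_index()
</code></pre>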
1
2016-10-19T20:01:13Z
[ "python", "pandas", "dataframe" ]
Python - reduce complexity using sets
40,137,536
<p>I am using <code>url_analysis</code> tools from <code>spotify</code> <code>API</code> (wrapper <code>spotipy</code>, with <code>sp.</code>) to process tracks, using the following code:</p> <pre><code>def loudness_drops(track_ids): names = set() tids = set() tracks_with_drop_name = set() tracks_with_drop_id = set() for id_ in track_ids: track_id = sp.track(id_)['uri'] tids.add(track_id) track_name = sp.track(id_)['name'] names.add(track_name) #get audio features features = sp.audio_features(tids) #and then audio analysis id urls = {x['analysis_url'] for x in features if x} print len(urls) #fetch analysis data for url in urls: # print len(urls) analysis = sp._get(url) #extract loudness sections from analysis x = [_['start'] for _ in analysis['segments']] print len(x) l = [_['loudness_max'] for _ in analysis['segments']] print len(l) #get max and min values min_l = min(l) max_l = max(l) #normalize stream norm_l = [(_ - min_l)/(max_l - min_l) for _ in l] #define silence as a value below 0.1 silence = [l[i] for i in range(len(l)) if norm_l[i] &lt; .1] #more than one silence means one of them happens in the middle of the track if len(silence) &gt; 1: tracks_with_drop_name.add(track_name) tracks_with_drop_id.add(track_id) return tracks_with_drop_id </code></pre> <p>The code works, but if the number of songs I <code>search</code> is set to, say, <code>limit=20</code>, the time it takes to process all the <code>audio segments</code> <code>x</code>and <code>l</code> makes the process too expensive, e,g:</p> <p><code>time.time()</code> prints <code>452.175742149</code></p> <p><strong>QUESTION</strong>:</p> <p>how can I drastically reduce complexity here?</p> <p>I've tried to use <code>sets</code> instead of <code>lists</code>, but working with <code>set</code> <code>objects</code> prohibts <code>indexing</code>.</p> <p>EDIT: 10 <code>urls</code>:</p> <pre><code>[u'https://api.spotify.com/v1/audio-analysis/5H40slc7OnTLMbXV6E780Z', u'https://api.spotify.com/v1/audio-analysis/72G49GsqYeWV6QVAqp4vl0', u'https://api.spotify.com/v1/audio-analysis/6jvFK4v3oLMPfm6g030H0g', u'https://api.spotify.com/v1/audio-analysis/351LyEn9dxRxgkl28GwQtl', u'https://api.spotify.com/v1/audio-analysis/4cRnjBH13wSYMOfOF17Ddn', u'https://api.spotify.com/v1/audio-analysis/2To3PTOTGJUtRsK3nQemP4', u'https://api.spotify.com/v1/audio-analysis/4xPRxqV9qCVeKLQ31NxhYz', u'https://api.spotify.com/v1/audio-analysis/1G1MtHxrVngvGWSQ7Fj4Oj', u'https://api.spotify.com/v1/audio-analysis/3du9aoP5vPGW1h70mIoicK', u'https://api.spotify.com/v1/audio-analysis/6VIIBKYJAKMBNQreG33lBF'] </code></pre>
-2
2016-10-19T17:04:33Z
40,138,843
<p>This is what I see, not knowing much about spotify:</p> <pre><code>for id_ in track_ids:  # this runs N times, where N = len(track_ids)
    ...
    tids.add(track_id)  # tids contains all track_ids processed until now
                        # in the end: len(tids) == N
    ...
    features = sp.audio_features(tids)
    # features contains features of all tracks processed until now
    # in the end, I guess: len(features) == N * num_features_per_track
    urls = {x['analysis_url'] for x in features if x}
    # very probably: len(urls) == len(features)
    for url in urls:
        # for the first track, this processes features of the first track only
        # for the second track, this processes features of 1st and 2nd
        # etc.
        # in the end, this loop repeats N * N * num_features_per_track times
</code></pre> <p>You should not process any url twice. And you do, because you keep all tracks in <code>tids</code> and then for each track you process everything in <code>tids</code>, which turns the complexity of this into O(n<sup>2</sup>).</p> <p>In general, always look for loops inside loops when trying to reduce complexity.</p> <p>I believe in this case this should work, if <code>audio_features</code> expects a set of ids:</p> <pre><code># replace this:
features = sp.audio_features(tids)
# with:
features = sp.audio_features({track_id})
</code></pre>
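<p>A rough sketch of the whole function restructured that way, keeping the same <code>sp</code> client and variable names as the question (treat it as the shape of the fix rather than tested code):</p> <pre><code>def loudness_drops(track_ids):
    tracks = [sp.track(id_) for id_ in track_ids]                # one lookup per track
    features = sp.audio_features([t['uri'] for t in tracks])     # one batched call
    tracks_with_drop_id = set()
    for track, feat in zip(tracks, features):
        if not feat:
            continue
        analysis = sp._get(feat['analysis_url'])                 # each analysis fetched once
        l = [seg['loudness_max'] for seg in analysis['segments']]
        min_l, max_l = min(l), max(l)
        norm_l = [(v - min_l) / (max_l - min_l) for v in l]
        silence = [l[i] for i in range(len(l)) if norm_l[i] &lt; .1]
        if len(silence) &gt; 1:
            tracks_with_drop_id.add(track['uri'])
    return tracks_with_drop_id
</code></pre>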
0
2016-10-19T18:20:19Z
[ "python", "list", "set", "time-complexity", "spotify" ]
Having troubles with pip and importing
40,137,597
<p>I have both Python 3.5 and Python 2.7 installed. I install tweepy via CMD using "python -m pip install tweepy", yet when I import tweepy in either IDLE 2.7 or 3.5, I get the error "Module not installed", even though CMD says it has downloaded and installed it properly.</p> <p>What could be the error? I think this may have been the solution to my last project's hiccup that I couldn't fix.</p> <p>Thanks in advance!</p>
0
2016-10-19T17:08:08Z
40,137,642
<p>Launch python 2.7 and type</p> <pre><code>&gt;&gt;&gt; import tweepy
</code></pre> <p>Then launch python 3.5 and type</p> <pre><code>&gt;&gt;&gt; import tweepy
</code></pre> <p>Whichever one does not work is probably not your default Python installation.</p> <p>One of your Python installations doesn't have tweepy installed, since pip does not install into both versions of Python, only into the one you ran it with (here, whatever <code>python</code> resolves to on your path).</p>
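<p>If the missing one turns out to be Python 3.5, point pip at that interpreter explicitly, for example with the Windows launcher that ships with Python 3:</p> <pre><code>py -3.5 -m pip install tweepy
</code></pre>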
1
2016-10-19T17:10:47Z
[ "python", "python-2.7", "pip" ]
webdriver + reset Chrome
40,137,619
<p>I'm trying to 'reset' the Chrome browser using webdriver (Python). What I'm doing is:</p> <blockquote> <p>driver = webdriver.Chrome()</p> <p>driver.get('chrome://settings/resetProfileSettings')</p> </blockquote> <p>The above shows a pop-up with a 'Reset' button, and I can't locate it using</p> <blockquote> <p>driver.find_element_somehow</p> </blockquote> <p>Please help me find a way to click the 'Reset' button.</p> <p>Note: I also tried to wipe all the files from '~/.config/google-chrome/', but that didn't serve my needs.</p>
0
2016-10-19T17:09:25Z
40,137,784
<pre><code>driver = webdriver.Chrome()

main_window_handle = None
while not main_window_handle:
    main_window_handle = driver.current_window_handle

popup_handle = None
while not popup_handle:
    for handle in driver.window_handles:
        if handle != main_window_handle:
            popup_handle = handle
            break

driver.switch_to.window(popup_handle)
driver.find_element_by_xpath(u'XPATH OF RESET BUTTON').click()
driver.switch_to.window(main_window_handle)
</code></pre> <p>Note that it is the older spelling <code>switch_to_window(handle)</code> that is deprecated; <code>switch_to.window(handle)</code>, as used above, is the current API.</p>
0
2016-10-19T17:19:40Z
[ "python", "google-chrome", "webdriver" ]
how does pickle know which to pick?
40,137,712
<p>I have my pickle function working properly:</p> <pre><code>with open(self._prepared_data_location_scalar, 'wb') as output:
    # company1 = Company('banana', 40)
    pickle.dump(X_scaler, output, pickle.HIGHEST_PROTOCOL)
    pickle.dump(Y_scaler, output, pickle.HIGHEST_PROTOCOL)

with open(self._prepared_data_location_scalar, 'rb') as input_f:
    X_scaler = pickle.load(input_f)
    Y_scaler = pickle.load(input_f)
</code></pre> <p>However, I am very curious: how does pickle know which one to load? Does it mean that everything has to be in the same sequence?</p>
1
2016-10-19T17:15:24Z
40,137,758
<p>Wow, I did not even know you could do this ... and I have been using Python for a very long time ... so that's totally awesome in my book. However, you really should not do this: it will be very hard to work with later (especially if it isn't you working on it).</p> <p>I would recommend just doing</p> <pre><code>pickle.dump({"X": X_scaler, "Y": Y_scaler}, output)
...
data = pickle.load(fp)
print "Y_scaler:", data['Y']
print "X_scaler:", data['X']
</code></pre> <p>unless you have a <strong>very</strong> compelling reason to save and load the data like you were in your question ...</p> <h1>Edit to answer the actual question...</h1> <p>It loads from the start of the file to the end (i.e. it loads them in the same order they were dumped).</p>
2
2016-10-19T17:18:13Z
[ "python", "pickle" ]
how does pickle know which to pick?
40,137,712
<p>I have my pickle function working properly:</p> <pre><code>with open(self._prepared_data_location_scalar, 'wb') as output:
    # company1 = Company('banana', 40)
    pickle.dump(X_scaler, output, pickle.HIGHEST_PROTOCOL)
    pickle.dump(Y_scaler, output, pickle.HIGHEST_PROTOCOL)

with open(self._prepared_data_location_scalar, 'rb') as input_f:
    X_scaler = pickle.load(input_f)
    Y_scaler = pickle.load(input_f)
</code></pre> <p>However, I am very curious: how does pickle know which one to load? Does it mean that everything has to be in the same sequence?</p>
1
2016-10-19T17:15:24Z
40,137,793
<p>Yes, pickle picks objects in the order they were saved.</p> <p>Intuitively, pickle appends to the end when it writes (dump) to a file, and reads (load) the content sequentially from the file.</p> <p>Consequently, order is preserved, allowing you to retrieve your data in the exact order you serialized it.</p>
1
2016-10-19T17:20:30Z
[ "python", "pickle" ]
how does pickle know which to pick?
40,137,712
<p>I have my pickle function working properly:</p> <pre><code>with open(self._prepared_data_location_scalar, 'wb') as output:
    # company1 = Company('banana', 40)
    pickle.dump(X_scaler, output, pickle.HIGHEST_PROTOCOL)
    pickle.dump(Y_scaler, output, pickle.HIGHEST_PROTOCOL)

with open(self._prepared_data_location_scalar, 'rb') as input_f:
    X_scaler = pickle.load(input_f)
    Y_scaler = pickle.load(input_f)
</code></pre> <p>However, I am very curious: how does pickle know which one to load? Does it mean that everything has to be in the same sequence?</p>
1
2016-10-19T17:15:24Z
40,137,943
<p>What you have is fine. It's a <a href="https://docs.python.org/2/library/pickle.html#pickle.Pickler" rel="nofollow">documented feature</a> of pickle:</p> <blockquote> <p>It is possible to make multiple calls to the dump() method of the same Pickler instance. These must then be matched to the same number of calls to the load() method of the corresponding Unpickler instance. </p> </blockquote> <p>There is no magic here, pickle is a really simple stack-based language that serializes python objects into bytestrings. The pickle format knows about object boundaries: by design, <code>pickle.dumps('x') + pickle.dumps('y')</code> is not the same bytestring as <code>pickle.dumps('xy')</code>. </p> <p>If you're interested to learn some background on the implementation, <a href="http://peadrop.com/blog/2007/06/18/pickle-an-interesting-stack-language/" rel="nofollow">this article</a> is an easy read to shed some light on the python pickler.</p>
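<p>If you don't know in advance how many objects were dumped, a common pattern is to keep calling <code>load()</code> until the end of the file (here <code>path</code> is a placeholder for your pickle file):</p> <pre><code>import pickle

items = []
with open(path, 'rb') as f:
    while True:
        try:
            items.append(pickle.load(f))  # one dumped object per call
        except EOFError:                  # raised when there is nothing left to read
            break
</code></pre>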
5
2016-10-19T17:30:29Z
[ "python", "pickle" ]
Using subprocess for accessing HBase
40,137,812
<p>I'm trying simple commands to access HBase through subprocess in Python. The following code gives me the wrong output:</p> <pre><code>import subprocess cmd=['hbase','shell','list'] subprocess.call(cmd) </code></pre> <p>Instead of giving me the list of tables in HBase, I get the following output: </p> <pre><code> Usage: hbase [&lt;options&gt;] &lt;command&gt; [&lt;args&gt;] Options: --config DIR Configuration direction to use. Default: ./conf --hosts HOSTS Override the list in 'regionservers' file Commands: Some commands take arguments. Pass no args or -h for usage. shell Run the HBase shell hbck Run the hbase 'fsck' tool snapshot Create a new snapshot of a table snapshotinfo Tool for dumping snapshot information wal Write-ahead-log analyzer hfile Store file analyzer zkcli Run the ZooKeeper shell upgrade Upgrade hbase master Run an HBase HMaster node regionserver Run an HBase HRegionServer node zookeeper Run a Zookeeper server rest Run an HBase REST server thrift Run the HBase Thrift server thrift2 Run the HBase Thrift2 server clean Run the HBase clean up script classpath Dump hbase CLASSPATH mapredcp Dump CLASSPATH entries required by mapreduce pe Run PerformanceEvaluation ltt Run LoadTestTool version Print the version CLASSNAME Run the class named CLASSNAME </code></pre> <p>How do I give the subprocess command?</p>
1
2016-10-19T17:21:39Z
40,137,966
<p>If you need to access HBase from Python, I strongly suggest you look at the <strong>happybase</strong> module.</p> <p>I have been using it in production for the past 4 years - and it has simplified our ETL tasks.</p> <p>Out of the box it is Python 2.X, but with a few minutes' work you can upgrade it to Python 3 (useful if your data is UTF-8).</p>
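<p>For a sense of what that looks like, the shell's <code>list</code> command is a one-liner in happybase; a small sketch, where the host name is a placeholder and the HBase Thrift server must be running:</p> <pre><code>import happybase

connection = happybase.Connection('my-hbase-host')  # placeholder host
print(connection.tables())                          # same information as `list` in the hbase shell
</code></pre>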
0
2016-10-19T17:31:26Z
[ "python", "subprocess", "hbase" ]
AttributeError when creating tkinter.PhotoImage object with PIL.ImageTk
40,137,813
<p>I am trying to place an image resized with PIL in a tkinter.PhotoImage object.</p> <pre><code>import tkinter as tk  # I use Python 3
from PIL import Image, ImageTk

master = tk.Tk()
img = Image.open(file_name)
image_resized = img.resize((200, 200))
photoimg = ImageTk.PhotoImage(image_resized)
</code></pre> <p>However, when I later try to call</p> <pre><code>photoimg.put("#000000", (0, 0))
</code></pre> <p>I get an</p> <pre><code>AttributeError: 'PhotoImage' object has no attribute 'put'
</code></pre> <p>while this:</p> <pre><code>photoimg = tk.PhotoImage(file=file_name)
photoimg.put("#000000", (0, 0))
</code></pre> <p>doesn't raise an error. What am I doing wrong?</p>
-1
2016-10-19T17:21:43Z
40,138,134
<p><code>ImageTk.PhotoImage</code>, as in <code>PIL.ImageTk.PhotoImage</code>, is not the same class as <code>tk.PhotoImage</code> (<code>tkinter.PhotoImage</code>); they just have the same name.</p> <p>Here are the ImageTk.PhotoImage docs: <a href="http://pillow.readthedocs.io/en/3.1.x/reference/ImageTk.html#PIL.ImageTk.PhotoImage" rel="nofollow">http://pillow.readthedocs.io/en/3.1.x/reference/ImageTk.html#PIL.ImageTk.PhotoImage</a> As you can see, there is no put method in it.</p> <p>But <code>tkinter.PhotoImage</code> does have it: <a href="http://epydoc.sourceforge.net/stdlib/Tkinter.PhotoImage-class.html" rel="nofollow">http://epydoc.sourceforge.net/stdlib/Tkinter.PhotoImage-class.html</a></p>
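<p>A possible workaround, sketched under the assumption that you only need a few pixel edits: do them on the PIL <code>Image</code> itself, for example with <code>putpixel()</code>, before converting, since <code>ImageTk.PhotoImage</code> has no <code>put()</code>:</p> <pre><code>img = Image.open(file_name).resize((200, 200))
img.putpixel((0, 0), (0, 0, 0))        # black pixel, assuming an RGB image; like .put("#000000", (0,0))
photoimg = ImageTk.PhotoImage(img)
</code></pre>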
2
2016-10-19T17:40:39Z
[ "python", "tkinter", "python-imaging-library", "photoimage" ]
Sort a List of a Tuple.. of a list. Case insensitive
40,138,048
<p>So what I have currently is a string that looks like this:</p> <pre><code>hello here, hello there, hello Everywhere
</code></pre> <p>I'm making an iteration of kwic, if anyone knows what that is. The required format is a list of tuples of lists, sorted case-insensitively. So in the end I have an unsorted list that looks like</p> <pre><code>(['here,', 'hello', 'there,', 'hello', 'Everywhere', 'hello'], 0)
(['hello', 'there,', 'hello', 'Everywhere', 'hello', 'here,'], 0)
(['there,', 'hello', 'Everywhere', 'hello', 'here,', 'hello'], 0)
(['hello', 'Everywhere', 'hello', 'here,', 'hello', 'there,'], 0)
(['Everywhere', 'hello', 'here,', 'hello', 'there,', 'hello'], 0)
(['hello', 'here,', 'hello', 'there,', 'hello', 'Everywhere'], 0)
</code></pre> <p>Currently I am using a Python sort like</p> <pre><code>Final_Array.sort(key = lambda a: a[0][0].lower())
</code></pre> <p>But that gives me a sorted list that looks like</p> <pre><code>(['Everywhere', 'hello', 'here,', 'hello', 'there,', 'hello'], 0)
(['hello', 'there,', 'hello', 'Everywhere', 'hello', 'here,'], 0)
(['hello', 'Everywhere', 'hello', 'here,', 'hello', 'there,'], 0)
(['hello', 'here,', 'hello', 'there,', 'hello', 'Everywhere'], 0)
(['here,', 'hello', 'there,', 'hello', 'Everywhere', 'hello'], 0)
(['there,', 'hello', 'Everywhere', 'hello', 'here,', 'hello'], 0)
</code></pre> <p>Obviously the <code>hello Everywhere</code> should be before the <code>hello there</code>, along with <code>hello here</code>. It's sorting based on sending only the first word of each inner list to lowercase, but I need it to sort and compare all entries of the inner list, so that if there is a tie it just keeps comparing the next value in the list and the next, all while ignoring case.</p>
0
2016-10-19T17:36:04Z
40,138,115
<p>Right now, your sort is only taking into account the first word in the list. In order to make it sort lexicographically based on <em>all</em> the words in the list, your sort key should return a <em>list</em> of lower-cased words (one lower-cased word for each word in the input list)</p> <pre><code>def sort_key(t): word_list, integer = t return [word.lower() for word in word_list] Final_Array.sort(key=sort_key) </code></pre> <p><sup>Due to the complexity of the sort, I'd prefer to avoid the lambda in this case, but not everyone necessarily agrees with that opinion :-)</sup></p>
2
2016-10-19T17:39:26Z
[ "python" ]
Sort a List of a Tuple.. of a list. Case insensitive
40,138,048
<p>So what I have currently is a string that looks like this:</p> <pre><code>hello here, hello there, hello Everywhere
</code></pre> <p>I'm making an iteration of kwic, if anyone knows what that is. The required format is a list of tuples of lists, sorted case-insensitively. So in the end I have an unsorted list that looks like</p> <pre><code>(['here,', 'hello', 'there,', 'hello', 'Everywhere', 'hello'], 0)
(['hello', 'there,', 'hello', 'Everywhere', 'hello', 'here,'], 0)
(['there,', 'hello', 'Everywhere', 'hello', 'here,', 'hello'], 0)
(['hello', 'Everywhere', 'hello', 'here,', 'hello', 'there,'], 0)
(['Everywhere', 'hello', 'here,', 'hello', 'there,', 'hello'], 0)
(['hello', 'here,', 'hello', 'there,', 'hello', 'Everywhere'], 0)
</code></pre> <p>Currently I am using a Python sort like</p> <pre><code>Final_Array.sort(key = lambda a: a[0][0].lower())
</code></pre> <p>But that gives me a sorted list that looks like</p> <pre><code>(['Everywhere', 'hello', 'here,', 'hello', 'there,', 'hello'], 0)
(['hello', 'there,', 'hello', 'Everywhere', 'hello', 'here,'], 0)
(['hello', 'Everywhere', 'hello', 'here,', 'hello', 'there,'], 0)
(['hello', 'here,', 'hello', 'there,', 'hello', 'Everywhere'], 0)
(['here,', 'hello', 'there,', 'hello', 'Everywhere', 'hello'], 0)
(['there,', 'hello', 'Everywhere', 'hello', 'here,', 'hello'], 0)
</code></pre> <p>Obviously the <code>hello Everywhere</code> should be before the <code>hello there</code>, along with <code>hello here</code>. It's sorting based on sending only the first word of each inner list to lowercase, but I need it to sort and compare all entries of the inner list, so that if there is a tie it just keeps comparing the next value in the list and the next, all while ignoring case.</p>
0
2016-10-19T17:36:04Z
40,138,148
<pre><code>Final_Array.sort(key=lambda x: list(map(str.lower, x[0]))) </code></pre>
-1
2016-10-19T17:41:35Z
[ "python" ]
Drawing on python and pycharm
40,138,060
<p>I am a beginner in Python. I drew a square with this code:</p> <pre><code>import turtle

square = turtle.Turtle()
print(square)

for i in range(4):
    square.fd(100)
    square.lt(90)

turtle.mainloop()
</code></pre> <p>However, the book shows another way of drawing a square, with the code below. I tried to copy the exact same thing, but it didn't work out. Can someone help me figure out the problem?</p> <pre><code>def drawSquare(t, sz):
    """Make turtle t draw a square of sz."""
    for i in range(4):
        t.forward(sz)
        t.left(90)

turtle.mainloop()
</code></pre>
1
2016-10-19T17:36:42Z
40,138,217
<p>You need to call the function so it will start:</p> <pre><code>import turtle def drawSquare(t, size): for i in range(4): t.forward(size) t.left(90) turtle.mainloop() drawSquare(turtle.Turtle(), 100) </code></pre>
2
2016-10-19T17:45:03Z
[ "python", "turtle-graphics" ]
Work with a row in a pandas dataframe without incurring chain indexing (not coping just indexing)
40,138,090
<p>My data is organized in a dataframe:</p> <pre><code>import pandas as pd import numpy as np data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']} df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4']) </code></pre> <p>Which looks like this (only much bigger):</p> <pre><code> Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC </code></pre> <p>My algorithm loops through this table rows and performs a set of operations. </p> <p>For cleaness/lazyness sake, I would like to work on a single row at each iteration without typing <code>df.loc['row index', 'column name']</code> to get each cell value</p> <p>I have tried to follow the <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow">right style</a> using for example:</p> <pre><code>row_of_interest = df.loc['R2', :] </code></pre> <p>However, I still get the warning when I do:</p> <pre><code>row_of_interest['Col2'] = row_of_interest['Col2'] + 1000 SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame </code></pre> <p>And it is not working (as I intended) it is making a copy</p> <pre><code>print df Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC </code></pre> <p>Any advice on the proper way to do it? Or should I just stick to work with the data frame directly?</p> <p>Edit 1:</p> <p>Using the replies provided the warning is removed from the code but the original dataframe is not modified: The "row of interest" <code>Series</code> is a copy not part of the original dataframe. For example:</p> <pre><code>import pandas as pd import numpy as np data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']} df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4']) row_of_interest = df.loc['R2'] row_of_interest.is_copy = False new_cell_value = row_of_interest['Col2'] + 1000 row_of_interest['Col2'] = new_cell_value print row_of_interest Col1 5 Col2 1020 Col3 50 Col4 BBB Name: R2, dtype: object print df Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC </code></pre> <p>Edit 2:</p> <p>This is an example of the functionality I would like to replicate. In python a list of lists looks like:</p> <pre><code>a = [[1,2,3],[4,5,6]] </code></pre> <p>Now I can create a "label" </p> <pre><code>b = a[0] </code></pre> <p>And if I change an entry in b:</p> <pre><code>b[0] = 7 </code></pre> <p>Both a and b change.</p> <pre><code>print a, b [[7,2,3],[4,5,6]], [7,2,3] </code></pre> <p>Can this behavior be replicated between a pandas dataframe labeling one of its rows a pandas series?</p>
0
2016-10-19T17:38:08Z
40,138,251
<p>This should work:</p> <pre><code>row_of_interest = df.loc['R2', :] row_of_interest.is_copy = False row_of_interest['Col2'] = row_of_interest['Col2'] + 1000 </code></pre> <p>Setting <code>.is_copy = False</code> is the trick</p> <p>Edit 2:</p> <pre><code>import pandas as pd import numpy as np data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']} df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4']) row_of_interest = df.loc['R2'] row_of_interest.is_copy = False new_cell_value = row_of_interest['Col2'] + 1000 row_of_interest['Col2'] = new_cell_value print row_of_interest df.loc['R2'] = row_of_interest print df </code></pre> <p>df:</p> <pre><code> Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 1020 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC </code></pre>
0
2016-10-19T17:46:39Z
[ "python", "pandas", "indexing", "dataframe", "series" ]
Work with a row in a pandas dataframe without incurring chain indexing (not coping just indexing)
40,138,090
<p>My data is organized in a dataframe:</p> <pre><code>import pandas as pd import numpy as np data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']} df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4']) </code></pre> <p>Which looks like this (only much bigger):</p> <pre><code> Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC </code></pre> <p>My algorithm loops through this table rows and performs a set of operations. </p> <p>For cleaness/lazyness sake, I would like to work on a single row at each iteration without typing <code>df.loc['row index', 'column name']</code> to get each cell value</p> <p>I have tried to follow the <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow">right style</a> using for example:</p> <pre><code>row_of_interest = df.loc['R2', :] </code></pre> <p>However, I still get the warning when I do:</p> <pre><code>row_of_interest['Col2'] = row_of_interest['Col2'] + 1000 SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame </code></pre> <p>And it is not working (as I intended) it is making a copy</p> <pre><code>print df Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC </code></pre> <p>Any advice on the proper way to do it? Or should I just stick to work with the data frame directly?</p> <p>Edit 1:</p> <p>Using the replies provided the warning is removed from the code but the original dataframe is not modified: The "row of interest" <code>Series</code> is a copy not part of the original dataframe. For example:</p> <pre><code>import pandas as pd import numpy as np data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']} df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4']) row_of_interest = df.loc['R2'] row_of_interest.is_copy = False new_cell_value = row_of_interest['Col2'] + 1000 row_of_interest['Col2'] = new_cell_value print row_of_interest Col1 5 Col2 1020 Col3 50 Col4 BBB Name: R2, dtype: object print df Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC </code></pre> <p>Edit 2:</p> <p>This is an example of the functionality I would like to replicate. In python a list of lists looks like:</p> <pre><code>a = [[1,2,3],[4,5,6]] </code></pre> <p>Now I can create a "label" </p> <pre><code>b = a[0] </code></pre> <p>And if I change an entry in b:</p> <pre><code>b[0] = 7 </code></pre> <p>Both a and b change.</p> <pre><code>print a, b [[7,2,3],[4,5,6]], [7,2,3] </code></pre> <p>Can this behavior be replicated between a pandas dataframe labeling one of its rows a pandas series?</p>
0
2016-10-19T17:38:08Z
40,138,272
<p>The most straightforward way to do this:</p> <pre><code>df.loc['R2', 'Col2'] += 1000
df
</code></pre> <p><a href="https://i.stack.imgur.com/5m2KA.png" rel="nofollow"><img src="https://i.stack.imgur.com/5m2KA.png" alt="enter image description here"></a></p>
0
2016-10-19T17:47:37Z
[ "python", "pandas", "indexing", "dataframe", "series" ]
Work with a row in a pandas dataframe without incurring chain indexing (not coping just indexing)
40,138,090
<p>My data is organized in a dataframe:</p> <pre><code>import pandas as pd import numpy as np data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']} df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4']) </code></pre> <p>Which looks like this (only much bigger):</p> <pre><code> Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC </code></pre> <p>My algorithm loops through this table rows and performs a set of operations. </p> <p>For cleaness/lazyness sake, I would like to work on a single row at each iteration without typing <code>df.loc['row index', 'column name']</code> to get each cell value</p> <p>I have tried to follow the <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow">right style</a> using for example:</p> <pre><code>row_of_interest = df.loc['R2', :] </code></pre> <p>However, I still get the warning when I do:</p> <pre><code>row_of_interest['Col2'] = row_of_interest['Col2'] + 1000 SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame </code></pre> <p>And it is not working (as I intended) it is making a copy</p> <pre><code>print df Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC </code></pre> <p>Any advice on the proper way to do it? Or should I just stick to work with the data frame directly?</p> <p>Edit 1:</p> <p>Using the replies provided the warning is removed from the code but the original dataframe is not modified: The "row of interest" <code>Series</code> is a copy not part of the original dataframe. For example:</p> <pre><code>import pandas as pd import numpy as np data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']} df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4']) row_of_interest = df.loc['R2'] row_of_interest.is_copy = False new_cell_value = row_of_interest['Col2'] + 1000 row_of_interest['Col2'] = new_cell_value print row_of_interest Col1 5 Col2 1020 Col3 50 Col4 BBB Name: R2, dtype: object print df Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC </code></pre> <p>Edit 2:</p> <p>This is an example of the functionality I would like to replicate. In python a list of lists looks like:</p> <pre><code>a = [[1,2,3],[4,5,6]] </code></pre> <p>Now I can create a "label" </p> <pre><code>b = a[0] </code></pre> <p>And if I change an entry in b:</p> <pre><code>b[0] = 7 </code></pre> <p>Both a and b change.</p> <pre><code>print a, b [[7,2,3],[4,5,6]], [7,2,3] </code></pre> <p>Can this behavior be replicated between a pandas dataframe labeling one of its rows a pandas series?</p>
0
2016-10-19T17:38:08Z
40,138,567
<p>You can remove the warning by creating a series with the slice you want to work on:</p> <pre><code>from pandas import Series row_of_interest = Series(data=df.loc['R2', :]) row_of_interest.loc['Col2'] += 1000 print(row_of_interest) </code></pre> <p>Results in:</p> <pre><code>Col1 5 Col2 1020 Col3 50 Col4 BBB Name: R2, dtype: object </code></pre>
0
2016-10-19T18:03:22Z
[ "python", "pandas", "indexing", "dataframe", "series" ]
Search for a combination in dataframe to change cell value
40,138,350
<p>I want to replace values in a column if a combination of values in two columns is valid. Let's say I have the following <code>DataFrame</code></p> <pre><code>df = pd.DataFrame([
    ['Texas 1', '111', '222', '333'],
    ['Texas 1', '444', '555', '666'],
    ['Texas 2', '777','888','999']
])

         0    1    2    3
0  Texas 1  111  222  333
1  Texas 1  444  555  666
2  Texas 2  777  888  999
</code></pre> <p>And if I want to replace the value in <code>column 2</code> if <code>column 0 = Texas 1</code> and the value of <code>column 2 = 222</code>, I'm doing the following:</p> <pre><code>df.ix[ (df.Column 0=='Texas 1')&amp;(df.Column 2 =='222'),Column 2] = "Success"
</code></pre> <p>That works fine for a few combinations. The part where I'm lost is how to do this for over 300 combinations. I thought maybe I could use a <code>dict</code> and store the key, which would be <code>'Success'</code> or whatever other value, and the list could be the combination. Kind of like this:</p> <pre><code>a["Success"] = [Texas 1, 222]
&gt;&gt;&gt; a
{"Success": [Texas 1, 222]}
</code></pre> <p>But I'm not sure how to do that in a <code>DataFrame</code>.</p>
2
2016-10-19T17:51:38Z
40,139,238
<p>You have almost all your code; just create a <code>dictionary</code> or <code>list</code>, iterate over it, and you are done.</p> <pre><code>import pandas as pd

combinations = [['key1', 'key2', 'msg']]
combinations.append(['Texas 1', '222', 'triple two'])
combinations.append(['Texas 1', '555', 'triple five'])

df = pd.DataFrame([
    ['Texas 1', '111', '222', '333'],
    ['Texas 1', '444', '555', '666'],
    ['Texas 2', '777','888','999']
])

for c in combinations:
    df.ix[(df[0] == c[0]) &amp; (df[2] == c[1]), 1] = c[2]
</code></pre> <p>Output:</p> <pre><code>         0            1    2    3
0  Texas 1   triple two  222  333
1  Texas 1  triple five  555  666
2  Texas 2          777  888  999
</code></pre>
1
2016-10-19T18:43:29Z
[ "python", "pandas" ]
Search for a combination in dataframe to change cell value
40,138,350
<p>I want to replace values in a column if a combination of values in two columns is valid. Let's say I have the following <code>DataFrame</code></p> <pre><code>df = pd.DataFrame([
    ['Texas 1', '111', '222', '333'],
    ['Texas 1', '444', '555', '666'],
    ['Texas 2', '777','888','999']
])

         0    1    2    3
0  Texas 1  111  222  333
1  Texas 1  444  555  666
2  Texas 2  777  888  999
</code></pre> <p>And if I want to replace the value in <code>column 2</code> if <code>column 0 = Texas 1</code> and the value of <code>column 2 = 222</code>, I'm doing the following:</p> <pre><code>df.ix[ (df.Column 0=='Texas 1')&amp;(df.Column 2 =='222'),Column 2] = "Success"
</code></pre> <p>That works fine for a few combinations. The part where I'm lost is how to do this for over 300 combinations. I thought maybe I could use a <code>dict</code> and store the key, which would be <code>'Success'</code> or whatever other value, and the list could be the combination. Kind of like this:</p> <pre><code>a["Success"] = [Texas 1, 222]
&gt;&gt;&gt; a
{"Success": [Texas 1, 222]}
</code></pre> <p>But I'm not sure how to do that in a <code>DataFrame</code>.</p>
2
2016-10-19T17:51:38Z
40,139,439
<h1>Great Use Case for <code>DataFrame.apply()</code>. Lamda functions all the way!!</h1> <pre><code>df = pd.DataFrame([ ['Texas 1', 111, 222, 333], ['Texas 1', 444, 555, 666], ['Texas 2', 777,888,999] ]) val_dict = {} # assumption # str_like_Success : [column_0 , column_1] val_dict["Success"] = ['Texas 1', 222] val_dict["Failure"] = ['Texas 2', 888] </code></pre> <p>The function <code>fill_values_from_dict</code> will be applied to each row, where <code>x</code> is the row (Series) and <code>val_dict</code> is the dictionary created above</p> <pre><code> def fill_values_from_dict(x,val_dict): for key,val in val_dict.items(): if x[0] == val[0] and x[2] == val[1]: x.set_value(1,key) return x return x </code></pre> <p>Apply <code>fill_values_from_dict</code> to each row </p> <pre><code>df1 = df.apply(lambda x : fill_values_from_dict(x,val_dict),axis=1) </code></pre> <p>Output: </p> <pre><code> print(df1) 0 1 2 3 0 Texas 1 Success 222 333 1 Texas 1 444 555 666 2 Texas 2 Failure 888 999 </code></pre>
0
2016-10-19T18:56:42Z
[ "python", "pandas" ]
'DataFrame' object is not callable
40,138,380
<p>I'm trying to create a heatmap using Python in PyCharm. I have this code:</p> <pre><code>import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt

data1 = pd.read_csv(FILE")

freqMap = {}
for line in data1:
    for item in line:
        if not item in freqMap:
            freqMap[item] = {}

        for other_item in line:
            if not other_item in freqMap:
                freqMap[other_item] = {}

            freqMap[item][other_item] = freqMap[item].get(other_item, 0) + 1
            freqMap[other_item][item] = freqMap[other_item].get(item, 0) + 1

df = data1[freqMap].T.fillna(0)
print(df)
</code></pre> <p>My data is stored in a CSV file. Each row represents a sequence of products that are associated with a consumer transaction. This is the typical market basket analysis setup:</p> <pre><code>99 32 35 45 56 58 7 72 99 45 51 56 58 62 72 17 55 56 58 62 21 99 35 21 99 44 56 58 7 72 72 17 99 35 45 56 7 56 62 72 21 91 99 35 99 35 55 56 58 62 72 99 35 51 55 58 7 21 99 56 58 62 72 21 55 56 58 21 99 35 99 35 62 7 17 21 62 72 21 99 35 58 56 62 72 99 32 35 72 17 99 55 56 58
</code></pre> <p>When I execute the code, I'm getting the following error:</p> <pre><code>Traceback (most recent call last):
  File "C:/Users/tst/PycharmProjects/untitled1/tes.py", line 22, in &lt;module&gt;
    df = data1[freqMap].T.fillna(0)
  File "C:\Users\tst\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame.py", line 1997, in __getitem__
    return self._getitem_column(key)
  File "C:\Users\tst\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame.py", line 2004, in _getitem_column
    return self._get_item_cache(key)
  File "C:\Users\tst\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\generic.py", line 1348, in _get_item_cache
    res = cache.get(item)
TypeError: unhashable type: 'dict'
</code></pre> <p>How can I solve this problem?</p> <p>Many thanks!</p>
1
2016-10-19T17:53:07Z
40,139,988
<p>You are reading a csv file but it has no header, the delimiter is a space not a comma, and there are a variable number of columns. So that is three mistakes in your first line.</p> <p>And data1 is a DataFrame, freqMap is a dictionary that is completely unrelated. So it makes no sense to do data1[freqMap].</p> <p>I suggest you step through this line by line in jupyter or a python interpreter. Then you can see what each line actually does and experiment.</p>
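<p>A sketch of a reading step that matches that layout (<code>FILE</code> is the same placeholder path as in the question): since every row is a variable-length basket, plain lists are easier to loop over than a DataFrame here, and they feed the co-occurrence loop directly.</p> <pre><code>baskets = []
with open(FILE) as f:
    for line in f:
        items = line.split()          # whitespace-separated product codes
        if items:
            baskets.append(items)

# freqMap can then be built by looping over baskets instead of data1
</code></pre>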
2
2016-10-19T19:28:34Z
[ "python", "pandas", "matplotlib", "dataframe" ]
Database Connect Error: Centos 6 / Apache 2.4 / Postgres 9.4 / Django 1.9 / mod_wsgi 3.5 / python 2.7
40,138,417
<p>I am trying to get my website up and running. Everything seems to work fine, but when I go to a page with a database write - I get this:</p> <pre><code>[Wed Oct 19 09:53:12.319824 2016] [mpm_prefork:notice] [pid 12411] AH00173: SIGHUP received. Attempting to restart
[Wed Oct 19 09:53:13.001121 2016] [ssl:warn] [pid 12411] AH01909: sXXX-XXX-XXX-XXX.secureserver.net:443:0 server certificate does NOT include an ID which matches the server name
[Wed Oct 19 09:53:13.003578 2016] [mpm_prefork:notice] [pid 12411] AH00163: Apache/2.4.18 (Unix) OpenSSL/1.0.1e-fips mod_bwlimited/1.4 mod_wsgi/3.5 Python/2.7.6 configured -- resuming normal operations
[Wed Oct 19 09:53:13.003590 2016] [core:notice] [pid 12411] AH00094: Command line: '/usr/local/apache/bin/httpd'
(XID fsf92m) Database Connect Error: Access denied for user 'leechprotect'@'localhost' (using password: YES)
[Wed Oct 19 09:53:17.637487 2016] [mpm_prefork:notice] [pid 12411] AH00169: caught SIGTERM, shutting down
</code></pre> <p>This line shows that a user "leechprotect" cannot connect:</p> <pre><code>(XID fsf92m) Database Connect Error: Access denied for user 'leechprotect'@'localhost' (using password: YES)
</code></pre> <p>However, I don't have a user called leechprotect. leechprotect is a default user on MySQL (I'm guessing), because MySQL is installed as the default database on my dedicated server.</p> <p>My Django settings.py file:</p> <pre><code>DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'prelaunch_db',
        'USER': 'postgres_user',
        'PASSWORD': 'XXXXXXXXXXXXXXX',
        'HOST': 'localhost',
        'PORT': '',
    }
}
</code></pre> <p>I already know my database and entire site work on my test server at home. I think it might be interference between MySQL and PostgreSQL.</p> <p>Any help much appreciated.</p> <p>EDIT (After disabling leech protection):</p> <pre><code>[Wed Oct 19 11:40:24.000919 2016] [ssl:warn] [pid 14754] AH01909: sXXX-XXX-XXX-XXX.secureserver.net:443:0 server certificate does NOT include an ID which matches the server name
[Wed Oct 19 11:40:24.001851 2016] [suexec:notice] [pid 14754] AH01232: suEXEC mechanism enabled (wrapper: /usr/local/apache/bin/suexec)
[Wed Oct 19 11:40:24.001887 2016] [:notice] [pid 14754] ModSecurity for Apache/2.9.0 (http://www.modsecurity.org/) configured.
[Wed Oct 19 11:40:24.001892 2016] [:notice] [pid 14754] ModSecurity: APR compiled version="1.5.2"; loaded version="1.5.2"
[Wed Oct 19 11:40:24.001897 2016] [:notice] [pid 14754] ModSecurity: PCRE compiled version="8.38 "; loaded version="8.38 2015-11-23"
[Wed Oct 19 11:40:24.001900 2016] [:notice] [pid 14754] ModSecurity: LUA compiled version="Lua 5.1"
[Wed Oct 19 11:40:24.001903 2016] [:notice] [pid 14754] ModSecurity: LIBXML compiled version="2.9.2"
[Wed Oct 19 11:40:24.001905 2016] [:notice] [pid 14754] ModSecurity: Status engine is currently disabled, enable it by set SecStatusEngine to On.
[Wed Oct 19 11:40:25.001596 2016] [ssl:warn] [pid 14755] AH01909: sXXX-XXX-XXX-XXX.secureserver.net:443:0 server certificate does NOT include an ID which matches the server name
[Wed Oct 19 11:40:25.004276 2016] [mpm_prefork:notice] [pid 14755] AH00163: Apache/2.4.18 (Unix) OpenSSL/1.0.1e-fips mod_bwlimited/1.4 mod_wsgi/3.5 Python/2.7.6 configured -- resuming normal operations
[Wed Oct 19 11:40:25.004294 2016] [core:notice] [pid 14755] AH00094: Command line: '/usr/local/apache/bin/httpd -D SSL'
(XID 6jmrjj) Database Connect Error: Access denied for user 'leechprotect'@'localhost' (using password: YES)
[Wed Oct 19 11:40:31.847492 2016] [mpm_prefork:notice] [pid 14755] AH00169: caught SIGTERM, shutting down
</code></pre> <p>EDIT 2:</p> <p>I found that Apache comes preconfigured on cPanel with a rewrite function:</p> <p>These lines are in the httpd.conf file:</p> <pre><code>RewriteEngine on
RewriteMap LeechProtect prg:/usr/local/cpanel/bin/leechprotect
Mutex file:/usr/local/apache/logs rewrite-map
</code></pre> <p>I tried to comment out these lines, but cPanel just regenerates the default file. I looked up how to edit it and found:</p> <pre><code>[root@sXXX-XXX-XXX-XXX]# /usr/local/cpanel/bin/apache_conf_distiller --update
</code></pre> <p>From what I see, anything written outside the tag will be permanently saved when running the above command.</p> <p>This got rid of the database error problem, but I still get a 500 server error. And all the other error log messages are the same.</p>
0
2016-10-19T17:55:11Z
40,138,772
<p>Neither MySQL nor PostgreSQL comes with a user called 'leechprotect'. But a Google search points out that this username <a href="https://confluence2.cpanel.net/display/1152Docs/Leech+Protect" rel="nofollow">is related to cPanel</a> - it might be worth reading that to understand what's going on. Afterwards you might consider deactivating it for your project directory.</p>
0
2016-10-19T18:16:10Z
[ "python", "mysql", "django", "apache", "postgresql" ]
Querying MySQL from multiple uWSGI workers returns mismatched rows
40,138,527
<p>I am running a query against a MySQL database from a Flask app being run with uWSGI with multiple workers. I've noticed that sometimes when I query a resource by id, the id of the returned row is different than the one I queried with.</p> <p>I thought that query isolation meant that this was not possible. However, it appears that MySQL is getting the queries mixed up. I am not able to reproduce this when not using uWSGI, but this may just be because it is running on localhost rather than a server when testing the Flask server by itself.</p> <p>Why is there a mismatch between the input id and the result id?</p> <pre><code>from flask import Flask import pymysql.cursor, random class Database: def __init__(self, user, password, host, database): self.connection = pymysql.connect( user=user, password=password, host=host, database=database, cursorclass=pymysql.cursors.DictCursor ) def query(self, sql, **kwargs): with self.connection.cursor() as cursor: cursor.execute(sql, kwargs) return cursor app = Flask(__name__) database = Database('user', 'password', 'localhost', 'database') @app.route('/resources/&lt;path:id&gt;') def resource(id): item = database.query( 'SELECT resources.id FROM resources WHERE resources.id = %(id)s', id=id ).fetchone() identifier = random.random() print(identifier, 'ID 1:', id) print(identifier, 'ID 2:', item['id']) if int(item['id']) != int(id): print('Error found!!!') return 'Done', 200 if __name__ == '__main__': app.run() </code></pre> <pre class="lang-none prettyprint-override"><code>[pid: 2824|app: 0|req: 1/1] xxx.xxx.xxx.xxx () {44 vars in 737 bytes} [Wed Oct 19 18:38:07 2016] GET /resources/10 =&gt; generated 4 bytes in 6 msecs (HTTP/1.1 200) 2 headers in 78 bytes (1 switches on core 0) 0.687535338604848 ID 1: 11 0.687535338604848 ID 2: 11 [pid: 2821|app: 0|req: 1/2] xxx.xxx.xxx.xxx () {44 vars in 737 bytes} [Wed Oct 19 18:38:07 2016] GET /resources/11 =&gt; generated 4 bytes in 5 msecs (HTTP/1.1 200) 2 headers in 78 bytes (1 switches on core 0) 0.9216930740141296 ID 1: 13 0.9216930740141296 ID 2: 13 [pid: 2823|app: 0|req: 1/3] xxx.xxx.xxx.xxx () {44 vars in 737 bytes} [Wed Oct 19 18:38:07 2016] GET /resources/13 =&gt; generated 4 bytes in 6 msecs (HTTP/1.1 200) 2 headers in 78 bytes (1 switches on core 0) 0.9053128320497649 ID 1: 12 0.9053128320497649 ID 2: 14 Error found!!! 0.794023616025622 ID 1: 15 0.794023616025622 ID 2: 15 [pid: 2824|app: 0|req: 2/4] xxx.xxx.xxx.xxx () {44 vars in 737 bytes} [Wed Oct 19 18:38:07 2016] GET /resources/15 =&gt; generated 4 bytes in 1 msecs (HTTP/1.1 200) 2 headers in 78 bytes (1 switches on core 0) [pid: 2822|app: 0|req: 1/5] xxx.xxx.xxx.xxx () {44 vars in 737 bytes} [Wed Oct 19 18:38:07 2016] GET /resources/12 =&gt; generated 4 bytes in 31 msecs (HTTP/1.1 200) 2 headers in 78 bytes (1 switches on core 0) 0.3608322871408709 ID 1: 14 0.3608322871408709 ID 2: 16 Error found!!! [pid: 2825|app: 0|req: 1/6] xxx.xxx.xxx.xxx () {44 vars in 737 bytes} [Wed Oct 19 18:38:07 2016] GET /resources/14 =&gt; generated 4 bytes in 18 msecs (HTTP/1.1 200) 2 headers in 78 bytes (1 switches on core 0) 0.8346421078513786 ID 1: 16 0.8346421078513786 ID 2: 17 Error found!!! </code></pre>
1
2016-10-19T18:01:25Z
40,142,636
<p><strong>For anyone else facing this issue, I have found the following solution.</strong></p> <p>According to <a href="http://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html" rel="nofollow">http://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html</a>.</p> <blockquote> <p>uWSGI tries to (ab)use the Copy On Write semantics of the fork() call whenever possible. By default it will fork after having loaded your applications to share as much of their memory as possible. If this behavior is undesirable for some reason, use the lazy-apps option. This will instruct uWSGI to load the applications after each worker’s fork().</p> </blockquote> <p>After taking a look at <a href="http://stackoverflow.com/questions/22752521/uwsgi-flask-sqlalchemy-and-postgres-ssl-error-decryption-failed-or-bad-reco">uWSGI, Flask, sqlalchemy, and postgres: SSL error: decryption failed or bad record mac</a>, I realised my problem was to do with the fact that multiple processes were being created.</p> <p>However, because uWSGI loads all the processes from one master worker by default (and doesn't run the whole of the Flask application each time), it turns out that all the workers end up sharing a database connection (which doesn't end well!).</p> <p>The solution is to include the <code>lazy-apps</code> parameter, which forces all the code to be run when each worker is created.</p>
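<p>For reference, this is roughly what that looks like in an ini-style uWSGI config (the option name is real, the module and worker count are placeholders):</p> <pre><code>[uwsgi]
module = myapp:app
processes = 4
lazy-apps = true
</code></pre>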
0
2016-10-19T22:37:16Z
[ "python", "mysql", "flask", "uwsgi", "pymysql" ]
Why is my Python script not running via command line?
40,138,529
<p>Thanks!</p> <pre><code>def hello(a,b): print "hello and that's your sum:" sum=a+b print sum import sys if __name__ == "__main__": hello(sys.argv[2]) </code></pre> <p>It does not work for me, I appreciate the help!!! Thanks!</p>
-1
2016-10-19T18:01:27Z
40,138,630
<p>Without seeing your error message it's hard to say exactly what the problem is, but a few things jump out:</p> <ul> <li>No indentation after if __name__ == "__main__":</li> <li>you're only passing one argument into the hello function and it requires two.</li> <li>the sys module is not visible in the scope outside the hello function.</li> </ul> <p>probably more, but again, need the error output.</p> <p>Here's what you might want:</p> <pre><code>import sys def hello(a,b): print "hello and that's your sum:" sum=a+b print sum if __name__ == "__main__": hello(int(sys.argv[1]), int(sys.argv[2])) </code></pre>
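<p>Then run it from the command line with the two numbers as arguments (assuming you saved it as <code>hello.py</code>):</p> <pre><code>python hello.py 3 4
</code></pre>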
4
2016-10-19T18:06:56Z
[ "python" ]